Social engineering is advancing at the speed of generative AI, which offers bad actors new tools and techniques for researching, scoping, and exploiting organizations. As the FBI noted in a recent advisory: ‘As technology continues to evolve, so do cybercriminals’ tactics.’
This article explores some of the impacts of this GenAI-fueled acceleration and examines what it means for IT leaders responsible for managing defenses and mitigating vulnerabilities.
More realism, better pretexting, and multi-lingual attack scenarios
Traditional social engineering methods usually involve impersonating someone the target knows. The attacker may hide behind email to communicate, adding some psychological triggers to boost the chances of a successful breach. Maybe a request to act urgently, so the target is less likely to pause and develop doubts. Or making the email come from an employee’s CEO, hoping the employee’s respect for authority means they won’t question the message.
If using voice, the attacker may instead pretend to be someone the target hasn’t spoken to before (and so wouldn’t recognize by voice). Maybe someone from another department, or an external partner.
Of course, these methods often fall apart when the target tries to verify the sender’s identity in some way. Whether that’s checking what they look like on camera, or how they write in a real-time chat.
However, now that GenAI has entered the conversation, things have changed.
The rise in deepfake videos means that adversaries no longer need to hide behind keyboards. Deepfake models are trained on genuine recordings to analyze and recreate a person’s mannerisms and speech. Then it’s simply a case of directing the deepfake to say anything, or using it as a digital mask that reproduces what the attacker says and does in front of the camera.
The rise in digital-first work, with remote workers used to virtual meetings, means it’s easier to explain away possible warning signs. Unnatural movements, or a voice that sounds slightly different? Blame it on a bad connection. Speaking ‘face-to-face’ adds a layer of authenticity that plays on our natural instinct to believe that seeing is believing.
Voice cloning technology means attackers can speak in any voice too, carrying out voice phishing (vishing) attacks. The growing capability of this technology is reflected in OpenAI’s recommendation for banks to start ‘Phasing out voice based authentication as a security measure for accessing bank accounts and other sensitive information.’
Text-based communication has also been transformed by GenAI. LLMs allow malicious actors to write at near-native-speaker level, and models can even be tuned for regional dialects for greater fluency. This opens the door to new markets for social engineering attacks, with language no longer a barrier when selecting targets.
Bringing order to unstructured OSINT with GenAI
If someone’s ever been online, they’ll have left a digital footprint somewhere. Depending on what they share, this can be enough to impersonate them or compromise their identity. They may share their birthday on Facebook, post their place of employment on LinkedIn, and put pictures of their home, family, and life on Instagram.
These actions offer ways to build up profiles for social engineering attacks against those individuals and the organizations they’re connected to. In the past, gathering all this information was a long, manual process: searching each social media channel, trying to join the dots between people’s posts and public information.
Now, AI can do all this at hyperspeed, scouring the internet for unstructured data to retrieve, organize, and classify every possible match. This includes facial recognition systems, where it’s possible to upload a photo of someone and let the search engine find everywhere they appear online.
What’s more, because the information is publicly available, it can be accessed and aggregated anonymously. Even paid-for GenAI tools offer cover: stolen accounts are for sale on the dark web, giving attackers another way to hide their activity, usage, and queries.
Turning troves of data into troves of treasure
Large-scale data leaks are a fact of modern digital life, from the 533 million Facebook users whose details (including birthdays, phone numbers, and locations) were leaked in 2021, to the 3 billion Yahoo accounts exposed in a 2013 breach that only came fully to light in 2017. Of course, manually sifting through data troves of this size isn’t practical or possible.
Instead, people can now harness GenAI tools to autonomously sort through high volumes of content, finding any data that could be used maliciously: material for extortion, private discussions that can be weaponized, or intellectual property hidden in documents.
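To make the triage step concrete, here is a minimal, defensive-minded sketch of how automated flagging of a document dump works. A real attacker would use an LLM classifier rather than keywords; this toy filter only illustrates the shape of the pipeline, and all categories, markers, and documents below are invented for the example.

```python
# Toy sketch of automated triage over a (hypothetical) leaked document dump.
# A keyword filter stands in for the LLM classification step described above.

SENSITIVE_MARKERS = {
    "extortion": ["nda breach", "settlement", "off the record"],
    "ip_theft": ["proprietary", "patent draft", "source code"],
    "financial": ["wire transfer", "routing number", "invoice"],
}

def triage(documents):
    """Return {doc_id: [categories]} for documents matching any marker."""
    flagged = {}
    for doc_id, text in documents.items():
        lowered = text.lower()
        cats = [cat for cat, terms in SENSITIVE_MARKERS.items()
                if any(term in lowered for term in terms)]
        if cats:
            flagged[doc_id] = cats
    return flagged

docs = {
    "memo-1": "Please confirm the wire transfer before Friday.",
    "memo-2": "Lunch menu for the week.",
    "memo-3": "Attached is the patent draft - keep it off the record.",
}
print(triage(docs))  # memo-1 and memo-3 are flagged; memo-2 is not
```

Understanding this flagging logic is also useful defensively: the same kind of scan, run over your own data stores, shows what an attacker would surface first.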
AI can also map the creators of those documents (using a form of Named Entity Recognition) to establish incriminating connections between different parties, such as wire transfers and confidential discussions.
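The connection-mapping step can be sketched in a few lines. Production pipelines use trained NER models; here a simple email-address regex stands in for entity extraction, and a co-occurrence map stands in for the relationship graph. All addresses and messages are invented.

```python
import re
from collections import defaultdict

# Toy stand-in for Named Entity Recognition: pull email addresses out of
# each message, then link addresses that appear in the same document.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def link_entities(messages):
    """Map each address to the set of addresses it co-occurs with."""
    graph = defaultdict(set)
    for text in messages:
        found = set(EMAIL_RE.findall(text))
        for addr in found:
            graph[addr] |= found - {addr}
    return dict(graph)

msgs = [
    "From alice@corp.example to bob@corp.example: approve the wire transfer.",
    "cfo@corp.example looped in bob@corp.example on the confidential deal.",
]
graph = link_entities(msgs)
print(graph["bob@corp.example"])
# {'alice@corp.example', 'cfo@corp.example'}
```

At scale, edges in a graph like this are what let an attacker (or an auditor) spot which pairs of people repeatedly discuss sensitive topics.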
Many tools are open source, allowing users to customize them with plugins and modules. For example, Recon-ng can be configured for use cases such as email harvesting and OSINT gathering. Other tools aren’t for public use, such as Red Reaper, a form of espionage AI capable of sifting through hundreds of thousands of emails to detect sensitive information that could be used against organizations.
The GenAI genie is out of the bottle – is your business exposed?
Attackers can now use the internet as a database. They just need a piece of data as a starting point, such as a name, email address, or image. GenAI can get to work, running real-time queries to mine, uncover, and process connections and relationships.
Then it’s about choosing the appropriate tool for exploits, often running autonomously and at scale. Whether that’s deepfake videos and voice cloning, or LLM-based conversation-driven attacks. Such capabilities would once have been limited to a select group of specialists with the necessary knowledge. Now the landscape has been democratized by the rise of ‘hacking as a service’, which does much of the hard work for cybercriminals.
So how can you know what potentially compromising information is available about your organization?
We’ve built a threat monitoring tool that tells you. It crawls every corner of the internet, showing you what data is out there that could be exploited to build effective attack pretexts, so you can take action before an attacker gets to it first.