There has been much excitement and buzz around generative AI in recent months. New tools pop up every hour that will undoubtedly make us humans far more efficient. At the same time, hackers have the same tools available to them and they are some of the most innovative people on the planet. They spend whole days thinking about how to use the latest and most advanced tools at their disposal to bypass security controls of nation states, corporations and individuals.
In this post we look at how hackers are using generative AI technologies to modify their attack vectors and amplify their success rates. We also discuss the importance of identity security controls in preventing AI-driven attacks, and the increasing urgency of adopting them. We primarily focus on why deterministic methods of verifying humans are critical in this new age, when artificial intelligence can mimic human interaction with extreme accuracy.
Generative AI is brilliant at making repeatable tasks extremely fast and effortless. Let’s put ourselves into the shoes of a hacker and consider the “Before AI” and “After AI” scenarios of the typical attack vectors that one would consider.
Before generative AI, hackers created generic emails with minimal context and used mass mailing systems to deliver email, SMS or app-based messages to individuals. These messages often contained a malicious link that would prompt the victim to provide their password as well as a second factor, such as an OTP code or a push notification approval. The attacker would then gain access to the account and wreak havoc.
This method is a “spray and pray” approach that yields limited results but is still highly profitable. Many hackers have moved to “spear phishing” attacks, where the message is tailored to the individual. This yields a much higher likelihood of success but is far more tedious and time-consuming to execute. For example, a spear phishing message may reference a friend, a family member or a recent event that the victim posted about on social media, to increase the likelihood of them clicking the malicious link it contains.
With generative AI, hackers can make every single phishing message a highly tailored spear phishing message. The AI can search millions of potential victims' social media profiles and their companies’ recent news and updates, then craft a message so uniquely tailored to each individual that they are very likely to engage with it. The AI can even pose as a real person, first holding a “real” conversation with the victim to build trust, then delivering the malicious payload once that baseline of trust is established.
This approach is already making phishing attacks far more sophisticated and successful. We have seen ample evidence that successful phishing attacks can have a major negative business impact and the AI-strengthened attacks are a major concern for company CEOs and boards of directors everywhere.
Hackers who excel at social engineering tend to make the big bucks. Social engineering can be done via many mediums such as email, in-person, over the phone or via video. During the COVID-19 pandemic, businesses had to fundamentally change their working practices and remote work became the norm. As a result, the number of social engineering attacks skyrocketed because a much higher percentage of our interactions (particularly ones that needed elevated assurance) became fully digital.
Hackers took advantage of overloaded IT service desk employees and convinced them, over the phone, to reset passwords or grant access to accounts. They would search the Internet for basic information about an employee and use it to social engineer their way into countless organizations. We also saw many examples of consumers defrauded through scams that cost the public millions.
With generative AI, a hacker can use an existing model to execute attacks using deep fakes through chat, video or phone calls. These deep fakes will have full context and knowledge about their victim and be able to convince them to provide credentials, send money or divulge sensitive information that can be used for profit or political gain. For example, an AI can fully recreate a company CFO’s voice and use it to call into an IT service desk to reset their credentials. Once they have access to the account, they can send fictitious emails to affect the stock price of the business. Scary!
Password cracking is a go-to for hackers. Most people have terribly insecure passwords, and almost everyone has had some form of their password leaked in a high-profile breach such as LinkedIn’s. Hackers who are good at cracking passwords understand that most people use simple variations of the same password and can generally crack it with a little research. Unfortunately, this is a tedious process. Hackers are also often limited in their password cracking by cultural differences. If a hacker in Russia wants to break into the account of a victim in Saudi Arabia, they would have to do significant research on that person’s hobbies, history and so on to reliably guess their passwords. As a result, hackers focus their time on higher-value targets, because time is money!
With generative AI, hackers can use a simple prompt to have the artificial intelligence learn everything about a person on the Internet and then use the contextual information about their culture, language and background to quickly crack passwords. Furthermore, AI can be trained to mimic a human and input these credentials into websites that will soon be able to bypass the most sophisticated bot detection mechanisms.
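The contextual guessing described above is the same idea behind defensive password-auditing wordlists. The following is a minimal sketch, using only the Python standard library, of how a handful of personal-context words (hobbies, pet names, birth years) expands into a large candidate list through common substitutions; the specific words and rules here are illustrative assumptions, not any real tool's rule set:

```python
from itertools import product

def candidate_passwords(words, years, suffixes=("", "!", "123")):
    """Expand personal-context words into common password variants:
    word + year + suffix, with simple capitalisation and leetspeak."""
    def variants(word):
        yield word
        yield word.capitalize()
        yield word.replace("a", "@").replace("o", "0")  # basic leetspeak
    guesses = set()
    for word, year, suffix in product(words, years, suffixes):
        for v in variants(word):
            guesses.add(f"{v}{year}{suffix}")
    return guesses

# Hypothetical context: the victim posts about falcons and was born in 1990.
guesses = candidate_passwords(["falcon"], ["1990", "2023"])
print("Falcon1990!" in guesses)  # a typical "strong-looking" password falls out
```

Even this toy rule set shows why “word plus year plus punctuation” passwords fall quickly once an attacker, or an AI doing the research for them, knows a few personal details.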
Hackers play a never-ending game of whack-a-mole against endpoint detection and response companies such as CrowdStrike. A hacker creates a new piece of malware, deploys it using a zero-day, an unpatched system vulnerability or a similar attack vector, and does what damage they can (such as encrypting the system or stealing credentials). The security community detects the malware, tags it, and blocks its signature moving forward (they whack the mole).
Hackers spend significant time modifying their malware code to avoid signature detection. This is tedious and time-consuming, and the perfect place for AI to be helpful. Using generative AI, hackers can create infinite versions of the same piece of malware and avoid detection. The malware becomes completely malleable, and the game of whack-a-mole starts to become impossible to win. A hacker can now make every single installed payload look completely unique. By the time the malware is caught in the wild, triaged and tagged, the hacker has generated a million more variations of it while they sleep.

To make things worse, when a hacker discovers a new zero-day, they will be able to use AI to generate exploit chains in record time. Instead of spending days or weeks building a piece of malware, the AI will draw on previous knowledge of other exploit kits and modify them in real time, creating entirely new and unique viruses or ransomware within seconds of the initial vulnerability being discovered.
With this new tool, hackers will be able to deploy malware and persist on systems much longer. They will use that time to intercept credentials by using keyloggers, extracting cached credentials, and then use these stolen credentials to spread to other systems in the organization to cause major problems for businesses.
Phishing-resistant authentication removes the human from the equation at the point of authentication. A common misconception is that passwordless authentication is always phishing resistant; just like anything else, the implementation details are critical to understand.
Let’s look at what isn’t phishing resistant:

- One-time codes delivered via SMS, email or an authenticator app, which a victim can be tricked into typing into a fake site
- Push notification approvals, which a victim can be fatigued or socially engineered into accepting

These methods are now considered “legacy MFA.” They were originally developed to create an additional barrier for hackers and to prevent the password spraying attacks that were causing major issues for many companies. These legacy methods can no longer be relied upon, given that fully automated bypass methods are widely available and actively used by hackers.
Phishing-resistant methods largely take the human out of the equation when they are accessing systems. The legacy methods discussed above always involve a human reacting to an authentication event by typing something in or accepting a notification. This approach is fundamentally flawed because humans are busy and ultimately just want to move on with their day. Phishing-resistant authentication always puts intent behind any authentication request.
To use an example, imagine that every time any person walked past your car in the parking lot while you were inside grocery shopping, you received a push notification asking to unlock your vehicle. How many people do you think would accidentally unlock their car? This is basically how legacy 2FA methods work. Phishing-resistant methods require a human to have possession of a device (such as your car key fob or phone), and the system is only accessed when they intentionally push the unlock button.
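The other half of phishing resistance is origin binding, which is what standards like WebAuthn/FIDO2 provide: the browser, not the user, records which site it is actually talking to inside the signed `clientDataJSON`, so an approval minted on a look-alike site is useless against the real one. A minimal sketch of the relying party's origin check (simplified: real verification also checks the challenge and the signature over the authenticator data):

```python
import json

EXPECTED_ORIGIN = "https://bank.example"  # the legitimate site's origin (illustrative)

def verify_assertion_origin(client_data_json: bytes) -> bool:
    """In WebAuthn, the browser embeds the origin it is connected to into
    clientDataJSON, which is covered by the authenticator's signature.
    The relying party rejects assertions created for any other origin."""
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == EXPECTED_ORIGIN

# Legitimate login: the browser was genuinely on the bank's site.
genuine = json.dumps(
    {"type": "webauthn.get", "origin": "https://bank.example"}
).encode()

# Phished login: the victim tapped their key on a look-alike site, so the
# browser embedded the attacker's origin and the assertion is rejected.
phished = json.dumps(
    {"type": "webauthn.get", "origin": "https://bank-login.example"}
).encode()

print(verify_assertion_origin(genuine))  # True
print(verify_assertion_origin(phished))  # False
```

This is why a user can be fully deceived by a pixel-perfect fake site and still be safe: the check happens in software the attacker doesn't control.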
Based on the strengthened attack vectors above, let’s look at how phishing-resistant authentication controls can help us against these sophisticated attacks.
The exciting news is that moving from legacy methods of authentication to a phishing-resistant method is entirely possible for any size business today.
Here are some guidelines on how to move forward with a phishing-resistant method of authentication.
Don’t let perfect be the enemy of great. You will have a few use cases where people may still use a password, and that’s OK! These are typically legacy apps that don’t support any federation or authentication standard, such as mainframe or thick-client apps written 20 years ago. In all likelihood, the number of users on these apps is minimal, and you can use other controls (such as a PAM system) to better secure access to them.