The Battle of the Bots: Safeguarding Identity in the Age of AI

Identity has long been experts’ strongest weapon against threats in cyberspace, but in the age of generative AI, identity is being exploited by hackers as a weakness in the security perimeter. How are they doing it? Artificial identity.

With the help of generative AI, bad actors are leveraging artificial identities to deceive individuals and systems, compromising trust and security. What do these generative AI-powered attacks look like? Well … they look just like you.

Targeted deepfake attacks can mimic a person’s likeness via images, videos, voice audio and even text. Personalized phishing campaigns like these are bolstered by context and by the absence of the tell-tale warning signs most people have been trained to spot. So, what does all of this mean for identity practitioners? Identity strategies need to be stronger and smarter than the AI knocking at our doors; since you don’t bring a knife to a gunfight, we’re using AI to combat these newest threats to security.

The Threat in Action

In 2019, a UK-based energy company became one of the first reported victims of generative AI fraud. The case involved a call to the organization’s CEO from a man whose voice he quickly recognized: his boss, the CEO of the firm’s German parent company, calling with an urgent request. A quarter of a million dollars needed to be sent to a Hungarian supplier right away. Of course, his boss assured him, the money would be reimbursed.

In reality, this was one of the first reported cases of AI-powered “vishing,” or voice phishing, and because that first attack succeeded, the floodgates for targeted generative AI phishing were opened.

In a more recent case, an interview with Zscaler CEO Jay Chaudhry revealed how attackers successfully impersonated him in a 2023 targeted attack against one of the organization’s employees. The employee received voice calls and text messages asking him to purchase gift certificates for some of the organization’s customers, and because the employee had an existing rapport with Chaudhry, the messages didn’t seem out of the ordinary. He purchased 10 certificates before becoming suspicious when the attackers pressed for additional purchases.

People believe what they can see and hear, and attackers know it. They use generative AI to exploit the innate trust people place in their own senses, preying on context and precedent. These attacks are happening in real life right now, and stories like these underscore the urgency of ensuring identity at all costs.

Fighting Back Against AI

Fighting back against AI-enabled attacks means both strengthening our identity solutions with AI and investing in employee education.

Having a strong identity-centric security strategy in place can greatly hinder bad actors. If your door is locked tight, attackers are more likely to try someone else’s. Lock down your organization with AI using:

  • Zero-trust. This approach to identity authentication is made better, faster and smarter by AI. Every service request is verified and authenticated, and AI stands guard for indications that a user may not be legitimate.
  • Proactive policies. Tools like multifactor authentication (MFA) make logging in with stolen credentials much more difficult. If users are who they say they are, logging in with MFA is a breeze; users posing as someone else likely won’t have access to the range of proof MFA asks for. A minimal sketch of this check follows the list.
  • Language models. Attackers use language models to map out an organization’s security perimeter and identify weak points. Defenders can run that same process to find those weaknesses first and close them before hackers can exploit them.
  • Passwordless authentication. Passwords are guessable with AI, but passwordless authentication requires proof only you can provide, making passwordless log-in incredibly difficult to bypass.
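
To make the MFA point concrete, below is a minimal sketch of how a server might verify a time-based one-time password (TOTP), the mechanism behind most authenticator apps. It is illustrative only: it assumes the open-source pyotp library, and the enrollment and secret-storage steps are simplified stand-ins for a production flow.

```python
# Minimal TOTP verification sketch (assumes pyotp: pip install pyotp).
import pyotp

def enroll_user() -> str:
    """Generate a per-user secret at MFA enrollment.
    In production this would be stored encrypted and tied to the user record."""
    return pyotp.random_base32()

def verify_mfa(secret: str, submitted_code: str) -> bool:
    """Check the six-digit code from the user's authenticator app.
    A stolen password alone fails here without the user's enrolled device."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

# Usage: the secret is shared once with the authenticator app (e.g., via QR code);
# every later log-in must present a fresh code derived from it.
secret = enroll_user()
print(verify_mfa(secret, pyotp.TOTP(secret).now()))  # True for a current code
```

The design choice worth noticing: the proof is possession of the enrolled device, not knowledge of a guessable password, which is exactly the property AI-assisted credential attacks struggle to fake.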

Researchers at Stanford University have attributed 88% of organizational breaches to human error. Since humans have long been considered the security perimeter’s weakest link, this is no surprise. Education is the key to mitigating the risk of targeted phishing attacks against your employees. Consider:

  • Implementing a cybersecurity policy.
  • Making cybersecurity a part of your company culture and an ongoing conversation.
  • Investing in a cybersecurity training program and holding periodic training sessions for employees. During these lessons, you can train your staff to:
    • Spot suspicious activity
    • Respond appropriately to suspicious activities/requests
    • Practice device safety
    • Maintain a high level of confidentiality

Remember, education is key to ensuring employees don’t click that link, transfer those funds or buy those gift cards. Never trust, always verify. And always be wary of urgent requests.

AI Ethics

When it comes to AI, the conversation about ethical boundaries can’t be ignored. Responsible use of AI reinforces user trust. Start this conversation in your organization by:

  • Discussing the potential for bias in AI algorithms. Such bias is often implicit, unintentional and the product of narrow training data used irresponsibly. It must always be evaluated, investigated and corrected; without expert oversight, organizations risk deploying discriminatory AI.
  • Advocating for policies that protect, rather than exploit, data privacy.
  • Taking responsibility for the AI you create or employ.
  • Understanding autonomous decisions. When AI makes an autonomous decision, administrators should be able to obtain a transparent understanding of the system’s reasoning process. This goes hand-in-hand with responsible monitoring for bias.

These are just a few of many ethical considerations in an ongoing conversation across the cybersecurity space. When it comes to AI, always do your research: organizations and decision-makers should understand the implications of the technology they deploy.

We can fight back against bad actors equipped with generative AI. Convincing threats are here, but strong AI strategies and education give us the best chance to defend ourselves. As we continue to develop these advanced countermeasures, ethics becomes an essential part of innovation. In this case, AI looks to be the best medicine against AI fraud.

Arun Shrestha

Arun Shrestha has over 20 years of experience building and leading enterprise software and services companies. As CEO, Arun is committed to building a world-class organization whose mission is to help customers build secure, agile and future-proof businesses. He prides himself on partnering with customers to strategize and deploy cutting-edge technology that delivers top business results. Prior to co-founding BeyondID, Arun held executive positions at Oracle, Sun Microsystems, SeeBeyond and, most recently, Okta, which went public in 2017. At Okta, Arun was responsible for delighting customers and for building world-class services and customer success organizations. At Oracle and Sun Microsystems, he led global services and support organizations for systems and software, including Java, SOA and identity management platforms. Arun brings years of experience delivering modern IT solutions related to identity, APIs and cloud to global customers across the Americas, EMEA and APAC regions. He earned his BS in computer engineering and computer science from Graceland University, Iowa.