Using IAM Solutions to Beat Deepfakes and Fraud

AI and ML technologies have made great strides in helping organizations with cybersecurity, as well as with other tasks, such as customer service chatbots.

Cybercriminals have also made great strides in using AI and ML for fraud.

“Today, fraud can happen without stealing someone else’s identity because fraudsters can create ‘synthetic identities’ with fake, personally identifiable information (PII),” explained Rick Song, co-founder and CEO of Persona, in an email interview. And fraudsters are leveraging new tricks, using the latest technologies, that allow them to slip past security systems and do things like open accounts where they rack up untraceable debt, steal Bitcoin holdings without detection, or simply redirect authentic purchases to a new address.

Some increasingly popular fraud tricks using AI and ML include:

  • Deepfakes that mimic live selfies in an attempt to circumvent security systems
  • Creating fake IDs by replicating a single template across a dozen or more accounts (often using celebrity photos and their publicly available data)
  • Mimicking the voice of high-level officials and corporate executives to extort personal information and money
  • Chatbots as phishing tools to gather personal information

“With this pace of evolution, companies are left at risk of holding the bag — they are not only losing money directly through things like loans and fees they can’t recoup and any restitution to impacted customers, but they’re also losing trust and credibility. Fraud costs the global economy over $5 trillion every year, but the reputational costs are hard to quantify,” said Song.

How IAM Tools Can Spot and Prevent High Tech Fraud

Anyone who has had to tell the difference between an AI-generated fraudulent account and a real one knows how difficult it can be. If even experts can falter, less IT-savvy employees and customers will struggle to tell fact from fiction. IAM solutions can help organizations recognize these newer fraudster tricks.

A sound identity verification infrastructure leverages automation to verify and cross-reference multiple identification data points instantly, enabling companies to identify fraud before it happens, Song suggested. Other signals, such as the IP address, email and browser fingerprint, can be used to identify the true owner, allowing companies to determine whether to approve, block or flag the user for manual review before granting account access.
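The approve/block/flag decision described above can be sketched as a simple risk-scoring function. The signal names, weights and thresholds below are illustrative assumptions for the sketch, not any specific vendor's API:

```python
# Hypothetical sketch of a signal-based verification decision.
# Signal names, weights and thresholds are illustrative assumptions.

def score_signals(signals: dict) -> int:
    """Sum risk points for each suspicious signal present."""
    weights = {
        "ip_mismatches_address": 30,    # IP geolocation far from stated address
        "disposable_email": 25,         # throwaway email domain
        "new_browser_fingerprint": 15,  # device never seen for this identity
        "id_data_mismatch": 40,         # document fields disagree with records
    }
    return sum(points for name, points in weights.items() if signals.get(name))

def decide(signals: dict) -> str:
    """Map a combined risk score to approve / review / block."""
    score = score_signals(signals)
    if score >= 60:
        return "block"
    if score >= 25:
        return "flag_for_manual_review"
    return "approve"

print(decide({"disposable_email": True}))  # flag_for_manual_review
print(decide({"ip_mismatches_address": True, "id_data_mismatch": True}))  # block
print(decide({}))  # approve
```

In a real system the weights would be tuned (or learned) per use case, but the shape of the decision, many weak signals combined into one action, is the point.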

“In addition to instantly blocking sophisticated fraud, leveraging automated identity verification systems also eliminates the need for humans, such as contractors or offshore agents, to review sensitive data and fulfill consumers’ data requests, ultimately diminishing the risk of fraud and ensuring data access is shared with only those who need it,” Song added.

Taking a More Holistic Approach

Because there is no one-size-fits-all approach when it comes to detecting fraud (or proving a user's identity), Song said that organizations need to take a holistic approach as bad actors grow more sophisticated and adopt new technologies. Even before the pandemic, most organizations and consumers were shifting everyday activities online, from banking to buying groceries, and each activity needs its own custom verification flow designed for its own user base, regulatory requirements and risk tolerance.

“There’s this idea in the industry that there is a silver bullet to identity verification, and there’s debate about what that silver bullet is, from facial recognition to SSN to a fingerprint,” Song said. “However, I strongly believe you can’t use one single factor to evaluate a person.”

Collecting and combining multiple data points helps to catch instances of identity fraud that point solutions cannot. For example, a fraudster might upload a valid ID, but using it to apply for accounts from multiple different IP locations thousands of miles away on the same day would strongly suggest fraudulent activity.
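The same-ID-from-distant-IPs pattern described above can be sketched as a check over a day's applications. The record fields and the 1,000-mile threshold are assumptions for illustration:

```python
# Illustrative check: flag an ID document used to apply for accounts from
# IP geolocations far apart on the same day. Field names and the
# 1,000-mile threshold are assumptions for this sketch.
from collections import defaultdict
from math import radians, sin, cos, asin, sqrt

def miles_between(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(h))

def suspicious_ids(applications, threshold_miles=1000):
    """Return document IDs used from far-apart IP locations on the same day."""
    by_key = defaultdict(list)
    for app in applications:
        by_key[(app["id_number"], app["date"])].append(app["ip_location"])
    flagged = set()
    for (id_number, _), locations in by_key.items():
        for i, a in enumerate(locations):
            for b in locations[i + 1:]:
                if miles_between(a, b) > threshold_miles:
                    flagged.add(id_number)
    return flagged

apps = [
    {"id_number": "D123", "date": "2024-05-01", "ip_location": (40.7, -74.0)},   # New York
    {"id_number": "D123", "date": "2024-05-01", "ip_location": (34.1, -118.2)},  # Los Angeles
    {"id_number": "D456", "date": "2024-05-01", "ip_location": (41.9, -87.6)},   # Chicago
]
print(suspicious_ids(apps))  # {'D123'}
```

Neither signal alone, a valid ID or an unusual IP, proves fraud; it is the combination that makes the pattern stand out.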

“Businesses need to make it difficult for fraudsters to pass verification with a thorough approach — it is death by a thousand cuts,” said Song. “There must be a robust set of verification factors that are run through trusted databases and checked for authenticity, including personal information, reverse phone, government ID, selfie and knowledge-based authentication.”

Passive signals, such as device signals, location signals and user behavior, provide additional inputs for identity analysis, as do third-party reports covering adverse media, watchlists, phone risk and email. "This multi-faceted approach helps catch both individual bad actors and fraud rings, because it can identify connections between similar fraud activities," said Song. "Instead of playing whack-a-mole, the whole nest can be rooted out at once."
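One way to root out the "whole nest" is to link accounts that share identity signals into clusters. Below is a minimal union-find sketch; the account records and the choice of shared attributes (device fingerprint, phone) are assumptions for illustration:

```python
# Sketch of linking accounts into clusters ("nests") when they share
# identity attributes such as a device fingerprint or phone number.
# Account data and attribute names are illustrative assumptions.
from collections import defaultdict

def cluster_accounts(accounts):
    """Union accounts that share any attribute value; return clusters of size > 1."""
    parent = {a["id"]: a["id"] for a in accounts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    seen = {}  # (attribute, value) -> first account id that used it
    for acct in accounts:
        for attr in ("device_fingerprint", "phone"):
            key = (attr, acct[attr])
            if key in seen:
                union(acct["id"], seen[key])
            else:
                seen[key] = acct["id"]

    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(a["id"])].add(a["id"])
    return [c for c in clusters.values() if len(c) > 1]

accounts = [
    {"id": "A1", "device_fingerprint": "fp-9", "phone": "555-0100"},
    {"id": "A2", "device_fingerprint": "fp-9", "phone": "555-0101"},  # shares device with A1
    {"id": "A3", "device_fingerprint": "fp-3", "phone": "555-0101"},  # shares phone with A2
    {"id": "A4", "device_fingerprint": "fp-7", "phone": "555-0199"},  # no shared signals
]
print(sorted(cluster_accounts(accounts)[0]))  # ['A1', 'A2', 'A3']
```

A single-signal check would catch A1 and A2 but miss A3; chaining connections across different signal types is what surfaces the ring as one cluster.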


Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She's been writing about cybersecurity and technology trends since 2008.
