Online Scams in the Age of AI

As artificial intelligence continues to evolve, so do the threats exploiting it. Cybercriminals are now harnessing AI-powered tools to craft highly sophisticated scams, bypass traditional security measures and target individuals and organizations with unprecedented precision. The FBI has issued warnings about AI-generated text, noting its role in creating convincingly deceptive phishing emails, authentic-looking fake profiles and scam websites that are virtually indistinguishable from their legitimate counterparts. Meanwhile, Deloitte predicts that losses from AI-driven fraud could reach $40 billion by 2027, up from $12.3 billion in 2023. 

With the line between real and fake blurring faster than ever, businesses and consumers must reassess how they approach digital security, both inside and beyond the corporate perimeter. Here’s what to watch for and how to stay protected. 

The New Face of AI-Driven Cybercrime 

  1. Online scams and phishing: Phishing and social engineering have existed for decades and are arguably the best-known instruments of cybercrime. Yet despite their notoriety, fraudsters continue to stay ahead of evolving cybersecurity practices: 84% of users who engage with a phishing email reply to or interact with it within ten minutes of receiving it. Employee-facing phishing attacks are just one avenue for cybercriminals; customer-facing phishing attacks are equally common and can lead to lost revenue and damage to a company’s reputation. 
  2. Impersonation scams: Cybercriminals are increasingly leveraging AI to create fraudulent identities, posing as executives, employees, or even government officials to manipulate victims into divulging sensitive information. According to the Federal Trade Commission (FTC), business email compromise (BEC) scams, in which attackers impersonate high-ranking individuals, led to $2.7 billion in reported losses in 2023. The use of AI-generated voices and realistic email phrasing makes these scams significantly more convincing than traditional phishing attempts. 
  3. Fake businesses: The digital landscape has seen a proliferation of fraudulent businesses built on AI-generated websites and synthetic identities. These fake enterprises deceive consumers into paying for non-existent products or services. In 2024, the U.S. Trade Representative identified 38 online markets and 33 physical markets engaged in substantial trademark counterfeiting and piracy, reflecting the global scale of this issue. 
  4. Counterfeit goods: Luxury brands and consumer goods companies continue to grapple with a surge in counterfeit products sold online. According to the National Crime Prevention Council (NCPC), the global counterfeit market is valued at $2 trillion, highlighting the escalating efforts brands must take to combat fraudulent goods. The increasing sophistication of counterfeit operations, often enhanced by advanced technologies, is making it harder for brands and consumers to differentiate between authentic and fake products. 
  5. Deepfake scams: AI-generated deepfake technology is enabling highly sophisticated fraud schemes, including scams that target financial institutions. In one case reported by the National Council on Aging, cybercriminals used a deepfake video to impersonate a CEO and authorize fraudulent wire transfers. Older adults remain particularly vulnerable, with fraud losses among seniors reaching $3.4 billion in 2023, according to the same report. 
  6. Fake imagery: The rise of AI-generated imagery has led to a new breed of deception in online fraud. Scammers can now fabricate product photos, job listings and even social media profiles to gain consumer trust. A study by Harvard’s Misinformation Review found that Facebook pages utilizing AI-generated images amassed significant followings, with a mean follower count of 146,681 per page, indicating the effectiveness of such deceptive practices in engaging users. 

How AI Is Powering Large-Scale Fraud Operations 

Beyond individual scams, AI has also streamlined large-scale fraud operations. A recent case study from the Center for Long-Term Cybersecurity at UC Berkeley highlights how cybercriminals are automating fraud networks, using AI to create and distribute scam websites that look legitimate. The emergence of AI-powered chatbots further enables these schemes, as fraudsters can engage victims in real time, answering questions and reducing skepticism.  

Perhaps most concerning is the speed at which these attacks can now be executed. According to Unit21, 40% of transactions blocked in 2024 were flagged due to AI-driven fraud techniques. This indicates not only a rise in the volume of attacks but also an increase in their sophistication. Cybercriminals no longer need weeks to orchestrate a scam; AI allows them to launch full-scale operations in a matter of hours. 

Staying Ahead of AI-Enabled Threats 

To combat the escalating threat of AI-driven scams, consider the following strategies: 

  • Education: Companies should prioritize educating employees and consumers about the latest scam tactics. Fraudsters rely on social engineering, and awareness can significantly reduce the likelihood of falling victim to scams. Regular fraud awareness training should reflect the latest AI-driven threats and equip individuals with the knowledge to recognize suspicious activity.
     
  • Collaboration: Addressing AI-powered scams requires coordination across multiple teams. Legal, cybersecurity and social media professionals should work together to create a proactive approach to fraud detection and response. Internal teams play a critical role in identifying threats early and ensuring swift action is taken.
     
  • Prioritization: Not all scams carry the same level of risk. Businesses should first focus on the most damaging threats, such as direct brand impersonations and phishing campaigns, before tackling lesser-known fraud tactics. Having a structured approach to risk management allows organizations to allocate resources efficiently and mitigate the greatest threats first.
     
  • Reducing ROI for Scammers: Fraudsters operate on profitability, so making scams less lucrative is key to deterring attacks. Swift takedown processes, robust fraud detection mechanisms and legal action against persistent offenders can make fraudulent operations less viable. The harder it is for scammers to profit, the less likely they are to target a brand or business. 

The Future of AI and Cybersecurity

The arms race between cybercriminals and security professionals is only intensifying. As AI technology advances, scammers will continue refining their tactics, making it crucial for businesses to evolve alongside these threats. The key to mitigating risk lies in leveraging AI not just for efficiency but for security—turning the very technology fraudsters exploit into a force for protection.

The question is no longer whether AI-driven scams will target your business, but how prepared you are to counter them. The time to act is now.  


Yoav Keren

Yoav has 24 years of experience in financial management, marketing and business development. He is currently a member of the anti-counterfeiting committee at INTA and was formerly a Council Member at ICANN. Yoav was a Senior Advisor to a minister in the Israeli government and was the head of the Technology branch of the Israeli military’s Information Security Department. He holds an MBA from the Kellogg & Recanati business school (Northwestern University & Tel-Aviv University), and a B.A. in Economics and Physics from Tel-Aviv University.
