Sony Removes 75,000 Deepfake Items, Highlighting a Growing Problem
Sony Music has reportedly removed more than 75,000 deepfake songs and other material from its online platforms, marking the latest front in an ongoing battle against false, AI-generated images, videos, and audio that touches everything from national elections to children.
And the problem is only growing as AI technologies improve and make it more difficult for people to spot deepfake material.
“Deepfakes are there and they are really progressing,” Righard Zwienenberg, senior research fellow at cybersecurity firm ESET, told Security Boulevard in an interview last fall. “They’re becoming more and more sophisticated. But if you would put the whole landscape of large language models in artificial intelligence from a few years ago up to now, it’s almost like exponential growth. That is one of the problems and that is really actively being misused.”
According to The Financial Times, some of the material Sony removed mimicked popular artists such as Harry Styles and Beyoncé. The company submitted the information to UK government officials, telling them that the growing problem of deepfake songs causes “direct commercial harm to legitimate recording artists” and that the 75,000-plus items taken down represent only a small portion of the AI-generated content out there.
Sony submitted the information amid reports that the UK government is considering softening restrictions on AI technologies, in hopes that the sheer scale of the deepfake problem will influence lawmakers’ decisions.
An Expanding Challenge
The problem of deepfakes has been percolating for almost a decade, but it has accelerated in recent years with the rapid innovation and adoption of generative AI and the wide availability of free or cheap AI tools that let even inexperienced people create hard-to-detect deepfakes. At the same time, tech vendors are rolling out more tools for detecting and protecting against deepfakes.
Earlier this month, Google reported that scammers were using AI-generated videos of YouTube CEO Neal Mohan in a phishing campaign targeting content creators on the video site’s platform in hopes of installing malware and stealing credentials.
“Many phishers actively target Creators by trying to find ways to impersonate YouTube by exploiting in-platform features to link to malicious content,” the company said in a notice.
There have been other high-profile cases of deepfake content being used maliciously, from AI-generated audio of then-President Biden during the New Hampshire presidential primary to a deepfake video of a Hong Kong company’s executives on a conference call that tricked an employee into transferring $25 million to the bad actors.
Businesses, National Security at Risk
A 2023 report by Northwestern University’s Buffett Institute for Global Affairs said the rise of deepfake content is becoming a national security problem, writing that “in a world rife with misinformation and mistrust, AI provides ever-more sophisticated means of convincing people of the veracity of false information that has the potential to lead to greater political tension, violence or even war.”
Regula Forensics, which provides identity verification and forensic tools, warned in a November 2024 report about the fraud threat that deepfakes pose to businesses. The company found that half of all businesses experienced fraud involving audio and video deepfakes last year, and that 66% of business leaders said deepfakes are a serious threat to their organizations.
In addition, businesses across industries have lost an average of almost $450,000 to deepfakes.
Governments Struggle with Regulation
When it comes to AI-generated content, governments are struggling to walk the line between individuals’ rights and regulation. In the United States, House members are evaluating legislation that would penalize people and organizations that use AI for malicious purposes, including an updated version of the No Fakes Act, which would penalize creators and platforms for unauthorized AI-generated images, videos, and sound.
However, there is pushback from the tech industry and other groups – including the Computer and Communications Industry Association, the Center for Democracy and Technology, and the Electronic Frontier Foundation – concerned that the No Fakes Act would impinge on people’s rights and add another layer of regulation. In a September 2024 letter to the House Subcommittee on Intellectual Property, the 10 groups said the legislation “offers a blunt solution before we understand the problem.”
Congress also is considering the Take It Down Act, which seeks to criminalize deepfake pornography – a growing problem in U.S. high schools – and which is supported by First Lady Melania Trump.
What Can Be Done
This all comes as the technology behind deepfakes gets better. A 2023 survey of 7,000 people by security firm McAfee found that one in four had experienced an AI-generated voice-cloning scam or knew someone who had, and that 70% said they weren’t confident they could spot the difference between a real voice and a cloned one.
Verification and authentication company authID last week released a white paper outlining methods for combating deepfake fraud in verification systems, including combining AI with facial biometric authentication technologies.
“Deepfake fraud is no longer a theoretical risk,” authID Chief Product Officer Erick Soto said in a statement. “It’s a rapidly growing threat to businesses, financial institutions, and digital trust itself.”
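The white paper does not publish code, but the layered idea it describes is straightforward to sketch. The Python snippet below is a minimal, hypothetical illustration of a verification flow that requires both a facial biometric match and an AI-driven liveness/deepfake score before approving a user; the function names, placeholder scores, and thresholds are assumptions for illustration, not authID’s actual API or models.

```python
# Hypothetical sketch of a layered identity-verification check that pairs a
# facial biometric match with a deepfake/liveness score. All functions and
# thresholds below are illustrative stand-ins, not any vendor's real API.

from dataclasses import dataclass


@dataclass
class VerificationResult:
    face_match: float  # similarity to the enrolled biometric, 0..1
    liveness: float    # estimated probability the capture is live, 0..1
    approved: bool


def face_similarity(capture: bytes, enrolled_template: bytes) -> float:
    """Stand-in for a facial biometric matcher (e.g., embedding similarity)."""
    return 0.97  # placeholder; a real system computes this from the images


def liveness_score(capture: bytes) -> float:
    """Stand-in for an AI model scoring replay, injection, or deepfake artifacts."""
    return 0.96  # placeholder; a real system runs a trained detector here


def verify(capture: bytes, enrolled_template: bytes,
           match_threshold: float = 0.90,
           liveness_threshold: float = 0.95) -> VerificationResult:
    match = face_similarity(capture, enrolled_template)
    live = liveness_score(capture)
    # Require BOTH checks to pass: a deepfake video that perfectly matches
    # the enrolled face should still fail at the liveness gate.
    return VerificationResult(match, live,
                              match >= match_threshold and live >= liveness_threshold)


if __name__ == "__main__":
    print(verify(b"selfie-frame", b"enrolled-template"))
```

The key design choice in such a scheme is the conjunction of the two signals: face matching alone verifies who is on screen, while the liveness gate tries to verify that a real person, rather than generated or replayed media, produced the capture.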
ESET’s Zwienenberg said human intelligence remains a useful tool, though he noted that relying on it is becoming more difficult as the technology evolves.
“Is this real or is this fake? Your first impression is usually the right one,” he said. “This is a big problem. It is now that advanced that you cannot hear or see the difference anymore. Basically, human intelligence can be affected. If you think it is not true, apply what you use on computers already. … Be skeptical.”