ZeroFOX has picked up an additional $74 million in funding that will be employed in part to advance the development of artificial intelligence (AI) tools capable of identifying fake digital content, including deepfake videos.
ZeroFOX CEO James Foster said the company has been working closely with Intel to infuse more sophisticated machine learning and deep learning algorithms into its cloud service. ZeroFOX makes use of Intel Xeon Scalable processors along with the Intel Distribution of OpenVINO Toolkit to build applications capable of emulating human vision. Intel Capital led the latest round of funding.
In addition to making available the first commercial deepfake detection AI engine, ZeroFOX provides scalable object detection and optical character recognition (OCR) tools infused with machine learning algorithms, as well as tools that detect business email compromise.
The ZeroFOX platform analyzes millions of pieces of content per day to provide threat intelligence and identify incidents and attacks. In addition, ZeroFOX provides a set of policy-based tools to automate remediation across any number of digital communication channels, including social media sites such as Facebook and communications platforms such as Slack.
Foster says that with the advent of deepfake videos, organizations that already lose millions of dollars to brand hijacking by cybercriminals will suffer even more if consumers can no longer trust the content being surfaced. Political campaigns are especially vulnerable to deepfake videos that could, for example, divert campaign donations. Unless organizations can leverage AI to first identify deepfakes and ultimately thwart them, every form of digital communication will become suspect.
Of course, it’s possible end users will one day rely more on two-factor authentication and other techniques to verify the veracity of content. However, it may be quite some time before two-factor authentication is widely embraced. In the meantime, organizations will find themselves combating all kinds of fraud, ranging from simple business email compromise involving fake invoices to deepfake videos that mimic an entire advertising campaign. The collaboration with Intel is intended to reduce the time required to take down malicious content by removing accounts, domains and applications whenever required.
Foster said it’s not likely the purveyors of fraudulent digital communications will ever be completely defeated. The goal is to reduce the risk that cybercriminals, left unchecked, represent to businesses today. As cybercriminals continue to build multi-billion-dollar empires, they will have more money to invest in emerging techniques such as deepfake videos. However, cybercriminals may not always be the root of the problem; it is just as likely that nation-states will create deepfake videos as part of a larger disinformation strategy.
At this point, no one knows for sure how big a threat deepfake videos represent to the digital economy. However, it’s safe to say that billions, perhaps even trillions, of dollars are at stake. Compared to that risk, the millions of dollars being spent on AI to combat these threats may appear trifling indeed.