DARPA AI Cyber Challenge Part of White House Plan to Harness, Secure AI
It’s obvious that AI can be used by miscreants and defenders alike, so it’s not surprising that the U.S. Defense Advanced Research Projects Agency (DARPA) has launched an AI Cyber Challenge that encourages cybersecurity and AI pros to find ways to automatically detect and fix software flaws as well as protect critical infrastructure.
The White House unveiled the challenge, AIxCC, at this week’s Black Hat conference in Las Vegas. The competition follows two tracks—the Funded Track and the Open Track. Under the Funded Track, potential participants must submit their proposals through a small business solicitation. From those submissions, up to seven small businesses will be selected and receive funding.
Open Track hopefuls must register with DARPA, which has set up a website for the competition. Participants will not receive funding from DARPA. The first order of business will be a semifinal qualifying event with as many as 20 top-scoring teams competing. As many as five will advance to the final competition and receive monetary prizes. The final contest will yield the top three winners, who will also receive monetary rewards for their efforts.
“AIxCC represents a first-of-its-kind collaboration between top AI companies, led by DARPA, to create AI-driven systems to help address one of society’s greatest challenges–cybersecurity,” Perri Adams, DARPA’s AIxCC program manager, said in a release. “In the past decade, we’ve seen the development of promising new AI-enabled capabilities. When used responsibly, we see significant potential for this technology to be applied to key cybersecurity issues. By automatically defending critical software at scale, we can have the greatest impact for cybersecurity across the country and the world.”
“We applaud the administration for its recognition of the crucial role the hacker community can play in identifying, codifying and closing the major security gaps that AI and ML platforms embody, foster or at the least, don’t address,” said Chloe Messdaghi, head of threat research at Protect AI, which debuted the Huntr platform that will pay security researchers who find vulnerabilities in open source software, with an exclusive focus on AI/ML.
Messdaghi noted that “people in security aren’t aware of all of the vulnerabilities inherent in AI/ML or that improper usage can create and amplify [those vulnerabilities].”
The Biden administration said the challenge is “part of a broader commitment” to harnessing AI’s power to address challenges that the U.S. faces while ensuring that it is “developed safely and responsibly.” The White House has already secured commitments from AI companies “to participate in an independent, public evaluation of large language models (LLMs)—consistent with responsible disclosure principles—at DEF CON 2023.”
AIxCC, which will have the Open Source Security Foundation (OpenSSF) as a challenge advisor, will be held at DEF CON 2024.
The Biden administration will also issue “an executive order and will pursue bipartisan legislation to help America lead the way in responsible AI innovation,” the White House said.
“Government funding for research into solving security issues in and with emerging technologies has the potential to help push forward the boundary of our understanding and capabilities in very meaningful ways,” said Thomas Atkinson, managing security consultant at NCC Group, adding that this could prove to be a pivotal moment.
“Hopefully, this funding will help invigorate research in this space and create meaningful innovations,” said Atkinson. “There is definitely some great potential to be had from this initiative, and it’s great to see the U.S. government supplying funding at a potentially pivotal time in our lives.”