Effective AI Regulation Requires Adaptability and Collaboration

The (AI) Fork in the Road

Artificial intelligence (AI) regulation stands at a pivotal juncture. The European Union’s AI Act is emerging as a cornerstone document shaping the trajectory of AI governance, and United States policy deliberations are likely to follow its lead.

Recently, two researchers from the University of Ottawa and McGill University performed a meticulous analysis of the AI Act, unraveling its profound impact on the perception of AI systems and the potential harms they may bring. The authors, Claire Boine and David Rolnick, submitted their paper, General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole, to the WeRobot 2023 conference.

This blog post seeks to examine several of the key ideas presented in the paper, including research cited from HYAS Labs, the research arm of HYAS, and their work on AI-generated malware (specifically, BlackMamba), as well as its more sophisticated – and fully autonomous – cousin, EyeSpy.

We consider the implications not only for proposed policy, but also for the very real fact that policy alone will not solve the rising dilemma of fully autonomous and intelligent malware.

The Influence of the AI Act

At the heart of the researchers’ argument lies the assertion that the AI Act reflects a specific conception of AI systems: it views them as non-autonomous statistical software whose potential harms stem primarily from their datasets. The researchers identify the concept of “intended purpose,” drawn from product safety principles, as the paradigm underpinning this perspective. Unquestionably, this framing significantly influenced the initial version of the AI Act, shaping its provisions and regulatory approach.

Gaps in Regulation for General-Purpose AI Systems (GPAIS)

However, the researchers astutely highlight a substantial gap in the AI Act concerning AI systems that lack an intended purpose, a category that encompasses General-Purpose AI Systems (GPAIS) and foundation models. It is here that the contributions of organizations like HYAS Labs come into sharp focus.

HYAS has played a pivotal role in conducting research and developing strategies to effectively address the unique challenges posed by GPAIS, specifically in the cybersecurity space.

AI-Generated Malware: GPAIS Gone Rogue

BlackMamba, the proof of concept cited in the paper, exploited a large language model to synthesize polymorphic keylogger functionality on the fly, dynamically modifying otherwise benign code at runtime, all without any command-and-control infrastructure to deliver or verify the malicious keylogger functionality.

EyeSpy, the more advanced (and more dangerous) proof of concept from HYAS Labs, is a fully autonomous, AI-synthesized malware that makes informed decisions to conduct cyberattacks and continuously morphs to avoid detection.

HYAS Labs introduced the BlackMamba and EyeSpy proofs of concept both as validation of imminent adversarial cyberwarfare capabilities and as a call to action for the industry as a whole to prepare for a new generation of threats: fully autonomous, dynamic entities capable of reasoning, adapting, and eluding detection. Cybersecurity and cyber defense will need to evolve in kind.

GPAIS and Societal Harms

The paper goes on to underscore the intricate and multifaceted nature of harms caused by GPAIS, which extend beyond individual impact to the collective and societal levels. Notably, societal harms associated with GPAIS include the polarization of society, driven by the proliferation of fake social media accounts and AI-generated content. Moreover, the authors posit overreliance on GPAIS as a threat to society’s critical thinking skills, introducing a long-term risk to societal well-being.

The integration of GPAIS into personal habits and workflows raises concerns about the concentration of power in the hands of a select few companies, prompting discussions on potential monopolistic tendencies.

“They can also include disasters affecting a significant fraction of the population such as the use of AI systems to create malware, biochemical weapons, weapons of mass destruction,” the authors write.

Policy Recommendations and EU Parliament’s Response

In response to the nuanced challenges posed by GPAIS, the EU Parliament has proactively proposed provisions within the AI Act to regulate these complex models. The significance of these proposed measures cannot be overstated.

However, as the field of AI continues to rapidly evolve, there is a pressing need for ongoing discourse and adaptation of regulatory frameworks. To this end, the paper not only emphasizes the importance of the EU Parliament’s response but also offers additional policy recommendations. These recommendations are designed to further refine the AI Act, ensuring its continued relevance in the dynamic landscape of AI technologies.

Looking Forward: The Collaborative Future of AI Regulation

As we traverse the intricate terrain of AI regulation, it becomes abundantly clear that ongoing research and collaborative efforts are indispensable. The work of organizations like HYAS Labs producing research like BlackMamba and EyeSpy, coupled with the academic rigor of institutions like the University of Ottawa and McGill University, contributes significantly to shaping policies that strike a delicate balance between fostering innovation and safeguarding societal well-being.

The evolving dialogue surrounding the AI Act underscores the need for continued discourse, research, and adaptation in the dynamic field of artificial intelligence. The collaborative efforts of researchers, policymakers, and industry players are essential in navigating the regulatory landscape. By fostering an environment of open communication and interdisciplinary collaboration, we can collectively work towards a future where AI technologies are harnessed for the betterment of society while mitigating potential risks and challenges.

The journey towards effective AI regulation is ongoing, and the insights gleaned from research and the proactive stance of organizations like HYAS Labs are integral to steering this course. As we collectively chart the future of AI governance, the lessons learned from the evolution of the EU AI Act serve as a testament to the importance of adaptability, collaboration, and a forward-thinking approach in regulating the transformative power of artificial intelligence.

General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole is essential reading to understand the inherent risks, challenges, and opportunities AI is bringing to our world – whether we are ready for it or not.

Don’t wait to protect your organization against cyber threats. Move forward with HYAS today.

*** This is a Security Bloggers Network syndicated blog from HYAS Blog authored by HYAS. Read the original post at: