SBN

Are We All Ready To Embrace AI?

As artificial intelligence (AI) continues to advance and integrate into various aspect of day to day life, ensuring its safe and secure operation becomes paramount.

Imagine for a moment, if you will, a malicious actor/hacker posing as a candidate for employment where the interview is conducted by AI.

Improperly trained and left unsecured, AI could be prompted into disclosing vulnerabilities and other sensitive information. Minutes later, the company holding the interview is hit with ransomware and brought to it’s knees. This is the new reality where attackers are leveraging AI tools to launch attacks quicker than ever (see Crowdstrike’s latest 2024 Threat Reports with the 2 minute, 7 second record breakout time…)

With AI seemingly taking a role in so many aspect of day to day life, we need to start considering if this also means potential vulnerabilities and dangers are are being introduced as well.

Some organizations are taking note and doing what they can. For example, the “2023-2024 CISA Roadmap for Artificial Intelligence” is a significant step forward in securing AI systems and ensuring their responsible deployment…

However, without comprehensive and enforceable regulations across North America, this roadmap alone is not sufficient to address the growing risks associated with AI technologies.

The CISA Roadmap: A Strong Foundation

CISA’s roadmap sets out a robust strategy for integrating AI into national cybersecurity efforts. It roughly breaks down to:

  • Cyber Defense: Utilizing AI tools to protect cyberspace against emerging threats.
  • Risk Reduction: Supporting the secure adoption of AI in critical infrastructure.
  • Operational Collaboration: Enhancing communication of AI-related threats.
  • Workforce Development: Expanding AI expertise within CISA’s operations​.

Despite these well-intentioned initiatives, the absence of binding regulations leaves a gap that will undermine the roadmap’s effectiveness.

Vision for a Secure AI Future

CISA envisions a future where AI systems bolster national cyber defenses, protect critical infrastructure from malicious uses of AI, and prioritize security in AI product development. This vision underscores the importance of AI not only as a tool for advancement but also as a potential risk that requires meticulous oversight and robust security measures.

Simply put; What happens when the AI tool you trust to defend you, is turned against you?

The Growing Need for Binding Regulations

Comparing North Americas lack of action with the EU AI Act

As usual, the EU is ahead of North America. The EU AI Act represents a comprehensive regulatory framework that categorizes AI applications based on their risk levels and imposes stringent requirements on high-risk AI systems. This includes measures such as:

  • Mandatory Risk Management Frameworks
  • High-Quality Data Governance
  • Transparency and Documentation Requirements
  • Human Oversight and Continuous Monitoring

In contrast, North America completely lacks a unified regulatory approach. Even the recent Executive Order on Safe, Secure, and Trustworthy AI in the US emphasizes voluntary guidelines rather than enforceable standards​. While the NIST AI Risk Management Framework provides valuable guidelines, it also does not carry the weight of law​.

The Patchwork of State and Federal Initiatives

In the US, AI governance is fragmented. Individual states like Illinois have enacted specific AI-related laws, such as the AI Video Interview Act, which mandates transparency and consent in AI-driven hiring processes​ (the door could still be left open to hacker-interviewees). Federally, the FTC has issued warnings against deceptive AI practices, but comprehensive, enforceable regulations are still in development.

Canada’s Directive on Automated Decision-Making

Canada’s approach, through its Directive on Automated Decision-Making, mandates algorithmic impact assessments and transparency for federal AI systems. However, this directive only applies to federal entities and does not cover private sector AI applications​.

The Implications of Inadequate Regulation

Without enforceable standards, the rapidly accelerating adoption of AI will introduce:

  • Security Risks: AI systems may remain vulnerable to attacks such as adversarial machine learning, data poisoning, and model evasion.
  • Ethical Concerns: Issues like bias in AI algorithms and lack of transparency could lead to unfair and discriminatory practices.
  • Operational Gaps: Organizations that do not have any incentive to adopt comprehensive AI risk management practices, could end up ignoring security while chasing profit.

Conclusion

Guidelines such as the 2023-2024 CISA Roadmap for Artificial Intelligence lays a critical foundation for securing AI systems in the US. However, to fully realize its potential, the whole of North America needs a regulatory framework akin to the EU AI Act. Such regulations would be a huge step forward in ensuring that AI technologies are developed, deployed, and managed in a manner that is secure, transparent, and accountable.

*** This is a Security Bloggers Network syndicated blog from Berry Networks authored by David Michael Berry. Read the original post at: https://berry-networks.com/2024/06/15/are-we-all-ready-to-embrace-ai/