White House Proposes a Path to a US AI Bill of Rights

The White House Office of Science and Technology Policy (OSTP) has issued a proposed AI “bill of rights” to codify how artificial intelligence and automated systems should engage with the citizens of the United States. The proposal isn’t a pithy recommendation; rather, it is a well-thought-out presentation designed to engage the AI technology sector, evolve best practices and place consumers’ data protection at the forefront. It also serves notice to industry that the White House has an agenda: Lawmakers will engage with industry and create a reasonable and equitable AI bill of rights.

From the get-go, OSTP outlined the ongoing problems with using machines rather than humans, problems that practitioners have been addressing for years but that, in some instances, have largely been left unchecked. The free-wheeling nature of AI has produced outcomes in which privacy (and, oftentimes, individuals’ physical security) is put at risk as miscreants garner unfettered access to personally identifiable information (PII) and protected health information (PHI). Additionally, OSTP highlighted how information is used in flawed algorithms that serve to discriminate against individuals.

The White House landed on five principles it wishes to see adopted. These should serve as a “guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values.” OSTP goes on to provide “From Principles to Practice,” which it describes as “a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process.”

Five principles of the Blueprint for an AI Bill of Rights:

  • Safe and Effective Systems–In a nutshell, “you should be protected from unsafe or ineffective systems.” This recommendation includes having independent evaluations of system data collection and utility to ensure the systems are safe and effective.
  • Algorithmic Discrimination Protections–Those writing algorithms must ensure that they do not inadvertently or deliberately discriminate and that algorithms are “designed in an equitable way.” The OSTP reminded us that some algorithm designs are in violation of legal protections. (One simple fairness check of this sort is sketched after this list.)
  • Data Privacy–Protection should be built in. Speaking to the consumer, OSTP advocated, “Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used.”
  • Notice and Explanation–Automation is able to mimic human behavior and engagement. The OSTP advocated that entities provide notice that an automated system is being used and explain to consumers how the system makes decisions that may impact them.
  • Human Alternatives, Consideration and Fallback–Your information is your information, and the ability to opt out and engage with an individual (not a bot, AI or automated system) to remedy problems should be the norm, not the exception.
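To make the algorithmic discrimination principle concrete, here is a minimal sketch of one commonly used fairness check—a disparate impact (“80% rule”) ratio computed over automated decisions. The OSTP blueprint does not prescribe any particular metric; the DataFrame, column names and threshold below are illustrative assumptions only.

```python
# Minimal sketch: disparate impact ("80% rule") check on automated decisions.
# Assumes a pandas DataFrame with a binary `approved` outcome column and a
# `group` column for the protected attribute; both names are hypothetical,
# not drawn from the OSTP blueprint.
import pandas as pd


def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's favorable-outcome rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()


if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    ratio = disparate_impact_ratio(decisions, "group", "approved")
    # A ratio below 0.8 is a common (not legally definitive) flag for further review.
    print(f"Disparate impact ratio: {ratio:.2f}")
```

A check like this is only a starting point; a single ratio cannot establish or rule out unlawful discrimination, which is why the blueprint pairs such design-time measures with independent evaluation and legal review.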

CISOs and product managers would be well served to digest and consider the principles and the implementation handbook going forward, as the White House has drawn a line in the sand with regard to AI. It would be folly to ignore it.

Christopher Burgess

Christopher Burgess (@burgessct) is a writer, speaker and commentator on security issues. He is a former senior security advisor to Cisco and served 30+ years with the CIA, which awarded him the Distinguished Career Intelligence Medal upon his retirement. Christopher co-authored the book “Secrets Stolen, Fortunes Lost: Preventing Intellectual Property Theft and Economic Espionage in the 21st Century.” He also founded the non-profit Senior Online Safety.
