AI Bias Validated!
In my BlogInfoSec column “Is A.I. For, or Against, Cybersecurity?” posted on June 18, 2018, I asserted that developers are introducing their personal biases into the design of AI (artificial intelligence) systems. My statement was based on experience, supposition, and intuition. To my surprise, a few days later (June 22, 2018) an Op-Ed article by Joy Buolamwini titled “The Hidden Dangers of Facial Recognition” appeared in The New York Times, describing biases in AI face-recognition programs used by some major companies in their hiring processes. Buolamwini not only agrees with my theory but also provides empirical evidence to support it, demonstrating that dark-skinned faces were clearly disadvantaged by the system, a bias introduced by the prior experience of its designers.
I first came across this type of bias very early in my career when I was consulting to a major credit-card company. The consulting firm for which I was working at the time had developed a “point-scoring program” to evaluate new credit-card applicants. This decision-assistance program contained the usual set of factors, such as length of time in job, type of job or profession, how long at current address, whether home was owned or rented, etc. The types and weightings of these factors came from an analysis of prior account holders who did and didn’t default. We would run the program and a human analyst would then review the results and decide whether or not to accept the application. It was a virtuous circle for some, and a vicious circle for others. Women and minorities did not do as well as Caucasian men. Those of us just out of school, with no credit history to speak of, also did not fare well. Was this prejudice? Possibly, but those who used the method truly believed that it was as neutral and unbiased an approach as any, and they thought that it was working well. Perhaps the saving grace of the approach was the insertion of a human decision-maker, although that could go either way. With the new AI systems, there is not even a fallback to persons of experience.
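To make the mechanism concrete, here is a minimal, hypothetical sketch of how such a point-scoring program might work. The factor names, weights, and acceptance cutoff are all invented for illustration; they are not the actual model used by the credit-card company.

```python
# Hypothetical point-scoring sketch: the weights and cutoff are invented
# for illustration, not the actual model described in the column.

APPLICANT_WEIGHTS = {
    "years_in_job": 12,       # points per year in current job (capped below)
    "years_at_address": 8,    # points per year at current address (capped below)
    "owns_home": 40,          # flat bonus if the applicant owns a home
    "professional_job": 30,   # flat bonus for a "professional" job category
}

ACCEPT_THRESHOLD = 120  # arbitrary cutoff chosen for the example


def score_applicant(applicant: dict) -> int:
    """Compute a weighted point score from an applicant's attributes."""
    score = 0
    score += APPLICANT_WEIGHTS["years_in_job"] * min(applicant.get("years_in_job", 0), 10)
    score += APPLICANT_WEIGHTS["years_at_address"] * min(applicant.get("years_at_address", 0), 10)
    if applicant.get("owns_home", False):
        score += APPLICANT_WEIGHTS["owns_home"]
    if applicant.get("professional_job", False):
        score += APPLICANT_WEIGHTS["professional_job"]
    return score


def recommend(applicant: dict) -> str:
    """Return a recommendation that a human analyst would then review."""
    return "accept" if score_applicant(applicant) >= ACCEPT_THRESHOLD else "refer/decline"


if __name__ == "__main__":
    # A recent graduate with a short job history and no home scores poorly
    # regardless of actual creditworthiness -- the pattern described above.
    new_grad = {"years_in_job": 1, "years_at_address": 1, "owns_home": False}
    print(recommend(new_grad))  # refer/decline
```

Because the weights are fitted to prior account holders, any group underrepresented among past successful borrowers starts at a disadvantage, which is how the vicious circle perpetuates itself.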
Not only may AI systems be biased, but they may not be testable for bias. In her book “Weapons of Math Destruction,” Cathy O’Neil states that “… many poisonous assumptions are camouflaged by math and go largely untested and unquestioned.” But what should we do to ameliorate this situation? In my June 4, 2018 BlogInfoSec column, “Cybersecurity and Safety of AI and Robots,” I recommended developing AI-based testing systems to assure the quality of AI systems. But that raises the question of what biases could have been built into the software-assurance systems themselves. If the answer to untested AI systems, which may be biased, is to build software-assurance AI systems, which may also be biased, will we have made any real progress when such testing systems become available, as they inevitably will? On the basis that something is better than nothing in an area so fraught with risk, smart testing systems do make sense, but only to the extent that we really understand designers’ biases in their approaches to testing. Gotcha!
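As a hint of what even the simplest bias test could look like, here is a sketch (my own illustration, not a method from O’Neil’s book or from my earlier column) that compares a model’s acceptance rates across demographic groups. The records, group labels, and disparity threshold are all assumptions made for the example.

```python
# Minimal bias-audit sketch: compares acceptance rates across groups
# (a "demographic parity" style check). All data, group labels, and
# thresholds are invented for illustration.

from collections import defaultdict

# Hypothetical audit records: (group label, model decision)
decisions = [
    ("group_a", "accept"), ("group_a", "accept"), ("group_a", "decline"),
    ("group_b", "accept"), ("group_b", "decline"), ("group_b", "decline"),
]


def acceptance_rates(records):
    """Return the fraction of 'accept' decisions per group."""
    totals, accepts = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        if decision == "accept":
            accepts[group] += 1
    return {group: accepts[group] / totals[group] for group in totals}


def parity_gap(rates):
    """Largest difference in acceptance rate between any two groups."""
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    rates = acceptance_rates(decisions)
    print(rates)                    # e.g. {'group_a': 0.67, 'group_b': 0.33}
    print(parity_gap(rates) > 0.2)  # flag if the gap exceeds an arbitrary 20-point cutoff
```

Of course, the choice of which groups to compare, which metric to use, and what gap is acceptable are themselves design decisions, which is exactly where the testers’ own biases can creep back in.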