This Week in Security: AI Bias, E-Stalker Apps, and AI Laughing at You

Automating Bias

With the advent of machine learning and artificial intelligence (AI), amazing progress has been made in having computers do more of our work for us. However, offloading that work to computers and algorithms carries a hidden danger: when decision-making power is handed from people to algorithms, the decisions are suddenly assumed to be correct and immune to bias, even though this is far from the truth.

Not only can algorithms dangerously reduce complicated real-world situations to yes/no decisions or single numbers, but overconfidence in an algorithm's accuracy can remove any accountability, along with the willingness to second-guess a computer's decision.

One example from nearly two years ago is a ProPublica report on racial bias in a system used to calculate risk scores for people being processed through the criminal justice system. ProPublica found that the system assigned higher risk scores to African-Americans, and that such risk-scoring tools were widely used, sometimes at every stage of the criminal justice process.
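One concrete way this kind of disparity is measured is by comparing false positive rates between groups: how often people who did not reoffend were nonetheless labeled high risk. The sketch below is purely illustrative; the records and numbers are invented and are not drawn from the ProPublica data, only the calculation is the point.

```python
# Hypothetical illustration of a disparity analysis on a "high risk" label.
# Every record below is made up.

def false_positive_rate(records):
    """Among people who did NOT reoffend, what share were labeled high risk?"""
    did_not_reoffend = [r for r in records if not r["reoffended"]]
    wrongly_flagged = [r for r in did_not_reoffend if r["high_risk"]]
    return len(wrongly_flagged) / len(did_not_reoffend)

group_a = [
    {"high_risk": True,  "reoffended": False},
    {"high_risk": True,  "reoffended": True},
    {"high_risk": False, "reoffended": False},
    {"high_risk": True,  "reoffended": False},
]
group_b = [
    {"high_risk": False, "reoffended": False},
    {"high_risk": True,  "reoffended": True},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": False},
]

# A large gap between these two numbers is one concrete, checkable meaning
# of "the system assigns higher risk scores to one group than another."
print("group A false positive rate:", false_positive_rate(group_a))  # 2/3
print("group B false positive rate:", false_positive_rate(group_b))  # 0/3
```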

Another interesting “gotcha” of AI is adversarial input, an active area of research into the many ways to fool different AI systems. Researchers have 3D printed a turtle designed to fool Google’s Inception v3 image classifier into thinking it’s a rifle, and printed a sticker designed to convince the VGG16 neural network that a toaster is the subject of an image, regardless of what else is present.
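These attacks generally work by nudging each input pixel in the direction that most increases the classifier’s loss. As a rough illustration of the idea, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known technique of this kind; the tiny model and random image are placeholders of my own, not anything from the turtle or sticker work above.

```python
# Minimal FGSM sketch: perturb an input so a classifier misreads it.
import torch
import torch.nn as nn

# Stand-in classifier: a 3x32x32 image in, 10 class scores out.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)   # placeholder input image
true_label = torch.tensor([3])     # its correct class
epsilon = 0.03                     # per-pixel perturbation budget

# Compute the loss gradient with respect to the input pixels.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Step each pixel *up* the loss gradient by at most epsilon: a change that is
# usually invisible to a human but can flip the model's prediction.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```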

Meanwhile, AI is being swiftly applied to everything that can’t get up and run away from a data scientist: analyzing military drone footage, deciding whom to search at the border, various aspects of crime-fighting, and secretive police facial recognition programs. While moving decision-making work towards computers and away from humans may appear to remove human error and bias, the examples above suggest it can just as easily hide or amplify them.
