Tackling Bias In AI

A multidisciplinary panel gathered to discuss AI bias at Avast’s CyberSec&AI Connected virtual conference this month. The event featured leading academics and tech professionals from around the world examining critical issues in AI for privacy and cybersecurity.

The panel session was moderated by venture capitalist Samir Kumar, managing director of Microsoft’s internal venture fund M12, and included:

  • Noel Sharkey, a retired professor at the University of Sheffield (UK) who is actively involved in various AI ventures,
  • Celeste Fralick, the Chief Data Scientist at McAfee and an AI researcher,
  • Sandra Wachter, an associate professor at the University of Oxford (UK) and a legal scholar, and
  • Rajarshi Gupta, a VP at Avast and head of its AI and Network Security practice areas.

The group first explored the nature of AI bias, which can be defined in various ways. First off, said Sharkey, is “algorithmic injustice,” where there are clear violations of human dignity. He offered examples ranging from enhanced airport security, which supposedly picks people at random for additional scrutiny, to predictive policing.

Part of the problem is that bias isn’t a single, simple parameter. According to Fralick, there are two major categories of bias, societal and technological, “and the two feed on each other to set the context among commonly accepted societal mores,” she said during the presentation. “And these mores evolve over time too.”

Evaluating these mores also depends on the legal context, as Wachter reminded the panel. “Look at the progress of affirmative action in the US,” she said. “Now that we have better equity in medical schools, for example, we don’t need it as much.” Wachter said, “The technological biases are easier to fix, such as using a more diverse collection of faces when training facial recognition models. Sometimes, we have to dial down our expectations when it comes to evaluating technology.”
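Wachter’s point about technological bias being the more tractable kind can be made concrete. Below is a minimal sketch, assuming a Python workflow and hypothetical demographic tags (none of this comes from the panel), of how a team might audit a training set for under-represented groups before training a facial recognition model:

```python
from collections import Counter

def audit_group_balance(labels, tolerance=0.5):
    """Flag groups that are under-represented in a training set.

    labels: one demographic tag per training example (hypothetical).
    A group is flagged when its share of the data falls below
    `tolerance` times the share it would have under an even split.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    even_share = 1 / len(counts)
    return {group: count / total
            for group, count in counts.items()
            if count / total < tolerance * even_share}

# Example: a face dataset heavily skewed toward one group.
tags = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print(audit_group_balance(tags))  # {'group_b': 0.15, 'group_c': 0.05}
```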

Of course, part of the problem with defining bias is separating correlation from causation, a distinction that came up several times during the discussion.
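That trap is easy to demonstrate. Here is a toy sketch (illustrative only, not something shown at the conference) in which a hidden confounder makes two variables strongly correlated even though neither causes the other:

```python
import random

random.seed(0)

# A hidden confounder z drives both x and y; x and y never influence
# each other, yet they end up strongly correlated.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

print(f"corr(x, y) = {pearson(x, y):.2f}")  # ~0.80, with no causal link
```

A model trained on such data would happily use x to predict y; only domain knowledge or a causal analysis reveals that intervening on x would change nothing.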

Another issue is the diversity of the team creating AI algorithms. Fralick said, “If you don’t hire diverse people, you get what you pay for.” And diversity isn’t just a matter of gender or race; it spans professional fields and backgrounds as well. Wachter said, “As a lawyer, I think in legal frameworks, but I can’t give technical advice. There is a need for discourse across different backgrounds. We can use the same word in very different ways and have to create a common language to collaborate effectively.”

Another part of understanding AI bias is examining the ethical standards implied in AI output. Kumar asked whether we should hold machines to higher standards than humans. Sharkey said, “Machines aren’t making better decisions than humans, it is more about the impact those decisions have on me personally.” Wachter feels that algorithms have been held to lower standards than humans. But there is another issue: “Algorithms can mask racist and sexist behavior and can exclude certain groups without any obvious effect. It could happen unintentionally and as a result be much more dangerous.” Given her legal background, she suggested this is one place where new regulations could be applied to test for these unintended consequences.
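The panel did not prescribe a specific test, but one long-standing candidate for auditing unintended exclusion is the “four-fifths rule” from US employment-discrimination practice, which compares positive-outcome rates across groups. A minimal sketch, with hypothetical data and decisions:

```python
def disparate_impact_ratio(outcomes, groups, positive=1):
    """Ratio of the lowest group's positive-outcome rate to the highest's.

    Under the "four-fifths rule", ratios below 0.8 are commonly treated
    as evidence of adverse impact.
    """
    rates = {}
    for g in set(groups):
        got = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(1 for o in got if o == positive) / len(got)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model decisions (1 = approved) for two groups.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio, rates = disparate_impact_ratio(outcomes, groups)
print(rates)           # e.g. {'a': 0.8, 'b': 0.2}
print(f"{ratio:.2f}")  # 0.25 -- far below the 0.8 threshold
```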

A final point concerned interpreting the results of AI models. “We must be able to explain these results,” said Gupta. “But the models have improved much faster than the quality of the explanations, especially for deep learning models.” Other panelists agreed, noting the need to clearly define training and test sets to provide the most appropriate context.
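Model-agnostic explanation techniques do exist, even if they lag behind the models themselves. As one illustration (a sketch using scikit-learn and synthetic data, not anything presented at the event), permutation importance asks how much a model’s test-set score drops when each input feature is shuffled:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real problem; keeping training and
# test sets clearly separated is what gives the explanation its context.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on the held-out test set and measure the
# drop in score -- a simple answer to "which inputs drove this result?"
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```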


This panel was part of CyberSec&AI Connected, an annual conference on AI, machine learning, and cybersecurity co-organized by Avast. To learn more about the event and find out how to access presentations from speakers such as Garry Kasparov (Chess Grandmaster and Avast Security Ambassador), visit the event website.

