MY TAKE: Even Google CEO Sundar Pichai agrees that it is imperative to embed ethics into AI

It took a global pandemic and the death of George Floyd to put deep-seated social inequities, especially systemic racism, front and center for intense public debate.

We may or may not be on the cusp of redressing social injustice by reordering our legacy political and economic systems. Only time will tell. Either way, a single technology, artificial intelligence (AI), is destined to profoundly influence which way we go from here.

This is not just my casual observation. Those in power fully recognize how AI can be leveraged to preserve status-quo political and economic systems, with all of their built-in flaws, more or less intact.

Conversely, consumer advocates and diversity experts can see how AI could be utilized to redistribute political power more equitably, and in doing so, recalibrate society – including blunting systemic racism.

In late January, as COVID-19 was beginning to spread, the most powerful people on the planet flew to Davos, Switzerland, to attend the 50th annual World Economic Forum. AI was prominent on their agenda. These heads of state and captains of industry even coined a buzz phrase, “stakeholder capitalism,” to acknowledge the need to take into account the interests of the economically disadvantaged and politically powerless citizens of the world as they bull ahead with commercial and political uses of AI.

“AI is one of the most profound things we’re working on as humanity,” Sundar Pichai, CEO of Alphabet, Google’s parent holding company, told Bloomberg News in Davos. “It’s more profound than fire or electricity.”

Pichai was alluding to how AI has already impacted production and given corporations a glimpse of its upside potential to drive economic growth to dizzying new heights. Google’s CEO also said it was vital to keep bias out of AI implementations, of which there will be plenty going forward: a report from tech consultancy Accenture estimates that AI will double the annual growth rates of 12 of the world’s top economies by 2035.

Let that sink in for a moment. Over the next decade and a half, AI could drastically change the working relationship between humans and machines. The question is, what will this doubling of the world’s economic output look like? Will it simply mean those in power will stay in power, and those being exploited today will continue to be exploited 15 years from now, only more so? Or will AI help pivot us to a new, more equitable way of doing things?

This was at the core of what the world leaders discussed in Davos. In the current news cycle, the debate about where to set the guard rails for AI isn’t getting the attention it deserves. I spoke with several technologists and diversity experts about this. Here’s what everyone should know about what’s at stake when it comes to infusing some level of ethics into AI. 

‘Ethical AI’

Companies and big government agencies know ethical use of AI is important. Harvard’s Berkman Klein Center for Internet & Society has launched a project to catalog all of the AI ethics declarations made by public and commercial organizations. As you’d expect, companies are giving it lip service, and not much more. We know this thanks to a report put out by New York University’s AI Now Institute in December 2019. The study found that the “vast majority” of AI ethics statements say “very little about implementation, accountability, or how such ethics would be measured and enforced in practice.”

At the moment, there’s little to constrain corporations or government agencies from using AI however they want. Law enforcement, for instance, drew criticism for using a controversial facial recognition app, Clearview AI, to surveil citizens turning out to protest the murder of George Floyd. Likewise, the COVID-19 contact tracing app developed by Apple and Google continues to evoke concerns that it will end up leveraging AI to normalize privacy invasion.

“COVID-19 contact tracing and the Black Lives Matter protests exposed the unholy alliance between big tech and government,” says Will Griffin, vice president of ethics and diversity at Austin, Tex.-based Hypergiant Industries, a supplier of AI technologies. “Citizens are realizing these tools are not just being used against African-Americans — and Americans are now demanding that ethical vetting must be applied to these technologies before they can be released into the marketplace.”

In response to elevated civil liberties concerns, IBM, Amazon and Microsoft all issued moratoriums on the use of their respective facial recognition systems, which make heavy use of AI. But Griffin points out that the tech giants only conceded as much as they felt they needed to. “Unlike nuclear arms control agreements, they did not pledge to destroy the data,” he noted. “The big tech companies are just putting their use of AI on hold until policymakers set up a governance structure. It’s an ongoing discussion but greater citizen awareness will drive the agenda.”

Transparency is paramount

As always, the tech giants want to do just enough to stave off government efforts to regulate their commercial use of AI, regulation that would force them to alter their business models. They fear that consumer-friendly AI ethics rules could grow out of new, prescriptive data security and privacy protection laws, such as the EU’s General Data Protection Regulation (GDPR) or the newly minted California Consumer Privacy Act (CCPA).

A key measure of how meaningful any new ethics rules turn out to be, whether they are voluntary industry standards or new laws with enforcement teeth, will be how much transparency comes out the other end.

“Transparency is the most important element for the ethical use of AI; there can’t be any oversight without transparency,” says Dr. Madhu Shashanka, co-founder and chief scientist at Concentric.ai, a San Jose, Calif.-based AI systems supplier. “‘Explainability’ is a close cousin to transparency. An explainable model lets non-technical people inspect how the model works, and that opens the door for those with different backgrounds to give feedback. A hard-to-explain model makes it impossible to get input from a diverse set of stakeholders.”

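To make Shashanka’s point concrete, here is a minimal sketch in Python, using scikit-learn and made-up feature names and data, of what an “explainable” model can offer: a logistic regression exposes one weight per feature, an artifact a non-technical reviewer can inspect and challenge.

```python
# A minimal sketch of "explainability": a linear model whose per-feature
# weights can be read and questioned by non-technical stakeholders.
# Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["years_experience", "certifications", "zip_code_income"]

# Synthetic applicants: 500 rows, 3 features.
X = rng.normal(size=(500, 3))
# Hypothetical historical outcomes, driven mostly by experience.
y = (1.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The "explanation": one weight per feature. If zip_code_income (a likely
# proxy for race or class) carries real weight, any reviewer can spot it
# and push back, no ML expertise required.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:>20}: {weight:+.2f}")
```

A deep neural network trained on the same data might predict just as well, but it offers no comparably simple artifact for a diverse set of stakeholders to interrogate.
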
If that sounds like a diversity argument for ethical use of AI, that’s because that’s just what it is. Commercial deployments of AI, today and looking ahead, can’t help but intersect with the very human behavior of stereotyping. AI can exacerbate, or ameliorate, stereotyping. And that will directly impact whether social injustice endures or not.

“Machine learning systems amplify whatever biases exist in the data used to create them,” Shashanka explains. “No data set is free of bias and that makes mistakes unavoidable. No one assigns malicious intent to an algorithm that misclassifies, say, a picture of a leaf as coming from the wrong tree. Categorizing people incorrectly, on the other hand, is a minefield of potential bad outcomes.”

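The amplification effect is easy to demonstrate. In the toy example below (plain Python, invented numbers), the simplest possible “model” predicts the majority historical outcome for each group; a 60-percent-versus-30-percent disparity in the training labels hardens into a 100-percent-versus-zero disparity in the model’s decisions.

```python
# Toy demonstration that a model can amplify bias in its training data.
# Numbers are invented; the "model" is deliberately simple: it predicts
# the majority historical outcome for each group.
from collections import Counter

# Hypothetical historical hiring records: (group, hired) pairs.
# Group A was hired 60% of the time, group B only 30%.
history = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train_majority_model(records):
    """Return a dict mapping group -> most common historical label."""
    by_group = {}
    for group, hired in records:
        by_group.setdefault(group, Counter())[hired] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_model(history)

# A 60/30 disparity in the data becomes 100/0 in the model's decisions:
# group A is always hired, group B never is.
print(model)  # {'A': True, 'B': False}
```
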
AI tools are designed to put people into categories, Shashanka says. At the moment the tools to do this are comparatively crude. However, as the data collected by Internet of Things systems gets deeper and richer, the algorithms running AI should get smarter and more accurate.

“But we’re still categorizing, and sometimes categorization itself leads to bad outcomes,” Shashanka cautions. “And that question moves us from the realm of science and engineering to society and policy. Transparency around training data, modeling assumptions and design tradeoffs, coupled with an inclusive way to incorporate feedback, could create utilitarian systems without any hype.”

Shedding behavior profiling

One progressive way for businesses to get ahead of science and policy is to reject the advertising-driven, behavior-profiling model that has made the founders of Google, Facebook and other companies following their lead mega-rich. This is a model in which the company uses tech tools to collect and control as much personal data as possible, while locking out consumer control over that data to the extent it can get away with. It leads to predatory business practices that reinforce social injustice.

I spoke to Altaz Valani, director of research at Security Compass, a Toronto-based supplier of advanced application security solutions, about this. Valani argues that transparency is paramount because it helps foster accountability.

“Ethics in AI is predicated on clear transparency and accountability,” Valani told me. “Transparency involves introspective capabilities during creation, use and storage of information. For example, if an AI algorithm causes significant loss in revenue or even death, who is responsible? Can software or robots that make use of AI provide an audit trail of historic decisions that led to particular outcomes? Are our data sets for AI learning neutral? Can humans override the AI algorithms?”

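Valani’s audit-trail question suggests a concrete engineering pattern. Here is a hedged sketch, assuming nothing about any particular vendor’s stack and using illustrative names throughout: wrap every model decision in a logging layer that appends the inputs, the model version and the outcome to an append-only log, so historic decisions can later be reconstructed.

```python
# Sketch of an audit trail for model decisions: every prediction is
# appended, with a timestamp and model version, to a JSON-lines log
# that can be replayed later. Names are illustrative, not a standard API.
import hashlib
import json
import time

AUDIT_LOG = "decisions.jsonl"
MODEL_VERSION = "credit-risk-0.3"  # hypothetical model identifier

def audited_predict(model, features: dict) -> bool:
    decision = model(features)  # the underlying model call
    record = {
        "timestamp": time.time(),
        "model_version": MODEL_VERSION,
        "features": features,  # store only the hash if inputs are sensitive
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
    return decision

# Usage with a stand-in model:
toy_model = lambda f: f["income"] > 50_000
print(audited_predict(toy_model, {"income": 62_000, "age": 41}))
```

An append-only record like this is what turns Valani’s “can it provide an audit trail” from a rhetorical question into a yes, and it is the raw material for the human-override and neutrality checks he lists.
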
Like everything else, some combination of standards and regulations will ultimately dictate what degree of ethics gets infused into AI. Consensus building is already well underway in organizations like the Institute of Electrical and Electronics Engineers (IEEE) and the International Organization for Standardization (ISO). One encouraging sign is that this work is starting on a foundation of ethical ontology, a branch of metaphysics that maps the nature of being, which should help account for as many AI use cases as possible, Valani says.

“Ethical standards have the potential to fundamentally change business operating models and supply chains, as every organization will have to determine their tolerance for change based on balancing revenue and ethical considerations,” Valani says. “Having a clear and understandable statement on the ethical practices of AI algorithms being used by an organization permits the customer to have a voice in choosing who they wish to interact with.”

Of course, we’ve come this way before. When Google created online behavior profiling to monetize our digital footprints, based on what we searched for, and when Facebook launched to do much the same, based on our social postings, there was an opportunity to address ethics. But those moments flew by with no ethics discussions to speak of. The result has been tech, telecom, entertainment and advertising companies squeezing every bit of commercial value they can get out of behavior profiling data without any meaningful transparency. More invasive law enforcement surveillance and the Cambridge Analytica scandal are what we have to show for letting Google and Facebook shape society and policy.

It doesn’t have to continue to be that way as AI gets deployed more widely. COVID-19 tracing and the Black Lives Matter movement have put a bright spotlight on use of AI in surveillance. Sustained citizen outrage could pivot society and policy to a new course.

‘Stakeholder capitalism’

As ethics standards and regulations dictating AI take shape, so will an auditable set of best practices, giving businesses a clear opening to put the kind of ethical AI standards Valani describes into practice.

Corporations competing on the basis of who can have the most transparent AI ethics policy could be a big win-win, says Adam Darrah, director of intelligence at Vigilante, a Phoenix, Ariz.-based supplier of threat intelligence systems. “The great blessings associated with proper and thoughtful AI implementation are endless,” Darrah says.

I asked Darrah to blue-sky possible use cases. Here’s what he envisions: “What if we could work to reduce the negative effects of vehicle traffic? We could reduce pollution, save people money, reduce stress on the individual and on our infrastructure. In short, we could be more efficient. In the financial sector, we could perhaps flag patterns to lessen the likelihood of fraud, we could intervene earlier when flags of financial ruin are imminent, or even ‘train’ the systems to be more secure while making things easier for the end user to use.”

Dr. Shashanka, of Concentric.ai, foresees AI ethics advancing to the point where it actually could begin to make a dent in racism. “We know there are many areas in society where implicit human bias is still a problem; an AI-based job candidate screening tool, for example, could be designed to make hiring less prone to human bias,” Shashanka says. “AI is not a magical black box; it’s just another tool – a powerful one to be sure – but removing the mystique will go a long way to creating more realistic expectations.”

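One way a screening tool could be “designed” to be less prone to bias, to borrow Shashanka’s example, is to make fairness a measured, reported quantity rather than an assumption. A minimal sketch, with illustrative names and a simple demographic-parity check (one of several possible fairness metrics):

```python
# Minimal pre-deployment fairness check: compare selection rates across
# groups (demographic parity). Numbers and tolerance are illustrative.
def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> rate per group."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening output for two applicant groups.
screened = [("A", True)] * 45 + [("A", False)] * 55 \
         + [("B", True)] * 25 + [("B", False)] * 75

gap = parity_gap(screened)
print(f"selection-rate gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance
    print("Fails parity check; review the model before deployment.")
```

A check like this doesn’t remove bias by itself, but publishing the numbers is exactly the kind of transparency Shashanka argues strips AI of its mystique.
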
This gets to the heart of “stakeholder capitalism,” as our political and business leaders discussed in January in Davos. Only time will tell if they, indeed, have the personal convictions and political will to lead us to a world where AI ethics underpins new business models that endure because they distribute wealth and power more equitably. I’ll keep watch.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(This column originally appeared on Avast Blog.)


*** This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido. Read the original post at: https://www.lastwatchdog.com/my-take-even-google-ceo-sundar-pichai-agrees-that-it-is-imperative-to-embed-ethics-into-ai/