IBM Jumps on BLM Bus, Drops Failing Facial Biz

IBM, that well-known paragon of woke from Armonk, has stopped work on facial recognition. Why? Because the technology risks “promoting discrimination and racial injustice.”

This has nothing at all to do with IBM’s abject failure to make any money out of it. No way, not at all, uh-uh. Whatever gave you that idea?

Is “black lives matter” the latest choice of cynical excuse for once-great companies to lay off another bunch of over-40-year-old employees? In today’s SB Blogwatch, we ask, WWTJWD?

Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: Yes, we’re gonna have a wingding.

But Clearview AI Doesn’t

What’s the craic? Lauren Hirsch reports—“IBM gets out of facial recognition business”:

 IBM CEO Arvind Krishna called on Congress Monday to enact reforms to advance racial justice and combat systemic racism. The decision for IBM to get out of the facial recognition business comes amid criticism of the technology, employed by multiple companies, for exhibiting racial and gender bias.

IBM decided to shut down its facial recognition products and announce its decision as the death of George Floyd brought the topic of police reform and racial inequity into the forefront of the national conversation, [and that it] did not generate significant revenue for the company … a person familiar with the situation told [me]. The decision was both a business and an ethical one, [they] said.

Uh huh. Ina Fried chickens out—“Why it matters”:

 Facial recognition software is controversial for a number of reasons, including the potential for human rights violations as well as evidence that the technology is less accurate in identifying people of color. … IBM said that AI, for example, has a role to play in law enforcement, but should be thoroughly vetted to make sure it doesn’t contain bias.

An IBM representative [said] the decisions were made over a period of months and have been communicated with customers, though this is the first public mention of the decision. … The company is also calling for stricter federal laws on police misconduct.

Go on. Arvind Krishna speaks—with the help of a few PR minions:

 In September 1953 … Thomas J. Watson, Jr., then president of IBM [refused] to enforce Jim Crow laws at IBM facilities. Yet nearly seven decades later, the horrible and tragic deaths of George Floyd, Ahmaud Arbery, Breonna Taylor and too many others remind us that the fight against racism is as urgent as ever.

Technology can increase transparency and help police protect communities but must not promote discrimination or racial injustice. … IBM firmly opposes … uses of any technology … for mass surveillance, racial profiling, [or] violations of basic human rights and freedoms.

Vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported. … National policy also should encourage and advance uses of technology … such as body cameras and modern data analytics techniques.

Are we alleging an allegation? No, but Kyogreex is:

 Translation: … IBM can’t compete in this market, so they’re trying to soften any market impact by making this about how virtuous they are.

Too strong? Pascal Monett is all, like, “Good on IBM”:

 I will applaud any company that faces the reality that facial recog simply does not work in public areas. The error rate is way too high.

So yeah, let’s say that IBM is throwing in the towel at the opportune moment, I don’t care. On this, the CEO is right and has made the right call.

And Shompol identifies a very topical problem:

 Wide use of masks made face recognition inoperable. IBM got out of the development of what is not going to be a viable product in the near future, perhaps ever.

What’s the name of that other firm working on law-enforcement facial recognition? Corinne Reichert reminds us—“Clearview AI facial recognition could be used on protesters”:

 Sen. Edward Markey has raised concerns that police and law enforcement agencies have access to controversial facial recognition app Clearview AI in cities where people are protesting the killing of George Floyd. [He] said Tuesday the technology could be used to identify and arrest protesters.

Clearview AI identifies people by comparing photos to a database of images scraped from social media and other sites. … Google, YouTube, Microsoft and Twitter have sent cease-and-desist letters to Clearview AI, and the company is also facing multiple lawsuits.

Markey … has previously hammered Clearview AI over its sales to foreign governments, use by domestic law enforcement, and use in the COVID-19 pandemic. [He] is now asking the company for a list of law enforcement agencies that have signed new contracts since May 25.

Clearview AI CEO and co-founder Hoan Ton-That … said he will respond. … “Clearview AI’s technology is intended only for after-the-crime investigations, and not as a surveillance tool relating to protests or under any other circumstances.”

Wait, what? A slightly sweary jenningsthecat (no relation) waxes excoriating:

 And Facebook and Twitter aren’t “intended” for propagating racism and misogyny. And guns aren’t “intended” for criminal use.

Why do people bother saying this ****? If you’re not taking substantive steps to actively prevent your product or service from being used in a way you didn’t “intend”, then just STFU.

Or better yet, be honest and say “we’re in it for the money.”

The junior U.S. Senator from Massachusetts, Edward J. Markey (D), pens an actual letter—or, at least, a PDF on his website:

 Dear Mr. Ton-That: … I have previously written to you about law enforcement’s use of your technology, expressing my fear that it could infringe on Americans’ civil liberties … but your responses failed to allay my concerns. … These concerns do not exist purely in the abstract.

As demonstrators across the country exercise their First Amendment rights by protesting racial injustice, it is important that law enforcement does not use technological tools to stifle free speech or endanger the public. … The prospect of such omnipresent surveillance also runs the risk of deterring Americans from speaking out against injustice for fear of being permanently included in law enforcement databases.

Your company has not been adequately transparent about several issues. … In your May 15, 2020 response letter, you did not commit to submitting Clearview AI’s technology for an independent assessment of accuracy and bias … including testing for error rates for true negatives, false matches, and people of color, and publishing the results of this assessment publicly. Given the concerns raised by civil liberties experts that false positives could lead to innocent protesters (especially women and people of color) being arrested or confronted by police, will you now commit to submitting Clearview AI to such an assessment?

I urge you to take every step necessary to ensure that your technology will not force Americans to choose between sacrificing their rights to privacy or remaining silent in the face of injustice. Thank you for your continued attention to these important matters.

But where do I know that firm from? Your Humble Blogwatcher threw together this word salad back in February—“Clearview, a Startup Probably Holding Your Image, Gets Hacked”:

 Clearview AI sells facial recognition services based on the 3 billion images of us that it claims to hold. Predictably, this young, agile startup has been hacked in an embarrassing data breach.

The company lost control of its customer database, it warns its customers. And—wow—isn’t this a convenient time to announce the breach? … Right in the middle of the RSA Conference.

But Kohath has used up all three cliché wishes:

 The facial recognition genie isn’t going back in the bottle. The cat isn’t going back in the bag. The secret is out and it’s not going back to being secret.

It will probably be used whenever someone wants to recognize faces. If not Clearview, then some other facial recognition.

If you’re on camera doing something, and your face (or tattoos) are visible, and people want to know who did it, they’ll probably find out it was you. If you burned down someone’s home or business, I hope they do.

Meanwhile, given the rumors about how easy it is to get a “free trial” of Clearview AI, dalrympm has an idea:

 How about running some photos of the unidentified riot police through it? Seems like this thing should cut both ways.

And Finally:

Max Sansalone and friends’ legit cover

Hat tip: Gareth Branwyn

Previously in And Finally

You have been reading SB Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi or [email protected]. Ask your doctor before reading. Your mileage may vary. E&OE. 30.

Image sauce: IBM Research Zurich (cc:by-nd)


Richi Jennings

Richi Jennings is a foolish independent industry analyst, editor, and content strategist. A former developer and marketer, he’s also written or edited for Computerworld, Microsoft, Cisco, Micro Focus, HashiCorp, Ferris Research, Osterman Research, Orthogonal Thinking, Native Trust, Elgan Media, Petri, Cyren, Agari, Webroot, HP, HPE, NetApp, and Forbes. Bizarrely, his ridiculous work has even won awards from the American Society of Business Publication Editors, ABM/Jesse H. Neal, and B2B Magazine.
