Who Is Making Sure Robots Aren't Racist?
03/17/2021

Earlier: NYT: The Racist Robot Crisis Is A Billion Dollar Opportunity For Wokesters

In the New York Times’ Not-So-Great Reads section, Cade Metz, the Scott Alexander doxing guy, wields humanity’s greatest weapon in its war with artificially intelligent racist robots: natural stupidity.

Who Is Making Sure the A.I. Machines Aren’t Racist?

When Google forced out two well-known artificial intelligence experts, a long-simmering research controversy burst into the open.

“Your life starts getting worse when you start advocating for underrepresented people,” the researcher Timnit Gebru said before losing her job at Google.

By Cade Metz, March 15, 2021

Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men.

Whitemen.

More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

I’ve got a great idea: People of Intersectionality should form a tech start-up to invent anti-racist artificial intelligence. They’ll be billionaires!

… Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google.

She teamed with Margaret Mitchell, who was building a group inside Google dedicated to “ethical A.I.”

How hard is the subject matter of “ethical A.I.”? I’m largely baffled by real A.I., and at 62 I will likely never have much of a clue about what A.I. guys do on the job. But I have a hunch I could get up to speed in the Ethical A.I. business in about a month.

Dr. Mitchell had previously worked in the research lab at Microsoft. She had grabbed attention when she told Bloomberg News in 2016 that A.I. suffered from a “sea of dudes” problem. She estimated that she had worked with hundreds of men over the previous five years and about 10 women.

Their work was hailed as groundbreaking. The nascent A.I. industry, it had become clear, needed minders and people with different perspectives.

Minders?

Then there is a long section in the NYT about how facial recognition systems have a harder time accurately recognizing blacks than whites.

I guess Metz is calling for artificial intelligence companies to hire lots of blacks to make facial recognition work better, so that A.I. can use security camera footage to identify more black criminals and put more black men in jail.

A commenter points to this comment on the NYT article:

Professor, Menlo Park

Let me explain what the author of this piece probably isn’t privy to, and no one will have the courage to tell him unless it’s anonymous. The field of A.I. ethics, while well-intentioned on the surface, is really dominated by the following type of people. Getting into tech, and especially into A.I., is a modern gold rush. Everybody wants a piece of the pie. The catch? You have to be quite smart, actually. Like, very, very nerdy. So, what about people who are not smart? Mediocre, or worse? Enter, A.I. ethics. Almost every student I know who is not smart enough to make money in tech in the usual way, wants to go into A.I. ethics. It’s like a team of hustlers, wanting to make money off of other people’s hard work. The companies want to improve their image by hiring such people, but they will realize, as Google is doing now, that putting mediocre people in prominent positions is not a good idea. By the way, I am not white, nor from a privileged upbringing.

I could do Ethical A.I.! (If not for my racial, gender, ethnic and ethical liabilities.)

In reality, there is a potential problem with race in A.I., as Jonatan Pallesen explains:

[Embedded tweet unavailable: https://twitter.com/jonatanpallesen/status/1371717853410381830]

As I pointed out in 2018, we want to both reduce crime and do justice. But there can be a trade-off. For example, when sentencing a criminal, judges often try to take into account both what the defendant did and what he might do. So, if a convict has a lot of individual traits (such as membership in a criminal gang) that predict he might do something bad again when he is let out of prison, the judge might sentence the gang member to a longer term than somebody without those negative predictive traits who committed the same crime.

But what should you do when the only significant difference between two convicts is race?

It’s a little like racial profiling.

The fact is that all else being equal, a black who commits a particular crime is more likely to do it again than a nonblack who commits the same crime. But is it fair to sentence an individual black to a longer prison term than a nonblack due to the statistical tendency of his race?
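To put rough numbers on that trade-off, here is a minimal sketch in Python, using entirely made-up base rates for illustration (none of the figures below are real statistics). It shows the well-known bind: a risk score that is equally well calibrated for two groups with different base rates will, at any single shared threshold, produce a higher false positive rate for the higher-base-rate group.

```python
# A minimal sketch of the calibration vs. equal-error-rate trade-off.
# The base rates and the 0.5 threshold are hypothetical, chosen only
# to make the mechanism visible.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reoffense base rates for two groups (not real data).
base_rate = {"A": 0.30, "B": 0.50}

n = 100_000
groups = []
for group, p in base_rate.items():
    # Each individual's true risk varies around the group base rate,
    # so the score below is perfectly calibrated by construction.
    risk = np.clip(rng.normal(p, 0.15, n), 0.01, 0.99)
    reoffends = rng.random(n) < risk
    groups.append((group, risk, reoffends))

# A judge (or algorithm) detains everyone above one shared threshold.
threshold = 0.5
for group, risk, reoffends in groups:
    detained = risk >= threshold
    # False positive rate: detainees who would NOT have reoffended.
    fpr = (detained & ~reoffends).sum() / (~reoffends).sum()
    print(f"group {group}: detained {detained.mean():.1%}, "
          f"false positive rate {fpr:.1%}")
```

Same score, same threshold, yet the higher-base-rate group ends up with the higher false positive rate; equalizing the error rates would require different thresholds per group, i.e., explicitly using race.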

These are tough questions.

You can tell a judge to ignore race in making his decision. He may or may not be able to do that, but he can try.

But with recent artificial intelligence systems, which research the question for themselves and hand back a black-box model, it’s hard to do even that.

Even if you hardcode the rule Thou Shalt Not Consider Race, the AI system might go find correlates for race, such as the defendant drinks grape soda and smokes menthols.

But the AI system isn’t a Bad Person or a Nice Person, it’s not a person, it just burrows away until it finds something that works.

Why haven’t the smartboys of Silicon Valley found a way around this problem? I suspect that they are hamstrung because the problem is only recognizable to us Bad People who recognize that race itself is a factor in predicting criminal behavior tendency. A huge number of very smart people assume that, of course, if only you adjust for this and that using huge amounts of computing power, of course these skin-deep racial differences must be accounted for without using race.

And even if you know the truth, do you dare bring it up at work with your colleagues? If you do, will you wind up fired like the two Georgetown Law professors?

[Comment at Unz.com]
