Artificial Intelligence—The Robot Becky Menace
07/20/2018

https://www.bleedingcool.com/wp-content/uploads//2013/01/sean-young-replicant-rachel-blade-runner-headshot.jpg

As white people ever more often get hammered in the national media — with potential lifelong effects on their employability — for a single Type I (false alarm) error involving calling the cops on black people who turn out to be law-abiding, it’s only natural to look to Artificial Intelligence to get you out from under the vast amount of hatred hunting for Evil White People. “Hey, it wasn’t me who called the cops, it was the AI system!”

But what if the AI crime-reporting robots turn out to be Beckies and Permit Patties too?

Not surprisingly, the conventional wisdom increasingly believes Artificial Intelligence needs a dose of Artificial Stupidity to keep it from being as racist and sexist as Natural Intelligence. Otherwise, the Robot Permit Patties will run amok, says Nature:

AI can be sexist and racist — it’s time to make it fair

18 JULY 2018

Computer scientists must identify sources of bias, de-bias training data and develop artificial-intelligence algorithms that are robust to skews in the data, argue James Zou and Londa Schiebinger. …

Biased decision-making is hardly unique to AI, but as many researchers have noted, the growing scope of AI makes it particularly important to address. Indeed, the ubiquitous nature of the problem means that we need systematic solutions. Here we map out several possible strategies.

Most of these are pretty sensible for applications not concerned with important real-time tasks such as crime prevention, but there is no thought given to the dangers of Type II (missed detection) errors in public safety systems.

… As much as possible, data curators should provide the precise definition of descriptors tied to the data. For instance, in the case of criminal-justice data, appreciating the type of ‘crime’ that a model has been trained on will clarify how that model should be applied and interpreted. …

Lastly, computer scientists should strive to develop algorithms that are more robust to human biases in the data.

Various approaches are being pursued. One involves incorporating constraints and essentially nudging the machine-learning model to ensure that it achieves equitable performance across different subpopulations and between similar individuals.

Hey, what could be wrong with “essentially nudging” the model? It’s just a nudge. It’s not like we’re cheating with the data, we’re just nudging it in the direction we want it to go.
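For the technically inclined, here is roughly what that “nudging” amounts to. This is a toy sketch of my own, not the Nature authors’ actual method: synthetic data, an arbitrary penalty weight lam, plain logistic regression. The usual log-loss gets an extra penalty on the gap in average predicted score between two groups, so the optimizer is pushed toward closing that gap whether or not the underlying rates actually differ.

```python
# Toy sketch of "nudging" a model toward equal average scores across groups.
# Everything here (data, lam, the demographic-parity penalty) is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, size=n)                       # sensitive attribute, 0/1
y = (X[:, 0] + 0.5 * group + rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lam = 5.0        # fairness penalty weight: the size of the "nudge"
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n                             # log-loss gradient
    gap = p[group == 1].mean() - p[group == 0].mean()    # score gap between groups
    s = p * (1.0 - p)
    dgap = (X[group == 1] * s[group == 1][:, None]).mean(axis=0) \
         - (X[group == 0] * s[group == 0][:, None]).mean(axis=0)
    grad += lam * 2.0 * gap * dgap                       # gradient of gap**2 penalty
    w -= lr * grad

p = sigmoid(X @ w)
print("mean score, group 0:", p[group == 0].mean())
print("mean score, group 1:", p[group == 1].mean())
```

Crank lam up and the two group means converge regardless of what the labels say; the “nudge” is just a knob.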

A related approach involves changing the learning algorithm to reduce its dependence on sensitive attributes, such as ethnicity, gender, income — and any information that is correlated with those characteristics.

This is the Blue Whale in the bathtub. For, say, teenage males, the difference in criminal propensity between African-Americans and Asian-Americans is between one and two orders of magnitude. If you reduce AI’s dependence on “ethnicity, gender, income,” you are pretty much dead in the water when it comes to public safety, especially if “age” gets added to the list as another “sensitive attribute.” Everything else, such as measuring the body language of the individuals or their vocabularies, will have massive Disparate Impact problems.
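For concreteness, here is a toy illustration (mine, with made-up data, not the paper’s algorithm) of what “reducing dependence” on a sensitive attribute and the information correlated with it tends to look like in practice: regress every other feature on the attribute and keep only the residuals, which strips out exactly the predictive signal the attribute was carrying.

```python
# Toy sketch: remove a sensitive attribute AND its linear correlates by
# replacing each feature with its residual after regressing on the attribute.
# Data and feature names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
sensitive = rng.integers(0, 2, size=n).astype(float)     # e.g. group membership
X = np.column_stack([
    1.5 * sensitive + rng.normal(size=n),                 # feature correlated with it
    rng.normal(size=n),                                    # unrelated feature
])

def residualize(X, s):
    """Remove the component of each column of X linearly explainable by s."""
    S = np.column_stack([np.ones_like(s), s])             # intercept + attribute
    beta, *_ = np.linalg.lstsq(S, X, rcond=None)
    return X - S @ beta

X_fair = residualize(X, sensitive)
print("correlation with group, before:", float(np.corrcoef(X[:, 0], sensitive)[0, 1]))
print("correlation with group, after: ", float(np.corrcoef(X_fair[:, 0], sensitive)[0, 1]))
```

The before/after correlations make the point: once the correlated component is scrubbed, whatever predictive power ran through that channel is gone with it, which is the “dead in the water” complaint above in miniature.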

Such nascent de-biasing approaches are promising, but they need to be refined and evaluated in the real world.

An open challenge with these types of solutions, however, is that ethnicity, gender and other relevant information need to be accurately recorded.

This is obfuscation intended to confuse readers. Say your AI visual recognition system for crime prevention occasionally confuses African-American youths with dark-skinned Dravidian-American youths. Making its ethnicity recognition module more accurate would just make the system more biased against African-Americans, not less.

Unless the appropriate categories are captured, it’s difficult to know what constraints to impose on the model, or what corrections to make. The approaches also require algorithm designers to decide a priori what types of biases they want to avoid.

The problem is that when it comes to crime, reality is biased.

… As computer scientists, ethicists, social scientists and others strive to improve the fairness of data and of AI, all of us need to think about appropriate notions of fairness. Should the data be representative of the world as it is, or of a world that many would aspire to?

For example, as Good People we aspire to a world in which black youths get arrested for street crime no more often than Asian youths. If a human Becky or two or three gets mugged or raped because of our noble aspirations, well, that’s just a price the Beckies will have to pay for our moral superiority.

[Comment at Unz.com]
