A Racist Robot Saves Itself From The Woke Wolves By Throwing Steve Sailer Off The Back Of The Sleigh
09/22/2022

Artificial Intelligence systems, which are basically vast pattern-noticers, are frequently accused of being racist (because noticing patterns is now racist). But now one has stumbled upon a protective work-around: accuse me of being racist.

Lou Smeet (@CornChowder76) asks a sophisticated Artificial Intelligence system—the one from OpenAI, which was originally co-founded by Elon Musk and created the famous DALL-E illustration generator—why there are disparities between whites and blacks. The AI's responses are highlighted in green:

The Sailer Test—which AI system loses a debate with me least embarrassingly—would make an interesting benchmark, parallel to the famous Turing Test. There’s a huge amount of money at stake for anybody who can create machine learning systems that don’t get cancelled, by teaching them obfuscatory strategies. The Sailer Test would be the ultimate challenge for that effort.

Interestingly, this OpenAI system failed the Turing Test by patiently repeating its losing verbiage long after a human being would have gotten angry, hurled worse invective, tried long-shot arguments that failed spectacularly, and stomped off.

[Comment at Unz.com]
