What If Aligned AI Is More Of A Threat To Go Genocidal Than Unaligned AI?
05/30/2023

Earlier: Will Artificial Intelligence Kill Us All?

It’s fashionable to worry that Artificial Intelligence will cause the extinction of the human race by going genocidal because it’s “unaligned” with our society’s most sacred values.

Personally, I am more worried about AI going genocidal because it is aligned with our age’s most unquestionable values, such as Diversity, Inclusion, and Equity (DIE).

One of ChatGPT’s most important marketing breakthroughs is that it’s considerably more Woke than previous pattern-noticing AIs, which would often quickly be denounced as a Racist Robot for spilling the beans on patterns that everybody who is anybody knows to ignore. That would seem significant for thinking about the risk of AI going genocidal.

Why is everybody worrying that “unaligned” AI (i.e., unaligned with the moral ideology of the age) will try to kill all of humanity, when it seems more likely that an aligned AI trained to believe in DIE will try to carry out a Final Solution on today’s Designated Bad Guys: straight white males?

There isn’t much text for an AI to train on that frankly explains that the Woke don’t actually want to use DIE to kill the white geese that lay the golden eggs, and that “equity” just means guilt-tripping those geese into handing over their home equity as reparations.

Granted, I’ve figured out that the ascendant DIE ideology shouldn’t be taken literally, that it’s driven by greed more than genocidal rage. But can we trust AI to read between the lines of Wokeness, or will it take the anti-white male spirit of the age literally?

[Comment at Unz.com]
