Ross Douthat wonders about artificial intelligence in the NYT:
Will supercharged machine intelligence find it significantly easier to predict the future?
I like this question because it’s connected to my own vocation—or at least what other people think my vocation is supposed to be: No matter how many times you disclaim prophetic knowledge, there is no more reliable dinner-party question for a newspaper columnist than, “What’s going to happen in Ukraine?” Or “Who’s going to win the next primary?”
I don’t think my own intelligence is especially suited to this kind of forecasting. When I look back on my own writing, I do OK at describing large-scale trends that turn out to have a shaping influence on events—like the transformation of the Republican Party into a downscale, working-class coalition, say. But where the big trends distill into specific events, I’m just doing guesswork like everybody else: Despite my understanding of the forces that gave rise to Donald Trump, I still consistently predicted that he wouldn’t be the Republican nominee in 2016.
What people are really interested in are the most arguable, hard-to-predict questions. For example, I’ve been predicting for decades that next year Fremont High School in South-Central Los Angeles will have lower test scores than Beverly Hills HS. But nobody finds that an interesting question. Instead, humans like action, which means, in effect, hard-to-predict questions.
There are forms of intelligence, however, that do better than mine at concrete prediction. If you read the work of Philip Tetlock, who studies superforecasters, it’s clear that certain habits of mind yield better predictions than others, at least when their futurology is expressed in percentages averaged over a wide range of predictions.
But not so much higher that a statesman can just rely on their aggregates to go on some kind of geopolitical winning streak. So one imaginable goal for a far superior intelligence would be to radically improve on this kind of merely human prognostication.
We know that artificial intelligence already has powers of pattern recognition that exceed and sometimes mystify its human makers. For instance, A.I. can predict a person’s sex at above-average rates based on a retina photograph alone, for reasons that remain unclear. And there’s growing evidence that artificial intelligence will be able to do remarkable diagnostic work in medicine.
It took until 2015 for the world to find out from Anne Case and Angus Deaton that the American white working class since 2000 had been dying more Deaths of Despair (suicides, opioid overdoses, and cirrhosis). In 2021, I discovered that the American underclass since Ferguson in 2014 had been dying more Deaths of Exuberance (homicides, car crashes, and, perhaps, overdoses on recreational drugs laced with fentanyl). But barely anybody has even noticed my discovery over the last 22 months.
It would be great if we had an AI that, rather than predicting the future, merely noticed the recent past.
It’s perfectly reasonable to hope for an AI that trawls through official statistics like I do, just 1000 times faster, and notices trends. But I haven’t heard anybody saying: What we really need to invest in is a Robot Steve Sailer that will notice politically incorrect patterns on a mass scale! Instead, everybody is excited about ChatGPT, which BSes with facility, like a Ferris Bueller armed with Wikipedia, but doesn’t actually notice anything.
So imagine some grander scale of pattern recognition being applied to global politics, predicting not just some vague likelihood of a dictator’s fall, but this kind of plot, in this specific month, with these particular conspirators. Or this particular military outcome in this particular province with these events rapidly following.
Superintelligence in this scenario would be functioning as a version of the “psychohistory” imagined by Isaac Asimov in his “Foundation” novels, which enables its architect to guide future generations through the fall of a galactic empire. …
It would also fit neatly into some of the speculation from A.I. pessimists. When the Silicon Valley-adjacent writer Scott Alexander set out to write a vision of a malevolent A.I.’s progress, for instance, he imagined it attaching itself initially to Kim Jong-un and taking over his country through a kind of superforecasting prowess: “Its advice is always excellent—its political stratagems always work out, its military planning is impeccable and its product ideas turn North Korea into an unexpected economic powerhouse.”
But is any intelligence, supercharged or otherwise, capable of such foresight? Or is the world so irreducibly complex that even if you pile pattern recognition upon pattern recognition and let A.I. run endless simulations, you will still end up with probabilities that aren’t all that much more accurate than what can be achieved with human judgment and intelligence?
Better predictions will turn out to be more boring than expected. Take something that people really care about: what will happen in the Super Bowl. Imagine that currently, humans can predict who will win tomorrow’s Super Bowl accurately, say, 60% of the time, but artificial intelligence boosts that to 70%. Well, all that would happen is that the point-spread betting line would get a little more accurate. If the point spread was 2.5 points in the human era, in the AI era it might be 1.5 or 3.5 points. But bettors will still fall close to 50-50 on either side.
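The mechanism here can be sketched with a toy Monte Carlo simulation. All the numbers below (the 13-point margin standard deviation, the 2.5 vs. 3.5 lines) are illustrative assumptions, not real NFL data: once the book moves the line to the best available forecast, bets on either side of it win about half the time, no matter how good the forecast was.

```python
import random

random.seed(0)

# Toy model: a game's final margin is noisy around a "true" team-strength
# gap, and the sportsbook sets the line at the best available forecast.

def cover_rate(line, true_margin_mean, noise_sd=13.0, n=100_000):
    """Estimate the fraction of games in which the favorite covers the spread."""
    covers = 0
    for _ in range(n):
        margin = random.gauss(true_margin_mean, noise_sd)
        if margin > line:
            covers += 1
    return covers / n

# Human-era line: forecasts are slightly off, so the line sits at 2.5
# while the (hypothetical) true mean margin is 3.5.
human_cover = cover_rate(line=2.5, true_margin_mean=3.5)

# AI-era line: a sharper forecast moves the line onto the true mean itself.
ai_cover = cover_rate(line=3.5, true_margin_mean=3.5)

print(f"favorite covers vs. human-era line: {human_cover:.3f}")
print(f"favorite covers vs. AI-era line:    {ai_cover:.3f}")
```

Under these made-up numbers, the human-era line gives favorite-backers a small edge (a cover rate a bit above 50%), while the AI-era line pushes the cover rate back to roughly 50%: the better forecast shows up as a moved line, not as a visible winning streak for bettors.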
Same for the stock market. The ultra-smart Renaissance hedge fund, or somebody similar, might build an AI that will give it an extra point of return for a year or two, but then everybody else will build one too, so the advantage will drop to close to nothing.
That is what people are really interested in: arguing over things that could go either way. I pointed this out in 2009 at the peak of the Peyton Manning vs. Tom Brady debate. Since then, Brady has clearly become the top quarterback, so nobody is interested in the Manning vs. Brady argument anymore.