Big Tech: Dr. Epstein’s 10-Point List of Techniques It Uses to Shift Votes Secretly
09/28/2018

On his Twitter account (@DrREpstein), Robert Epstein calls this his “breakthrough article” on the subject he has been researching for several years: the power of liberal social media giants to influence public opinion in very substantial ways. The upshot is that he keeps accumulating evidence of how Big Tech has become a huge hidden persuader in political life today.

Google is particularly powerful because it slants search results leftward.

Dr. Epstein described how Google influences political opinion during an August appearance with Tucker Carlson:

EPSTEIN: Well, I think they’re doing this all the time, actually, because we’re well aware of the fact that they suppress material; sometimes they announce it, sometimes they don’t. We are aware of the fact that Google puts some items higher in search results than other items; well, if search results favor one candidate, that shifts votes. I think we’re well aware of the fact that news feeds on Facebook sometimes seem to favor one political point of view over another, and that shifts votes. So I have an article coming out very soon about ten different ways that these big tech companies can shift millions of votes in November. In fact, I calculate this November they’ll be able to shift upwards of 12 million votes just in the midterm elections.

Here is his article listing ten ways Big Tech can shift millions of votes through surreptitious means:

10 Ways Big Tech Can Shift Millions of Votes in the November Elections—Without Anyone Knowing, by Robert Epstein, Epoch Times, September 26, 2018

A noted researcher describes 10 ways Google, Facebook, other companies could shift millions of votes in the US midterms

Authorities in the UK have finally figured out that fake news stories and Russian-placed ads are not the real problem. The UK Parliament is about to impose stiff penalties—not on the people who place the ads or write the stories, but on the Big Tech platforms that determine which ads and stories people actually see.

Parliament’s plans will almost surely be energized by the latest leak of damning material from inside Google’s fortress of secrecy: The Wall Street Journal recently reported on emails exchanged among Google employees in January 2017 in which they strategized about how to alter Google search results and other “ephemeral experiences” to counter President Donald Trump’s newly imposed travel ban. The company claims that none of these plans was ever implemented, but who knows?

While U.S. authorities have merely held hearings, EU authorities have taken dramatic steps in recent years to limit the powers of Big Tech, most recently with a comprehensive law that protects user privacy—the General Data Protection Regulation—and a whopping $5.1 billion fine against Google for monopolistic practices in the mobile device market. Last year, the European Union also levied a $2.7 billion fine against Google for filtering and ordering search results in a way that favored its own products and services. That filtering and ordering, it turns out, is of crucial importance.

As years of research I’ve been conducting on online influence have shown, content per se is not the real threat these days; what really matters is (a) which content is selected for users to see, and (b) the way that content is ordered in search results, search suggestions, newsfeeds, message feeds, comment lists, and so on. That’s where the power lies to shift opinions, purchases, and votes, and that power is held by a disturbingly small group of people.

I say “these days” because the explosive growth of a handful of massive platforms on the internet—the largest, by far, being Google and the next largest being Facebook—has changed everything. Millions of people and organizations are constantly trying to get their content in front of our eyes, but for more than 2.5 billion people around the world—soon to be more than 4 billion—the algorithms of Google and Facebook determine what material will be seen and where it will turn up in various lists.

In randomized, controlled, peer-reviewed research I’ve conducted with thousands of people, I’ve shown repeatedly that when people are undecided, I can shift their opinions on just about any topic just by changing how I filter and order the information I show them. I’ve also shown that when, in multiple searches, I show people more and more information that favors one candidate, I can shift opinions even further. Even more disturbing, I can do these things in ways that are completely invisible to people and in ways that don’t leave paper trails for authorities to trace.
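Epstein does not publish the code behind these experiments, but the basic mechanism he describes—re-ordering the same pool of content so that items favorable to one side rank higher—is easy to sketch. The toy Python below is a minimal illustration only, not his experimental software; all candidate names, page titles, relevance scores, and the boost value are hypothetical.

```python
# Toy sketch of the filtering-and-ordering manipulation described above:
# the same pool of search results is quietly re-ranked so that pages
# favorable to one candidate appear first. All data here is hypothetical.

from dataclasses import dataclass

@dataclass
class Result:
    title: str
    favors: str       # which candidate the page is favorable to
    relevance: float  # the neutral relevance score a ranker would normally use

RESULTS = [
    Result("Economy under Candidate A", "A", 0.91),
    Result("Candidate B's tax plan explained", "B", 0.89),
    Result("Scandal dogs Candidate A", "B", 0.84),
    Result("Candidate A praised by economists", "A", 0.80),
    Result("Candidate B stumbles in debate", "A", 0.78),
]

def neutral_ranking(results):
    """Order purely by the neutral relevance score."""
    return sorted(results, key=lambda r: r.relevance, reverse=True)

def biased_ranking(results, favored, boost=0.15):
    """Add a small, invisible boost to pages favorable to one candidate.

    The boost is too small to notice when scanning titles, but it is
    enough to move favorable pages into the top positions."""
    return sorted(
        results,
        key=lambda r: r.relevance + (boost if r.favors == favored else 0.0),
        reverse=True,
    )

if __name__ == "__main__":
    print("Neutral order:", [r.title for r in neutral_ranking(RESULTS)])
    print("Biased order: ", [r.title for r in biased_ranking(RESULTS, favored="B")])
```

Running the sketch shows the two B-favorable pages jumping to the top two slots, while the list still looks like an ordinary, relevance-driven ranking—which is precisely why such a shift leaves no visible trace.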

Worse still, these new forms of influence often rely on ephemeral content—information that is generated on the fly by an algorithm and then disappears forever, which means that it would be difficult, if not impossible, for authorities to reconstruct. If, on Election Day this coming November, Mark Zuckerberg decides to broadcast go-out-and-vote reminders mainly to members of one political party, how would we be able to detect such a manipulation? If we can’t detect it, how would we be able to reduce its impact? And how, days or weeks later, would we be able to turn back the clock to see what happened?

Of course, companies like Google and Facebook emphatically reject the idea that their search and newsfeed algorithms are being tweaked in ways that could meddle in elections. Doing so would undermine the public’s trust in their companies, spokespeople have said. They insist that their algorithms are complicated, constantly changing, and subject to the “organic” activity of users.

This is, of course, sheer nonsense. Google can adjust its algorithms to favor any candidate it chooses no matter what the activity of users might be, just as easily as I do in my experiments. As legal scholar Frank Pasquale noted in his recent book “The Black Box Society,” blaming algorithms just doesn’t cut it; the responsibility for what an algorithm does should always lie with the people who wrote the algorithm and the companies that deployed the algorithm. Alan Murray, president of Fortune, recently framed the issue profoundly: “Rule one in the Age of AI: Humans remain accountable for decisions, even when made by machines.”

Given that 95 percent of donations from Silicon Valley generally go to Democrats, it’s hard to imagine that the algorithms of companies like Facebook and Google don’t favor their favorite candidates. A newly leaked video of a 2016 meeting at Google shows without doubt that high-ranking Google executives share a strong political preference, which could easily be expressed in algorithms. The favoritism might be deliberately programmed or occur simply because of unconscious bias. Either way, votes and opinions shift.

It’s also hard to imagine how, in any election in the world, with or without intention on the part of company employees, Google search results would fail to tilt toward one candidate. Google’s search algorithm certainly has no equal-time rule built into it; we wouldn’t want it to! We want it to tell us what’s best, and the algorithm will indeed always favor one dog food over another, one music service over another, and one political candidate over another. When the latter happens … votes and opinions shift.

Here are 10 ways—seven of which I am actively studying and quantifying—that Big Tech companies could use to shift millions of votes this coming November with no one the wiser. Let’s hope, of course, that these methods are not being used and will never be used, but let’s be realistic too; there’s generally no limit to what people will do when money and power are on the line.

1. Search Engine Manipulation Effect (SEME)

Ongoing research I began in January 2013 has shown repeatedly that when one candidate is favored over another in search results, voting preferences among undecided voters shift dramatically—by 20 percent or more overall, and by up to 80 percent in some demographic groups. This is partly because people place inordinate trust in algorithmically generated output, thinking, mistakenly, that algorithms are inherently objective and impartial.

But my research also suggests that we are conditioned to believe in high-ranking search results in much the same way that rats are conditioned to press levers in Skinner boxes. Because most searches are for simple facts (“When was Donald Trump born?”), and because correct answers to simple questions inevitably turn up in the first position, we are taught, day after day, that the higher a search result appears in the list, the more true it must be. When we finally search for information to help us make a tough decision (“Who’s better for the economy, Trump or Clinton?”), we tend to believe the information on the web pages to which high-ranking search results link.
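To see why position matters so much, consider a simple position-bias model: users click top results far more often than lower ones, so whichever side’s pages occupy the first slots captures most of the exposure. The calculation below is an illustration only, not Epstein’s model; the click-through rates and orderings are hypothetical, chosen merely to show the steep falloff by rank that is commonly observed in search behavior.

```python
# Illustrative position-bias model (hypothetical numbers, not Epstein's data):
# expected share of clicks reaching pages favorable to each candidate,
# under a neutral ordering versus the biased re-ranking sketched earlier.

# Hypothetical click-through rates by position (rank 1 first).
CTR_BY_RANK = [0.30, 0.15, 0.10, 0.07, 0.05]

# Which candidate each page favors, in neutral relevance order...
NEUTRAL_ORDER = ["A", "B", "B", "A", "A"]
# ...and after the biased re-ranking (B-favorable pages moved to the top).
BIASED_ORDER = ["B", "B", "A", "A", "A"]

def exposure(order):
    """Expected share of clicks landing on pages favorable to each candidate."""
    share = {"A": 0.0, "B": 0.0}
    for rank, favors in enumerate(order):
        share[favors] += CTR_BY_RANK[rank]
    total = sum(share.values())
    return {cand: round(clicks / total, 2) for cand, clicks in share.items()}

print("Neutral ordering:", exposure(NEUTRAL_ORDER))
print("Biased ordering: ", exposure(BIASED_ORDER))
```

With these assumed numbers, the neutral ordering sends roughly 63 percent of clicks to A-favorable pages, while the biased ordering flips that to roughly 67 percent for B-favorable pages—same five pages, different order, opposite exposure.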

(Continues)
