1 June 2016 – When it comes to AI there tend to be two camps: those who say we are entering a golden age of discovery and human advancement (the “Star Trek” acolytes), and those who believe we are about to enter the most dystopian period mankind has ever seen (the “Mad Max” believers).
After seeing how Facebook has seized the media (there is a much more sinister story here beyond the “liberal agenda” headlines; post to come), a NATO briefing on the military’s fear of mass drone attacks on civilian populations, a new Chinese technology that can change metadata without being detected, and the state of genetic manipulation … well, I am in the “Mad Max” camp. In reality, it will be something in between. But for the moment, now well into my four-year neuroscience/artificial intelligence program, I will give a wide berth to boasts about dragon-slaying medical breakthroughs and an information-age nirvana.
But there is one area in which I totally agree with the tech geeks: “computational manipulation”.
Humanity has been advancing the field of propaganda for as long as we’ve been at war or had political fights to win. But today propaganda is undergoing a significant change, driven by the latest advances in big data and artificial intelligence. Over the past decade, as we know, billions of dollars have been invested in technologies that target ads ever more precisely to individuals’ preferences. Now this is making the jump to the world of politics and the manipulation of ideas, especially in the U.S., through political bots.
As if American political discourse wasn’t ugly enough.
Over the last year (longer, perhaps) Silicon Valley has been touting bots as a new tool for social engagement. Policy makers, journalists, and civic leaders often use them transparently:
- @congressedits flags anonymous Wikipedia edits made from U.S. Congress IP addresses
- @staywokebot critiques racial injustice
- The New York Times’ new election bot promotes political participation.
But as the power of bots grows, so does the capacity for misuse. Bots now pollute conversations around topics like #blacklivesmatter and #guncontrol, interrupting productive debate with outpourings of automated hate. We’ve seen antivaccination bots reach out to parents in a campaign to discourage child inoculations. So it’s no surprise that bots are creeping into election politics.
Note: the U.S. military has been at this since at least 2008, first with automated computational propaganda and now with bots built on sophisticated artificial-intelligence frameworks whose algorithms can not only read the news but write it (rewrite it?) and disseminate it to countries the military wishes to destabilize. It’s not all drones.
The most conspicuous use on the American political front involves the next U.S. President, Donald Trump. We all remember (well, if you follow Twitter) Donald Trump’s Tweet after sweeping the Nevada caucus: “I understand minority communities”. Well, hell. Just ask Pepe Luis Lopez, Francisco Palma, and Alberto Contreras. These guys are among the candidate’s 7 million Twitter followers, and each ReTweeted in support of Trump after that victory.
Except for one problem: Pepe, Francisco, and Alberto aren’t people. They’re bots – spam accounts that post autonomously using programmed scripts. In reality, Trump’s rhetoric has alienated much of the Latino electorate, a fast-growing voting community throughout the U.S. And while it’s unclear who’s behind the accounts of Pepe and his digital pals, their Tweets succeed in impersonating Latino voters at a time when the real estate mogul needs them most.
And those Tweets then get ReTweeted by REAL people, and then the mainstream media picks up on it, runs its “analytics”, and declares “wow, the Hispanic community is moving toward Trump”. These bots tend to have few followers and disappear quickly, dropping propaganda bombs as they go. And the scripting involved is trivial, as the sketch below shows.
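To give a sense of how low the barrier is, here is a minimal sketch of a “supporter” bot in Python, assuming the tweepy library. The credentials, account name, and canned reply are invented placeholders; this is the general pattern, not anyone’s actual bot.

```python
# A bare-bones "supporter" bot using the tweepy library (v3-era API).
# All credentials, account names, and the canned reply are hypothetical
# placeholders for illustration only.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Grab the candidate's latest Tweets and amplify each one.
for tweet in api.user_timeline(screen_name="some_candidate", count=10):
    try:
        api.retweet(tweet.id)                  # boost the signal
        api.update_status(                     # add a scripted "endorsement"
            "@some_candidate So true!",
            in_reply_to_status_id=tweet.id)
    except tweepy.TweepError:
        pass  # already ReTweeted, rate-limited, etc.
```

That is essentially the whole trick: amplify the candidate, post a scripted “personal” endorsement, and let real followers and the media’s “analytics” do the rest.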
According to TwitterAudit, one in four of Trump’s followers is fake, and similar ratios run through the accounts of other candidates.
Note: TwitterAudit is not affiliated with Twitter. I had a chance to learn the analytics behind how TwitterAudit works, and it is fodder for a follow-up post. Simplified, and straight off their website: each audit takes a random sample of 5,000 of a user’s Twitter followers and calculates a score for each follower, based on the number of tweets, the date of the last tweet, and the ratio of followers to friends. Those scores are used to determine whether any given follower is real or fake. Of course, this scoring method is not perfect (and I have simplified it and not provided the math), but it is a good technique to tell whether someone with lots of followers is likely to have inflated that count by inorganic, fraudulent, or dishonest means.
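To make the heuristic concrete, here is a toy version in Python. The three features mirror what TwitterAudit’s site describes; the weights, thresholds, and field names are my own illustrative guesses, not their actual math.

```python
# A toy TwitterAudit-style audit. The three features match what their
# site describes (tweet count, date of last tweet, followers-to-friends
# ratio); the weights and cutoffs below are my own illustrative guesses,
# NOT TwitterAudit's actual formula.
import random
from datetime import datetime, timedelta

def follower_score(follower):
    """Score one follower on a 0..1 'looks real' scale."""
    score = 0.0
    if follower["tweets"] >= 50:                # has a posting history
        score += 0.4
    if datetime.now() - follower["last_tweet"] < timedelta(days=90):
        score += 0.3                            # recently active
    if follower["friends"] == 0 or \
       follower["followers"] / follower["friends"] >= 0.1:
        score += 0.3                            # not pure follow-spam
    return score

def audit(followers, sample_size=5000, threshold=0.5):
    """Estimate the fraction of real followers from a random sample."""
    sample = random.sample(followers, min(sample_size, len(followers)))
    real = sum(1 for f in sample if follower_score(f) >= threshold)
    return real / len(sample)
```

The point is not the particular numbers but the shape of the test: accounts with no history, no recent activity, and a follow-everyone profile cluster at the bottom of the score range.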
Even if most of these bots are inactive, they still exaggerate a candidate’s popularity.
Two research teams, at the University of Washington and the University of Oxford, run a joint project called PoliticalBots that tracks bot activity in politics all over the world, and what they see is the politics of the future in its infancy. In past elections, politicians, government agencies, and advocacy groups have used bots to engage voters and spread messages. Now these teams have caught bots disseminating lies, attacking people, and poisoning conversations. They note that automated campaign communications are a very real threat to democracy and elections, and could unduly influence the 2016 U.S. Presidential election.
This is only the start. For years, robocalling and push polling have been used to manipulate voters. But not everyone is reachable by a landline anymore. So bots will become the “go-to” mode for negative campaigning in the age of social media.
Say the race is close in your state. If an army of bots can seed the web with negative information about the opposing candidate, why not unleash them? If you’re an activist hoping to get your message out to millions, why not have bots do it?
And don’t underestimate bots: the PoliticalBots folks said there are tens of millions of them on Twitter alone, and automated scripts generate 60 percent of traffic on the web at large. The worst bots undermine voter sophistication by pervading the networks and making themselves “go-to” sources for news and information.
A biggie: setting up a fake social network supposedly concerned with public health, or politics, or immigration or [fill in your cause], complete with hashtags, dummy advertisements and a database of users’ “political tendencies”, and drawing people to it.
And this last move makes sense: a study at Indiana University has suggested that obvious bot accounts are much less effective at spreading political lies. Facebook and Twitter currently rely on passive and somewhat arbitrary methods for combating automated speech; they tend to wait for users to report suspicious activity, and they have a patchy record when it comes to stopping harmful propaganda. Yet every message already carries metadata identifying the client that posted it, so the platforms are perfectly capable of separating posts made from their own web and mobile apps from ones pushed through the API by a script. Just as Wikipedia alerts readers to flawed articles, social media sites should (and could) clearly identify fake users with big red flags. A sketch of the idea follows.
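Here is a minimal sketch of that labeling idea, using the ‘source’ field that Twitter attaches to every Tweet. The list of official clients and the 90 percent cutoff are my own illustrative assumptions, not any platform’s actual policy.

```python
# Sketch of the "label by client" idea: every Tweet carries a 'source'
# field naming the app that posted it. An account posting almost entirely
# through unofficial clients is a strong automation signal. The client
# names and cutoff below are illustrative assumptions, not a platform rule.
TRUSTED_CLIENTS = {"Twitter Web Client", "Twitter for iPhone",
                   "Twitter for Android"}

def deserves_red_flag(tweets, cutoff=0.9):
    """Flag an account whose Tweets overwhelmingly come from unofficial clients."""
    if not tweets:
        return False
    scripted = sum(1 for t in tweets if t["source"] not in TRUSTED_CLIENTS)
    return scripted / len(tweets) >= cutoff

# e.g. an account posting exclusively through a custom script:
pepe = [{"source": "pepes_autoposter_v2"}] * 40
print(deserves_red_flag(pepe))  # True -> show the big red flag
```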
And let’s face it: this technology exploits the simple fact that we are much more impressionable than we think. We see something cool, or controversial, or totally in synch with our beliefs, and we automatically ReTweet or share it. I certainly fall into that trap, though less so now: I take the time to follow/read/analyze the underlying link in a social media post (even if I know the writer) and at least offer a comment when I can.
Remember last year’s story about Facebook’s experiments to modify users’ moods? It showed us that the very language we use to communicate is subject to manipulation based on the stories the Facebook algorithm chooses to show us. Researchers at MIT followed up on that, showing that a false “up vote” cast early on can improve the public response to a story by 25 per cent, while a single early “down vote” can make an otherwise good story be perceived as a low-quality piece of journalism. The propaganda bots use this knowledge to influence news feeds: those automated “friends” will like, ReTweet and comment on stories that are in line with their propaganda goals.
“Paging Mr Orwell ….”