Trending News: Can you tell if you’re engaging with artificial intelligence online?
A research paper published this November by the University of Southern California’s Department of Computer Science found that artificial intelligence agents, so-called “bots,” exhibit anti-social behavior online.
These social media bots targeted specific social media influencers with negative posts intended to provoke inflammatory commentary and sway their political leanings.
This study, one of the first of its kind, was published in the Proceedings of the National Academy of Sciences of the United States of America and was co-authored by Emilio Ferrara, assistant research professor and associate director of Informatics. Ferrara noted that “Every user is exposed to this either directly or indirectly because bot-generated content is nowadays very pervasive.”
The study on social hacking reviewed nearly 4 million Twitter posts focused on the controversial referendum on Catalan independence. Researchers examined the behavioral dynamics employed by the bots, which users were targeted, and the type of content shared by artificial intelligence online.
The bots strategically selected and pursued certain social media influencers, posting mostly negative comments designed to antagonize those individuals into generating more inflammatory content.
Those human users were also unaware that the accounts they were engaging with were, in fact, automated.
It was previously believed that bots had no bias toward particular types of content or specific users, but this paper, titled “Bots Increase Exposure to Negative and Inflammatory Content in Online Social Systems,” suggests otherwise.
Perhaps there is a glitch in their initial programming, or a concerted effort by their programmers to influence social debate. What’s certain is that artificial intelligence has the power to influence social media behavior anonymously.
Can artificial intelligence be relied upon to use moral standards when posting and promoting content online? Kentaro Toyama, W.K. Kellogg Professor of Community Information at the University of Michigan School of Information, doesn’t think so. He’s the author of Geek Heresy: Rescuing Social Change from the Cult of Technology. Guy Counseling spoke to him to gain his insight, and he shared the following with us: “Most AI systems today, as smart as they may be in some ways, are still completely indifferent to human values and morality.”
Toyama went on to say, “There are research teams, such as at Berkeley’s Center for Human-Compatible AI, which are investigating ways to make AI systems more ethical, but these ideas are nascent at best. But even if such technologies existed, the problem with ethical AI is that unless individual people and corporations who use AI choose to use ethical AI systems, there can always be non-ethical AI systems out there causing harm.”
The last U.S. presidential election added fuel to the debate over social hacking, and it is widely believed that so-called “fake news” played an integral role in swaying and manipulating the United States electorate.
Social media sites like Twitter and Facebook have human moderators and automated monitoring systems in place, but their effectiveness in counteracting social hacking is hit or miss.
Some of those moderating systems employ artificial intelligence themselves. Until better safeguards are put in place to monitor or control the type of language and content distributed by social media bots, it’s likely that artificial intelligence will have a say in every major socio-political debate to come.
Have you ever been trolled by a bot on social media? Would you know it if it happened?