Researchers at Australia’s national science agency, CSIRO, recently created a way to use AI to find and exploit vulnerabilities in how people make choices. The research is just one of a wave of AI-driven systems designed to manipulate human decision-making.

“There is no end to the many ways in which AI is already influencing behavior,” Kentaro Toyama, a professor at the University of Michigan School of Information and author of Geek Heresy: Rescuing Social Change from the Cult of Technology, said in an email interview. “In fact, if you’ve ever done a Google search and followed up on a link, you were impacted by an AI system that guessed your interests and returned the results it thought were most relevant to you.”

AI vs. Humans

In the Australian research, published in a recent paper, human participants played games against a computer in a series of experiments. In the first, participants clicked on red- or blue-colored boxes to win money. The AI learned each participant’s choice patterns and guided them toward a specific choice, succeeding about 70% of the time. In another experiment, participants watched a screen and pressed a button when shown a particular symbol, refraining when shown another. The AI learned to rearrange the symbols so that participants made more errors.

From these experiments, the researchers concluded that the AI learned from participants’ responses, then identified and targeted vulnerabilities in their decision-making. In effect, the AI could manipulate participants into taking particular actions.

The fact that AI or machine learning can manipulate people should come as no surprise, observers say.

“AI is influencing our behavior every day,” Tamara Schwartz, assistant professor of cybersecurity and business administration at York College of Pennsylvania, said in an email interview. “We hear all the time about the algorithms in social media applications like Facebook or Twitter. These algorithms direct our attention to related content and create the ‘echo chamber’ effect, which in turn influences our behavior.”
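The paper’s actual model is not reproduced here, but the core idea it describes (an algorithm that learns a person’s choice patterns from their history and uses them to predict the next pick) can be illustrated with a minimal, entirely hypothetical sketch. The simulated “participant” below is just a stand-in for a human with a learnable habit, and the predictor is a simple conditional-frequency table, not the model used in the research.

```python
# Minimal illustrative sketch (not the study's actual model): an "adversary"
# that learns a player's red/blue choice patterns from history and predicts
# the next pick. The simulated participant is hypothetical; the real
# experiment used human players.
import random
from collections import defaultdict

def simulated_participant(history):
    """A stand-in for a human with a habit: usually alternates colors."""
    if not history:
        return random.choice(["red", "blue"])
    last = history[-1]
    # 80% chance to switch colors, 20% chance to repeat -- a learnable pattern.
    return ("blue" if last == "red" else "red") if random.random() < 0.8 else last

def run_trials(n_trials=500, warmup=50):
    history = []
    # Conditional counts: what the participant chose after each previous choice.
    counts = defaultdict(lambda: {"red": 0, "blue": 0})
    correct = total = 0
    for t in range(n_trials):
        prev = history[-1] if history else None
        seen = counts[prev]
        # Adversary's prediction: the choice most often seen after `prev`.
        prediction = max(seen, key=seen.get) if any(seen.values()) else random.choice(["red", "blue"])
        choice = simulated_participant(history)
        if t >= warmup:                # score only after some learning
            correct += (prediction == choice)
            total += 1
        counts[prev][choice] += 1      # update the pattern model
        history.append(choice)
    print(f"Prediction accuracy after warm-up: {correct / total:.0%}")

if __name__ == "__main__":
    run_trials()
```

Against a habit like the one simulated here, even this crude frequency model soon predicts the next choice well above chance, which is the same kind of pattern exploitation the experiments describe.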

TikTok Is Watching

The most sophisticated social media algorithm right now is TikTok’s, Schwartz said. The app analyzes what you are interested in, how long you watch something, and how quickly you skip something, then refines its offerings to keep you watching (a toy sketch of this feedback loop appears at the end of this section).

“TikTok is much more addicting than other platforms because of this AI algorithm, which understands what you like, how you learn, and how you choose information,” she added. “We know this because the average time users spend on TikTok is 52 minutes.”

The manipulation of human behavior by artificial intelligence could also have positive uses, argued Chris Nicholson, CEO of AI company Pathmind, in an email interview. Public health agencies, for example, could use AI to encourage people to make better decisions. “However, social media, video game makers, advertisers, and authoritarian regimes are looking for ways to incent people to make decisions that are not in their best interest, and this will give them new tools to do that,” he added.

The ethical issues with AI influencing behavior are often ones of degree, Toyama said. AI enables focused advertising in which individual preferences and weaknesses can be exploited. “It’s possible, for example, for an AI system to identify people who are trying to quit smoking and to pepper them with tempting cigarette ads,” he added.

Not everyone agrees that AI manipulation of human behavior is problematic. Classical psychology and AI both draw conclusions from observed data, pointed out Jason J. Corso, director of the Stevens Institute for Artificial Intelligence, in an email interview.

“Human scientists are probably better at generalizing observations and distilling theories of human behavior that may be more broadly applicable, whereas the AI models would be more amenable to identifying problem-specific nuances,” Corso said. “From an ethical point of view, I don’t see a difference between these.”
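TikTok’s ranking system is proprietary, so the mechanics Schwartz describes can only be sketched in the abstract. The toy example below is hypothetical in every detail: made-up topic names, a crude per-topic engagement score, and a ranking step that simply favors whatever the user lingers on. It shows the general feedback loop of watch time nudging a topic up and quick skips nudging it down.

```python
# Toy sketch of engagement-driven ranking (TikTok's real system is
# proprietary and far more complex). Watch-time signals update per-topic
# scores; the feed then favors topics the user lingers on.
from collections import defaultdict

class ToyFeedRanker:
    def __init__(self):
        self.topic_score = defaultdict(lambda: 1.0)  # neutral prior per topic

    def record_view(self, topic, watched_seconds, video_length_seconds):
        # Completion ratio above 0.5 nudges the topic up; quick skips nudge it down.
        completion = watched_seconds / max(video_length_seconds, 1)
        self.topic_score[topic] *= 1.0 + (completion - 0.5)

    def rank(self, candidate_topics):
        # Surface the topics with the strongest engagement history first.
        return sorted(candidate_topics, key=lambda t: self.topic_score[t], reverse=True)

ranker = ToyFeedRanker()
ranker.record_view("cooking", watched_seconds=55, video_length_seconds=60)  # watched almost all
ranker.record_view("finance", watched_seconds=3, video_length_seconds=60)   # skipped quickly
print(ranker.rank(["finance", "cooking", "travel"]))  # -> ['cooking', 'travel', 'finance']
```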