You start to say things you might later regret. Suddenly, a message appears on your computer screen. "Empathy Cue—Think about how the customer is feeling. Try to relate."

It's not a real person telling you what to do. It's a message from Cogito, an artificial intelligence program designed to help workers empathize with frustrated callers and boost performance. Cogito is one of a growing number of AI programs trying to teach humans empathy.

There's an obvious irony here. Human scientists have spent decades trying to make computers more lifelike. Now, the machines are telling us how to behave. But can software really teach us to be more empathetic? It's a question that could have profound implications as artificial intelligence starts to permeate daily life.
AI Mimics Human Behavior
From a technical standpoint, it's clear that AI can pick up on clues about how humans feel and provide feedback. "AI and machine learning systems are very good at finding patterns in data," Adam Poliak, a postdoctoral fellow in computer science at Barnard College, said in an email interview. "If we give an AI lots of examples of empathetic text, the AI can discover patterns and cues that evoke or demonstrate empathy."

AI that analyzes human reactions can help bridge the growing gap between people as we communicate digitally, Bret Greenstein, an AI expert at Cognizant Digital Business, said in an email interview. "Over the last year, real-time video, voice, and messaging grew faster than anyone could have imagined, and with it came huge challenges in creating truly empathic relationships without actually spending physical time with people," he added.

AI can help analyze and assess characteristics like tone and emotion in speech, Greenstein said. "This can help the person receiving communications to better understand what was meant, and helps the person 'speaking' by showing how messages could be interpreted," he added.

While companies are rushing to cash in on AI training software like Cogito, the question of whether AI can teach humans empathy remains open. And the answer may have as much to do with philosophy as technology.

Ilia Delio is a theologian at Villanova University whose work centers on the intersection of faith and science. She believes AI can teach empathy. Delio pointed out that a team at MIT has built robots that can mimic human emotions such as happiness, sadness, and compassion. "While the robotic emotions are programmed, the robots can interact with humans and thus establish or reinforce neural patterns," she said.
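The pattern-finding Poliak describes is easy to make concrete. The sketch below trains a toy text classifier on a handful of examples labeled as empathetic or not; the tiny inline dataset and labels are invented for illustration, and this says nothing about how Cogito's actual pipeline works.

```python
# A minimal sketch of learning "empathy cues" from labeled text.
# The examples and labels below are hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I'm so sorry you're dealing with this; that sounds frustrating.",
    "I understand how upsetting a surprise charge can be.",
    "Your account number is required to proceed.",
    "That is our policy. There is nothing I can do.",
]
labels = [1, 1, 0, 0]  # 1 = empathetic, 0 = not empathetic

# TF-IDF turns each sentence into word-frequency features; logistic
# regression then learns which words correlate with the empathetic label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new sentence against the learned patterns.
print(model.predict(["I hear you, and I'm sorry for the trouble."]))
```

Everything the model "knows" comes from the labeled examples: it surfaces patterns that correlate with the empathetic label, which is exactly the discovery step Poliak describes.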
Can a Machine Understand Empathy?
Experts define at least three forms of empathy, all of which involve the ability to understand and relate to another person, said Karla Erickson, a sociologist at Grinnell College in Iowa and author of the forthcoming book Messy Humans: A Sociology of Human/Machine Relations, which explores our relationships with technology.

"Relating is not something AI can do, and it is the basis for empathy," Erickson said in an email interview. "AI may be programmed to break down some of the human behaviors that accompany empathy and remind humans to perform them, but that is not teaching empathy. Relating, especially in terms of empathy, would require the listener to have the necessary context to relate—by this, I mean that the 'life' of an AI does not include loss, longing, hope, pain, or death."

However, experts clash over whether AI can teach us how to empathize. Part of the problem is that not everyone agrees on what "empathy" or "AI" even means. The term artificial intelligence gets thrown around a lot, but it's currently not the kind of intelligence we think of as human.

"The 'empathy cues' have nothing to do with empathy," Michael Spezio, a professor of psychology, neuroscience, and data science at Scripps College, said in an email interview. "They are cues from voices that human raters have classified as being voices of people who are irritated/annoyed. So it's just using human expertise in a mathematical model and then claiming that the model—built on human expertise—is intelligent. Limited machine learning approaches like this are often hyped as AI without being intelligent."

At Rensselaer Polytechnic Institute, Selmer Bringsjord's laboratory is building mathematical models of human emotion. The research is intended to create an AI that can score high on emotional intelligence tests and to apply those models to humans. But Bringsjord, an AI expert, says any teaching the AI does is inadvertent. "But this is pure engineering work, and I'm under no such illusion that the AI in question itself has emotions or genuinely understands emotions," he said in an email interview.
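Spezio's description amounts to a standard supervised-learning recipe: humans rate voice clips, and a model maps acoustic measurements onto those ratings. A stripped-down sketch, with invented feature numbers and ratings, shows how little "intelligence" that step requires.

```python
# Spezio's point, made concrete: the "intelligence" is human labels.
# The acoustic features (mean pitch, loudness, speaking rate) and the
# ratings below are invented for illustration; real systems extract
# features from audio, but the learning step is the same curve-fitting.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [mean_pitch_hz, mean_loudness_db, words_per_second]
features = np.array([
    [220.0, 72.0, 3.8],  # clip rated "irritated" by human listeners
    [235.0, 75.0, 4.1],  # rated "irritated"
    [180.0, 60.0, 2.4],  # rated "calm"
    [170.0, 58.0, 2.2],  # rated "calm"
])
human_ratings = np.array([1, 1, 0, 0])  # 1 = irritated, 0 = calm

model = LogisticRegression().fit(features, human_ratings)

# The fitted model can flag a new call as "irritated," but it has only
# reproduced the raters' judgments; nothing in it understands irritation.
print(model.predict([[210.0, 70.0, 3.5]]))
```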
What Could Go Wrong?
While companies like Cogito see a bright future of AI training humans, other observers are more cautious.

Supportiv, an online mental health service, uses AI to route each user, in real time and based on any single thought they express, to a topic-specific peer support group that dynamically convenes for users with similar issues (a bare-bones sketch of what such routing can look like appears at the end of this article). Each group has a "super-powered" human moderator who keeps the text-based chat safe and troll-free and who can surface, again through AI, relevant resources, recommendations, and referrals right in the group conversation. Using AI, Supportiv trains its moderators to spot the intensity of emotional needs.

"Empathy is a muscle we build," Zara Dana, a data scientist at Supportiv, said in an email interview. "If we start using a crutch for walking, our muscles will atrophy. I can't help but wonder, would a dependent worker feel confident if the AI system is not online one day? Is she able to do her job effectively? What are the long-term effects on the workers? How would they navigate complex social situations where the AI is absent?"

Even if using AI to teach empathy works, what happens when we start relying on it too much to train our emotions? One possible downside is that humans can become more attached to robots than to other humans, because robots cannot choose against their programming, Delio pointed out. "The human capacity for free will places human agency in a more ambiguous position," Delio said. "A person can be compassionate one day and ruthless the next; a robot will remain consistently compassionate unless trained to do otherwise."

There's a lot that could go wrong if AI teaches humans how to behave like people, experts say. "Without human oversight, the student might learn something absolutely nutty," Bringsjord said. "Tone and pitch of voice are mere behavioral correlates, without any content. Dollars to donuts my voice while teaching in the classroom would be read by many…as indicating that I'm upset, while in reality, I'm just passionate and not in the least in need of empathy."

If AI training of humans flourishes, we may come to rely on it. And that's not necessarily a good thing. "This training devalues human skills, which are considerable, and shifts attention toward AI as if they are the ones with expertise," Erickson said. "We have evolved to be social animals, and our empathy is central to our ability to connect with others and care about collectives to which we belong."
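For readers curious what routing a user "based on any single thought they express" can look like mechanically, here is a minimal sketch that matches a message to the most similar topic group by keyword overlap. The topic groups and the nearest-match approach are invented for illustration and say nothing about Supportiv's actual system.

```python
# A toy version of thought-to-group routing via text similarity.
# Topic groups and their keyword descriptions are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

groups = {
    "work stress": "overwhelmed at work deadlines burnout boss job",
    "loneliness": "lonely isolated no friends disconnected alone",
    "grief": "loss death grieving mourning missing someone",
}

vectorizer = TfidfVectorizer()
group_vectors = vectorizer.fit_transform(groups.values())

def route(thought: str) -> str:
    """Return the topic group whose keywords best match the user's message."""
    scores = cosine_similarity(vectorizer.transform([thought]), group_vectors)[0]
    return list(groups)[scores.argmax()]

# Example: this message shares words with the "work stress" description.
print(route("My boss keeps piling on deadlines and I can't keep up"))
```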