Former Google executive Mo Gawdat said in a recent interview that AI will soon overtake human intelligence, with dire consequences for our civilization. As evidence, Gawdat said he witnessed a robot arm make what he perceived as a taunting gesture toward AI researchers. But some experts beg to differ. “AI is woefully inadequate in many domains and relies heavily on Big Data and human surveillance to fuel its software models,” Sean O’Brien, a visiting fellow at the Information Society Project at Yale Law School, told Lifewire in an email interview.
Smarter Than Who?
Gawdat joins a long line of doomsayers who warn of an impending AI apocalypse. Elon Musk, for example, claims that AI might one day conquer humanity. “Robots will be able to do everything better than us,” Musk said during a speech. “I have exposure to the most cutting edge AI, and I think people should be really concerned by it.” AI developers at Google X, Gawdat claimed in the interview, had a fright while building robot arms that could find and pick up a ball. One arm, he said, suddenly grabbed the ball and appeared to hold it up to the researchers in a gesture that, to him, looked like showing off. “And I suddenly realized this is really scary,” Gawdat said. “It completely froze me.”
Enter the Singularity
Gawdat and others concerned about future AI invoke the concept of “the singularity,” the hypothetical point at which artificial intelligence surpasses human intelligence. “The development of full artificial intelligence could spell the end of the human race,” physicist Stephen Hawking once famously told the BBC. “It would take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” But O’Brien called the singularity “a fantasy that relies upon fundamental misunderstandings about the nature of body and mind as well as a misreading of the writing of early pioneers in computing such as Alan Turing.” Artificial intelligence isn’t close to matching human intelligence, O’Brien said. AI analyst Lian Jye Su agrees that today’s AI can’t match human intelligence, although unlike O’Brien, he believes the singularity could eventually arrive. “Most, if not all, AI nowadays are still focused on a single task,” he told Lifewire in an email interview. “Therefore, the estimate is that we will need one or two new generations of hardware and software before technological singularity is within reach. Even when the technology is mature, we also need to assume that the developer(s) of AI is given complete authority over its creation without any check and balance and a built-in ‘kill’ switch or fail-safe mechanism.”
True Concerns About AI
The real danger of AI is its ability to divide humans, Su said. AI has already been used to seed discrimination and spread hatred through deepfake videos, he noted. And, Su said, AI has helped “social media giants create echo chambers through personalized recommendation engines, and foreign powers alter political landscapes and polarize societies through highly effective targeted advertising.” Just because AI may be a poor and misguided model of human cognition doesn’t mean it isn’t dangerous or that it can’t approach or surpass humans in many areas, O’Brien said. “A pocket calculator is better and faster at arithmetic than a human will ever be, just as machines can be much stronger than humans and ‘fly’ or ‘swim,’” he added. How AI affects humans depends on how we use it, O’Brien said. Robot labor, for example, could free people up for creative work, or it could force them into poverty. “Likewise, we are now well aware of the perils of AI and its inherent biases, which are misused across the digital landscape to repress people of color and marginalized populations,” he added.