Snap, the company behind Snapchat, has acquired a neurotech startup whose headband lets the wearer control a computer with their thoughts. The company plans to integrate the headband with its ongoing research into augmented reality (AR) products. “Leveraging neural thoughts that can be translated into direct commands has enormous application (and privacy implications), as removing the hardware element lowers the total cost of ownership for enabling an AR/VR experience,” Mark Vena, the CEO of tech consulting company SmartTech Research, told Lifewire in an email interview.
Mind Reader
NextMind is a Paris-based company known for its tiny $400 brain-computer interface (BCI). In the announcement post, Snap says NextMind will help drive “long-term augmented reality research efforts within Snap Lab,” the company’s hardware team, which is currently building AR devices. “Snap Lab’s programs explore possibilities for the future of the Snap Camera, including Spectacles,” the company wrote. “Spectacles are an evolving, iterative research and development project, and the latest generation is designed to support developers as they explore the technical bounds of augmented reality.”

Snap’s recent Spectacles include displays for real-time AR, voice recognition, optical hand tracking, and a side-mounted touchpad for UI selection. Augmented reality is an interactive experience in which objects in the real world are enhanced by computer-generated information.

Vena called NextMind’s technology “an early prototype that shows what’s possible, and it will be highly dependent on a strong development community to build useful and practical applications.” He said he doesn’t expect a working mind-controlled AR headset for at least two to three years. “There are also the thorny privacy issues that will inevitably need to be addressed, as consumers will certainly not be fond of unauthorized monitoring of their neural waves,” Vena added.
A New Wave
Gabe Newell, the co-founder and president of Valve, has said the company is working to develop open-source brain-computer interface software. One possible use for the technology would be to let people feel more connected to gaming software.

Brain-computer interfaces could also help people with disabilities. For example, a device developed by researchers at the University of Tübingen in Germany recently allowed a 37-year-old fully paralyzed man to communicate with his family. The patient learned how to formulate sentences 107 days into his training. On day 245, he spelled out “wili ch tool balbum mal laut hoerenzn,” which the scientists translated from German as “I would like to listen to the album by Tool loud.”

Amir Bozorgzadeh, the CEO of VR company Virtuleap, said in an email interview that the utility of EEG-based, brain-wave-driven VR experiences can be broken down into two categories: passive and active. Passive utility comes from letting an immersive experience automatically adapt to a user’s comfort and accessibility needs, so that settings such as font size, color, and volume adjust without the user having to change them manually. In the future, a brain interface could tune the intensity of an experience to a user’s preferences, stress level, and cognitive load, Bozorgzadeh said.

On the active side, a user could navigate virtual avatars and environments with their thoughts, without needing to physically participate in the experience. “Imagine Neo at the end of the original Matrix movie, and how he was able to bend time and space at will like a god,” Bozorgzadeh said. “That is the intrinsic potential of neuroscience-driven experiences in a VR and AR context.”