The Facebook-backed initiative, which aimed to let people type with their minds, has ended, with new findings released today.
Project Steno was a multi-year collaboration between Facebook and the Chang Lab at the University of California, San Francisco. The goal was to create a system that translates brain activity into words. A new research paper, published in The New England Journal of Medicine, demonstrates the technology's potential for people with speech impairments.
But alongside the study, Facebook made it clear that it is stepping back from the idea of a commercial head-mounted brain reader and is building wrist-based interfaces instead. The new research has no clear path to a mass-market product, and in a press release, Facebook said it would shift its priorities away from head-mounted brain-computer interfaces.
“To be clear, Facebook has no interest in developing products that require implanted electrodes,” Facebook said in the release. Elsewhere, it stated that “while we still believe in the long-term potential of head-mounted optical BCI technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market.”
Chang Lab’s ongoing research uses implanted brain-computer interfaces (BCIs) to restore human speech. The new paper focuses on a participant who lost the ability to speak after a stroke more than 16 years ago. The lab fitted the man with implanted electrodes that could detect brain activity. He then spent 22 hours, spread across sessions over more than a year, training the system to recognize specific patterns. In the first phase of training, he attempted to speak isolated words from a 50-word vocabulary. In the second, he attempted to produce full sentences using those words, which included basic verbs and pronouns (such as “am” and “I”) as well as specific useful nouns (such as “glasses” and “computer”) and commands (such as “yes” and “no”).
This training produced a language model that could respond when the man attempted to say certain words, even though he could not actually speak them. The researchers refined the model to predict which of the 50 words he was attempting, incorporating a probability system that favors English-like word sequences, much like a predictive smartphone keyboard. In recent trials, the researchers said, the system decoded at a median rate of 15.2 words per minute counting errors, or 12.5 words per minute counting only correctly decoded words.
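The decoding strategy described here, combining a neural classifier's per-word probabilities with a language-model prior that favors English-like word sequences, can be sketched as a Viterbi search. This is an illustrative reconstruction, not the study's actual code: the vocabulary, probability values, and function names below are all invented for the example.

```python
import math

# Hypothetical toy vocabulary (the real study used 50 words).
VOCAB = ["I", "am", "thirsty", "good", "computer"]

def decode(classifier_probs, bigram_probs, lm_weight=1.0):
    """Viterbi search: pick the word sequence that maximizes
    classifier evidence weighted by a bigram language-model prior.

    classifier_probs: list of {word: probability} dicts, one per
        attempted word, as a neural classifier might output.
    bigram_probs: {(prev_word, word): probability} language model;
        unseen pairs get a small floor probability.
    """
    # best[w] = (log-score of best path ending in w, that path)
    best = {w: (math.log(classifier_probs[0][w]), [w]) for w in VOCAB}
    for t in range(1, len(classifier_probs)):
        new_best = {}
        for w in VOCAB:
            score, prev = max(
                (best[p][0]
                 + lm_weight * math.log(bigram_probs.get((p, w), 1e-6))
                 + math.log(classifier_probs[t][w]),
                 p)
                for p in VOCAB)
            new_best[w] = (score, best[prev][1] + [w])
        best = new_best
    return max(best.values())[1]

# Toy run: the classifier is torn between "am" and "good" at step 2,
# but the strong "I am" bigram prior settles the decision.
probs = [
    {"I": 0.8, "am": 0.05, "thirsty": 0.05, "good": 0.05, "computer": 0.05},
    {"I": 0.05, "am": 0.45, "thirsty": 0.02, "good": 0.45, "computer": 0.03},
    {"I": 0.02, "am": 0.02, "thirsty": 0.9, "good": 0.03, "computer": 0.03},
]
bigrams = {("I", "am"): 0.5, ("am", "thirsty"): 0.3, ("I", "good"): 0.01}
print(decode(probs, bigrams))  # -> ['I', 'am', 'thirsty']
```

The same idea is what a predictive smartphone keyboard does: noisy evidence about each individual word is disambiguated by how plausible the resulting sequence is as English.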
Chang Lab published earlier Project Steno studies in 2019 and 2020, demonstrating that electrode arrays and predictive models can produce relatively fast and sophisticated thought-to-text typing systems. Many earlier typing approaches had users mentally steer a cursor around an on-screen keyboard via a brain implant, though some researchers have tried methods such as visualized handwriting. While the lab's previous work decoded the brain activity of people who could speak normally, this latest study shows the system works even when subjects don't (and can't) speak aloud.
UCSF neurosurgery chair Eddie Chang says in a press release that the next step is to improve the system and test it with more people. “On the hardware side, we need to build systems with higher data resolution so we can record more information from the brain, and faster. On the algorithm side, we need systems that can translate these very complex brain signals into spoken words, not text but actual oral, audible speech.” One of the top priorities, Chang says, is to greatly expand the vocabulary.
Today’s research is valuable for people who aren't served by keyboards and other existing interfaces, since even a limited vocabulary can help them communicate more easily. But it falls far short of an ambitious goal Facebook set in 2017: a non-invasive BCI system that lets people type at 100 words per minute, comparable to top speeds on a traditional keyboard. The latest UCSF study involves implanted technology, and it comes nowhere near that number, or even the speeds most people can reach on a phone keypad. Facebook's goal envisioned technology such as an external headset that optically measures brain oxygen levels, a commercial prospect that Facebook Reality Labs (the company's virtual and augmented reality hardware wing) revealed in prototype form.
Since then, Facebook acquired the electromyography (EMG) wristband company CTRL-Labs in 2019, giving it an alternative control option for AR and VR. “We are still in the early stages of unlocking the potential of wrist-based electromyography (EMG), but we believe it will be a core element of AR glasses, and applying what we've learned from BCI will help us get there faster,” says Sean Keller, research director at Facebook Reality Labs. Facebook isn't abandoning head-mounted brain interfaces entirely: it plans to make its software open source and share hardware prototypes with outside researchers as it wraps up its own research.