The company has been working with several universities on the effort, including the University of California, San Francisco. Facebook helped pay for UCSF researchers to study whether electrodes placed in the brain could help us learn to “decode” speech from brainwaves in real time.
Facebook is also footing the bill for a new, year-long study that UCSF is currently conducting, in which it will try to use brain activity to help a person who can't speak communicate. The social network hopes the efforts could help reveal which brain signals are key for the non-invasive wearable it's planning for in the years ahead.
“We expect that to take upwards of 10 years,” Mark Chevillet, a research director at Facebook Reality Labs who runs its brain-computer interface group, told CNN Business of the overall project. “This is a long-term research program.”
An idea years in the making
At Facebook’s 2017 developer conference, F8, the company painted a fantastical picture of a mysterious, noninvasive device that would pick up on your brain signals and one day enable you to type 100 words per minute.
Such a gadget would be a far cry from the brain-computer interfaces scientists have been working on for decades. They still tend to be stuck in labs because they are pricey, have to be implanted under a user’s skull, and need to be connected to a computer to perform even the simplest tasks.
Now, however, some of Facebook’s efforts are coming to light with news of the ongoing research projects.
Electrodes on their brains
Participants in UCSF’s study are people with epilepsy who had a small patch of electrodes implanted on their brains in hopes of figuring out where their epileptic seizures originated; they volunteered to help with the Facebook-related study while in the hospital.
For the work related to Facebook, researchers had participants listen to questions while tracking their brain activity. Machine-learning algorithms eventually determined how to spot when participants were answering a question and which one of 24 answers they were choosing.
Translating what happens inside your brain into words is hard, and doing it in real time is even harder. The speech-decoding algorithms the researchers used were accurate up to 61% of the time at identifying which of two dozen standard responses a participant spoke, right after the person finished talking.
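As a rough illustration only (the article doesn't describe UCSF's actual pipeline, and the data and model here are entirely synthetic), the core task — mapping a window of recorded neural activity to one of a fixed set of 24 candidate answers — can be sketched with a simple nearest-centroid classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 24 candidate answers, each assumed to produce a
# distinct (here, randomly generated) pattern across 64 recorded channels.
n_answers, n_channels = 24, 64
true_patterns = rng.normal(size=(n_answers, n_channels))

def simulate_trial(answer_idx, noise=0.8):
    """Simulate one noisy recording of a participant giving an answer."""
    return true_patterns[answer_idx] + rng.normal(scale=noise, size=n_channels)

# Simulated training data: 20 noisy trials per answer.
train_X = np.array([simulate_trial(i) for i in range(n_answers) for _ in range(20)])
train_y = np.repeat(np.arange(n_answers), 20)

# "Training": average each answer's trials into a template (centroid).
centroids = np.array([train_X[train_y == k].mean(axis=0) for k in range(n_answers)])

def decode(activity):
    """Pick the answer whose template is closest to the observed activity."""
    dists = np.linalg.norm(centroids - activity, axis=1)
    return int(np.argmin(dists))

# Evaluate on fresh simulated trials.
test_trials = [(k, simulate_trial(k)) for k in range(n_answers) for _ in range(5)]
accuracy = np.mean([decode(x) == k for k, x in test_trials])
print(f"decoding accuracy on synthetic data: {accuracy:.0%}")
```

Real decoders operate on far messier signals and typically use learned models rather than simple templates, but the closed-set nature of the task — choosing among a couple dozen known responses rather than transcribing free speech — is a big part of why accuracies like 61% are achievable at all.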
This may not sound all that impressive, but Edward Chang, a neurosurgeon and professor of neurosurgery at UCSF who coauthored the study released this week, believes it's a "really important result" that could help people who have lost the ability to speak.
Facebook’s Chevillet, who previously worked at Johns Hopkins University as an adjunct professor of neuroscience, said the UCSF work is an “important yet somewhat expected milestone,” as the interpretation of brain signals tends to be done offline rather than in real time. Facebook, he said, has no interest in making any sort of medical device; what it wants is to understand the neural signals needed to create a silent speech interface.
A year-long study
Chang called the work in the paper published this week a "proof of concept." Another Facebook-sponsored brain-computer-interface project he recently began in his lab is much lengthier: Chang will spend a year working with a single patient (a man who can no longer speak), tracking his brain activity with the same kind of electrode array used on the epilepsy patients, in hopes of restoring some of his communication abilities.
“We’ve got a tall order ahead of us to figure out how to make that work,” Chang said.
If it does, it could one day help a range of people, from those who have lost the ability to speak due to various brain-related injuries, to people who simply want to control a computer or send a message with their mind.
Meanwhile, at Facebook…
Chevillet said his group at Facebook is continuing its work on finding noninvasive ways to figure out what's happening within the brain, too. It's investigating how light may be able to indirectly track brain activity, specifically by using near-infrared light to measure oxygen saturation levels in the brain.
Though any sort of think-to-type device you might be able to buy is still far in the future, Chevillet can already imagine how he thinks it should look: a pair of glasses that uses augmented reality and includes a brain-based method for doing everything from sending a text message to adjusting the volume of a song to simply performing the equivalent of a mouse click.
“The use cases we envision are certainly for everybody,” he said.
Correction: The original version of this story misstated some details about the UCSF study.