Speech Recognition and Synthetic Voices: Brain–Computer Interfaces that Decode Brain Signals into Text and Spoken Words
Two teams of researchers describe brain–computer interfaces (BCIs) that translate neural signals into text, or into words spoken by a synthetic voice. The BCIs decode speech at 62 words per minute and 78 words per minute, respectively. That is still slower than natural conversation, which happens at around 160 words per minute, but both technologies are faster than any previous attempt.
Christian Herff, a neuroscientist at Maastricht University, believes these technologies could become products in the very near future.
“For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships,” said Bennett, a participant in one of the studies, in a statement to reporters.
In a separate study, Edward Chang, a neurosurgeon at the University of California, San Francisco, and his colleagues worked with a 47-year-old woman named Ann, who lost her ability to speak after a brainstem stroke 18 years ago.
Unlike the implants used by Willett’s team, the electrocorticography (ECoG) electrodes in Chang’s study do not penetrate the brain, which makes it encouraging to see that such a low word-error rate can be achieved with ECoG.
Chang and his team also created customized algorithms to convert Ann’s brain signals into a synthetic voice and an animated avatar that mimics facial expressions. They personalized the voice to sound like Ann’s before her injury, by training it on recordings from her wedding video.
“The simple fact of hearing a voice similar to your own is emotional,” Ann told the researchers in a feedback session after the study. “Having the ability to talk for myself was huge!”
The Future of Brain–Computer Interfaces: Improvements Needed Before Clinical Use
Many improvements are needed before the BCIs can be made available for clinical use. Ann said her ideal scenario would be for the connection to be cordless. A BCI suitable for everyday use would have to be a fully implantable system with no visible connectors or cables, adds Yvert. The teams hope their decoding methods will continue to improve the speed and accuracy of the devices.
And the participants of both studies can still engage their facial muscles when thinking about speaking, and their speech-related brain regions are intact, says Herff. “This will not be the case for every patient.”
Willett says that the proof of concept gives industry the motivation to translate the technology into a product that people can actually use.
The devices must also be tested on more people to prove their reliability. Judy Illes, a neuroethics researcher at the University of British Columbia in Canada, cautions that no matter how sophisticated the data are, they need to be interpreted in a measured way. “We have to be careful with over-promising wide generalizability to large populations,” she adds. “I’m not sure we’re there yet.”
Two studies show how brain–computer interfaces could help people to communicate, and how hot it can get in the tropics.
How much tropical rainforest can survive a warming climate? And how two women who lost their speech to ALS and a brainstem stroke are communicating with brain–computer interfaces.
As the climate warms, tropical forests around the world are facing increasing temperatures, and it is unknown how much warming the trees can withstand before their leaves die. A team has combined multiple data sources to try to answer this question, and suggests that a warming of 3.9 °C would push many leaves past a tipping point at which photosynthesis breaks down. This scenario would probably cause serious harm to these ecosystems, which store vast amounts of carbon and are home to significant biodiversity.
Two women lost their ability to speak due to paralysis. For one, the cause was amyotrophic lateral sclerosis, or ALS, a disease that affects the motor neurons. The other had suffered a stroke in her brain stem. Though they can’t enunciate clearly, they remember how to formulate words.
Although both devices remain slower than the roughly 160-word-per-minute rate of natural conversation, scientists say they are an exciting step towards restoring real-time speech with a brain–computer interface. A neurologist who was not involved in the new studies says the technology is getting close to being usable in everyday life.
The BCI developed in one of the studies uses a tiny square sensor that looks like a miniature hairbrush with 64 needle-like bristles. Each bristle is tipped with an electrode, and together they record the activity of individual neurons. The researchers then trained an artificial neural network to decode this brain activity and translate it into words.
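The decoding pipeline described above can be sketched in miniature. This is not the study's actual model (the researchers trained a full artificial neural network on real recordings); it is a toy illustration, using a simple nearest-centroid classifier and synthetic data, of the core idea: a 64-channel vector of neural firing rates is mapped to the word it most resembles. The word list, the simulated firing patterns, and the classifier choice are all illustrative assumptions.

```python
# Toy sketch (not the authors' code): decoding word labels from
# 64-channel firing-rate vectors. The channel count (64) comes from
# the article; the data and classifier here are invented for
# illustration.
import random

N_CHANNELS = 64
WORDS = ["hello", "water", "yes"]

def synth_trial(word_idx, rng):
    """Simulate one trial: each word gets a distinct mean firing
    pattern across the 64 electrodes, plus Gaussian noise."""
    return [10.0 * ((ch + word_idx) % 3) + rng.gauss(0, 1.0)
            for ch in range(N_CHANNELS)]

def fit_centroids(trials, labels):
    """Average the feature vectors recorded for each word label."""
    centroids = {}
    for w in set(labels):
        rows = [t for t, l in zip(trials, labels) if l == w]
        centroids[w] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def decode(features, centroids):
    """Pick the word whose average pattern is closest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda w: dist(features, centroids[w]))

# "Training": 20 simulated trials per word.
rng = random.Random(0)
trials, labels = [], []
for idx, word in enumerate(WORDS):
    for _ in range(20):
        trials.append(synth_trial(idx, rng))
        labels.append(word)

centroids = fit_centroids(trials, labels)
print(decode(synth_trial(1, rng), centroids))  # a fresh "water" trial
```

In the real systems, the decoder is a recurrent neural network producing phoneme or word probabilities in real time, often combined with a language model; the structure (electrode features in, vocabulary items out) is the same.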