Stanford Researchers Develop Incredible Brain-Computer Interface System Converting Speech-Related Neural Activity Into Text At 62 Words Per Minute


It’s incredible what Artificial Intelligence and Machine Learning can do. Stanford University researchers have developed a Brain-Computer Interface that allows people who have lost the ability to speak to communicate at an astonishing 62 words per minute! This breakthrough technology has the potential to revolutionize how people affected by paralysis, stroke, and other conditions communicate. In this blog post, we’ll explore how this Brain-Computer Interface works and how it can help people who have lost their speech communicate effectively.

We’ll start by looking at what a Brain-Computer Interface (BCI) is and how it works. A BCI is a device that lets users interact with a computer through brain activity alone. It provides a direct pathway between the electrical activity of the brain and an external device, most often a computer or a robotic limb. In practice, a BCI measures the activity of the central nervous system (CNS) and converts it into an artificial output, such as text or a movement command.
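To make that record-extract-decode idea concrete, here is a toy Python sketch of the generic pipeline. Everything in it — the channel counts, the threshold-crossing feature, the three placeholder commands — is a hypothetical illustration of the concept, not the Stanford system’s actual interfaces.

```python
# Toy sketch of a generic BCI pipeline: record neural activity,
# extract features, decode them into an output. All names and numbers
# are hypothetical placeholders for illustration only.
import numpy as np

def record_neural_activity(n_channels: int = 128, n_timesteps: int = 100) -> np.ndarray:
    """Stand-in for an acquisition device: returns raw voltage-like samples."""
    return np.random.randn(n_timesteps, n_channels)

def extract_features(raw: np.ndarray, bin_size: int = 20) -> np.ndarray:
    """Bin the raw signal into coarse features (a crude threshold-crossing count)."""
    n_bins = raw.shape[0] // bin_size
    binned = raw[: n_bins * bin_size].reshape(n_bins, bin_size, -1)
    return (binned > 1.0).sum(axis=1)  # spike-count proxy per bin and channel

def decode(features: np.ndarray, commands=("left", "right", "select")) -> str:
    """Toy decoder: map pooled features to one of a few output commands."""
    score = features.mean(axis=0)
    return commands[int(score.argmax()) % len(commands)]

raw = record_neural_activity()
features = extract_features(raw)
print("Decoded command:", decode(features))
```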

The Stanford researchers used a recurrent neural network (RNN) to decode speech from signals recorded in the patient’s brain. Compared to previous BCI approaches to speech decoding, this method lets a person communicate at 62 words per minute, about 3.4 times faster than earlier systems.

The team recorded the patient’s attempted speech using intracortical microelectrode arrays implanted in her brain. These arrays capture signals at single-neuron resolution. The recorded signals were then fed to a GRU (gated recurrent unit) model to decode the intended speech.
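As a rough illustration of what such a decoder can look like, here is a minimal PyTorch sketch of a GRU that maps binned neural features to per-timestep phoneme probabilities. The layer sizes, feature dimension, bin length, and phoneme inventory are placeholder assumptions, not the architecture used in the study.

```python
# Minimal GRU decoder sketch: binned neural features in,
# per-timestep phoneme probabilities out. Dimensions are hypothetical.
import torch
import torch.nn as nn

class NeuralSpeechDecoder(nn.Module):
    def __init__(self, n_features: int = 256, hidden_size: int = 512,
                 n_phonemes: int = 40, n_layers: int = 2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden_size, num_layers=n_layers,
                          batch_first=True)
        self.readout = nn.Linear(hidden_size, n_phonemes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) binned neural activity
        hidden_states, _ = self.gru(x)
        return self.readout(hidden_states)  # (batch, time, n_phonemes) logits

# Example: a 5-second trial binned at 20 ms gives 250 time bins.
model = NeuralSpeechDecoder()
neural_bins = torch.randn(1, 250, 256)       # placeholder recording
phoneme_probs = model(neural_bins).softmax(dim=-1)
print(phoneme_probs.shape)                    # torch.Size([1, 250, 40])
```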

When the RNN model was trained on a limited vocabulary of 50 words, the BCI system achieved a word error rate of 9.1 percent. When the vocabulary was expanded to 125,000 words, the error rate rose to 23.8 percent, and adding a language model to the decoder improved it to 17.4 percent. In total, the team collected 10,850 sentences for training by showing the patient a few hundred sentences each day for her to attempt to speak.
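The percentages above are word error rates: the number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the reference, divided by the reference length. Here is a minimal, self-contained implementation with a made-up example sentence:

```python
# Word error rate (WER) via Levenshtein distance over words.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("i would like some water please",
                      "i would like sun water"))   # 2 errors / 6 words ≈ 0.33
```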

This system is a major breakthrough in BCI research and could greatly help people affected by paralysis, stroke, and other conditions. With performance 3.4 times better than existing approaches, this system can work wonders.

Check out the paper to learn more about this incredible research. All credit for this research goes to the researchers on this project. Also, don’t forget to join our 14k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.
