Amazing neural implant translates brain activity into speech almost instantly, focusing on producing sounds instead of choosing words


Brain-computer interfaces must be one of my favourite futuristic technologies. These wonderfully named pieces of kit allow the brain to connect directly to a computer. BCIs work wonders in the medical community, giving patients with even very limited mobility, such as those with locked-in syndrome, tools to communicate with the world again.

The very idea of controlling computers with our minds is seriously next-generation stuff, and I can't wait to see how it makes its way into gaming. Until then, I'll celebrate every win I can find in the field, and that includes this implant spotted by Ars Technica that gives users the ability to speak at a natural pace.

Before his death in 2018, Stephen Hawking became almost as well known for his distinctive computer voice as for his contributions to science. What many don't know is that it took Hawking quite some time to turn thought into speech. His system used a sensor mounted on his glasses that detected movement in his cheek muscle to select characters on a screen. Brilliant as it was, it typically took Hawking about a minute to say a single word.

This new technology from neuroprosthesis researchers at UC Davis bypasses those older methods – no cheek sensors here – by connecting a neural prosthesis directly to the brain. It also doesn't break words down into letters to be selected; instead it translates brain activity directly into sounds. That makes for a much more natural way of speaking, doesn't depend on the user's ability to move, and is of course a lot faster than anything we've previously achieved.

The first tests of this technology involved implanting 256 microelectrodes into the patient's ventral precentral gyrus, a region of the brain that controls the vocal muscles. The signal is then sent to an AI neural decoder. Because the algorithm isn't trained only on text words, it works much faster than systems that scan for letters. It also allows for more nuance, such as changing pitch to indicate a question, and the patient could even use sounds like "hmm" in conversation.
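To make that speed difference concrete, here's a minimal back-of-the-envelope sketch. All the names and the letter-selection timing are illustrative assumptions, not figures from the UC Davis study; only the ~10 ms decode latency comes from the article.

```python
# Illustrative comparison of two BCI speech approaches.
# Assumption: a scanning, letter-selection interface takes several
# seconds per character, while a streaming sound decoder adds only a
# small fixed lag on top of natural speech.

LETTER_SELECT_SECONDS = 6.0   # assumed time to pick one character by scanning
FRAME_LATENCY_SECONDS = 0.01  # ~10 ms decode latency reported for the implant

def letter_based_time(text: str) -> float:
    """Total time if every character must be selected one by one."""
    return len(text) * LETTER_SELECT_SECONDS

def sound_based_time(speech_duration_s: float) -> float:
    """Streaming decoding adds only a tiny fixed lag to natural speech."""
    return speech_duration_s + FRAME_LATENCY_SECONDS

print(letter_based_time("hello"))  # 30.0 seconds to spell five characters
print(sound_based_time(0.5))       # 0.51 s: spoken length plus ~10 ms of lag
```

Even with generous assumptions for the letter-based system, decoding at the level of sounds rather than characters is what lets speech flow at a conversational pace.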

But perhaps most impressive is that it was able to do all of this essentially in real time. The delay measured with this method was about 10 milliseconds – far less than the blink of an eye, so effectively no time at all.

The technology still has serious limitations and remains a work in progress, but the results so far are promising: in testing, the patient went from being almost unintelligible to holding fully scripted conversations that others could understand. As for unscripted speech, listeners still caught about half of what the patient was trying to say, which seems a huge step up from nothing.

Next, the team wants to test its methods with improved hardware. 256 is a fairly small number of electrodes for such a task. Other interfaces, such as the one from a Neuralink co-founder, use 4,096 electrodes – though they are non-invasive, which means they likely sit further from the information, and that can bring problems of its own. We've also seen devices that sit between the hair follicles and claim to get close to the signal while remaining non-invasive, which could be great for a task like this.

Of course, the goal here is to restore speech and agency to those who need it, so every effort to test and develop the technology is very welcome. I hope I'll never need this technology for a medical reason, and that I can instead simply wait for it to finally reach gaming. I can't wait to just think my dialogue choices in a game one day.
