Blog | IBSA Foundation

How computers can read our minds and fight loss of speech

Written by Paolo Rossi Castelli | 06 Aug 2024

A cutting-edge experiment. Neurosurgeons at Ichilov Hospital in Tel Aviv have created a brain-computer interface featuring a number of implanted electrodes, which enabled a patient to produce the sounds of two vowels through thought alone, without speaking. It offers new hope for those affected by loss of speech.

A team of researchers in Israel have managed to literally read the mind of a patient using electrodes implanted in his brain, converting a tiny portion of his neural activity (corresponding to two vowels) into sounds using a computer.

Neurosurgeons at Sourasky Medical Center (Ichilov Hospital) in Tel Aviv created a special brain-computer interface for a 37-year-old man with epilepsy who was being assessed for surgery, producing results that could have huge implications in the future. The patient had a type of epilepsy that does not respond well to conventional treatment. Unlike most epileptic seizures, his were caused by activity deep inside the brain rather than near the surface.

When medicine does not work and the seizures continue, surgery is usually performed to stop the activity of the cells that are causing them. However, in this case an operation would have proved extremely complex due to difficulties locating the part of the brain responsible for the seizures. Therefore, the decision was made to implant electrodes in the man's brain so that, during a seizure, the exact position of the nerve cells behind the abnormal activity could be found. This would allow precision surgery to be performed without damaging healthy cells (or at least minimising the damage). The presence of electrodes deep inside the brain gave the researchers an idea.

Trials with artificial intelligence to give a voice back to people following loss of speech 

As reported in the scientific journal Neurosurgery, the neurosurgeons asked the man (who took part voluntarily in the trials) to produce the vowel sounds "a" and "e". An artificial intelligence system picked up the two sounds and mapped each of them to the electrical activity recorded by the electrodes at the moment the sound was made: the "a" and the "e" each corresponded to a specific pattern of activity in the man's brain. The second stage then began. The man was asked merely to think of the same vowel sounds, without saying them. Having "learned" which patterns of brain activity corresponded to which sounds, the computer correctly produced the sounds, even though the patient remained silent.
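The two-stage idea described above — first learn which brain-activity pattern goes with each spoken vowel, then recognise the same pattern when the vowel is only imagined — can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration: the numbers stand in for electrode readings, and a toy nearest-centroid classifier stands in for the far more sophisticated AI system the researchers actually used.

```python
# Illustrative sketch only: a toy decoder standing in for the AI system
# described in the article. The feature vectors below are made-up numbers,
# not real electrode data.

def train_decoder(recordings):
    """Stage 1: average the activity patterns recorded for each spoken vowel."""
    centroids = {}
    for vowel, samples in recordings.items():
        dim = len(samples[0])
        centroids[vowel] = [sum(s[i] for s in samples) / len(samples)
                            for i in range(dim)]
    return centroids

def decode(centroids, signal):
    """Stage 2: map a new (imagined) activity pattern to the closest vowel."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda vowel: dist(centroids[vowel], signal))

# Stage 1: the patient speaks "a" and "e" while the electrodes record
# activity deep inside the brain (values invented for this sketch).
recordings = {
    "a": [[0.9, 0.1, 0.2], [1.0, 0.0, 0.3]],
    "e": [[0.1, 0.8, 0.7], [0.2, 0.9, 0.6]],
}
centroids = train_decoder(recordings)

# Stage 2: the patient only thinks the vowel; the decoder outputs the sound.
print(decode(centroids, [0.95, 0.05, 0.25]))  # pattern closest to "a"
```

The key point the sketch captures is that no sound is needed at decoding time: once the mapping from brain activity to vowels has been learned, a silent, imagined vowel produces a recognisable pattern that the computer can turn back into speech.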

A huge help (with inherent ethical issues to resolve)

If it’s confirmed that the programme genuinely has these capabilities, a system of the same kind could be developed for all those who’ve lost the ability to speak (for example, due to cancer, paresis, a stroke, or trauma), or for patients with neurodegenerative disorders that can cause them to lose the ability to speak, such as motor neurone disease.  

In such cases, the brain’s activity could be recorded before patients lose the ability to speak. Their personal maps could then be used to create interfaces that allow them to “speak” even when their bodies are no longer capable of doing so.  
Using this system, the plan is to eventually “decode” entire sentences.  
However, this line of research opens up a significant ethical issue, because the technique could be used to “read people’s minds”. 

For the time being, the important thing is enabling people who cannot speak to utter simple words such as "yes" and "no".