
Brain implant and AI offer paralysed patients hope of speaking via computer

In a breakthrough, US researchers have developed a brain implant system that, powered by artificial intelligence (AI), helps paralysed patients with speech difficulties communicate via a computer screen.

NewsGram Desk


The speech-aiding brain implant, developed at Stanford University, consists of baby-aspirin-sized sensors -- square arrays of tiny silicon electrodes -- implanted in patients’ cerebral cortex, the brain’s outermost layer.

Each array contains 64 electrodes, arranged in an 8-by-8 grid and spaced about half the thickness of a credit card apart. The electrodes penetrate the cerebral cortex to a depth roughly equal to that of two stacked quarters.
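
(For readers who want those everyday comparisons as numbers, the short sketch below converts them using standard reference thicknesses -- a credit card is about 0.76 mm thick and a US quarter about 1.75 mm. The article gives only the comparisons, so the results are rough estimates rather than the arrays’ published specifications.)

```python
# Back-of-the-envelope conversion of the article's comparisons into
# millimetres. Reference values: a standard credit card is ~0.76 mm
# thick; a US quarter is ~1.75 mm thick. Estimates only.

CREDIT_CARD_MM = 0.76
QUARTER_MM = 1.75

grid_side = 8                              # electrodes per side (8-by-8 array)
electrodes = grid_side * grid_side         # 64 electrodes per array
spacing_mm = CREDIT_CARD_MM / 2            # ~half a credit card's thickness
depth_mm = 2 * QUARTER_MM                  # ~two stacked quarters

print(f"electrodes per array: {electrodes}")
print(f"electrode spacing:    ~{spacing_mm:.2f} mm")
print(f"penetration depth:    ~{depth_mm:.2f} mm")
print(f"array footprint:      ~{(grid_side - 1) * spacing_mm:.1f} mm per side")
```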

The implanted arrays are attached to fine gold wires that exit through pedestals screwed to the skull; the pedestals are then connected by cable to a computer.

The devices transmit signals from a couple of speech-related regions in patients' brains to state-of-the-art software that decodes their brain activity and converts it to text displayed on a computer screen.

Pat Bennett, now 68, is a former human resources director and onetime equestrian who jogged daily. In 2012, she was diagnosed with amyotrophic lateral sclerosis (ALS), a progressive neurodegenerative disease that attacks neurons controlling movement, causing physical weakness and eventual paralysis.

“When you think of ALS, you think of arm and leg impact,” Bennett wrote in an interview conducted by email. “But in a group of ALS patients, it begins with speech difficulties. I am unable to speak.”

Usually, ALS first manifests at the body’s periphery -- arms and legs, hands and fingers. For Bennett, the deterioration began not in her spinal cord, as is typical, but in her brain stem. She can still move around, dress herself and use her fingers to type, albeit with increasing difficulty. 

But she can no longer use the muscles of her lips, tongue, larynx and jaws to clearly enunciate the phonemes -- or units of sound, such as sh -- that are the building blocks of speech.

Although Bennett’s brain can still formulate directions for generating those phonemes, her muscles can’t carry out the commands. 

She volunteered to participate in the clinical trial of the speech-aiding brain implant, the results of which were published in the journal Nature.

On March 29, 2022, Jaimie Henderson, a neurosurgeon at Stanford Medicine, placed two tiny sensors in each of two separate regions -- both implicated in speech production -- along the surface of Bennett’s brain.

The sensors are components of an intracortical brain-computer interface, or iBCI. Combined with state-of-the-art decoding software, they’re designed to translate the brain activity accompanying attempts at speech into words on a screen.

About a month after the surgery, a team of scientists began twice-weekly research sessions to train the software interpreting her speech. After four months, Bennett’s attempted utterances were being converted into words on a computer screen at 62 words per minute -- more than three times as fast as the previous record for BCI-assisted communication.

“These initial results have proven the concept, and eventually technology will catch up to make it easily accessible to people who cannot speak,” Bennett wrote. “For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships.”

Bennett’s pace begins to approach the roughly 160-word-per-minute rate of natural conversation among English speakers, said Henderson, who performed the surgery.

“We’ve shown you can decode intended speech by recording activity from a very small area on the brain’s surface,” said Henderson, a professor of neurosurgery.


An AI algorithm receives and decodes electronic information emanating from Bennett’s brain, eventually teaching itself to distinguish the brain activity associated with her attempts to formulate each of the 39 phonemes that compose spoken English.
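
The article does not detail how the algorithm works internally. Purely as an illustration of the kind of mapping involved -- multichannel activity patterns in, phoneme labels out -- the sketch below trains a toy softmax classifier on synthetic data. The simulated signals, the channel count and the model itself are assumptions for the example, not the study’s actual method.

```python
# Toy stand-in for the decoding step: learn to tell phoneme classes
# apart from multichannel activity patterns. All data here is
# synthetic; the channel count and model are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_phonemes, n_channels, n_per_class = 39, 128, 50

# Give each phoneme class its own mean activity pattern across channels,
# then draw noisy samples around it.
prototypes = rng.normal(size=(n_phonemes, n_channels))
X = np.concatenate([p + 0.5 * rng.normal(size=(n_per_class, n_channels))
                    for p in prototypes])
y = np.repeat(np.arange(n_phonemes), n_per_class)

# Multinomial logistic regression trained with plain gradient descent.
W = np.zeros((n_channels, n_phonemes))
onehot = np.eye(n_phonemes)[y]
for _ in range(200):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    W -= 0.01 * X.T @ (probs - onehot) / len(X)    # gradient step

accuracy = (np.argmax(X @ W, axis=1) == y).mean()
print(f"toy training accuracy: {accuracy:.0%}")
```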

It feeds its best guess concerning the sequence of Bennett’s attempted phonemes into a so-called language model, essentially a sophisticated autocorrect system, which converts the streams of phonemes into the sequence of words they represent.
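
To make that last step concrete, here is a toy version of the phonemes-to-words conversion: a greedy longest match against a small pronunciation lexicon. A real language model weighs context and word probabilities far more subtly, and the lexicon and phoneme spellings below are chosen just for this example.

```python
# Toy "autocorrect" stage: turn a stream of phoneme symbols into words
# by greedy longest-match against a small pronunciation lexicon. The
# lexicon and phoneme spellings are invented for this example.

LEXICON = {
    ("SH", "IY"): "she",
    ("K", "AE", "N"): "can",
    ("S", "P", "IY", "K"): "speak",
    ("N", "AW"): "now",
}
MAX_LEN = max(map(len, LEXICON))

def phonemes_to_words(stream):
    """Greedily match the longest known phoneme sequence at each position."""
    words, i = [], 0
    while i < len(stream):
        for n in range(min(MAX_LEN, len(stream) - i), 0, -1):
            candidate = tuple(stream[i:i + n])
            if candidate in LEXICON:
                words.append(LEXICON[candidate])
                i += n
                break
        else:                    # no word matched: flag and move on
            words.append("?")
            i += 1
    return " ".join(words)

stream = ["SH", "IY", "K", "AE", "N", "S", "P", "IY", "K", "N", "AW"]
print(phonemes_to_words(stream))   # -> she can speak now
```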

“This is a scientific proof of concept, not an actual device people can use in everyday life,” said Frank Willett, a staff scientist at the Howard Hughes Medical Institute. “But it’s a big advance toward restoring rapid communication to people with paralysis who can’t speak.”

(IANS/SR)
