Overview
- A man with ALS enrolled in the BrainGate2 clinical trial at UC Davis Health regained the ability to speak and sing through a brain-computer interface that translates his neural activity into real-time voice.
- The device consists of 256 silicon microelectrodes implanted in the motor cortex, which record neural activity as the participant attempts to speak and stream the data to AI models trained on recordings of his pre-onset voice.
- Deep-learning algorithms process neural signals every 10 milliseconds to reconstruct words with natural-sounding tone, pitch and emphasis, reducing latency from seconds to near-instantaneous speech.
- The participant demonstrated expressive capabilities including intonation shifts to ask questions, emphasis on specific words, novel interjections, and humming of simple melodies.
- Although listeners correctly identified nearly 60% of synthesized words, researchers say further improvements are needed to enhance clarity, expand vocabulary and adapt the system for broader use.
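The 10-millisecond decoding cadence described above can be illustrated with a minimal sketch. This is not the trial's actual pipeline: the frame size and channel count come from the article, but the function names, the toy averaging "decoder" (standing in for the real deep-learning models), and the simulated data are all invented for illustration.

```python
# Hypothetical sketch of a streaming brain-to-voice decoding loop.
# Neural features arrive in 10 ms frames and each frame is decoded
# as soon as it arrives, so output lags input by roughly one frame
# rather than by a whole buffered utterance.

FRAME_MS = 10      # decoding step reported in the article
N_CHANNELS = 256   # one feature per implanted electrode (assumption)

def decode_frame(features):
    """Toy stand-in for the deep-learning decoder: map one 10 ms frame
    of electrode features to a single acoustic parameter."""
    return sum(features) / len(features)

def stream_decode(frames):
    """Decode frames one at a time, yielding (time_ms, value) pairs."""
    for t, features in enumerate(frames):
        yield t * FRAME_MS, decode_frame(features)

# Simulate 1 second of neural data: 100 frames of 256 channels.
frames = [[0.01 * c for c in range(N_CHANNELS)] for _ in range(100)]
timeline = list(stream_decode(frames))
```

Decoding per-frame like this, instead of waiting for a full sentence, is what moves latency from seconds toward near-real-time output.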