Overview
- The new system produces sound directly rather than selecting words one at a time, aiming to restore natural communication
- Four microelectrode arrays implanted in the ventral precentral gyrus capture neural patterns tied to attempted speech
- An AI-powered decoder transforms those signals into audible speech and singing with roughly 10 ms latency (an illustrative streaming sketch follows this list)
- In initial tests, a participant with ALS reached about 60 percent intelligibility with the system, compared with just 4 percent without the BCI
- Researchers are expanding the BrainGate2 trial to enroll more participants and refine the technology’s accuracy and applicability
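
The study's decoder itself is not published here; the sketch below is only a minimal, hypothetical illustration of the frame-by-frame streaming idea the overview describes: neural features arrive in short windows and are converted to audio immediately, so added latency stays near one frame length (~10 ms). The channel count, frame size, sample rate, and function names are all assumptions, not details from the source.

```python
import numpy as np

FRAME_MS = 10          # hypothetical frame length, chosen to mirror the reported ~10 ms latency
SAMPLE_RATE = 16_000   # assumed audio sample rate
N_CHANNELS = 256       # assumed total electrode channels across the implanted arrays
SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000


def decode_frame(neural_frame: np.ndarray) -> np.ndarray:
    """Stand-in for a learned decoder: map one frame of neural features to one
    frame of audio samples. A real system would use a trained neural network;
    this placeholder just returns silence of the correct shape."""
    assert neural_frame.shape == (N_CHANNELS,)
    return np.zeros(SAMPLES_PER_FRAME, dtype=np.float32)


def streaming_decode(neural_stream):
    """Consume neural feature frames one at a time and yield audio frames
    immediately, so the pipeline adds only about one frame of delay."""
    for neural_frame in neural_stream:
        yield decode_frame(neural_frame)


if __name__ == "__main__":
    # Simulate one second of neural activity as random feature frames.
    fake_stream = (np.random.randn(N_CHANNELS) for _ in range(1000 // FRAME_MS))
    audio = np.concatenate(list(streaming_decode(fake_stream)))
    print(f"Synthesized {audio.size / SAMPLE_RATE:.2f} s of audio")
```

The point of the sketch is the structure, not the model: because each short window of neural data is decoded and emitted before the next arrives, speech is heard essentially as it is attempted, in contrast to word-by-word selection systems that must wait for a full word or sentence.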