Particle.news

UC Davis Unveils Neural Implant That Synthesizes Speech and Song in Real Time

The trial device uses four microelectrode arrays paired with an AI decoder to convert neural activity into personalized speech in roughly 10 milliseconds

Overview

  • The system synthesizes sound directly rather than selecting words one at a time, aiming to restore natural communication
  • Four microelectrode arrays implanted in the ventral precentral gyrus capture neural patterns tied to attempted speech
  • An AI-powered decoder transforms those signals into audible speech and singing with roughly 10 ms latency
  • Initial tests with an ALS patient achieved about 60 percent intelligibility compared to just 4 percent without the BCI
  • Researchers are expanding the BrainGate2 trial to enroll more participants and refine the technology’s accuracy and applicability
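The bullets above describe a streaming pipeline: neural features from the four arrays feed an AI decoder that emits audio with roughly 10 ms of delay. The trial's actual decoder architecture has not been published, so the sketch below is purely illustrative. It uses a stand-in linear mapping in place of the real neural network, and the channel count, sample rate, and function names are assumptions; what it shows is the frame-by-frame shape of low-latency decoding, where each 10 ms frame of input immediately yields 10 ms of audio.

```python
# Hypothetical frame-based streaming decoder loop. All specifics here
# (channel count, sample rate, decoder) are assumptions for illustration,
# not the BrainGate2 trial's implementation.

FRAME_MS = 10          # decoding frame length, matching the reported ~10 ms latency
SAMPLE_RATE = 16_000   # assumed audio sample rate
SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000  # 160 samples per 10 ms frame
N_CHANNELS = 256       # assumed: four arrays x 64 electrodes

def decode_frame(features):
    """Map one 10 ms frame of neural features to an audio chunk.

    Stand-in for the AI decoder: a fixed average-based projection instead
    of the trial's neural network, just to show the streaming structure.
    """
    assert len(features) == N_CHANNELS
    gain = sum(features) / N_CHANNELS
    # Emit a constant-amplitude chunk; a real decoder would predict a
    # waveform or vocoder parameters here.
    return [gain] * SAMPLES_PER_FRAME

def stream_decode(feature_frames):
    """Decode frames one at a time as they arrive, never buffering ahead,
    so output lags input by roughly one frame (~10 ms)."""
    audio = []
    for frame in feature_frames:
        audio.extend(decode_frame(frame))  # ~10 ms of audio per frame
    return audio

# One second of simulated neural data: 100 frames of 256-channel features.
frames = [[0.5] * N_CHANNELS for _ in range(100)]
audio = stream_decode(frames)
print(len(audio))  # 16000 samples = 1 s of audio at 16 kHz
```

The key design point, consistent with the article's framing, is causality: because each frame is decoded as soon as it arrives rather than after a whole word or sentence is recognized, latency stays near one frame length regardless of utterance length.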