Stanford Decodes Inner Speech in Real Time with Brain–Computer Interface

This early proof of concept shows that motor-cortex microelectrode recordings can power real-time inner-speech decoding at up to 74% accuracy, with a mental-password safeguard against unintended decoding.

Image: © Emory BrainGate Team

Overview

  • A pilot study with four participants (three with ALS, one with a brainstem stroke) recorded motor-cortex activity via implanted microelectrodes while they imagined speaking.
  • AI phoneme recognition and language models reconstructed imagined sentences from a 125,000-word vocabulary with up to 74% accuracy in live decoding trials (a simplified sketch of this two-stage pipeline follows this list).
  • A mental-password mechanism detected a preset internal cue (e.g., “chitty chitty bang bang”) with about 98% accuracy, blocking translation of unintended thoughts.
  • Participants reported that inner-speech decoding was faster and less effortful than attempted-speech systems, pointing toward conversational rates near 120–150 words per minute.
  • Researchers caution that the small sample, weaker neural signals for inner speech, and the absence of spontaneous free-form decoding underscore the need for larger trials and technical improvements.
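
The second and third bullets describe a two-stage decoder (phoneme recognition followed by language-model rescoring) gated by a mental password. The Python sketch below is a deliberately tiny illustration of that architecture, not the study's implementation: the phoneme inventory, three-word lexicon, unigram language model, alignment scorer, password cue, and detection threshold are all hypothetical stand-ins for models trained on neural data.

```python
import math

import numpy as np

# Hypothetical phoneme inventory and a three-word toy lexicon. The real
# system uses a full English phoneme set and a 125,000-word vocabulary.
PHONEMES = ["h", "eh", "l", "ow", "w", "er", "d", "ch", "ih", "t", "iy"]
PH_INDEX = {p: i for i, p in enumerate(PHONEMES)}
LEXICON = {
    "hello": ["h", "eh", "l", "ow"],
    "world": ["w", "er", "l", "d"],
    "word": ["w", "er", "d"],
}
# Toy unigram language-model prior over the lexicon (log probabilities).
LM_LOGPROB = {"hello": math.log(0.5), "world": math.log(0.3), "word": math.log(0.2)}
# Stand-in for the study's longer passphrase ("chitty chitty bang bang").
PASSWORD = ["ch", "ih", "t", "iy"]


def align_score(frame_logprobs, phoneme_seq):
    """Best monotonic alignment of a phoneme sequence to the frames,
    each phoneme covering at least one consecutive frame (a bare-bones
    stand-in for forced-alignment / CTC-style scoring)."""
    T, P = frame_logprobs.shape[0], len(phoneme_seq)
    if P > T:
        return -math.inf
    idx = [PH_INDEX[p] for p in phoneme_seq]
    dp = np.full((P, T), -math.inf)
    dp[0, 0] = frame_logprobs[0, idx[0]]
    for t in range(1, T):
        dp[0, t] = dp[0, t - 1] + frame_logprobs[t, idx[0]]  # phoneme 0 continues
        for p in range(1, P):
            # Either the same phoneme continues or the previous one ends here.
            dp[p, t] = max(dp[p, t - 1], dp[p - 1, t - 1]) + frame_logprobs[t, idx[p]]
    return dp[P - 1, T - 1]


def decode_word(frame_logprobs):
    """Stage 2: combine the alignment score with the language-model prior."""
    return max(
        LEXICON,
        key=lambda w: align_score(frame_logprobs, LEXICON[w]) + LM_LOGPROB[w],
    )


def password_detected(frame_logprobs, threshold=-8.0):
    """Gate: release decoded text only if the preset cue matches well enough.
    The threshold is arbitrary here; the study reports ~98% detection."""
    return align_score(frame_logprobs, PASSWORD) > threshold


def synth_frames(phoneme_seq, frames_per_phone=2, p_correct=0.85):
    """Fabricate per-frame phoneme log-probabilities for the demo; a real
    system would get these from a neural decoder on microelectrode data."""
    p_other = (1.0 - p_correct) / (len(PHONEMES) - 1)
    rows = []
    for ph in phoneme_seq:
        row = np.full(len(PHONEMES), p_other)
        row[PH_INDEX[ph]] = p_correct
        rows.extend([row] * frames_per_phone)
    return np.log(np.array(rows))


if __name__ == "__main__":
    if password_detected(synth_frames(PASSWORD)):
        print("decoded:", decode_word(synth_frames(LEXICON["hello"])))
    else:
        print("password not detected; decoder stays locked")
```

The gate-then-decode ordering mirrors the privacy design the study describes: nothing is translated until the user deliberately thinks the preset cue.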