Overview
- A pilot study recorded motor-cortex activity via implanted microelectrodes from four participants (three with ALS, one with brainstem stroke) while they imagined speaking.
- An AI phoneme recognizer paired with language models reconstructed imagined sentences from a 125,000-word vocabulary, reaching up to 74% accuracy in live decoding trials (a minimal sketch of this kind of two-stage pipeline follows the list).
- A mental-password mechanism reliably detected a preset internal cue (the imagined phrase “chitty chitty bang bang”) with about 98% accuracy, keeping the system from translating unintended inner speech (a sketch of such a gate also follows the list).
- Participants reported that inner-speech decoding was faster and less effortful than attempted-speech systems, potentially supporting rates approaching the 120–150 words per minute of natural conversation.
- Researchers caution that the small sample size, the weaker neural signals of inner speech relative to attempted speech, and the absence of spontaneous free-form decoding underscore the need for larger trials and technical improvements.
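
To make the two-stage decoding concrete, here is a minimal sketch under stated assumptions: a stand-in linear classifier turns binned neural features into per-timestep phoneme probabilities, a greedy CTC-style collapse produces a phoneme hypothesis, and a toy pronunciation lexicon with a unigram language-model prior rescores it into a word. The phoneme inventory, lexicon, weights, and scores here are all hypothetical and not from the study, whose production decoder and 125,000-word vocabulary are far more sophisticated.

```python
import numpy as np

# Toy phoneme inventory and pronunciation lexicon; the real system used
# English phonemes and a 125,000-word vocabulary. "_" is a CTC-style blank.
PHONEMES = ["_", "h", "e", "l", "o", "w", "r", "d"]
LEXICON = {
    "hello": ["h", "e", "l", "o"],
    "world": ["w", "o", "r", "l", "d"],
    "held":  ["h", "e", "l", "d"],
}
UNIGRAM_LOGPROB = {"hello": -1.0, "world": -1.2, "held": -3.0}  # toy LM prior

def phoneme_posteriors(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stage 1: map binned neural features (T x F) to per-step phoneme
    probabilities (T x P) with a stand-in linear classifier plus softmax."""
    logits = features @ weights
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

def collapse(probs: np.ndarray) -> list[str]:
    """Greedy CTC-style decode: argmax each step, merge repeats, drop blanks."""
    out, prev = [], None
    for idx in probs.argmax(axis=1):
        ph = PHONEMES[idx]
        if ph != prev and ph != "_":
            out.append(ph)
        prev = ph
    return out

def edit_distance(a: list[str], b: list[str]) -> int:
    """Levenshtein distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def decode_word(phones: list[str]) -> str:
    """Stage 2: rescore the phoneme hypothesis against the lexicon, trading
    off match quality (edit distance) against the language-model prior."""
    return max(LEXICON, key=lambda w: -edit_distance(phones, LEXICON[w])
                                      + UNIGRAM_LOGPROB[w])

# Stand-in data: random features and weights in place of recorded spiking
# activity and a trained classifier.
rng = np.random.default_rng(0)
features = rng.normal(size=(20, 16))            # T=20 time bins, F=16 features
weights = rng.normal(size=(16, len(PHONEMES)))  # F x P classifier weights
print(decode_word(collapse(phoneme_posteriors(features, weights))))
```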
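
The password gate can likewise be sketched as a simple similarity test over the same kind of phoneme stream: nothing is translated until the decoded phonemes match the preset cue closely enough. The phonemization, the threshold, and the gating logic below are illustrative assumptions; the paper reports only that the cue was detected with about 98% accuracy.

```python
# Hypothetical phonemization of the cue; the study's password was the
# imagined phrase "chitty chitty bang bang".
PASSWORD_PHONES = ["ch", "ih", "t", "iy", "b", "ae", "ng"]
UNLOCK_THRESHOLD = 0.8  # assumed similarity cutoff, not a value from the paper

def levenshtein(a: list[str], b: list[str]) -> int:
    """Edit distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def password_detected(phones: list[str]) -> bool:
    """Report a match only when the decoded phoneme stream is similar
    enough to the password sequence."""
    dist = levenshtein(phones, PASSWORD_PHONES)
    return 1.0 - dist / max(len(phones), len(PASSWORD_PHONES), 1) >= UNLOCK_THRESHOLD

class GatedDecoder:
    """Stays locked (translating nothing) until the cue is detected; after
    that, phoneme streams pass through to sentence decoding."""
    def __init__(self) -> None:
        self.unlocked = False

    def step(self, phones: list[str]) -> list[str] | None:
        if not self.unlocked:
            self.unlocked = password_detected(phones)
            return None          # private inner speech is never translated
        return phones            # handed off to the sentence decoder

gate = GatedDecoder()
print(gate.step(["h", "e", "l", "o"]))                      # None: still locked
print(gate.step(["ch", "ih", "t", "iy", "b", "ae", "ng"]))  # None: cue unlocks
print(gate.step(["h", "e", "l", "o"]))                      # now passes through
```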