Particle.news


Stanford BCI Decodes Inner Speech, Tests First Mental-Privacy Safeguards

A Stanford Cell study introduces privacy controls in response to overlapping signals.

[Image: a tiny black square chip with a toothed surface, held up by a gold-colored rod; a human eye, out of focus and much larger than the chip, is in the background]
[Image: a small black square implant with an array of very small spikes, connected to a copper wire, held against a person's finger]

Overview

  • Stanford researchers demonstrated proof‑of‑concept decoding of silent speech from motor‑cortex activity in four volunteers with severe paralysis, publishing the results in Cell.
  • Signals for inner and attempted speech overlapped, and a previously trained attempted‑speech decoder was unintentionally activated when participants only imagined sentences.
  • In cued tests, performance reached up to 86% accuracy on a 50‑word vocabulary and about 74% when expanded to a 125,000‑word lexicon.
  • Unstructured inner speech proved difficult to decode, performing only just above chance on a sequence‑memory task and producing unusable output for open‑ended prompts.
  • The team introduced two safeguards—training decoders to ignore labeled inner speech and a user‑imagined activation phrase detected about 98% of the time—while noting the need for larger trials and improved hardware.
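The second safeguard works like a neural "wake word": decoding stays disabled until a classifier detects the user‑imagined activation phrase. The study does not publish its implementation, so the sketch below is purely illustrative; the class name, the dictionary‑based feature stand‑ins, and the 0.95 threshold are all assumptions, and the two inner methods are toy placeholders for the trained classifiers described in the paper.

```python
# Hypothetical sketch of an activation-phrase gate for a speech decoder.
# Assumptions (not from the study): feature frames arrive as dicts, an
# "activation_score" field stands in for a trained phrase classifier, and
# a "word" field stands in for the attempted-speech decoder's output.
from dataclasses import dataclass, field


@dataclass
class GatedDecoder:
    unlock_threshold: float = 0.95      # assumed confidence needed to unlock
    unlocked: bool = False
    transcript: list = field(default_factory=list)

    def detect_activation(self, features: dict) -> float:
        # Placeholder for a trained activation-phrase classifier.
        return features.get("activation_score", 0.0)

    def decode_word(self, features: dict) -> str:
        # Placeholder for the attempted-speech decoder.
        return features.get("word", "")

    def step(self, features: dict):
        """Process one frame; inner speech is ignored while locked."""
        if not self.unlocked:
            if self.detect_activation(features) >= self.unlock_threshold:
                self.unlocked = True    # the unlock frame itself decodes nothing
            return None
        word = self.decode_word(features)
        self.transcript.append(word)
        return word


decoder = GatedDecoder()
decoder.step({"word": "hello", "activation_score": 0.10})  # locked: ignored
decoder.step({"activation_score": 0.98})                   # unlock phrase detected
decoder.step({"word": "hello"})                            # now decoded
print(decoder.transcript)
```

The key design point mirrors the study's finding: because inner and attempted speech overlap in motor cortex, the decoder cannot rely on signal type alone to know when the user intends to communicate, so an explicit, user‑controlled gate is placed in front of it.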