Overview
- A peer-reviewed study in Communications Medicine introduced SeeMe, a video-based AI tool that analyzes ultra-fine facial movements made in response to verbal commands.
- In 37 patients with recent brain injuries, SeeMe documented command-specific eye-opening in 30 of 36 analyzable cases and mouth movements in 16 of 17.
- The system detected attempted eye-opening a mean of 4.1 days earlier, and mouth movement 8.3 days earlier, than clinicians recorded those signs.
- Larger and more frequent micro-movements correlated with better outcomes, though several patients who showed early signals did not go on to visible recovery.
- Researchers say the approach could complement resource-intensive fMRI and EEG; it still requires larger multi-site validation and will next be tested for yes–no communication.