Overview
- The multimodal foundation model learns joint representations of synchronized brain, heart, respiratory, and muscle signals from five-second segments, using a leave-one-out contrastive training approach.
- SleepFM was trained on polysomnography from roughly 65,000 participants across multiple sleep clinics and then fine-tuned for downstream tasks.
- The model predicted future diagnoses across 130 disease categories with strong ranking performance, exceeding a C-index (concordance index) of 0.8 for outcomes such as Parkinson's disease, dementia, several cancers, and all-cause mortality.
- Researchers paired Stanford Sleep Medicine Center studies from 1999–2024 with up to 25 years of electronic health records and demonstrated transfer to external cohorts including the Sleep Heart Health Study.
- The authors report competitive results on standard sleep tasks but caution that selection bias and limited interpretability remain concerns, calling for wearable integration and broader external validation before clinical use.
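The leave-one-out contrastive idea mentioned above can be sketched as follows: each modality's embedding is pulled toward a combination of the remaining modalities' embeddings for the same segment, and pushed away from other segments in the batch. This is a minimal illustration under stated assumptions, not the paper's implementation; the function names, the temperature value, and the use of a simple mean over the held-out modalities are all assumptions.

```python
import numpy as np

def info_nce(query, keys, temperature=0.1):
    """InfoNCE loss where row i of `keys` is the positive for row i of `query`."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                    # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(q))
    return -np.log(probs[idx, idx]).mean()

def leave_one_out_contrastive(embeddings, temperature=0.1):
    """embeddings: dict of modality name -> (batch, dim) array of segment embeddings.

    Each modality is contrasted against the mean embedding of all the other
    modalities for the same segments (the "leave-one-out" positive).
    """
    names = list(embeddings)
    losses = []
    for name in names:
        others = [embeddings[m] for m in names if m != name]
        target = np.mean(others, axis=0)              # leave-one-out positive
        losses.append(info_nce(embeddings[name], target, temperature))
    return float(np.mean(losses))
```

When the modalities produce well-aligned embeddings for the same segment, the loss approaches zero; mismatched embeddings are penalized, which is what drives the modalities toward a shared representation space.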
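For context on the C-index figures above: a C-index of 0.8 means that for a randomly chosen comparable pair of subjects, the model assigns the higher predicted risk to the subject who experiences the event first about 80% of the time. A minimal sketch of Harrell's concordance index follows; it is simplified (censored subjects only anchor pairs through observed events, and tied risks count as half-concordant) rather than a full survival-analysis implementation.

```python
def c_index(times, events, risks):
    """Harrell's concordance index.

    times:  observed follow-up times
    events: 1 if the event was observed, 0 if censored
    risks:  model-predicted risk scores (higher = event expected sooner)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if subject i had an observed event
            # strictly before subject j's follow-up time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5    # tied risks count as half-concordant
    return concordant / comparable
```

A perfect risk ranking yields 1.0, a fully reversed ranking 0.0, and uninformative (constant) risks 0.5, which is why values above 0.8 indicate strong discrimination.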