Overview
- An Iris Telehealth survey finds 73% of respondents want human providers to make final decisions in AI‑flagged mental health emergencies, with only 8% trusting AI to act on its own.
- Top concerns include false positives in crisis detection (30%) and losing human connection due to overreliance on technology (23%).
- When risk is detected, respondents prefer human‑centered responses such as notifying a preselected emergency contact (28%) or receiving a counselor call within 30 minutes (27%); only 22% favor being connected automatically to an AI system without prior permission.
- Acceptance varies by demographics, with Millennials and Gen Z more comfortable than Baby Boomers, men more open than women, and higher‑income and PhD‑educated respondents more skeptical of automatic monitoring.
- Parallel reporting shows students increasingly turning to ChatGPT for around‑the‑clock, low‑cost, private support. Experts point to trials and studies suggesting users perceive empathy in AI responses, but warn that AI cannot replace genuine human connection and must operate under human oversight, including in resource‑constrained settings such as India.