Overview
- A study by the University of Geneva and the University of Bern tested six large language models (LLMs), including ChatGPT-4, on five standard emotional intelligence assessments designed for humans.
- The LLMs achieved an average score of 82%, significantly higher than the 56% scored by human participants, demonstrating strong performance on emotion-reasoning tasks.
- ChatGPT-4 also autonomously generated new emotional intelligence tests; validation with over 400 participants showed them to be as reliable and realistic as established tests that took years to develop.
- The findings suggest potential applications for AI in education, coaching, and conflict management, provided expert oversight ensures ethical and effective use.
- The peer-reviewed study, published in *Communications Psychology*, highlights the growing capability of AI to understand and reason about emotions, and points to an expanding role for AI in human-centric domains.