Mount Sinai Researchers Publish AEquity to Detect and Reduce Bias in Health AI Data

Peer-reviewed results show the Mount Sinai workflow surfaces overlooked dataset biases before AI models are trained.

Overview

  • The Journal of Medical Internet Research published the AEquity study on September 4, detailing a dataset-level workflow validated on medical images, patient records, and the National Health and Nutrition Examination Survey (NHANES).
  • Tests identified both established and previously overlooked racial and subgroup biases, including disparities visible in inputs and in outputs such as predicted diagnoses and risk scores.
  • The approach is model-agnostic, working with models ranging from simple algorithms to the systems behind large language models, and it scales from small datasets to large, complex ones (a rough illustration of this kind of dataset-level audit follows the list).
  • Authors propose use during algorithm development and in pre-deployment audits, noting potential value for developers, researchers, and regulators.
  • Study leaders emphasize that technical checks must be paired with improvements in data collection and governance. The work was funded by the National Center for Advancing Translational Sciences and the National Institutes of Health.
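
For readers who want a concrete sense of what a dataset-level, model-agnostic audit can look like, the sketch below is a minimal illustration in Python and is not the published AEquity metric: it simply compares labeled outcome rates and model prediction rates across subgroups, using hypothetical field names (group, label, pred). The actual method is described in the Journal of Medical Internet Research paper.

# Illustrative only: a generic subgroup-disparity check, not the AEquity method.
from collections import defaultdict

def subgroup_rates(records, group_key, value_key):
    """Per-subgroup mean of a binary field (e.g., a label or a model prediction)."""
    totals = defaultdict(lambda: [0, 0])  # group -> [sum, count]
    for r in records:
        g = r[group_key]
        totals[g][0] += r[value_key]
        totals[g][1] += 1
    return {g: s / n for g, (s, n) in totals.items()}

def disparity_report(records, group_key, label_key, pred_key=None):
    """Compare labeled outcome rates, and optionally predicted rates, across subgroups."""
    report = {"label_rates": subgroup_rates(records, group_key, label_key)}
    if pred_key is not None:
        report["pred_rates"] = subgroup_rates(records, group_key, pred_key)
    rates = report["label_rates"].values()
    report["max_label_gap"] = max(rates) - min(rates)
    return report

# Toy records with hypothetical field names ("group", "label", "pred").
data = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 0},
    {"group": "B", "label": 1, "pred": 0},
]
print(disparity_report(data, "group", "label", "pred"))

A check like this surfaces gaps in the data itself (label_rates) separately from gaps in model behavior (pred_rates), which mirrors the study's point that bias can appear in inputs as well as in outputs such as predicted diagnoses and risk scores.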