Particle.news

Generative AI Leads Therapy and Workplace Automation While Trust and Oversight Concerns Mount

Employee-driven AI engagement is eroding trust, exposing the absence of clear ethical safeguards.

The illustration the OpenAI executive chose to describe the mathematical success of the company's experimental artificial intelligence model (illustration: Mariano Vior)
A recent study highlights the paradox surrounding artificial intelligence: companies promote its use internally, yet disclosing that use publicly breeds distrust
FILE PHOTO: A keyboard is placed in front of a displayed OpenAI logo in this illustration taken February 21, 2023. REUTERS/Dado Ruvic/Illustration

Overview

  • New analysis from Harvard Business Review and Mental Health Europe confirms that generative AI has become the leading tool for emotional support, though its clinical benefits remain unproven.
  • A study by PageGroup and WeWork finds that 61% of employees integrate AI into their work on their own initiative, and researchers in Arizona and Monterrey report that disclosing AI use significantly undermines professional trust.
  • IBM has announced plans to replace 200 HR employees with chatbots and aims to automate up to 8,000 routine positions, illustrating the technology’s potential for widespread job displacement.
  • Industry leaders and the World Economic Forum identify new AI-centric careers—such as prompt engineers, AI ethics officers and AI literacy educators—emerging alongside automation.
  • OpenAI’s research prototype recently solved five of six International Math Olympiad problems under exam conditions, underscoring AI’s accelerating capabilities and intensifying calls for human moral oversight.