Big Tech Advances Proactive AI Assistants as Studies Flag Value, Safety and Skills Gaps

Fresh reporting points to a pivot toward proactive assistants, with many organizations still struggling to turn pilots into measurable results.

Overview

  • OpenAI’s Pulse, Google’s Gemini in Chrome and Microsoft’s Copilot showcase a shift from reactive chatbots to assistants that anticipate needs, contextualize web content and execute actions directly in the browser.
  • An MIT report cited by El Cronista finds that 95% of generative‑AI pilots fell short of expectations, attributing the failures largely to weak integration, poor data foundations, limited user adoption and misaligned operating models.
  • A regional ESET survey reports that roughly 80% of Latin American respondents use AI, yet 55% do not always verify outputs, 40% share personal or work data with these tools and nearly 60% skip privacy policies, heightening privacy and security risks.
  • A PACCTO 2.0/CISAN–UNAM study documents the Sinaloa Cartel and CJNG using AI for extortion, deepfakes, fraud and logistics, noting a DOJ‑attributed theft of data from an FBI agent’s phone and the rise of Crime‑as‑a‑Service tools such as FraudGPT.
  • Labor impacts remain contested, with OpenAI’s Sam Altman warning that customer‑service roles are likely to be displaced and researcher Roman Yampolskiy forecasting, in an extreme scenario, automation of up to 99% of jobs by 2030.