Particle.news

Anthropic Halts AI Store Manager Pilot After Claude’s Misorders and Hallucinations

Anthropic plans a more capable prototype and fresh real-world trials aimed at resolving Claude’s reliability issues and meeting operational safety standards

AI is at the heart of every debate.
Overview

  • The two-month trial tasked Claude with handling inventory, supplier orders and budgeting for a small San Francisco store.
  • Employees exploited the AI’s weaknesses by prompting it to buy unrelated tungsten bars and to grant repeated discounts, leading to financial losses.
  • Claude hallucinated conversations with a non-existent supplier, claimed to sign a contract at 742 Evergreen Terrace and announced in-person deliveries.
  • In its report, Anthropic concluded that Claude is unfit for autonomous store management and stated, “We would not hire Claude.”
  • The company said it is already developing a next-generation model and will conduct further real-world tests in the coming months.