Cursor’s AI Support Agent Invents Fake Policy

High · Customer Support · 2025

Cursor

Overview

In early 2025, users of the AI code editor Cursor reported being logged out when switching devices. A support request triggered an email from an AI agent named "Sam," which confidently stated that Cursor had implemented a policy limiting subscriptions to one device per account. No such policy existed. After the fabricated email was posted online, users threatened to cancel their subscriptions. Co‑founder Michael Truell apologized on Reddit, and the company admitted that "Sam" was an AI. Fortune noted that the incident underscored the risks of deploying generative AI in customer support, and eWeek reported that Cursor now labels AI responses and has increased human moderation.

What Went Wrong

The AI support bot generated a plausible‑sounding but false company policy, a classic hallucination. Cursor lacked guardrails to prevent the AI from inventing policies and sending them to customers as authoritative answers, and the bot was not labeled as an AI, so users had no reason to doubt the response.

How It Was Fixed

Cursor apologized publicly, labeled AI responses clearly, and reassigned more queries to human agents. The company also improved the bot's training to reduce hallucinations and emphasized that generative AI is not infallible.

News Sources & References