Cursor Launches Automations, Introducing Always-On AI Agents That Code Without Human Prompting
Anysphere's Cursor ships event-driven AI coding agents that trigger on commits, Slack messages, and incidents, aiming to close the gap between AI-accelerated code production and human-speed review.
Overview
Anysphere, the company behind the AI-powered code editor Cursor, launched a feature called Automations on March 5, making Cursor the first major coding tool to offer always-on AI agents that operate without requiring a human prompt. The system triggers autonomous coding agents based on external events — code commits, Slack messages, PagerDuty incidents, Linear issues, merged pull requests, or simple timers — and runs them in cloud sandboxes with full codebase access, according to TechCrunch.
The launch arrives as Cursor’s annual recurring revenue has surpassed two billion dollars, having doubled in just three months, according to Dataconomy. The feature addresses what the company identifies as a growing bottleneck in AI-assisted development: while agents can now produce code at unprecedented speed, the surrounding processes of review, security auditing, and incident response still move at human pace.
How Automations Work
Unlike traditional AI coding assistants that wait for a developer to type a prompt, agents built with Automations are configured once and then run continuously. Developers define a trigger — such as a push to the main branch or a PagerDuty alert — along with a set of instructions and a preferred AI model. When the trigger fires, Cursor spins up an isolated cloud sandbox, gives the agent access to the codebase and connected services, and lets it execute the task autonomously, as described by Pulse2.
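Cursor has not published a public API for defining Automations, so the shape of a configuration can only be inferred from the description above. As an illustration only — every name below is hypothetical, not a real Cursor identifier — the trigger-plus-instructions-plus-model pattern might be modeled like this:

```python
from dataclasses import dataclass

# Hypothetical data model: Cursor exposes no public Automations schema,
# so these fields merely mirror the three ingredients the company
# describes -- a trigger, natural-language instructions, and a model.

@dataclass
class Automation:
    name: str               # human-readable label
    trigger: str            # e.g. "push:main", "pagerduty:incident", "cron:weekly"
    instructions: str       # natural-language task the agent executes in its sandbox
    model: str = "default"  # preferred AI model for this automation

security_review = Automation(
    name="security-review",
    trigger="push:main",
    instructions="Audit the diff for vulnerabilities; post high-risk findings to Slack.",
)
```

The point of the sketch is the declarative shape: the automation is defined once, and the platform, not the developer, decides when it runs.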
Agents self-verify their output by running tests and syntax checks before notifying humans. A memory system stores patterns from previous runs, allowing agents to improve accuracy over time, reduce false positives, and accelerate duplicate detection. The system integrates with external services through the Model Context Protocol, connecting to tools like Datadog, Linear, Notion, and Slack.
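The verify-then-notify loop with memory-based duplicate detection can be sketched as follows. This is a minimal illustration under stated assumptions: `run_checks` and `notify` are placeholder callables standing in for the sandbox's test/syntax checks and the Slack notification, none of which are part of a published Cursor API.

```python
# Illustrative sketch of the loop described above: filter out findings
# already seen in earlier runs, self-verify, and only then notify humans.
# All callables are hypothetical stand-ins, not real Cursor interfaces.

def self_verify_and_report(findings, memory, run_checks, notify):
    """Deduplicate against past runs, verify, and report only new findings."""
    new_findings = [f for f in findings if f not in memory]  # duplicate detection
    memory.update(new_findings)                              # persist for next run
    if new_findings and run_checks():                        # tests + syntax checks pass
        notify(new_findings)                                 # ping humans last, not first
    return new_findings
```

On a second run with overlapping findings, only the genuinely new ones survive the filter — which is the mechanism behind the reduced false positives and faster duplicate detection the company describes.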
“We’re introducing Cursor Automations to build always-on agents. These agents run on schedules or are triggered by events like a sent Slack message, a newly created Linear issue, a merged GitHub PR, or a PagerDuty incident,” wrote Cursor engineers Jack Pertschuk, Jon Kaplan, and Josh Ma in the announcement.
Use Cases in Production
Cursor has been running Automations internally, and its own workflows provide concrete examples of the system’s capabilities. A security review automation triggers on every push to the main branch, auditing code diffs for vulnerabilities, skipping previously discussed issues, and posting only high-risk findings to Slack. The company reports this workflow has caught multiple vulnerabilities and critical bugs that might otherwise have reached production, according to Dataconomy.
For incident response, a PagerDuty-triggered automation queries server logs through Datadog, scans for recent code changes, and delivers a diagnostic summary to Slack along with a proposed fix as a pull request. The company says this has significantly reduced incident response time. Other internal uses include weekly codebase change summaries, test coverage gap identification, and bug report triaging.
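The incident-response flow just described — query logs, scan recent changes, draft a fix, summarize to Slack — is a linear pipeline. As a hypothetical sketch (each step is a placeholder callable; the real integrations go through Model Context Protocol servers for Datadog, GitHub, and Slack, whose interfaces are not public here):

```python
# Hypothetical sketch of the PagerDuty-triggered pipeline described above.
# Every callable is an illustrative stand-in, not a real Cursor or MCP API.

def handle_incident(incident, query_logs, recent_changes, propose_fix, post_summary):
    """Run the diagnose-and-propose pipeline for one incident."""
    logs = query_logs(incident["service"])         # e.g. server logs via Datadog
    changes = recent_changes(incident["service"])  # commits near the incident window
    fix_pr = propose_fix(logs, changes)            # draft fix opened as a pull request
    post_summary(incident, logs, changes, fix_pr)  # diagnostic summary to Slack
    return fix_pr
```

Structuring the work this way keeps the human step where the company says it belongs: reviewing a proposed pull request and a summary, rather than doing the initial diagnosis.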
Enterprise adoption has already begun. HR and payroll platform Rippling is using Automations for task consolidation, documentation updates, incident triage, and weekly status reports, according to Pulse2.
What We Don’t Know
Anysphere has not disclosed pricing details for Automations or whether the feature is available across all Cursor subscription tiers. The company has also not released data on the rate of false positives in security reviews or incident diagnostics, making it difficult to assess the system’s reliability at scale. While Cursor estimates it runs hundreds of automations per hour internally, no third-party benchmarks exist to validate the quality of autonomous agent output compared to human reviewers.
The broader trust question remains unresolved. Industry data indicates that only 29 to 33 percent of developers trust AI-generated code, down from 40 percent in 2024, and research suggests that nearly 39 percent of code generated by AI assistants like GitHub Copilot contains security flaws. Whether always-on agents that operate without direct human oversight will improve or worsen these trust metrics is an open question.
Competitive Landscape
Cursor Automations enters a crowded agentic coding market but occupies a distinct niche. GitHub Copilot remains the most widely used AI coding assistant but operates as an interactive autocomplete tool requiring human authorship. OpenAI’s Codex functions as a cloud-native coding agent that executes tasks autonomously but still requires a human to assign each task. Anthropic’s Claude Code operates as a command-line agent. What distinguishes Automations is its event-driven, always-on architecture — no human needs to initiate the work.
Engineering lead Josh Ma told Dataconomy that the key insight was about depth of analysis: “This idea of thinking harder, spending more tokens to find harder issues, has been really valuable.” Chief engineering officer Jonas Nelle framed the human role as supervisory rather than directive: “They’re called in at the right points in this conveyor belt.”
With Cursor’s revenue doubling to over two billion dollars in three months and approximately 25 percent market share among generative AI coding clients according to Ramp data cited by Dataconomy, the company is betting that the next phase of AI-assisted development is not about helping developers write code faster, but about removing humans from the loop entirely for routine engineering tasks.