Anthropic admitted Thursday that three separate bugs caused Claude Code's widely reported quality decline over the past two months, finally confirming what users had been saying all along. The company's official postmortem traced the degradation to changes in Claude Code's reasoning effort defaults, a caching bug that erased Claude's memory mid-session, and a system prompt edit that hurt coding quality. All three have been fixed as of April 20 (v2.1.116). The API was never affected.
"We never intentionally degrade our models," Anthropic wrote. "We were able to immediately confirm that our API and inference layer were unaffected."

The first issue dates to March 4. Anthropic dropped Claude Code's default reasoning effort from high to medium after some users hit long latencies on Opus 4.6, with the UI appearing frozen. The fix created a different problem: Claude felt less intelligent. Users could manually switch back via /effort, but most didn't know or bother.
Anthropic reversed the change on April 7.

A caching bug introduced March 26 compounded things. To reduce resume costs, Anthropic tried to clear old thinking from sessions idle for over an hour. Instead of clearing once, the code cleared on every turn for the rest of the session, so Claude progressively lost track of why it had made earlier edits and tool calls.
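The failure mode described — a one-time cleanup that instead fires on every subsequent turn — is consistent with a staleness flag that never gets reset. A minimal sketch with hypothetical names (Anthropic has not published the actual code):

```python
class SessionBuggy:
    """Stale-thinking cleanup that misfires: the idle flag is never reset,
    so prior thinking is wiped at the start of every turn."""
    def __init__(self, resumed_after_idle: bool):
        self.resumed_after_idle = resumed_after_idle  # bug: never cleared
        self.thinking: list[str] = []

    def add_turn(self, new_thinking: str) -> None:
        if self.resumed_after_idle:   # stays True for the whole session
            self.thinking.clear()
        self.thinking.append(new_thinking)


class SessionFixed:
    """Clears stale thinking at most once after a resume."""
    def __init__(self, resumed_after_idle: bool):
        self.resumed_after_idle = resumed_after_idle
        self.thinking: list[str] = []

    def add_turn(self, new_thinking: str) -> None:
        if self.resumed_after_idle:
            self.thinking.clear()
            self.resumed_after_idle = False  # fix: fire only once
        self.thinking.append(new_thinking)
```

In the buggy version only the latest turn's thinking survives, which matches the reported symptom: the model keeps the current step but forgets why it took the previous ones.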
"This surfaced as the forgetfulness, repetition, and odd tool choices people reported," Anthropic said. The same bug drained usage limits faster than expected, because every request became a cache miss. Simon Willison noted he frequently leaves sessions idle for hours or days: "I estimate I spend more time prompting in these 'stale' sessions than sessions that I've recently started."

The third issue arrived when Anthropic added a length-limit instruction to reduce Opus 4.7's verbosity: "Keep text between tool calls to ≤25 words." Internal testing showed no regressions, but broader evaluations later revealed a 3% drop in coding quality for both Opus 4.6 and 4.7.
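The faster limit drain from the caching bug follows from how prompt caching typically prices inputs: cached context is billed at a steep discount, while a forced miss reprocesses the entire context at the base rate on every turn. Illustrative arithmetic with made-up rates (not Anthropic's actual pricing):

```python
def turn_input_cost(context_tokens: int, new_tokens: int, cache_hit: bool,
                    base_rate: float = 1.0, cached_rate: float = 0.1) -> float:
    """Relative input cost of one turn under toy rates.

    On a hit, prior context is read from cache at a discount; on a miss,
    the whole context is reprocessed at the base rate.
    """
    if cache_hit:
        return context_tokens * cached_rate + new_tokens * base_rate
    return (context_tokens + new_tokens) * base_rate
```

With a 100,000-token context and 500 new tokens per turn, an every-turn miss costs nearly ten times as much input as a hit under these toy rates, which is consistent with limits draining far faster than usual.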
These bugs overlapped with two unrelated experiments, which made reproduction difficult; despite user complaints dating to early March, Anthropic took weeks to identify the root causes.
"As part of this investigation, we back-tested Code Review against the offending pull requests using Opus 4.7," Anthropic noted. "When provided the code repositories necessary to gather complete context, Opus 4.7 found the bug."
Cat Wu, head of product for Claude Code and Cowork at Anthropic, acknowledged earlier this week that users feel overwhelmed by AI's release pace and said she wants tools that educate rather than stress people out.
Anthropic is now resetting usage limits for all subscribers as of April 23.