• Hey folks, I'm Alex from the reliability engineering team at Anthropic. We've just posted the retrospective for this incident:

    > On March 26–27, 2026, customers experienced elevated error rates when using Claude Opus 4.6 and Claude Sonnet 4.6. The issue was caused by a networking performance degradation within our cloud infrastructure that disrupted communication between components of our serving stack. We resolved the incident by migrating the affected workloads to healthy infrastructure, restoring normal service by 9:30 AM PT on March 27.

    https://status.claude.com/incidents/b9802k1zb5l2

    • Is it really an answer to say "network disruption" dressed up in a bunch of $10 words? It certainly doesn't belong here, of all places.
  • At this point you can stop worrying about downtime-free deployments, so the devops gets easier
  • We had a ton of traffic coming in to check them: https://downforeveryoneorjustme.com/anthropic

    They're not one of the usual ones that have service problems :)

  • > Our uptime has a '9' in it! -- Anthropic
    • GitHub this month is very close to having 0 9s of reliability (unless they want to argue that 89% has a 9 in it)
      • The comment you are replying to is carefully written in a way that allows 23.19%
      • I'm not sure I've had a day without GitHub hiccups this month, so that feels right.
    • By now, I'm nearly certain that they'd be down to 0 9s of uptime if they counted it conservatively.
    • Or as the British would say, "9 innit?"
  • Remember when putting your entire life & business into the cloud was good because they were all offering 5 9s of uptime?

    Very few cases these days... feels like we're lucky to get 2 9s anymore.

    • bwb
      As one of the people behind https://downforeveryoneorjustme.com, I can honestly say downtime has gotten way better. Compared to 10 years ago, things are so much more redundant and harder to take down.
      • Thanks for the data-based comment!

        Have you noticed any change in that trend in the past year or two, or is it continuing to get better?

      • Finally, thank you.

        Tired of all the people online who project their own anxiety by spamming this kind of doomer post.

      • So then why does no one offer 99.999% uptime guarantees in writing?

        It should be low risk to offer such guarantees then.

        • Well, (a) why would they? (b) "uptime" has shifted from a binary "site up/down" to "degraded performance", which itself suggests uptime has improved, since we're both pickier and more precise now.
          • Are we really questioning why cloud providers would offer better uptime guarantees?
            • Yes, I'm asking why they'd lock themselves into a contract around 5 9s of uptime, since the parent poster mentioned that they won't do so. Of course, AWS actually does do this in some cases, and they guarantee 99.99% for most things, so it feels a bit arbitrary: roughly 5 minutes vs. an hour of downtime per year.
        • You can certainly sign a contract for five nines SLA with cloud providers.

          You just won't like the price.

    • 'The outage of a single server is a tragedy, the outage of an entire AWS region is a statistic.'

      - Stalin probably

  • I wonder how much is due to supply constraints, how much is standard growing pains, and whether over-reliance on AI was the cause of any outages.
    • I know they tend to get much slower early evenings in the Western US... Not sure if this is everyone on the west coast going home and working on stuff, or the early people in the Asia region coming online.
  • Maybe they are gunning for 5 nines (9.9999%)
  • I honestly feel like it's a more honest status measure than many status pages I know.
  • I wouldn't be too harsh, scaling 10x YoY is a bit hard on the infra!
    • OpenAI managed it way better, but we might have Microsoft to thank for that.
      • But isn't GitHub's perpetual demise Microsoft's fault?
    • isn't serving Claude embarrassingly parallel tho?
  • If you don't pay attention, 99% may sound high, but it means up to about 22 hours of downtime per quarter.

    Anthropic has had more than that.

    Yikes.
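
    Back-of-the-envelope in Python, assuming an average 91.25-day quarter and 365.25-day year (the exact day counts are illustrative):

      # Downtime budget implied by an availability target.
      QUARTER_HOURS = 91.25 * 24
      YEAR_MINUTES = 365.25 * 24 * 60

      for label, pct in [("2 9s", 99.0), ("4 9s", 99.99), ("5 9s", 99.999)]:
          down = 1 - pct / 100
          print(f"{label} ({pct}%): {down * QUARTER_HOURS:.1f} h/quarter, "
                f"{down * YEAR_MINUTES:.1f} min/year")

      # 2 9s (99.0%): 21.9 h/quarter, 5259.6 min/year
      # 4 9s (99.99%): 0.2 h/quarter, 52.6 min/year
      # 5 9s (99.999%): 0.0 h/quarter, 5.3 min/year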

  • You can access Claude models with Google Cloud reliability via Vertex AI. The caveat is that you cannot use your subscription; it's per-token pricing only.

    I personally prefer per-token; it makes you more thoughtful about your setup and usage, instead of spray and pray.

    You can also access the notable open-weight models on Vertex AI; you only need to change the model id string.
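
    If you want a concrete starting point, here's a minimal sketch using the anthropic Python SDK's Vertex client. The project id, region, and model id below are placeholders; check the Vertex AI model garden for the exact strings:

      # pip install "anthropic[vertex]"; credentials come from
      # `gcloud auth application-default login` or a service account.
      from anthropic import AnthropicVertex

      # Placeholder project and region; model availability varies by region.
      client = AnthropicVertex(project_id="my-gcp-project", region="us-east5")

      message = client.messages.create(
          model="claude-sonnet-4-6",  # placeholder; Vertex ids often carry an @date suffix
          max_tokens=1024,
          messages=[{"role": "user", "content": "Say hi"}],
      )
      print(message.content[0].text)
      # Billed per-token against the GCP project, not an Anthropic subscription.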

    • I also use them per-token (and strongly prefer that due to a lack of lock-in).

      However, from a game theory perspective, when there's a subscription, the model makers are incentivized to maximize problem solving in the minimum number of tokens. With per-token pricing, the incentive is to maximize problem solving while increasing token usage.

      • I don't think this is quite right, because it's the same model underneath. The problem can manifest more through the tooling on top, but even there it's largely hard to pull off without people catching you.

        I do agree that Big AI has misaligned incentives with users, generally speaking. This is why I go per-token with a custom agent stack.

        I suspect the game-theoretic aspects come into play more with quantization. I have not (anecdotally) experienced this in my API-based, per-token usage, i.e. I'm getting what I pay for.

    • We tried this, but the quota for Opus models defaults to 0 on Vertex AI, and quota increase requests are auto-rejected.

      Any tips?

    • You can use your subscription for Anthropic-hosted Claude models?
      • No, unless you count tricks which are explicitly against ToS
      • Don't know. I tried Anthropic directly a long time ago and was frustrated by their uptime issues. Seems it has not improved in the years since.
    • You mean Google Chaos Services as we call them?
    • I saw a funny skit where, if the free Claude instance was down for you, you could just ask Rufus, Amazon's shopping AI assistant, your math/coding question phrased as a question about a product, and it would just answer lol.
      • In my region a certain small bank had an AI assistant which someone neglected to limit, so you could put whatever in there and not even phrase it as a question about a product.
  • They seem to be a victim of their own success. Their response times are quite bad, and it's widely believed they are doing something to degrade service quality (quantizing?) in order to stretch resources. They just announced that they're cutting their usage limits down during peak hours as well.

    They're at serious risk of losing their lead with this sort of performance.

    • sva_
      It can't be worse than gemini-cli using a Pro account.
      • Oh really? Do they have availability problems too?
        • Gemini CLI has been broken for the past 2-3 days, with no response from Google. Really embarrassing for a multi-trillion-dollar company. At this point, Codex is the only reliable CLI app of the big three.

          https://www.reddit.com/r/GeminiCLI/comments/1s49pag/this_is_...

        • Last time I tried it, a single prompt ran for over an hour, mostly doing nothing/waiting on availability.
    • > it's widely believed they are doing something to degrade service quality (quantizing?) in order to stretch resources

      God, I wish this inane bullshit would just fucking die already.

      Models are not "degrading". They're not being "secretly quantized". And no one is swapping out your 1.2T frontier behemoth for a cheap 120B toy and hoping you won't notice!

      It's just that humans are completely full of shit, and can't be trusted to measure LLM performance objectively!

      Every time you use an LLM, you learn its capability profile better. You start using it more aggressively at what it's "good" at, until you find the limits and expose the flaws. You start paying attention to the more subtle issues you overlooked at first. Your honeymoon period wears off and you see that "the model got dumber". It didn't. You got better at pushing it to its limits, exposing the ways in which it was always dumb.

      Now, will the likes of Anthropic just "API error: overloaded" you on any day of the week that ends in Y? Will they reduce your usage quotas and hope that you don't notice because they never gave you a number anyway? Oh, definitely. But that "they're making the models WORSE" bullshit lives in people's heads way more than in any reality.

    • I can't speak to Gemini, but OpenAI is far worse, at least for free accounts
      • Gemini CLI is absolutely terrible, nothing comparable to the browser access. I've started using the 'AI Pro' tier lately, and I regularly get 15-minute response times from Gemini 3 'Flash'.
    • > this sort of performance

      They've been very proud of it.
    • i just use gemini 3 flash via api with custom agent.

      only people who do not even look at code anymore need anything more than that.

    • >"They're in serious risk of losing their lead with this sort of performance."

      Nobody goes there anymore, it's too crowded.

      • You'll notice I specifically said "a victim of their own success". Obviously these problems are induced by the fact that they have so many users. Blowing a lead due to an inability to handle the demands of success is still a path to losing the lead.
  • Probably vibe-coded their infrastructure
  • Victim of success.

    They are the best.

    ChatGPT is Walmart.

    Gemini is Kroger.

    Claude is... idk your local grocer that is always amazing and costs more?

    • The local grocer that isn't amazing, costs more, and actually isn't really that local, in the sense that none of the products sold are from local businesses/producers?
      • No bud, Opus is the best model at this current moment.

        GPT-4.5 + CoT would have been the best, but OpenAI got cheap.

  • MAKE NO MISTAKES! DO NOT HALLUCINATE! FIX IT!
    • I start every prompt with "we have been going in circles". It is the shibboleth for Anthropic to A/B test you with their secret new model.
    • I find it's more reliable if you write "you are a highly experienced software engineer".
  • This is not an outage, Claude just gets lazier on Fridays.

    Sometimes Claude wants more lunch breaks, takes a half day, and leaves the desk early, just like any human would (since AI boosters like comparing LLMs to humans all the time). /s

    • If you're concerned about humans anthropomorphizing AI models, you might want to steer well clear of Anthropic, as their entire positioning (starting with the product name and continuing with UX choices and model releases) is built to attract the kind of researchers who are prone to believe in sentient machines.

      They are already going in the "Claude is alive" direction, and that line of communication is likely to go full throttle in the near future.

    • You joke, but I think that's a fair summary of why people don't mind one 9 of uptime in a key component of their development workflow.