1430 points by thanhhaimai 1 day ago | 491 comments
  • rvnx
    It looks like it is a central service @ Google called Chemist that is down.

    "Chemist checks the project status, activation status, abuse status, billing status, service status, location restrictions, VPC Service Controls, SuperQuota, and other policies."

    -> This would totally explain the error messages "visibility check (of the API) failed" and "cannot load policy", and the wide range of services affected.

    cf. https://cloud.google.com/service-infrastructure/docs/service...

    EDIT: Google says "(Google Cloud) is down due to Identity and Access Management Service Issue"

    • I use Expo as an intermediary for notifications, but given this Google context, I imagine FCM is also suffering. Is that possible?
      • Very likely. Firebase Auth is down for sure (though unreported yet), so most likely FCM too
    • There are multiple internet services down, not just GCP. It's just possible that this "Chemist" service is especially affected by something external, which is why the failures are propagating to their internal GCP network services.
      • rvnx
        Absolutely possible. Though there is something curious:

        https://www.cloudflarestatus.com/

        At Cloudflare it started with: "Investigating - Cloudflare engineering is investigating an issue causing Access authentication to fail.".

        So this would somehow validate the theory that auth/quotas started failing right after Google, but what happened after that?! Pure snowballing? That sounds a bit crazy.

        • From the Cloudflare incident:

          > Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information are unavailable [...]

          Surprising, but not entirely implausible for a GCP outage to spread to CF.

          • > outage of a 3rd party service that is a key dependency.

            Good to know that Cloudflare has services seemingly based on GCP with no redundancy.

            • Probably unintentional. "We just read this config from this URL at startup" can easily snowball into "if that URL is unavailable, this service will go down globally, and all running instances will fail to restart when the devops team tries to do a pre-emptive rollback"
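
              One common mitigation is to cache the last successfully fetched config on disk and fall back to it when the URL is unreachable; a minimal sketch of that idea (the URL and cache path here are hypothetical, purely for illustration):

                import json, os, urllib.request

                CONFIG_URL = "https://config.example.internal/service.json"  # hypothetical endpoint
                CACHE_PATH = "/var/cache/myservice/config.json"              # hypothetical local cache

                def load_config(timeout=5):
                    # Fetch config at startup, but fall back to the last good copy on disk,
                    # so a remote outage doesn't keep instances from (re)starting.
                    try:
                        with urllib.request.urlopen(CONFIG_URL, timeout=timeout) as resp:
                            cfg = json.load(resp)
                        os.makedirs(os.path.dirname(CACHE_PATH), exist_ok=True)
                        with open(CACHE_PATH, "w") as f:
                            json.dump(cfg, f)        # refresh the cache on every successful fetch
                        return cfg
                    except Exception:
                        with open(CACHE_PATH) as f:  # remote unavailable: start from the cached copy
                            return json.load(f)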
            • After reading about Cloudflare infra in post mortems, it has always been surprising how immature their stack is. Like, they used to run their entire global control plane in a single failure domain.

              I'm not sure who is running the show there, but the whole thing seems kinda shoddy given Cloudflare's position as the backbone of a large portion of the internet.

              I personally work at a place with less market cap than Cloudflare and we were hit by the exact same incident (datacenter power went out) and had almost no downtime, whereas the entire Cloudflare API was down for nearly a day.

            • What's the alternative here? Do you want them to replicate their infrastructure across different cloud providers with automatic fail-over? That sounds -- heck -- I don't know if modern devops is really up to that. It would probably cause more problems than it would solve...
              • They're a company that has to run their own datacenters, you'd expect them to not fall over when a public cloud does.
                  • I was really surprised. Depending on another enterprise's cloud services is risky in general, I think, but pretty much everyone does it these days. I just didn't expect them to be among them.
                    • Well, at some level you can also contract to deploy private instances of clouds.
                    • AWS has Outpost racks that let you run AWS instances and services in your own datacenter managed like the ones running in AWS datacenters. Neat but incredibly expensive.
              • > What's the alternative here? Do you want them to replicate their infrastructure

                Cloudflare advertises themselves as _the_ redundancy / CDN provider. Don't ask me for an "alternative" but tell them to get their backend infra shit in order.

              • There are roughly 20-25 major IaaS providers in the world that should have close to no dependency on each other. I'm almost certain that Cloudflare believed that was their posture, and that the action items coming out of this post mortem will be to make sure that this is actually the case.
              • I would expect them to not rely on GCP at all
            • Redundancy ≠ immune to failure.
            • Google is an advertising company not a tech company. Do not rely on them performing anything critical that doesn't depend on ad revenue.
              • What does that make Amazon?
                • A cloud services company. AWS is much bigger than Amazon retail at this point.
            • Content Delivery Thread
        • Doesn't Cloudflare have its own infrastructure? It's wild to me that both of these things are down, presumably together, with this size of a blast radius.
          • Cloudflare isn't a cloud in the traditional sense; it's a CDN with extra smarts in the CDN nodes. CF's comparative advantage is in doing clever things with just-big-enough shared-nothing clusters deployed at every edge POP imaginable; not in building f-off huge clusters out in the middle of nowhere that can host half the Internet, including all their own services.

            As such, I wouldn't be overly surprised if all of CF's non-edge compute (including, for example, their control plane) is just tossed onto a "competitor" cloud like GCP. To CF, that infra is neither a revenue center, nor a huge cost center worth OpEx-optimizing through vertical integration.

            • But then you do expose yourself to huge issues like this if your control plane is dependent on a single cloud provider, especially for a company that wants to be THE reverse proxy and CDN for the internet no?
              • Cloudflare does not actually want to reverse proxy and CDN the whole internet. Their business model is B2B; they make most of their revenue from a set of companies who buy at high price points and represent a tiny percentage of the total sites behind CF.

                Scale is just a way to keep costs low. In addition to economies of scale, routing tons of traffic puts them in position to negotiate no-cost peering agreements with other bandwidth providers. Freemium scale is good marketing too.

                So there is no strategic reason to avoid dependencies on Google or other clouds. If they can save costs that way, they will.

                • Well I mean most of the internet in terms of traffic, not in terms of the corpus of sites. I agree the long-tail of websites is probably not profitable for them.
              • True, but how often do outages like this happen? And when outages do happen, does Cloudflare have any more exposure than Google? I mean, if Google can’t handle it, why should Cloudflare be expected to? It also looks like the Cloudflare services have been somewhat restored, so whatever dependency there is looks like it’s able to be somewhat decoupled.

                So long as the outages are rare, I don’t think there is much downside for Cloudflare to be tied to Google cloud. And if they can avoid the cost of a full cloud buildout (with multiple data centers and zones, etc…), even better.

            • They're pushing workers more as a compute platform

              Plus their past outage reports indicate they should be running their own DC: https://blog.cloudflare.com/major-data-center-power-failure-...

          • smoe
            Latest Cloudflare status update basically confirms that there is a dependency on GCP in their systems:

            "Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information are unavailable"

            • They lightly mentioned it in this interview a few weeks ago as well - I was surprised! https://youtu.be/C5-741uQPVU?t=1726s
            • Yeah I saw that now too. Interesting, I'm definitely a little surprised that they have this big of an external dependency surface.
              • Definitely very surprised to see that so many of the CF products that are there to compete with the big cloud providers have such a dependence on GCP.
          • You'd think so wouldn't you?

            DownDetector also reports Azure and Oracle Cloud; I can't see them also being dependent on GCP...

            I guess down detector isn't a full source of truth though.

            https://ocistatus.oraclecloud.com/#/ https://azure.status.microsoft/en-gb/status

            Both green

            • Down Detector has a problem when whole clouds go down: unexpected dependencies. You see that an app on a non-problematic cloud is having trouble and report it to Down Detector, but that cloud is actually fine: their own stuff is running fine. What is really happening is that the app you are using has a dependency on a different SaaS provider who runs on the problematic cloud, and that is killing them.

              It's often things like "we got backpressure like we're supposed to, so we gave the end user an error because the processing queue had built up above threshold, but it was because waiting for the timeout from SaaS X slowed down the processing so much that the queue built up." (Have the scars from this more than once.)

              • Surely if you build a status detector you realize that colo or dedicated are your only options, no? Obviously you cannot host such a service in the cloud.
                • I'm not even talking about Down Detector's own infra being down, I'm talking about actual legitimate complaints from real users (which is the data that Down Detector collates and displays) because the app they are trying to use on an unaffected cloud is legitimately sending them an error. It's just that, because of SaaS dependencies and the nature of distributed systems, one cloud going down can have a blast radius such that even apps on unaffected clouds will have elevated error rates, and that can end up confusing displays on Down Detector when large enough things go down.

                  My apps run on AWS, but we use third parties for logging, for auth support, billing, things like that. Some of those could well be on GCP though we didn't see any elevated error rates. Our system is resilient against those being down: after a couple of failed tries to connect it will dump what it was trying to send into a dump file for later re-sending. Most engineers will do that. But I've learned after many bad experiences that after a certain threshold of failures to connect to one of these outside systems, my system should just skip calling out except for once every retryCycleTime, because all it will do is add two connectionTimeout's to every processing loop, building up messages in the processing queue, which eventually creates backpressure up to the user. If you don't have that level of circuit breaker built, you can cause your own systems to give out higher error rates even if you are on an unaffected cloud.

                  So today a whole lot of systems that are not on GCP discovered the importance of the circuit breaker design pattern.
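
                  A minimal sketch of that kind of circuit breaker (failure_threshold and retry_cycle are made-up names, roughly mirroring the retryCycleTime idea above): after a few consecutive failures it stops calling the dependency and fails fast, allowing only one probe per retry cycle instead of paying two connection timeouts on every processing loop.

                    import time

                    class CircuitBreaker:
                        def __init__(self, failure_threshold=3, retry_cycle=60.0):
                            self.failure_threshold = failure_threshold
                            self.retry_cycle = retry_cycle
                            self.failures = 0
                            self.opened_at = None  # None means closed: calls are allowed

                        def call(self, fn, *args, **kwargs):
                            if self.opened_at is not None:
                                if time.monotonic() - self.opened_at < self.retry_cycle:
                                    raise RuntimeError("circuit open, skipping call")  # fail fast, no timeout
                                self.opened_at = None  # retry window reached: allow one probe
                            try:
                                result = fn(*args, **kwargs)
                            except Exception:
                                self.failures += 1
                                if self.failures >= self.failure_threshold:
                                    self.opened_at = time.monotonic()
                                raise
                            self.failures = 0  # success closes the circuit again
                            return result

                  Wrapping the logging/billing call in something like breaker.call(send_batch, batch) means the processing loop sees an immediate exception (which it can treat as "dump to the spool file") instead of stalling on timeouts.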

            • Down Detector can have a poor signal-to-noise ratio, given that (I assume) it's users submitting "this is broken" reports for any particular app. Probably compounded by many hearing of a GCP issue, checking their own cloud service, and reporting the problem at the same time.
            • Using Azure here, no issues reported so far.
      • perhaps the person who maintains Chemist took the buyout

        https://www.businessinsider.com/google-return-office-buyouts...

  • Getting a lot of errors for Claude Sonnet 4 (Cursor) and Gemini Pro.

    Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024.

    • Same here. Getting this in AI Studio: Failed to generate content: user has exceeded quota. Please try again later.
      • [flagged]
        • Reductive and begging the question.
        • generating computer code, duh.

          95% of enterprise software coding is molding received data into a schema acceptable to be sent further.

          that said, coding is like 15% (or 0% in some cases) of an enterprise software engineer's workload.

    • I was in the middle of testing Cloud Storage file uploads, so I guess this is a good time to go for a walk.
      • A good excuse for adding error handling, which otherwise is often overlooked, heh.
    • Cursor throwing some errors for me in Auto Agent mode too.
    • Devs before June 12, 2025: "Ai? Pfft, hallucination central. They'll never replace me!"

      Devs during June 12, 2025 GCP outage: "What, no AI?! Do you think I'm a slave?!"

      • 100% agree... I even thought "ok maybe I'll clean up the backlog while I wait", but I'm so used to using AI to clean up my JIRA backlog (via the Atlassian MCP) that even that feels weird: clicking into each ticket the way I used to do it TWO MONTHS AGO.

        This is a good wake-up call on how easily (and quickly) we can all become pretty dependent on these tools.

        • Local LLMs would work.
      • It appears like "Devs" is not a homogeneous mass.
      • Goomba fallacy
    • openrouter.ai is down for me
    • switch to auto mode and it should still work!
        • GPT is working in agent mode, which kind of confirms that Claude is hosted on Google and GPT probably on MSFT servers / self-hosted.
        • If you want a stronger confirmation about Claude being hosted on GCP, this is about as authoritative as it gets: https://www.anthropic.com/news/anthropic-partners-with-googl...
          • That's nearly 2.5 years old, an eternity in this space. It may still be true, but that article is not good evidence.
        • Claude runs on AWS afaik. And OAI on Azure. Edit: oh okay maybe GCP too then. I’m personally having no problem using Claude Code though.
    • lmao i refuse to write code by hand anymore too. WHAT IS THIS
    • I chose seppuku.
    • Apple’s local models looking better each day :’)
  • Cloudflare is down too. From https://www.cloudflarestatus.com:

    Update - We are seeing a number of services suffer intermittent failures. We are continuing to investigate this and we will update this list as we assess the impact on a per-service level.

    Impacted services: Access WARP Durable Objects (SQLite backed Durable Objects only) Workers KV Realtime Workers AI Stream Parts of the Cloudflare dashboard Jun 12, 2025 - 18:48 UTC

    Edit: https://news.ycombinator.com/item?id=44261064

    • 0xy
      Seems like a major wtf if Cloudflare is using GCP as a key dependency.
      • Some day Cloudflare will depend on GCP and GCP will depend on Cloudflare and AWS will rely on one of the two being online and Cloudflare will also depend on AWS and the internet will go down and no one will know how to restart it
        • Supposedly something like this already happened inside Google. There's a distributed data store for small configs read frequently. There's another for larger configs that are rarely read. The small data store depends on a service that depends on the large data store. The large data store depends on the small data store.

          Supposedly there are plans for how to conduct a "cold" start of the system, but as far as I know it's never actually been tried.

          • The trick there is you take the relevant configs and serialize them to disk periodically, and then in a bootstrap scenario you use the configs on disk.

            Presumably for the infrequently read configs you could do this so the service with frequently read configs can bootstrap without the service for infrequently read configs.

            • Like a backup generator for inputs. Makes sense.
              • Yes, this is how I have set up systems to bootstrap.

                For example a service discovery system periodically serializes peers to disk, and then if the whole thing falls down we have static IP addresses for a node and the service discovery system can use the last known IPs of peers to bring itself back up.
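
                A rough sketch of that pattern (the cache path and the discovery client's list_peers() call are hypothetical stand-ins, not any particular library):

                  import json, os, time

                  PEER_CACHE = "/var/lib/myservice/last_known_peers.json"  # hypothetical path

                  def snapshot_peers(discovery_client, interval=30.0):
                      # Periodically write the current peer list to disk so a cold start
                      # can bootstrap from the last known addresses if discovery is down.
                      while True:
                          peers = discovery_client.list_peers()  # assumed client API
                          tmp = PEER_CACHE + ".tmp"
                          with open(tmp, "w") as f:
                              json.dump(peers, f)
                          os.replace(tmp, PEER_CACHE)  # atomic swap, never a half-written file
                          time.sleep(interval)

                  def bootstrap_peers(discovery_client):
                      # Prefer live discovery; fall back to the on-disk snapshot.
                      try:
                          return discovery_client.list_peers()
                      except Exception:
                          with open(PEER_CACHE) as f:
                              return json.load(f)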

          • Just put them in Workers KV... oh wait
        • Don’t worry, we’ll just ask Chat-GPT.
        • That's what IRC is for.

          (Its Finnish inventor is incidentally working for Google in Stockholm, as per https://en.wikipedia.org/wiki/Jarkko_Oikarinen)

    • Broken link? EDIT: Weird, definitely was just empty
      • Should work, but its also on the front page.
  • Everything appears to be down as of 18:43 UTC... https://downdetector.com/
    • Yeah. This service was presenting charts likely probed from inside GCP. I was on a call with a Google rep, someone pointed out that "AWS is also down" and I foolishly said something about "possible BGP attack" out of spite, before checking AWS availability myself. Shame on me.
      • Didn't have the feeling of a BGP issue, most services I was working with were reasonably quickly returning failures, as opposed to lingering death.
      • I love this kind of fake news. It's like that scene from Scary Movie (can't remember which one) in which someone says "I heard the japs took out one in Kikoman" :')
    • Well that's interesting. I wouldn't expect AWS or Microsoft 365 to be affected by a Google outage.
      • Who said it's a Google outage?
          • It's more likely to be a broader issue that is affecting AWS, Microsoft, Cloudflare, GCP. They aren't all dependent on Google infra.
            • Oh look, they were.

              Cloudflare was really the GCP problem. Most of the others are going to be dependencies on CF or random Google stuff.

              Discord, for example, was using GCS for updates, etc.

    • Wait, it's all Google?
    • Perhaps their detection logic is running on Google cloud /s
      • I believe Downdetector displays user reports.
        • Yeah, I am pretty sure that if you're checking whether a service is down, you're essentially casting a vote that indicates that service is down.
    • Downdetector in incidents like this is 100% misinformation.
      • Why
        • Downdetector does not actually monitor the services. It aggregates user reports from socials etc. For large-scale incidents, the reports get really noisy and it will show that basically everything is down.
          • I thought that was the whole premise of Downdetector, no? User reports, because first-party status updates are tightly controlled by those first parties?

            Was not basically everything (hyperbolically speaking, of course) practically impacted today?

            How much weight really comes from those social media posts? Is there an indirect effect of people reading these posts, then flocking to hit the report button, sight unseen?

        • Who watches the watchmen?

          (downdetector infra also likely affected)

  • The status page is green, but there are outages reported: https://downdetector.com/status/google-cloud/
    • Why even have a status page? Someone reported that their org of >100,000 users can't use Google Meet. If corps aren't going to update their status page, might as well just not have one.

      https://www.google.com/appsstatus/dashboard/

      https://status.cloud.google.com/index.html

      Edit: The GCP status page got updated <1 minute after I posted this, showing affected services are Cloud Data Fusion, Cloud Memorystore, Cloud Shell, Cloud Workstations, Google Cloud Bigtable, Google Cloud Console, Google Cloud Dataproc, Google Cloud Storage, Identity and Access Management, Identity Platform, Memorystore for Memcached, Memorystore for Redis, Memorystore for Redis Cluster, Vertex AI Search

      • There's no situation where the corporation controls the status page where you can trust the status page to have accurate information. None. The incentives will never be aligned in this regard. It's just too tempting and easy for the corp to control the narrative when they maintain their own status page.

        The only accurate status pages are provided by third party service checkers.

        • > The incentives will never be aligned in this regard.

          Well, yes, incentives: do big customers with wads of cash have an incentive to demand accurate reporting from their suppliers, so they can react better rather than having to identify issues themselves? If there's systematic underreporting, then apparently not. Though in this case they did update their page.

            • In practice, how this plays out is that the big wads-of-cash holders will make demands, and Google (or whoever, Google is just the stand-in for the generic Corp here) will give them the actual information privately. It will still never be trusted to be reflected accurately on the public status page.

            If you think about it from the corp’s perspective, it makes perfect sense. They weigh the risk reward. Are they going to be rewarded for the radical transparency or suffer fall out by acknowledging how bad of a dumpster fire the situation actually is? Easier for the corp to just lie, obscure and downplay to avoid having to even face that conundrum in the first place.

          •   If there's systematic underreporting, then apparently not.
            
            You answered your own question.
      • Who gets a promotion from a working status board?
      • I have zero faith in status pages. It's easier and more reliable to just check twitter.

        Heroku was down for _hours_ the other day before there was any mention of an incident - meanwhile there were hundreds of comments across twitter, hn, reddit etc.

        • anecdotally, the status pages have been taken away from engineering and are run by customer support and marketing
      • > might as well just not have one

        This is my position.

      • It was nearly an hour into our company's internal incident channel on this before GCP finally declared that yes, in fact, things were on fire.

        … I get that PR types probably want to massage the message, but going radio silent is not good PR.

    • It's updated now, shows the impact to console, dataproc, GCS, IAM and Identity Platform: https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1S...
    • Yeah, my company of hundreds of people working remotely are having 90%+ failures connecting to Google Meetings - joining a meeting just results in a 504.
    • Why can't companies be honest about being down? It helps us all out so we don't spend an hour assuming the problem is on our end.

      We are truly in god's hands.

      $ prod

      Fetching cluster endpoint and auth data.
      ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=503, message=Visibility check was unavailable. Please retry the request and contact support if the problem persists

      • Because they have unrealistic targets so they make up fake uptime numbers. 99.999% would mean not even having an hour of downtime in 10 years.

        I remember reddit being down for like a whole day or so and they claimed 99.5% in that month.
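
        To put rough numbers on that (quick back-of-the-envelope Python, using ~8,766 hours per year and ~730 per month):

          print((1 - 0.99999) * 8766 * 10)  # five nines over 10 years   -> ~0.88 hours of downtime allowed
          print((1 - 0.995) * 730)          # 99.5% over one month       -> ~3.7 hours allowed
          print(1 - 24 / 730)               # a full day down in a month -> ~96.7% actual availability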

        • wbl
          Ma Bell hit that decently often.
          • Is that even knowable? Like, I know they called it “The Astonishing, Unfailing, Bell System” but if they had an outage somewhere did they actually have an infrastructure of “canary phones” and such to tell in real time? (As in, they’d know even if service was restored in an hour)

            Not trying to snark, I legit got nerdsniped by this comment.

            • They absolutely did. Note that the reliability estimates exclude the last mile because of trees falling and the like, but they had a lot of self-repair, reporting, and management facilities.

              Engineering and Operations in the Bell System is pretty great for this.

          • Running a much simpler system with much more independent nodes.

            It's a lot easier to keep packets flowing than to keep non-self-contained servers serving.

      • Because a lot of the time, not everyone is impacted, as the systems are designed to contain the "blast radius" of failures using techniques such as cellular architecture and [shuffle sharding](https://aws.amazon.com/builders-library/workload-isolation-u...). So sometimes a service is completely down for some customers and fully unaffected for other customers.
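
        In miniature, shuffle sharding looks roughly like this (a sketch, not any provider's actual implementation; the node names and shard size are made up):

          import hashlib
          from itertools import combinations

          def shard_for(customer_id, nodes, shard_size=2):
              # Deterministically pick a small subset of nodes for each customer.
              # With 8 nodes and shards of 2 there are 28 distinct shards, so losing
              # one shard fully overlaps with only ~3.6% of customers.
              shards = list(combinations(sorted(nodes), shard_size))
              digest = hashlib.sha256(customer_id.encode()).hexdigest()
              return shards[int(digest, 16) % len(shards)]

          nodes = ["node-%d" % i for i in range(8)]
          print(shard_for("customer-a", nodes))  # always the same pair for this customer
          print(shard_for("customer-b", nodes))  # very likely a different pair
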
        • "there is a 5% chance your instance is down" is still a partial outage. A green check should only mean everything (about that service) is working for everyone (in that region) as intended.

          Downdetector reports started spiking over an hour ago but there still isn't a single status that isn't a green checkmark on the status page.

          • With highly distributed services there's always something failing, some small percentage.
            • Sure but you can still put a message up when it's some <numeric value> over some <threshold value> like errors are 50% higher than normal (maybe the SLO is 99.999% of requests are processed successfully)
              • Just note that aggregations like that might manifest as "GCP didn't have any issues today", actually.

                E.g. it was mostly the us-central1 region that was affected, and within it only some services were affected (e.g. regular instances and GKE Kubernetes were not affected in any region). So if we ask "what percentage of GCP is down", it might well be less than the threshold.

                On the other hand, about a month ago (2025-05-19) there was an 8-hour-long incident with Spot VM instances affecting 5 regions, which was way more important to our company, but it didn't make any headlines.

          • Just say it: they want to lie to 95% of customers.
        • > Because a lot of the time, not everyone is impacted

          then such pages should report a partial failure. Indeed the GCP outage page lists an orange "One or more regions affected" marker, but all services show the green "Available" marker, which apparently is not true.

          • There's always a partial outage in large systems, some very small percentage. All clouds should report all red then.
        • It's not rocket science. Put a message up "The service is currently degraded and some users may see errors"
        • They could still show that some issues exist. Their monitoring must know.

          The issue is that they don't want to. (For claiming good uptime, which may even be true for the average user, if most outages affect only small groups.)

        • That is still 100% an outage and should be displayed as such
      • Because there are contracts related to uptime :)
        • The companies with those contracts will be monitoring service availability on their own. If Google can't be honest, you can bet your bottom dollar the companies paying for that SLA are going to hold them accountable whether they report the outage properly or not.
          • The real point of SLAs is to give you a reason to break contracts. If a vendor doesn't meet their contractual promises, that gives you a lot of room to get out of contracts.
        • Does any service even say they're "down" anymore? All I see is "elevated error rates".
          • 4 to 6 hours after the flames are visible from orbit and management has finally given up on the 37th quick fix, you do get that red X.

            But really not until after it's been on CNN a while.

      • if half the internet is down, which it apparently is, it's usually not the service in question, but some backbone service like cloudflare. And as internal health monitoring doesn't route to the outside through the backbone to get back in, it won't pick it up. Which is good in some sense, as it means that we can see if it's on the path TO the service or the service itself.
      • > Why can't companies be honest with being down

        SLA agreements.

        • Any customer with enough leverage to negotiate meaningful SLA agreements will also have the leverage to insist that uptime is not derived from the absence of incidents on public-facing status pages.
        • Service level agreements agreements?
      • 9rx
        The program that updates the status page is hosted on Google Cloud.
        • tfsh
          It's not. You might be joking, but that comment still isn't helpful.

          My understanding is this is part of Google's internal PSD offering (Public Status Board) which uses SCS (Static Content Service) behind GFE (Google Frontend) which is hosted on Borg, and deploys other large scale apps such as Search, Drive, YouTube, etc.

          • 9rx
            How could it not be helpful given that it gave you reason to provide more details that you wouldn't have otherwise shared? You may not have thought this through. There is nothing more helpful. Unless you think your own comment isn't helpful, but then...
            • Because "It's good to lie because it makes people correct me" is a joke about IRC, not a viable stable game-theoretic optimal position.
              • Cunningham's Law emerged in the newsgroups era, well predating the existence of IRC.

                Of course, I recognize that you purposefully pulled the Cunningham's Law trigger so that you, too, would gain additional knowledge that nobody would have told you about otherwise, as one logically would. And that you played it off as some kind of derision towards doing that all while doing it yourself made it especially funny. Well done!

                • I have 0 idea what Cunningham's Law is, so we can both agree that "recognizing purpose" was "mind-reading", in this case. I didn't really bother reading the rest after the first sentence because I saw something about how I was joking and congratulating me in my peripheral vision.

                  It is what it says on the tin: choosing to lie doesn't mean you want the truth communicated.

                  I apologize that it comes across as aggro, it's just that I'm not quite as giggly about this as you are. I think I can safely assume you're old enough to recognize some deleterious effects of lying.

                  • > I have 0 idea what Cunningham's Law is

                    You had no idea what it is. Now you know, thanks to the lie you told.

                    > choosing to lie doesn't mean you want the truth communicated.

                    But you're going to get it either way, so if you do lie, expect it. If you don't want it – don't lie, I guess. It is inconceivable that someone wouldn't want to learn about the truth, though. Sadly, despite your efforts in enacting Cunningham again, I don't have more information to give you here.

                    > I apologize that it comes across as aggro

                    It doesn't. Attaching human attributes to software would be plain weird.

                    > I think I can safely assume you're old enough to recognize some deleterious effects of lying

                    Time and place. While it can be disastrous in the right context, context is significant. It makes no difference in a place of entertainment, as is the case here. Entertainment has always been rooted in tales that aren't true. No matter how old you are, even young children understand that.

        • So even then, it should have been able to correctly report the status. It somehow shows that the status page is not automated and any change there needs to go through someone manually.
          • A program that updates the status page failing does not imply that the status page is manually edited. It is not like you would generate a status page on every request.
            • How do we know that the program is failing?

              How hard is it for the frontend to detect whether the last update to the status page was made a while ago? That itself would imply there is an error and it should be reported.

              • We don’t.

                But why would the frontend have processing logic when all you need is to serve a static HTML document?

                Even if it did, what would you do with that information? Throw up a screen with: Call us for service information at 1-HAHA-JUST-KIDDING

                It’s not like it really matters if it’s accurate anyway.

          • the services ARE healthy, status page is correct. The backbone which links YOU to the service isn't healthy. Take a look at cloudflare, they are already working on it
            • Not even close. The status page is manual, and Cloudflare's outage is because of Google, not the other way around.
      • Nobody gets a promotion, that's why.
      • Please, won't somebody think of the KPIs.
    • Whichever product person is in charge of the status page should be ashamed

      How could you possibly trust them with your critical workloads? They don't even tell you whether or not their services work (despite obviously knowing)

    • [dead]
      • AWS is fine: https://health.aws.amazon.com/health/status

        My guess is whatever system downdetector uses to "detect downtime" relies on either GCP or Cloudflare (also having issues at the moment: https://www.cloudflarestatus.com/)

      • So’s Azure? https://downdetector.com/status/windows-azure/

        This is where we get to learn about the one common system all of our “distributed cloud” systems rely on, isn’t it?

        • My gut says all clouds spike when one goes down from people misreporting issues.

          But I suppose there's always "something something BGP" but that feels less likely.

      • Aren't some of these sites partially based on hits (because of the assumption that if enough people are suddenly googling "Is youtube down", then YouTube must be having some sort of issue)?

        I could see a big outage like this causing people to google "Is AWS down?"

      • Almost everything on the downdetector home page is listed as having downtime...
        • At this point I don’t know if I must assume people are trolling or the entire internet is down.
      • wtf is going on
      • It's the entire internet. Check oracle cloud, etc etc. The ENTIRE INTERNET.
        • Quick! Pirate as much music as possible before it goes for good! ;)
        • Hacker News is fine.
        • Oracle and Azure report no issues on their status pages; likely just Down Detector getting hammered.
          • neither did google cloud for the first 55 minutes of their outage.
        • is there a nuclear war or something???
  • What's crazy is that RCS messaging is down as a result of this outage. It shows how poorly the technology or infrastructure was designed.
    • Isn't RCS basically just instant messaging? I don't know why it's surprising that it would be down.
      • I'm not sure any single company could have an outage that would take out SMS globally, but RCS is presumably more centralized.
        • SMS is pretty much decentralized, although there's a few companies with a lot of reach. I don't remember any global SMS outages, but it wasn't uncommon for a whole carrier to have an SMS outage, and especially for inter-carrier SMS to be broken from time to time (sometimes for days). I've certainly seen some stuff with SMS aggregators: almost all of them claim a majority of direct links, but when you have accounts with 4 large aggregators and one of them has an outage, you find out which of your other accounts use that aggregator for which links (because their deliverability will go to zero to those destinations).

          RCS was designed and specced, by GSMA, as a telco run decentralized system that would replace SMS as like for like; but there were only a handful of rollouts. It's really only gotten use as Google pushed it onto Android, using their RCS server; recently iOS started using it although I don't know what server they attach to.

          Since RCS is basically the 5th-wave Google IM, it's no surprise that when they have a major outage, RCS is pretty much broken.

        • It used to be kind of distributed, but Google has been strong arming carriers to use their hosted Jibe service through a combination of proprietary extensions (e.g., E2E which is finally standard) and bypassing carrier control (if the carrier didn't provision RCS, Google Messages would use their own service iMessage-style).

          From the end user's perspective, if the carrier didn't use Jibe RCS, it simply wouldn't work well.

        • People liked to be utterly pissed at Apple for not supporting RCS. But there were reasons
    • That explains why I couldn't get the photo of my parents dog today.
    • should have used Erlang
    • Oh my god is that why my RCS chats were failing earlier?!?!
  • Yes Firebase auth is down and affecting many apps, on Discord and Slack groups tons of others are corroborating. A bit disappointing that there is no post on the status page for nearly 30 mins: https://status.firebase.google.com/
    • It just updated. Maybe affected by their own outage!
      • Just proves how shady the status page and sla stuff is
        • Google is 10 minutes late updating their status page.

          "So shady"

          It's really, really hard to make a status page realtime.

          • What makes you think it’s hard? We have AI generating songs and writing code, but setting up basic health checks is too much?
            • Yes. “Basic health checks” is not a real thing. I mean that genuinely.

              > What makes you think it’s hard?

              Being responsible (or rather, on a team of people responsible) for a status page of a big tech co made me think it’s hard.

              “Is it down?” Is not a binary question.

            • An AI generated status page would be the epitome of 2025.
            • What makes you think it’s easy?
        • or how difficult it actually is to do that type of thing at scale
  • Cloudflare Outage also just updated

    > Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information

  • Does anyone know of a good dashboard to check for such BGP routing anomalies as (apparently) this one? I am currently digging around https://radar.cloudflare.com/routing but it doesn't show which routes were actually leaked.

    I would love if anyone has any good tool recommendations!

  • thank god hn is hosted on a single bare metal server, free of all this bloat.
  • Smells like BGP since there are services people claim have nothing to do with GCP being affected. OpenRouter is down, Lovable is down, etc.
    • AWS seems fine though. My bet is Cloudflare.
      • AWS and Azure both had outages.
        • Is that true? I see no direct report about that. Downdetector says so, but it's crowdsourced, so it tends to have false positives.
          • That's fair, I haven't seen any posts from the companies themselves.
    • perhaps Lovable uses GCP somewhere in their stack?
    • npm as well
      • Initially attributed the unresponsiveness of `npm install` to npm (the CLI tool) in general. Tried using bun to install dependencies, saw the same result -- but with actual logs instead of a vague spinner -- and decided to check Hacker News.

        Getting 504 errors on anything from registry.npmjs.org that isn't cached on my machine.

        • I just want to say that bun is a gift. It's just like npm, but backwards. So you imagine how perfect it is. I'm kidding, but really - bun is awesome. If you're using npm you can make the switch as it's mostly compatible.
  • Interesting how I landed here. I was having trouble with Nest. Then I went to Down Detector. I noticed many sites having a simultaneous uptick. Then I came to HN, and found this link at the top of the front page.
  • leoh
    If Google Chat is down per https://www.google.com/appsstatus/dashboard/, the ability for Google engineers to communicate among themselves is impaired, despite SREs having IRC as a backup.
    • TIL Google chat hasn't been killed yet
    • They have irc services internally (or at least did when I was there 10-ish years ago).
    • Google Chat wasn't down for me throughout the entire incident.
    • It at least used to be standard and fairly well-known practice for non-SREs to use the IRC bridge.

      The much more disastrous situation would have been the IRM fallback.

    • Someone actually uses Google Chat...?
      • Google has a chat product?
      • it's the best
        • Well given how many they have decommissioned...
        • Oh no, that's how you know it's nearing the point of being reaped and thrown in the graveyard!
          • Extremely unlikely. It’s ubiquitous internally.
          • Don't worry, they're not following the "deprecate and cancel" playbook for that. They seem to be using the "copy a competitor poorly" one. The few features I liked about it, that distinguished it from Slack, disappeared in the latest update.
      • Almost everyone inside Google
  • This is at least why Claude is dead: https://status.anthropic.com/incidents/kn7mvrgb0c8m

    Also Spotify isn't working for me so I assume that's also related.

    These are my most important productivity resources! Sad!

  • > No major incidents

    … Proceeds to show worldwide degraded service level alerts.

    • Yep. Self-reporting status pages are pretty near worthless. At my former large company (not FAANG), we weren't allowed to update the status page until we got VP approval, which also required approval from both PR and Legal. It would take a lot more time and effort to get those approvals than to just fix the problem and move on.
      • SLA contracts, clawbacks, and performance obligations make these pages a bit of a minefield for CSPs. When I was at a top-tier CSP, we had the status page that was public, one that was for a trusted tier of customers, one built for a customer-by-customer basis, and one for internal engineering.
        • When i worked at a top tier speakeasy, we had a book up front for the man, a book in the back for the boss, a book for the trusted accountants...
  • Status page is showing green because GCP admins can't login to change it ;)
  • Looks like it affects Cloudflare as well [1]

      Update - Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency.
      
      Jun 12, 2025 - 19:57 UTC
    
    1: https://www.cloudflarestatus.com/
  • Status pages at cloud providers aren't usually based in reality -- usually requires VP level political games to actually get them changed especially for serious outages.
  • Would be comedy if one of the progenitors of this took Sundar’s buyout offer yesterday and let the world burn today.
  • Kinda funny that the top post on HN titled "GCP Outage" links to the Google Cloud status page which shows...no outage.
  • Does anyone know if it's region-specific? We're experiencing it and are in us-west-1.
    • Us-central-1 as well
    • Can confirm us-east1 (and possibly us-south1) are having VPC host reachability problems.
    • it's due to IAM and global
    • Frankfurt seems to be down as well
    • us-east-1 too
    • europe (netherlands) region as well
    • south korea as well
  • https://www.cloudflarestatus.com/ is showing an outage, which caused the Google GCP outage, Claude outage, Firebase outage https://status.firebase.google.com/
    • How would Cloudflare's outage cause a GCP outage?

      I'm sure it's not entirely impossible, but sounds backwards to me. Sure - a lot of the internet relies on Cloudflare, but I'd be very surprised if GCP had a direct dependency on Cloudflare, for a lot of reasons. Maybe I misunderstood your comment?

  • This appears to be continuing to cascade over an hour later... wow... more and more services mentioned as completely down on the outage page.

    Kind of nice to not be glued to AI chat prompts for a while to be honest.

  • Everyone is down. Cloudflare has problems too. All auth providers broken.
  • Someone must have checked in AI Generated code :-)
  • Super duper frustrating having the status page being green. Why can't Google do this properly?
    • Those responsible have been sacked.
      • Those responsible for sacking the people who have just been sacked, have been sacked.
  • https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1S...

    > Multiple GCP products are experiencing impact due to Identity and Access Management Service Issue

    IAM issue huh. The post-mortem should be interesting at least.

    • Ha. With all this Soviet-style euphemism I'd rather read The Onion instead.
      • It’s not a euphemism - every outage, including the 99.9% that don’t end up on HN gets a postmortem document written about it, which is almost always a fascinating discussion of the technical, cultural and organisational situation that led to an unexpected bad thing happening.

        Even a few years ago senior management knew to stay the fuck out except for asking for more info.

  • Google Maps not loading, thought it was my 4g, go to see if my connection works by loading Hacker News, GCP Outage XD
  • console not loading, storage slow, support forms dead, status page green. no fallback, no real-time alert, was just wondering when it'll start working. whole stack feels brittle when basic visibility tools fail too. everyone’s pointing fingers but nobody has root access to truth.
  • Cloudflare speedtest is down too, I assume because of this?
  • One of those days on which the young engineers learn the concept of 'counterparty risk'.
  • I wonder what the damage ($) is for having a good portion of the internet down for an hour or two ;)
  • Looks like I'm about to start learning which of my time-killing websites are hosted on GCP - The Ringer is down, and since Spotify owns them and is a major GCP customer, it looks like they've been hit by this. CRAZY that the GCP status page is still green.
  • So frustrating, but here's a link to track status of this outage: https://status.anthropic.com/incidents/kn7mvrgb0c8m
  • Just our bi-yearly reminder of our over-reliance on cloud providers for literally everything. Can't say there's an answer beyond trying to build more independent tech, but we know how that goes.
    • Yet migration to the cloud continues, driven by people arguing that doing it yourself is too complicated and expensive. Let’s see how long until one outage takes down the global economy for multiple days or weeks.
    • Hilariously, I did not know about any outages today during the workday because we discourage cloud service usage and nobody complained about anything breaking. :)
  • Supabase is also down
    • Yes my project on Supabase is down as well.
  • When you deploy code generated by Gemini :D
  • Cloudflare KV is also having an outage. I wonder who is reliant on who here.
    • Looks like more than KV is having an issue. Just tried to load dash.cloudflare.com and no bueno.
    • seriously doubt Google Cloud is relying on Cloudflare KV lol
  • Was just about to do a demo, but Google Meet was down. Tried to use Jitsi as a fallback, but couldn't log in because Firebase was down too. Ended up using a Slack Huddle, lol.
  • Can't wait to see how the charts are going to look here on the project we have developed for Maintel: https://variable.io/maintel-digital-landscape/. It shows availability across multiple services as a landscape. Expecting to see a lot of spikes tomorrow.
  • love how their status page is green with no issues detected!
  • For us Cloud SQL instances are toast but App Engine Standard instances are still serving requests. Google Cloud console is borked too, mostly just erroring out.
  • Seems like a wider issue at Google than just GCP, the Sheets and Chat APIs are also returning similar "Visibility check was unavailable" errors.
    • Presumably many Google products run on GCP
  • some core GCP cloud services are down. might be a good time for GCP dependent people to go for a walk, do some stretches, and check back in a couple hours.
  • Haha, I don't ordinarily spend a lot of time in the Google Cloud Console but just now I was debugging a squirrely OAuth issue with reCAPTCHA failing to refresh several days running. I'm getting this weird page error, and I think, "Is this an issue with my organization? [futz futz futz] Hey wait is GCP actually down?" And it turns out to be the top discussion on HN. XD
  • Experiencing 504s in Google Meet.

    Google Cloud Console won't load.

  • Getting Gateway timeouts on docker hub. Maybe related? I can pull images.

    Example: https://hub.docker.com/layers/library/eclipse-mosquitto/late...

  • Spotify was not loading, thought my 5G was bad, used YouTube Music instead without issues. Hmmm...
  • Does anyone know if instance-to-instance networking has been affected? My Redis instance has been throwing a lot of connection errors.
    • We're not seeing any connectivity issues between pods and vms in our vpc, but your mileage may vary.
  • Sorry, after decades of being hard wired, I just installed a PCIe Wifi6 card on my desktop. Internet took a dive the second I got it connected. Must have done something wrong.
  • BigQuery is completely dead
  • Firebase status page has acknowledged it as a "global issue". https://status.firebase.google.com/

    A contact in google mentioned to me that some bad update to Google Cloud Storage service has caused some cascading issues affecting multiple GCP services.

  • The last few times this happened I wouldn't have thought "So this is the day AI takes over".

    But this time...

  • Any chance this is the root cause, given that so many different services are affected? https://github.com/kubernetes/kops/issues/17433
    • https://cloud.google.com/kubernetes-engine/docs/release-note... google did release an update to gcp k8s today, seemingly shortly before the outage
    • I doubt Google Cloud would be affected by an AWS-specific CNI. Unless maybe enough AWS users have a GCP backup environment that they flipped on all at once, but it seems unlikely.
      • good point. I took that as simply the example that they had in front of them but a generic issue.
  • Cloudbuild completely down for us. Getting "Visibility check was unavailable" errors.
  • GCP Artifact registry still down... Not accepting image push and showing 500 status code
  • When Google said GCP is "down", did it affect entire availability zones within a region? For people who designed redundant infrastructure, did your backup AZs/regions keep your systems online?
    • The outage was global. For my team specifically, a global Identity and Access Management outage meant that our internal service accounts could not refresh their short-lived access tokens and so different parts of our infrastructure began to fail over the course of an hour or so, regardless of what region or zone they were in. Services were up, but they could not access critical GCP services because of auth-related issues which resulted in internal service errors for us.

      To give an example, our web servers connect to our GCP CloudSQL database via a Cloud SQL Auth Proxy (there's also a connection pooler in between but that also stayed up). The connection to the proxy was always available, but the proxy wasn't able to renew auth tokens it uses to tunnel to the database, regardless of where the webserver or database happened to be located. To mitigate this in the future we're planning to stop using the auth proxy and connect directly via mutual TLS but now it means we have to manage TLS certificates.
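
      For reference, a direct mutual-TLS connection to a Cloud SQL Postgres instance looks roughly like the sketch below (made-up host and file paths; psycopg2/libpq is just one client that accepts these options):

        import psycopg2

        conn = psycopg2.connect(
            host="10.0.0.5",          # instance private IP (hypothetical)
            dbname="app",
            user="app_user",
            password="REDACTED",      # database auth is still separate from the TLS layer
            sslmode="verify-ca",      # verify the server certificate against the CA below
            sslrootcert="/etc/ssl/cloudsql/server-ca.pem",
            sslcert="/etc/ssl/cloudsql/client-cert.pem",  # the client cert/key are the "mutual" part
            sslkey="/etc/ssl/cloudsql/client-key.pem",
        )

      The trade-off mentioned above is exactly those three files: they have to be issued, distributed and rotated, which the auth proxy otherwise handled for us.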

    • so much for System Design interview and bs gatekeeping...
  • Well this explains the issues I've been having with Spotify through the last hour.
  • I wonder how many SLAs Google blew out today with this outage.
  • Twitch was broken too: https://status.twitch.com/incidents/b79nyp1yhxql

    EDIT: Updated link to point to the specific incident.

    • Is Amazon running Twitch on Google Cloud (at least partially)?
      • I don't know, at this point I don't know who uses what. This is maybe unrelated but even BunnyCDN has an incident from a few hours ago (https://status.bunny.net/incidents/6g27lbtp67m4).

        Seeing how everything seems to be broken everywhere, I'm very much looking forward to the post-mortem.

  • Surprised no one else mentioned "it's always DNS" yet :-)
  • Related ongoing thread:

    Ask HN: Is Firebase Down? - https://news.ycombinator.com/item?id=44260669

  • Our GCP workloads are unavailable across several US regions. The GCP console is intermittently unavailable for most pages.

    Crossing my fingers for a quick resolution.

  • My firebase hosting and firestore db are back online, but GCP console and Google SQL instances are still having serious issues as of 7:00pm UTC.
  • Ahhh, explains why some of my apps are going crazy... Couldn't read a message from my kid's pre-school.

    Thankfully we use AWS at work for everything critical

  • If all services are down at once, is no one thinking of or mentioning a potential attack on US cloud providers (China or Russia)? Maybe?
  • It looks like more than GCP: outages reported across the board including aws

    https://downdetector.com/

    • About the only thing not down is down detector.
      • god send omg, imagine down detector is down lmao

        anyone know what tech stack they use and where they host

  • GCP status page now reflects the issues; looks like Google Cloud Dataproc, Google Cloud Storage and Identity & Access Management are affected.
  • #HugOps
  • Wish there existed a decentralized network connecting computers around the world
    • Crazy, they could call it the "internet" or something like that... kind of rolls off the tongue.
  • Having issues with services in cloud run as well
  • Same here. Even the page to submit support requests is down.

    Cloud console does nothing.

    They should host their support services on AWS and vice-versa.

    • I just logged into several of my GCP accts, everything popped up, multiple home regions.. I wonder what % of folks are feeling this right now.
  • We're in us-west-1 and seeing issues across Cloud Run, Cloud SQL, Cloud Storage and Compute Engine.
  • Claude Code is down :( too lazy to do manual conversion from Cocoapods dependency to SwiftPM
  • I'm able to login to the GCP dashboard, but it isn't able to find any of my projects.
  • Even though BigQuery is not listed in affected services, we see errors connecting to it
  • I'm having trouble getting any Street View imagery. Can anyone else confirm?
    • Yep, street view is not working at all for me
  • Root cause has been identified and it's being resolved/monitored now
  • We're experiencing intermittent slowness and timeouts on our GCP everything.
  • Everything except us-central1 is back up - it's recovering now though
  • My friends and I are even having trouble getting RCS text messages to send.
  • 2 hour outage at this point
  • this aint looking good yall
  • GPay which is a widely used payment service in India is down as well
    • India is having a really bad day today
  • Yup, intermittent db connection issues and cloud storage problems.
  • Where are the AI agents?
    • Poor agents, finally taking a break
  • And THAT, Smithers, is why we wear hardhats on the job.
  • Is this the new Y2k?
  • reCAPTCHA affected? I couldn't log into my local utilities website due to a reCAPTCHA error. Downdetector agrees, but I interpret that site as dubious.
  • Seems recovering now
  • Not just GCP, most of Google's services are out of action.
    • I'm on a meet, in cal, editing a dozen docs, in GCP, pushing commits and launching containers; it's not clear yet what exactly is going on but it's certainly intermittent and sparse, at least so far
      • stop it. you're overloading their system by doing three things at once. let the rest of us have a turn.
  • > Waiting for downdetector.com to respond...
  • Can't upload discord attachments from mobile.
  • Guess they used Jules to code their services :)
  • Google Cloud Storage seems to be down or very slow
  • Storage, CloudRun, Firebase...... All down....
    • Auth, GCP, Windsurf, Augment Code, Udio, the list is endless.

      Facebook, Reddit and Hacker News are still up, but that's about it.

  • Yarn package registry also appears to be down.
    • npm is, registry.yarnpkg.com is only a CNAME to npm
  • If everything is down at the same time, is no one mentioning an attack on US cloud services (China or Russia)? Maybe?
  • Maybe cloudflare?
  • Text messaging for android is broken as well
  • Gemini API isn't working for me :/
  • identitytoolkit.googleapis.com is 503-ing on us, my whole customer success team is locked out from our platform
  • When is it going to be fixed? I'm seeing more and more services hit by the outage that started with IAM.
  • Mapbox maps seemed to be down for a few minutes about an hour ago. I wonder if it is related.
  • YouTube was down for me for some time
  • Text messaging on Android is broken
  • GKE workloads are also affected.
  • Shameless plug for https://rollbar.com

    Good luck out there!

  • When is it going to be fixed? I'm seeing more and more services go down.
  • YouTube is also very flakey.
  • I just realized that the reason the status isn't updated is because they can't access it lol.
    • How do you know that?
    • Don't host status pages (or their dependencies) on your own infra lol.

      Seems obvious.

      • It should be obvious because both AWS and Azure have done this in the past and shown what a bad idea it is…
  • Ah darn it. My Spotify DJ just stopped working.
  • Is Supabase on GCP? My Supabase projects are down.
  • Internal systems at Google are currently broken.
  • Kaggle is not responding correctly; is it related?
  • Interesting that all Digital Ocean services are fine...
  • Our GCP is down
    • What region?
        • I think multiple regions are down; asia-south and us-east at least are impacted.
  • I think it'll be a disaster.
  • Sheesh, so many side-effect issues across all systems; maybe big tech companies like Google shouldn't have laid off all those engineers... https://www.google.com/appsstatus/dashboard/incidents/Eab7zG...

    but no, tech bros, just keep following your ketamine-addled edgelord even after he did this with Twitter...

  • Let's say a typical base service (network attached RAM or whatever) has 99.99% reliability. If you have a dependency on 100 of those, you're suddenly closer to 99% reliability. So you switch to higher-level dependencies, and only have 10 dependencies, for a 99.9% reliability. But! It turns out, those dependencies each have dependencies, so they're really already more like 99.9% at best, and you're back at 99% reliability.

    "good enough" is, indeed, just good enough to make it not worthwhile to rip out all the upstreams and roll your own everything from scratch, because the cost of the occasional outages is much lower than the cost of reinventing every single wheel, nut, bolt, axle, bearing, and grease formulation.

  • What is this Touchable Grass stuff I keep hearing of?
  • The npm registry appears to be hosted on GCP, because it seems to be down as well.
  • "All locations except us-central1 have fully recovered. us-central1 is mostly recovered. We do not have an ETA for full recovery in us-central1."
    • An hour later and everything is still a mess in us-central1. They seemed to jump the gun on that one. It doesn't matter if some dinky service like "AutoML Vision" is working; if GCS isn't, they shouldn't post an optimistic message.
  • "No major incidents" as of 11:37 PDT.

    https://status.cloud.google.com/

    File that in the status pages worth ~0 category.

  • Not just GCP. AWS and Cloudflare too.

    Did someone screw up BGP again?

    • Source? We didn't see anything wrong with AWS here.
  • Meet is also down for me right now. Cannot attend any video calls.
  • xAI having problems, Supabase down, Discord can't upload images to share in chat. Seems like a major backbone outage.
  • Yeah, their status page is all green, nothing to see here (but all production systems are down).
  • Now my API cannot connect to PostgreSQL...

    sslv3 alert bad certificate:../deps/openssl/openssl/ssl/record/rec_layer_s3

  • They've now added this as a major incident - before, it was just listed under the overview
  • Seems recovering now
  • Can't reach my Nest thermometer, but their status page says it's fine lol
  • Well, good luck to all googlers dealing with this, that's not fun :(
  • If LLMs are down, work grinds to a halt until they return. Just the new era now.
  • It's completely nuts that Firebase has this: https://status.firebase.google.com/incidents/ZcF1YDUvpdixZ2e...

    "Firebase Data Connect unavailable due to a known Google Cloud global outage"

    While the Google Cloud status page https://status.cloud.google.com/ says "No major incidents" and everything is green. So Google Cloud knows there is an outage but just deems it not major enough to show it.

    Edit to add: within 10 minutes of this post Google updated their status page. More curiously the Firebase page I linked to has been edited to remove mention of Google Cloud in the status and now says "Firebase Data Connect is currently experiencing a service disruption. Please check back for status. ".

    • IIRC status pages drive customer compensation for downtime. Updating one is basically signing the check for their biggest customers; in most similar companies you need a very senior executive to approve the update

      On the other side of this, Firebase probably doesn't have money at stake making the update

      • It is not the status page that drives customer compensation. It is downtime.
        • The status page is essentially an admission of guilt. It can require approval from the legal department and a high level official from the company to approve updating it and the verbiage used on the status page.
          • > It can require approval from the legal department and a high level official from the company to approve updating it and the verbiage used on the status page.

            Is that true in this case or are you speculating? My company runs a cloud platform. Our strategy is to have outages happen as rarely as possible and to proactively offer rebates based on customer-measured downtime. I don't know why people would trust vendors that do otherwise.

            • I don't have any special knowledge about the companies involved in this outage. I do know most (all?) status pages for large companies have to be manually updated and not just anybody can do that. These things impact contracts, so you want to be really sure it is accurate and an actual outage (not just a monitor going off, possibly giving a false positive).
          • You are likely right, but it's still gross dishonesty. I'm not ready to let Google and their engineers off the hook for that.
            • Inter alia, "is essentially" and "it can" tell us this is just free-associating.

              We should probably avoid punishing them based on free-associating made by a random not-anonymous not-Googler not-Xoogler account on HN. (disclaimer: xoogler)

          • then it’s fucking useless. Let’s crowd source our own
      • Nah, it's just some client-side caching / JS stuff. Clicking the big refresh button fixed it for me, 15 minutes before OP noted it.

        (n.b. as much as Google in aggregate is evil, they're smart evil. You can't have execs approving every outage just because of checks without some paper trail existing, and execs don't want to approve every outage anyway; you'd have to rely on too many engineers and sales people, even as ex-employees, to keep it a secret. disclaimer: xoogler)

        (EDIT: for posterity, we're discussing an "overall status" thing with a huge refresh button, right above a huge table chock-full of orange triangles that indicate "One or more regions affected" - even when the "overall status" was green, the table was still full of orange and visible immediately underneath. My point being, you gotta suppose a wholeeee bunch of stuff to get to the point where there was ever info suppressed, much less suppressed intentionally to avoid cutting checks)

    • Something must be preventing them from updating the status page at this point. Of course they could still deem it not major enough, but just from my limited tests, Docker, Buf, etc. are outright down (it may not be GCP that is down, but it is quite the coincidence). I'd wager that this is much more widespread.
      • I'm actually on a bridge call with Google Cloud (we're a large customer) -- I just learned today that their status page is not automated; instead, someone actually updates it manually!
        • That's the case with every status page. These pages are managed by business people, not engineers, because their primary purpose is to show customers that the company is meeting contractually defined SLAs.
          • Surely no SLA will be based on the display of the status page...
            • Maybe or maybe not, but someone with nothing better to do than monitor that page out of boredom might “get on the horn” with lots of people to complain if a green check mark turns to a red X.
            • They aren't automatically based on that page, but seeing a red status makes it too easy for customers to point to it and go "see you were down, give us a refund".
            • should* be
        • This is actually the norm for status pages. If you look at the various status page offerings you'll see that they're designed around manual updates.
          • The best way to consistently have good "time to response" metrics is to be the one deciding when an incident "actually" started happening, if at all :)
        • This feels very much like when Facebook locked themselves out of their datacenters. ;)

          * https://www.datacenterdynamics.com/en/news/facebook-blames-m...

          • Except that AWS, CloudFlare and a bunch of others are also down :-O
            • Downdetector shows they've got issues as well, but it can be fairly unreliable, as people don't know which service is behind their apps.

              I at least have no issues on their services across a few regions, and their console works fine.

            • AWS looks ok to me?

              https://health.aws.amazon.com/health/status

              Perhaps CF is dependent on some GCP services?

            • seems like misinformation for AWS. CloudFlare probably depends on GCP.
        • The bigger you are, the more you want a human involved in the decision to publicly declare an incident.
        • That's fairly typical. You want a human in the loop for decisions like that.
        • Most status pages are manual.

          At least some of the information has to be.

          The weird part is that it took them almost a full hour to update it.

    • It's extra funny that the GCP status page even includes a "last updated" time, which is built exactly to convey a possible failure to update in cases like this

      No major incident as of “Last updated time: 12 Jun 2025, 11:48 PDT”

    • Maybe the outage is preventing them from updating that specific page? Hmm

      EDIT: Looks like it has been updated now (6:49 PM UTC)

      • Anytime there is an outage that affects App Engine, Google can't seem to get their status page updated for an extended period of time.
      • Almost an hour to update the page...
      • I hope this is the case; otherwise Google is super unreliable for production-grade work.
      • :))))))
    • I asked testing to see if it was up, and it pointed out that Google shows nothing but Nest is showing an outage right now, lol

      https://status.nest.com/posts/dashboard

    • Maybe their dashboard is hosted on GCP and they are displaying a cached version. :-)
    • More likely they are unable to update their own status page, but in either case not covering themselves in glory over at GCP right now.
    • GCP just updated their status
    • Services are recovering in some locations it seems - Discord is healing
    • Status pages are PR. They get the same PR treatment as anything else
    • AWS has this all the time. If you need to know if a service is down in a region, check for other engineers talking about it on X.
    • lies, from big tech?

      say it's not so!

  • well this explains so much lol
  • @dang could you merge this and https://news.ycombinator.com/item?id=44260669?
    • No notifications for mentions, have to email the mods at the hn@ email address.
      • Do we know if email is still working? kidding-but-not-really-because-gmail…
      • I think I was a bit optimistic about the response time from the mods. This thread won the popularity contest quite handily...

        Thanks for letting me know about emailing the mods; refreshingly explicit to just send an email.

  • Solana is up ¯\_(ツ)_/¯
  • seems recovering
  • Borg and K8s were fighting for resources, so Gemini decided to take out DNS. Now a sysadmin has to step in.

    * Just trying to add a little humour. Pretty stressful outage. Grarr!!

  • The cloud enables you to scale. It allows you to distribute systems across multiple regions and data centers. It seems that this is true for outages as well.

    The PHP application I wrote as a student, running on a single self-hosted server, had higher uptime than any of the cloud providers or redundant systems I have seen so far. If you don’t need the cloud for scalability, do it yourself and save yourself the trouble and money. Most companies would be better off investing in some IT staff instead of handing their systems over to some proprietary and insanely complex cloud environment. You become dependent on someone you don’t know, have no control over and can’t talk with directly. Also, the single point of failure just shifts: from your system to whatever system is managing the cloud. Guess one advantage is that you can shift the blame to someone else…