• Disclosure, I work for Datadog:

    https://updog.ai/status/amazonaws

    Looks fine for now.

    • A piece of UX feedback for the product team behind Updog: company logos are not searchable. It should be easy to Ctrl-F for a relevant cloud provider on that page instead of scrolling through the logos alphabetically.
    • You all should add EC2 - bonus points if you have some way of tracking performance in addition to errors (right now we're seeing EC2 instances in us-east-1c fail to transition out of Pending status).
    • This is cool, does this actually hit all the services directly (in each region) instead of pulling from AWS Status?
      • Which uptime checker tool would be based on status pages (owned by the marketing department)? That defeats the whole purpose.
        • I've run a business in this space since 2021, and I have yet to meet a business that lets their marketing team own their status page.

          You'll find that most engineering teams start out owning a status page to centralise updates to their stakeholders; eventually ownership moves to the customer success/support org to minimise support tickets during incidents.

          Marketing has nothing to do with status pages.

        • I highly doubt AWS health dashboards are owned by marketing
      • The https://updog.ai/status/openai issue history looks terrible. I wonder how you ping OpenAI for this; with a completion attempt on a particular model?
        • It is based on the impact on Datadog's customers, not on synthetic queries / pings
      • From the page:

        > API health is inferred by analyzing aggregated and anonymized telemetry from across the Datadog customer base.
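        A minimal sketch of how that kind of telemetry-based inference could work: aggregate customer request outcomes per provider and classify the aggregate error rate against thresholds. All names and thresholds below are illustrative assumptions, not Updog's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Sample:
    """One anonymized customer request outcome for a given provider."""
    ok: bool


def classify_health(samples, degraded_at=0.02, down_at=0.25):
    """Classify provider health from aggregated telemetry.

    Thresholds are made up for illustration: a >=2% error rate reads
    as "degraded", >=25% as "down".
    """
    if not samples:
        return "unknown"
    error_rate = sum(1 for s in samples if not s.ok) / len(samples)
    if error_rate >= down_at:
        return "down"
    if error_rate >= degraded_at:
        return "degraded"
    return "up"
```

        The upside over synthetic pings is sample size and realism (real customer workloads across regions); the downside is that you only see the endpoints customers actually call.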

    • What's updog?
  • We've been observing EC2 instances launched in us-east-1c (use1-az2) remain in Pending status for a very long time / indefinitely, starting at around 16:00 UTC.
    • We were seeing ECS Fargate capacity weirdness in us-east-1 earlier.
  • Had some very weird behaviour from CloudFront, used purely to serve images from S3: mostly huge slowdowns and outright failures on endpoints. It was about 15 hours ago that I noticed it by chance.

    There was nothing on the AWS status pages and no alerts/errors in my console. Eventually it sped up again.

    • We noticed massive latency from CloudFront and spent the first part of my day migrating services out.
  • I'm Ben from https://downforeveryoneorjustme.com/

    We are not seeing anything right now... keeping an eye out but things are normal.

  • I would think that data centers scale horizontally, so a failure of one node should only affect a limited number of customers. Barring any centralized DNS mess-up, of course.
  • I'm currently in the process of spinning up a k8s cluster in us-west-2 with no issues, but, as others have said, us-east-1 is the problem child, so I guess we'll see.
  • It's wonky for sure, but only to certain IP ranges.
  • Yes, AWS was down. I am very irritated with AWS now. This is the 3rd time. Pricing is also very high…

    Thinking of switching to another platform.

  • Don't see any issues in us-east-2 (Ohio) with my infra, but typically issues arise in us-east-1.
  • Downdetector reports spike much, much higher when there is a real problem.

    There might be something, but it wouldn't be widespread.

  • I think it may be down; I'm seeing early signs in the form of location services (geofencing) warnings.
  • I was thinking the same thing, as some sites like Google were taking a LONG time to load.
    • Google probably isn't using AWS for any of their infrastructure.
    • Internet has been very sluggish for me today too. Something may be going on (not necessarily AWS)
  • I got a down status on https://leetcode.com/. It may be related.
  • Still slowed down.