• I find it really confusing that the Workers AI models listed here: https://developers.cloudflare.com/workers-ai/models/ do not fully overlap with the ones listed here: https://developers.cloudflare.com/ai/models/

    Yes, you can see the same "hosted" ones there, but when you look at the models endpoint, there are far fewer options under the "workers-ai/*" namespace. Is that intentional?

    • To clarify: I don’t see "workers-ai/@cf/google/gemma-4-26b-a4b-it" in the /models endpoint on gateway.ai.cloudflare.com, but it does seem to exist as a hosted model. Same with "workers-ai/@cf/nvidia/nemotron-3-120b-a12b", which I would also expect to see
      • Hey James.

        Thanks for the feedback, and good catch. It looks like that endpoint is pulling from a slightly out-of-date data source. The docs and dashboard are currently the best resources for the full catalog, but we'll update that API to match.
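
        For illustration, the gap is easy to check mechanically. A sketch in TypeScript (the helper name is made up, the endpoint's response shape is an assumption, and the model IDs used are the ones mentioned in this thread):

        ```typescript
        // Report docs-catalog models under the "workers-ai/" namespace that are
        // missing from a gateway /models listing. Pure function; fetching the
        // two lists is left out.
        function missingWorkersAiModels(
          docsCatalog: string[],
          gatewayModels: string[],
        ): string[] {
          const gateway = new Set(gatewayModels);
          return docsCatalog
            .filter((id) => id.startsWith("workers-ai/"))
            .filter((id) => !gateway.has(id));
        }
        ```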

  • This actually looks very useful. Cloudflare seems to be bringing together a great set of tools. Not to mention, D2 is literally the only sqlite-as-a-service solution out there with great reliability and generous free-tier limits.
    • Agreed -- except that all of their docs and marketing pitch it for use cases like "per-user, per-tenant or per-entity databases" -- which would be SO great.

      But in practice, it's basically impossible to use it that way in conjunction with Workers, since you have to bind every database you want to use to the worker, and binding a new database requires redeploying the worker.

      • If you want to dynamically create sqlite databases, then moving to Durable Objects, each backed by its own sqlite database, seems to be the way to go currently.
    • D1 reliability has been bad in our experience. We've had queries hanging on their internal network layer for several seconds, sometimes double-digit seconds, over extended periods (on the order of weeks). Recently I've also seen a few plain network exceptions -- again, these are internal, between their worker and the D1 hosts. And many of the hung queries wouldn't even show up under traces in their observability dashboard, so unless you have your own timeout detection, you wouldn't even know things are not working. It was hard to get someone on their side to take a look and actually acknowledge and understand the problem.

      But even without the network issues that have plagued it, I would hesitate to build anything for production on it, because it can't even do transactions, and the product manager for D1 has openly stated they won't implement them [0]. Your only way to ensure data consistency is to use a Durable Object, which comes with its own costs and tradeoffs.

      https://github.com/cloudflare/workers-sdk/issues/2733#issuec...

      The basic idea of D1 is great. I just don't trust the implementation.

      For a hobby project it's a neat product for sure.
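
      The timeout detection mentioned above can be as small as a Promise.race wrapper around the query, assuming the query is an ordinary Promise (e.g. the result of env.DB.prepare(...).all()). A minimal sketch; the helper name is made up:

      ```typescript
      // Reject if a query promise takes longer than `ms` milliseconds.
      // The underlying request is not cancelled; this only surfaces the hang.
      async function withTimeout<T>(query: Promise<T>, ms: number): Promise<T> {
        let timer: ReturnType<typeof setTimeout>;
        const timeout = new Promise<never>((_, reject) => {
          timer = setTimeout(
            () => reject(new Error(`query timed out after ${ms}ms`)),
            ms,
          );
        });
        try {
          return await Promise.race([query, timeout]);
        } finally {
          clearTimeout(timer!);
        }
      }
      ```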

    • Yeah, but the 10 GB limit for D1 is crazy. Can you really build on that for anything other than toy projects?
      • Really depends on what you’re putting in the DB. Cloudflare is clear that these are supposed to be very localized DBs, per user or tenant.
    • * D1, but agreed. I wish Cloudflare would offer a built-in D1-to-R2 backup system, though! (It can be done with custom code in a Worker, but I wish it were first-party.)
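      For anyone going the custom-code route in the meantime, a scheduled Worker can page rows out of D1 and write them to R2. Only the pure helpers are sketched here; the binding names (env.DB, env.BUCKET), the table, and the NDJSON layout are assumptions:

      ```typescript
      // Build a dated R2 object key for a backup of the given database.
      function backupKey(dbName: string, date: Date): string {
        const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
        return `backups/${dbName}/${day}.ndjson`;
      }

      // Serialize result rows as newline-delimited JSON.
      function rowsToNdjson(rows: Record<string, unknown>[]): string {
        return rows.map((r) => JSON.stringify(r)).join("\n");
      }

      // In the scheduled handler, roughly:
      //   const { results } = await env.DB.prepare("SELECT * FROM users").all();
      //   await env.BUCKET.put(backupKey("users-db", new Date()), rowsToNdjson(results));
      ```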
    • > For those who don’t use Workers, we’ll be releasing REST API support in the coming weeks, so you can access the full model catalog from any environment.

      Cloudflare seems to be building for lock-in, and I don't love it. I especially don't understand how you build an OpenRouter competitor and, at launch, only have bindings for your custom runtime.

  • Good to see their purchase of Replicate paying off!
  • Not seeing any pricing info on the models[1] page. I wonder how much of a lift this is over paying providers directly. Perhaps Cloudflare is doing this at cost? Also interesting that zero data retention is not on by default and is not supported with all providers[2]. Finally, it would be great if this could return both OpenAI- and Anthropic-style completions.

    [1] https://developers.cloudflare.com/ai/models/

    [2] https://developers.cloudflare.com/ai-gateway/features/unifie...

    • Hey! I'm one of the engineers who built this :)

      We'll be adding prices to the docs and the model catalog in the dashboard shortly.

      In short: currently the pricing matches whatever the provider charges. You can buy unified billing credits [1], which carry a small processing fee.

      > Finally, would be great if this could return OpenAI AND Anthropic style completions.

      Agreed! This will be coming shortly. Currently we match the provider's own format, but we plan to make it possible to specify an API format when using LLMs.

      [1]: https://developers.cloudflare.com/ai-gateway/features/unifie...
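
      To illustrate what such an API-format option implies, here is a rough sketch of mapping an OpenAI-style chat completion onto an Anthropic-style message. The field names follow the two public API shapes, but the mapping itself is illustrative, not Cloudflare's implementation:

      ```typescript
      // Minimal subsets of the two response shapes.
      interface OpenAIChatCompletion {
        choices: { message: { role: string; content: string }; finish_reason: string }[];
        usage: { prompt_tokens: number; completion_tokens: number };
      }

      interface AnthropicMessage {
        role: "assistant";
        content: { type: "text"; text: string }[];
        stop_reason: string;
        usage: { input_tokens: number; output_tokens: number };
      }

      // Convert the first choice of an OpenAI-style completion into an
      // Anthropic-style message, remapping the usage and stop-reason fields.
      function toAnthropic(resp: OpenAIChatCompletion): AnthropicMessage {
        const choice = resp.choices[0];
        return {
          role: "assistant",
          content: [{ type: "text", text: choice.message.content }],
          stop_reason: choice.finish_reason === "stop" ? "end_turn" : choice.finish_reason,
          usage: {
            input_tokens: resp.usage.prompt_tokens,
            output_tokens: resp.usage.completion_tokens,
          },
        };
      }
      ```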

      • Excellent! Please make sure to include rate-limit details as well.
      • Thanks. I still don't see pricing for foundation models such as GPT-5.4, however.
  • Big. This could be a viable Bedrock alternative, probably with better uptime than Anthropic or AWS, too.
  • Sadly, no mention of regions.
    • It will work great in Spain! /s
  • Can't wait for the free tier!
    • Workers AI has had a free tier since it launched, I think? See the pricing page I linked to above.
  • Anthropic gonna acquire Cloudflare for stock. Solves their infrastructure problems in one shot.
    • No way! Cloudflare will buy Anthropic when the economy begins self-correcting. Looking forward to Workers AI getting all those H100s to run more Qwens
    • I'm not ready for another rug pull, so please no :( I really enjoy Cloudflare's CDN.
  • What is Cloudflare trying to be? Everything everywhere all at once?
    • They want to be an edge networking platform. Anything that would be useful doing on an edge node close to the end user is in scope.
    • A CSP.
  • don’t attach to a single AI provider when you can attach to Cloudflare as your single AI gateway provider!

    rant aside, they are greatly positioned network-wise to offer this service. i wonder about their pricing and the potential markup on top of token usage?

    i presume they won’t let you “manage all your AI spend in one place” for free.

    • > i presume they won’t let you “manage all your AI spend in one place” for free.

      Of course they will. In return, they get to control who they’re routing requests to. I wouldn’t be surprised if this turns into the LLM equivalent of “paying for order flow”.

      • i got shivers thinking about a future of dynamic AI pricing and a gateway automatically choosing the cheapest provider available
        • OpenRouter already does this, unless I've misunderstood the premise.
        • shivers? as in it frightens you? i believe there is no way around tokens being priced like gasoline at the gas station - prices change every hour. Any other system means you are either over- or underspending.
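
          The gateway-side selection being imagined here is simple in principle: given current per-token prices, pick the cheapest provider. A toy sketch (provider names and prices are made up):

          ```typescript
          // Given a map of provider -> current price per million tokens,
          // return the name of the cheapest provider.
          function cheapestProvider(prices: Record<string, number>): string {
            return Object.entries(prices).reduce((best, cur) =>
              cur[1] < best[1] ? cur : best,
            )[0];
          }
          ```

          The hard parts, of course, are keeping the price feed fresh and deciding whether quality and latency differences are worth the savings.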
  • No spending limit / no ability to set a budget, unlike Google or OpenAI. Be prepared for an eye-watering invoice if you have a bug or get hacked.

    edit: Why the downvotes? It's correct, and it's a risk that competitors handle better, including for their CDN products (compare Bunny CDN). Maybe you are just used to the risk and haven't felt the burn yourself yet. Or you have the mistaken notion that there is no price at which temporary downtime is worth it to avoid paying.

    • I just added some credits to my account. You can set a daily $ spend limit, as well as add credits without auto-refill.
  • Can I set a hard cost limit? Otherwise I'm not interested; don't be like Google's mess of a billing system.
    • Seems like it. I just added some credits to my account. You can set a daily $ spend limit, as well as add credits without auto-refill.
  • Can I set a hard cost limit per day, with no drift? Else I'm not interested.
    • I think you should look at OpenRouter. It has budget controls.
  • A few weeks ago, I ran into a bug where Cloudflare's DNS server didn't detect that I had updated the records with the registrar. The bug was 100% on their end, entirely unsolvable by me, yet they have made it literally impossible to contact them to file a bug report. Their standard help workflow dead-ended by forcing me to talk to their absolutely useless AI help chatbot, which regurgitated their FAQ (inaccurately, uselessly), then referred me to a phone number that was disconnected/not in service, then gave me an email address that auto-replied it was no longer in use, then just looped back to the FAQ. There was no way for me to even send them an email to let them know they have a major bug.

    I immediately pulled all my sites off of Cloudflare and I will never use that godawful nightmare of a company for anything ever again. If they can't even host a generic help bot without screwing it up that badly, why would I ever use them for anything at all, never mind an AI platform?

    • What was the bug? I configure DNS for both public and private networks on Cloudflare semi-frequently and always see changes propagate in minutes or less.