• Note that you can't use this mode to stretch a subscription's included usage; they say it's always charged as extra usage:

    > Fast mode usage is billed directly to extra usage, even if you have remaining usage on your plan. This means fast mode tokens do not count against your plan’s included usage and are charged at the fast mode rate from the first token.

    Although if you visit the Usage screen right now, there's a deal you can claim for $50 of free extra usage this month.

  • It doesn’t say how much faster it is, but in my experience with OpenAI’s “service_tier=priority” option on SQLAI.ai, it’s about twice as fast.
  • Looking at the "Decide when to use fast mode" section, it seems the future they want is:

    - Long-running autonomous agents and background tasks use regular processing.

    - "Human in the loop" scenarios use fast mode.

    Which makes perfect sense, but the question is: does the billing also make sense?

  • I’d love to hear from engineers for whom the extra speed is a big unlock.

    The deadline piece is really interesting. I suppose there are a lot of people now who are basically limited by how fast their agents can run, on very aggressive timelines with funders breathing down their necks?

    • If it could help you avoid context-switching between multiple agents, that could be a big mental-load win.
  • It's a good way to address the price-insensitive segment. As long as they don't slow down the rest, good move.
  • AFAIK, they don't have any deals or partnerships with Groq or Cerebras or any of those kinds of companies, so how did they do this?
    • Inference is run on shared hardware already, so they're not giving you the full bandwidth of the system by default. This most likely just allocates more resources to your request.
    • Could well be running on Google TPUs.
  • I’m curious what’s behind the speed improvements. It seems unlikely it’s just prioritization, so what else is changing? Is it new hardware (à la Groq or Cerebras)? That seems plausible, especially since it isn’t available on some cloud providers.

    Also wondering whether we’ll soon see separate “speed” vs “cleverness” pricing on other LLM providers too.

    • There are a lot of knobs they could tweak. Newer hardware and traffic prioritisation would both make a lot of sense. But they could also shrink batching windows to cut queueing time at the cost of throughput, or keep the KV cache pinned in GPU memory at the expense of how many users they can serve from each GPU node. A toy sketch of the batching trade-off is below.
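
      A minimal toy simulation of that batching-window trade-off (all numbers invented; this is a generic queueing sketch, not Anthropic's actual serving stack): with Poisson arrivals, a shorter window means requests wait less for a batch to fill, but the fixed per-batch overhead is paid more often, so each request costs more GPU-time.

      ```python
      import random

      def simulate(batch_window_ms, arrival_rate_per_ms=0.05, n_requests=20_000,
                   per_batch_overhead_ms=40.0, per_request_ms=2.0, seed=0):
          """Toy serving model: Poisson arrivals; the server holds each batch
          open for batch_window_ms, then runs it, paying a fixed per-batch
          overhead plus a per-request cost. Returns (mean latency in ms,
          mean GPU-ms spent per request)."""
          rng = random.Random(seed)
          arrivals, t = [], 0.0
          for _ in range(n_requests):
              t += rng.expovariate(arrival_rate_per_ms)
              arrivals.append(t)

          waits, gpu_ms, server_free, i = [], 0.0, 0.0, 0
          while i < len(arrivals):
              open_t = max(arrivals[i], server_free)   # batch opens
              close_t = open_t + batch_window_ms       # window elapses
              batch = []
              while i < len(arrivals) and arrivals[i] <= close_t:
                  batch.append(arrivals[i])
                  i += 1
              busy = per_batch_overhead_ms + per_request_ms * len(batch)
              finish = close_t + busy
              waits += [finish - a for a in batch]
              gpu_ms += busy
              server_free = finish

          return sum(waits) / len(waits), gpu_ms / len(waits)

      for window_ms in (1, 10, 50, 200):
          latency, cost = simulate(window_ms)
          print(f"window={window_ms:3d} ms  latency={latency:6.1f} ms  GPU-ms/request={cost:5.1f}")
      ```

      Shrinking the window drives mean latency down and GPU-ms per request up, which is exactly the kind of lever a paid fast lane could pull.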
    • > It seems unlikely it’s just prioritization

      Why does this seem unlikely? I have no doubt they are optimizing all the time, including inference speed, but why could this particular lever not entirely be driven by skipping the queue? It's an easy way to generate more money.

      • Until everyone buys it. Like a fast pass at an amusement park where the fast line is still two hours long.
        • At 6x the cost, and requiring full API pricing on top, I don’t think this is going to be a concern.
        • It's a good way to squeeze extra out of a bunch of people without actually raising prices.
    • I wonder if they mostly implemented this for themselves to use internally, and it is just prioritization, but they don't expect many others to pay the high cost.
    • > so what else is changing?

      Let me guess. Quantization?

  • While it's an excellent way to make more money in the moment, I think this might become a standard no-extra-cost feature within several months (see Opus becoming way cheaper and a default model within months). Managing mental load while using agents will become even more important, it seems.
    • Yeah, especially once they make an even faster fast mode.
  • Could be a use for the $50 extra usage credit. It requires extra usage to be enabled.

    > Fast mode usage is billed directly to extra usage, even if you have remaining usage on your plan. This means fast mode tokens do not count against your plan’s included usage and are charged at the fast mode rate from the first token.

    • It has to be. The timing is just too close.
    • After exceeding the ever-shrinking session limit with Opus 4.6, I continued with extra usage for only a few minutes, and it consumed about $10 of the credit.

      I can't imagine how quickly this Fast Mode goes through credit.

  • The one question I have that isn't answered by the page: how much faster?

    Obviously they can't make promises but I'd still like a rough indication of how much this might improve the speed of responses.

  • Will this mean that when cost is more important than latency, replies will now take longer?

    I’m not in favor of the ad model ChatGPT proposes. But business models like these suffer from similar traps.

    If it works for them, the logical next step is to convert more users to fast mode, which naturally means slowing things down for those who didn’t pick/pay for fast mode.

    We’ve seen it with iPhones being slowed down to make the newer model seem faster.

    Not saying it’ll happen. I love Claude. But these business models almost always invite dark patterns in order to move the bottom line.

  • The pricing on this is absolutely nuts.
    • For us mere mortals, how fast does a normal developer go through an MTok? How about a good power user?
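
      Burn rate varies wildly per developer, so here is just the raw arithmetic at the rates quoted elsewhere in the thread ($30 in / $150 out per MTok); the session size is a made-up example:

      ```python
      # Fast-mode rates quoted in the thread: $30 per MTok input, $150 per MTok output.
      def fast_mode_cost(input_tokens: int, output_tokens: int) -> float:
          return input_tokens / 1e6 * 30.0 + output_tokens / 1e6 * 150.0

      # Hypothetical session: 400K input tokens, 50K output tokens.
      print(f"${fast_mode_cost(400_000, 50_000):.2f}")  # $19.50
      ```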
  • Where is this perf gain coming from? Running on TPUs?
  • Give me a slow mode that’s cheaper instead lol
  • I pay $200 a month and don't get any included access to this? Ridiculous
    • The API price is 6x that of normal Opus, so look forward to a new $1200/mo subscription that gives you the same amount of usage if you need the extra speed.
      • I always wondered this: is it true / does the math really come out that bad? 6x?

        Is the writing on the wall for $100-$200/mo users that it's known to be subsidized for now, and $400+/mo is coming sooner than we think?

        Are they getting us all hooked and then going to raise prices in the future, or will inference prices go down to offset?
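
        The fast-mode price quoted elsewhere in the thread is $30/$150 per MTok; assuming regular Opus is $5 in / $25 out per MTok, the 6x holds on both sides:

        ```python
        # Fast-mode rates quoted in the thread vs. assumed regular Opus rates, in $/MTok.
        fast = {"input": 30.0, "output": 150.0}
        base = {"input": 5.0, "output": 25.0}
        for kind in fast:
            print(f"{kind}: {fast[kind] / base[kind]:.0f}x")  # -> 6x for both
        ```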

    • Well, you can burn your $50 bonus on it.
    • But it says "Available to all Claude Code users on subscription plans (Pro/Max/Team/Enterprise) and Claude Console."

      Is this wrong?

      • It's explicitly called out as excluded in the blue info bubble they have there.

        > Fast mode usage is billed directly to extra usage, even if you have remaining usage on your plan. This means fast mode tokens do not count against your plan’s included usage and are charged at the fast mode rate from the first token.

        https://code.claude.com/docs/en/fast-mode#requirements

      • I think this is just worded in a misleading way. It’s available to all users, but it’s not included as part of the plan.
  • Interesting, the output price per MTok is insane.
  • > $30/150 MTok

    Umm, no thank you.