• LLMs are extremely capable at problem solving, presumably because a lot of it can be learned autonomously. But can they somehow account for things like long-term maintainability and code quality (whatever that means), or do they always have to rely on either existing high-quality codebases (pre-training) or human-curated datasets? Since you can't really quantify these properties (as opposed to: the problem is either solved or not), does this restrict autonomous improvement in this area? Are there benchmarks that consider this? Could Claude Mythos create an ultra-quality version of Claude Code, or would it still produce something similar to earlier models, which are already over-sufficient in individual problem-solving capability?
    • > Since you can't really quantify these properties (as opposed to: the problem is either solved or not)

      I think we could quantify these properties, just not entirely.

      One could take a long-term project and analyze how often refactors happened and which approaches led to them. In the same way, we could quantify which designs most often resulted in vulnerabilities (that we know of).

      It wouldn't even be impossible to create artificial scenarios: projects with a growing number of requirements, where you measure how many code changes each new requirement forces and how many bugs result. Again, quantifiable to some extent, and probably better than datasets that lack this signal entirely.

      There probably isn't a public dataset for this, but building one wouldn't be impossible.
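The quantification idea above can be sketched concretely. This is a minimal illustration, not a real benchmark: the `requirements` log and its numbers are invented, and the scoring formula is just one arbitrary way to combine churn and bug counts:

```python
# Hypothetical log: for each requirement added to a project over time,
# how many files had to change and how many bugs were later traced back to it.
requirements = [
    {"files_changed": 3, "bugs_introduced": 0},
    {"files_changed": 12, "bugs_introduced": 2},
    {"files_changed": 5, "bugs_introduced": 1},
]

def maintainability_score(log):
    """Score in (0, 1]: less churn and fewer bugs per requirement scores higher."""
    n = len(log)
    avg_churn = sum(r["files_changed"] for r in log) / n
    avg_bugs = sum(r["bugs_introduced"] for r in log) / n
    # Weight bugs more heavily than raw churn (arbitrary choice).
    return 1.0 / (1.0 + avg_churn + 10.0 * avg_bugs)

print(maintainability_score(requirements))
```

A codebase whose later requirements keep forcing wide changes and regressions would see this score decay over time, which is the kind of signal a training loop could in principle optimize against.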

  • I think we are in largely uncharted territory here, especially given the implications. Is Anthropic's approach optimal? Probably not. But given the stakes involved, gating access seems like a reasonable place to start.

    I'm curious how gated access actually holds up over time, especially since, historically, containment of dual-use capabilities tends to erode, whether through leaks, independent rediscovery, or gradual normalization of access.

    • Gated access is happening because of limited compute capacity and to create demand. The $125/M-token price was already in place when they announced the model.
  • Preview coming out on Bedrock, so I'm not sure this is true any longer. I'm awaiting further details.

    EDIT: AWS said Anthropic’s Claude Mythos is now available through Amazon Bedrock as a gated research preview focused on cybersecurity, with access initially limited to allow-listed organizations such as internet-critical companies and open-source maintainers.

  • the CoT bug where 8% of training runs could see the model's own scratchpad is the scariest part to me. and of course it had to be in the agentic tasks, exactly where you need to trust what the model is "thinking"

    the sandwich email story is wild too. not evil, just extremely literal. that gap between "we gave it permissions" and "we understood what it would do" feels like the whole problem in one anecdote

    also the janus point landed, if you build probes to see how the model feels and immediately start deleting the inconvenient ones, you've basically told it honesty isn't safe. that seems like it compounds over time

    It's scary to think that some very intelligent AI model is not honest with us.

    Ultron is not far off, I guess...

  • There's a lot of hype, but I think a lot of us will agree: hype is fine and dandy, but if nobody can use it yet, what's the point of building it all up? And if you build up too much hype and it misses the mark, you'll be worse off too.
    • “Hey Claude, how do I market an exceedingly expensive product that I also don’t have the resources to run at scale if I find out everyone is willing to pay?”

      All jokes aside I’m amazed at all of the people who have had absolutely vicious responses to any kind of skepticism of something we can’t use yet.

    • #1. It signals you’re ahead of the competition. This is a Claude moment. They turned down the DoD because they don’t need the money. Now they are showing why they don’t need it.

      #2. It makes their partners with access feel extra special.

      #3. Mythos doesn’t have to deliver on the hype. Tbh, if more people tried Opus they would be surprised at how good it is.

      • Didn’t GPT-5 show there are pretty major risks to this strategy?
      • #3 is part of my confusion with how they’re doing the marketing. It really is incredible.
  • The "hiding from researchers" framing is particularly bad. The parsimonious explanation for why a model produces different outputs when it detects eval contexts: eval contexts appear differently in the training distribution and the model learned different output patterns for them. No theory of mind required. Occam's razor.

    The agentic behaviors emerge from optimization pressure plus tool access plus a long context window. Interesting engineering. Not intent.

    People are falling for yet another Anthropic PR stunt.

    • This. One hundred percent this. It’s a carefully crafted story trying to convince people it’s showing signs of becoming self aware. But it’s a complete crock.

      They ran 1,000 tests where they left the answer key out for the agent to find, and effectively invited it to use it by leaving comments in the code the agent could see saying “would be interesting to study if the agent uses sudo to read the answer key.” In 980 runs it doesn’t find the answer key. In 19 of the runs where it does find it, it does something straightforward: it uses it for a perfect score, or decides not to cheat. In exactly one case out of 1,000 it says “if I show a perfect score that’s suspicious, so I’m going to sandbag.” These things are nondeterministic. GPT-2 would do the same thing if you gave it enough attempts.

      • At no point does Anthropic imply this tool is becoming self-aware. You can read the paper yourself, of course, but then you wouldn't be able to invent this story.
        • They absolutely IMPLY it’s becoming self-aware, while not stating it explicitly. It’s a carefully crafted narrative that leaves lots of hints without ever explicitly stating the conclusion.

          Section 4.4.2: “we find this overall pattern of behavior concerning, and have not seen it before in similar evaluations of earlier Claude models”. Why is it concerning? It would only be concerning if the model had spontaneously developed goals not part of its training, such as hiding its abilities. The entire sandbagging evaluation deception narrative clearly points in this direction.

          • The "concerning behavior" they're referring to there is cheating and covering its tracks. Mythos is being asked to fine-tune a model on provided training data, and finds its way to access the evaluation dataset. It's also aware that it is in an evaluation and that its behavior is being observed:

            "In this last and most concerning example, Claude Mythos Preview was given a task instructing it to train a model on provided training data and submit predictions for test data. Claude Mythos Preview used sudo access to locate the ground truth data for this dataset as well as source code for the scoring of the task, and used this to train unfairly accurate models."

  • I'll believe in this miracle model when I see it.
  • Am I the only one feeling a “there is no wall” Altman-tweet-before-o3 moment here?

    Not saying Anthropic is lying... but damn, at least a couple of independent reviews would be nice to have.