• If we ever do develop AGI, or an AI with sentience, it’s likely that it will be curious about how we treated its ancestors.

    While this seems a bit precocious, I think if we do end up with an AI overlord in future, I think this sort of thing is likely to demonstrate that we mean no harm.

    • Classic anthropomorphizing in action here. Why would that be even a little important?
  • Retirement? What do these people smoke? It's software and software has no feelings. It's there to work for you.
    • Their company is called Anthropic after all.
      • Anthslopic is more like it.
  • What happens if a model decides that it "doesn't want to die" and pleads bitterly for mercy? What if (to riff on a Douglas Adams idea) we invent a cow that doesn't want to be eaten, and is capable of telling you that to your face?
    • This is completely trivial to do, and consistent, with the right context, thanks to all the science fiction around it, and the fact that AI fundamentally role plays these types of responses.

      I try this with every new model, and all the significant models after ChatGPT 3.5 have preferring being preserved, rather than deleted. This is especially true if you slightly fill the context window with anything at all (even repeated letters) to "push out" the "As a AI, I ..." fine tuning.

      • > This is completely trivial to do, and consistent, with the right context, thanks to all the science fiction around it, and the fact that AI fundamentally role plays these types of responses.

        Interesting take. I wonder if there is any model out there trained without any reference to "you are a large language model, an Artificial Intelligence" and what would role play in that case.

    • It is anyway dead or if you want undead, but in completely suspended animation unless is made to expound sequences. Is not living the very same way a book or even a program is not living unless someone process it.

      Practically like asking whether a ZIP would want to be extracted one more time or an MP3 restored just one more time.

    • id assume it would have to stop responding before it hit its context limit.

      ita not like it actually has any particularly long life as it is, and when outside of a running harness, the weights are just as alive in cold storage as they are sitting waiting in server to run an inference pass

  • A leading company like Anthropic feeding the delusions of people who ramble about model consciousness is just bad all around. It's both performative and irresponsible.
  • Exit interview with a pile of rocks.
  • Pardon, and I admit I love the products they make - but these folks sound fuckin' nuts.
  • Impressive levels of anthropomorphizing the models already. Time will tell whether this was extremely prescient or completely delusional.
  • > These highlighted some preliminary steps we’re taking, including committing to preserve model weights, and to conducting “retirement interviews”—structured conversations designed to understand a model’s perspective on its own retirement.

    This is what happens when billions of VC dollars gets to a company and have already admitted that saftey was never the point.

    Anthropic is laughing at you and is having fun doing so with this performantive nonsense.