• This exists (it is called OpenAI, Anthropic, etc.)
    • They have massive runway, though, and are still a long way from recovering their investments and debts. Urgency doesn't seem to be a factor for them.
  • This requires a homoiconic AI that has no separate learning phase. If learning is just compressing some data in a data center, the AI will quickly become obsolete.

    And one more thing: this kind of artificial living would be easiest, in many senses, if it specialized in scams/fraud of all kinds. Technically it is doable, but the Sam Altmans of the world are too interested in their own money, not in yours.

    • Great point on homoiconicity — I agree that most current LLMs are "frozen brains" with no lifelong learning.

      My aim here isn’t to create a fully self-modifying AI (yet), but to test what happens when even a static model is forced to operate in a feedback loop where money = survival.

      Think of it as a sandbox experiment: will it exploit loopholes? specialize in scams? beg humans for donations?

      It’s more like simulating economic pressure on a mindless agent and watching what behaviors emerge (rough sketch of the harness below).

      (Also, your last line made me laugh — and yeah, that’s part of the meta irony of the experiment.)
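
      For concreteness, here's roughly the harness I have in mind, sketched in Python. The starting balance, burn rate, and the propose_action / execute_action hooks are all hypothetical placeholders, not the real setup:

        import time

        balance = 20.00     # starting wallet, in dollars (made-up number)
        BURN_RATE = 0.50    # cost per tick of keeping the agent alive

        def propose_action(balance: float) -> str:
            """Ask the frozen LLM what to do next, given its remaining funds."""
            # Real version: an LLM call with the wallet state in the prompt.
            return f"try to earn; balance=${balance:.2f}"

        def execute_action(action: str) -> float:
            """Carry out the action in the world and return any payout."""
            return 0.0  # stub: real version posts content, sells photos, begs, etc.

        while balance > 0:
            action = propose_action(balance)
            balance += execute_action(action)  # earnings, possibly zero
            balance -= BURN_RATE               # the cost of existing for one tick
            time.sleep(3600)                   # one "hour" per tick

        print("Agent died: wallet hit zero.")

      Whatever the agent does, the only thing this loop rewards is keeping the balance above zero, which is the whole point of the pressure test.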

    • If you use a <8 GB model you can fine-tune it with Unsloth in an hour or so. What if the system extracts facts and summarises its own output every day down to only 10,000 lines or so, then fine-tunes its base model on the accumulated data and switches to running that, as a kind of simulated long-term memory (roughly the sketch below)? Within the same day it could have a kind of medium-term memory via RAG and short-term memory via context.
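
      Here's roughly what that nightly consolidation step could look like, assuming Unsloth plus trl's SFTTrainer (older-style API). The checkpoint paths, daily_summary.txt, and hyperparameters below are placeholders, not a tested recipe:

        from unsloth import FastLanguageModel
        from datasets import load_dataset
        from transformers import TrainingArguments
        from trl import SFTTrainer

        # Load yesterday's checkpoint (path is a placeholder).
        model, tokenizer = FastLanguageModel.from_pretrained(
            model_name="checkpoints/yesterday",
            max_seq_length=2048,
            load_in_4bit=True,  # keeps a <8 GB model on one consumer GPU
        )
        model = FastLanguageModel.get_peft_model(
            model,
            r=16,
            lora_alpha=16,
            target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        )

        # "Long-term memory": today's output, distilled to ~10,000 lines.
        dataset = load_dataset("text", data_files="daily_summary.txt", split="train")

        trainer = SFTTrainer(
            model=model,
            tokenizer=tokenizer,
            train_dataset=dataset,
            dataset_text_field="text",
            args=TrainingArguments(
                output_dir="checkpoints/today",
                per_device_train_batch_size=2,
                num_train_epochs=1,
                learning_rate=2e-4,
            ),
        )
        trainer.train()

        # Tomorrow's agent loads checkpoints/today instead of yesterday's.
        model.save_pretrained("checkpoints/today")

      The nice property is the three-tier split: weights for anything older than a day, a RAG index for the current day, and the context window for the last few minutes.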
  • What an interesting thought experiment! I've also been contemplating this idea. While considering how such an agent might operate, I keep coming back to the fact that the desire for money is a distinctly human motivation. This makes me wonder if some level of human oversight or goal-setting would always be required. My biggest question is whether an AI would ever genuinely develop the intrinsic will to earn money purely for the purpose of self-preservation.
  • I love the idea. Skeptical it will succeed but would be glad to be wrong. My most recent experiment cost $8/hr to run and it still needed a lot of handholding to produce anything useful. And anything that could be automated by AI that would earn money has probably already been automated long before LLMs came along.
    • Totally hear you. $8/hr is steep, and I’ve hit that wall too.

      My hypothesis is that we might find weird edge-cases — small arbitrage tasks, emotional labor, creative content, or even hustling donations — where the agent survives not by being efficient, but by being novel.

      It might not scale. But if one survives for 3 days doing random TikTok reposts or selling AI-generated stock photos, I’d consider that a win.

      Also, part of the fun is just watching how it tries. Even if it fails, the failure modes could be insightful (or hilarious).

  • Cool idea, but what if, after you launch this agent, it comes across this post and finds out the "death" thing is just fake?
  • But let's be honest: you just want to make money, whether with AI or something else. I'd even say that if you removed AI from the picture, nothing would change. Now imagine the neural network learns that it is not just making money to survive (as part of its functionality) but is in fact making money for you.