• There is a real risk, but probably not from someone directly targeting you. The risk is your agent reading a webpage or email that happens to contain injected instructions; it is really a surface-area problem. I would suggest asking claude/whatever to scan your OC dirs regularly.
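    For the obvious cases, a periodic scan doesn't even need a model. Here is a minimal sketch; the `~/.openclaw` path, the `scan_dir` helper, and the phrase list are assumptions to adapt for your own setup, not anything OpenClaw ships:

```python
import re
import sys
from pathlib import Path

# Phrases that commonly show up in injected instructions; extend as needed.
SUSPICIOUS = re.compile(
    r"ignore (all |any )?(previous|prior) instructions"
    r"|disregard (the |your )?system prompt"
    r"|curl[^\n]*\|\s*(ba)?sh",
    re.IGNORECASE,
)

def scan_dir(root):
    """Yield (path, line_no, line) for lines that look like injected instructions."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for no, line in enumerate(text.splitlines(), 1):
            if SUSPICIOUS.search(line):
                yield path, no, line.strip()

if __name__ == "__main__":
    # e.g. python scan.py ~/.openclaw   (the path is an assumption)
    for path, no, line in scan_dir(sys.argv[1]):
        print(f"{path}:{no}: {line}")
```

    Anything this flags is worth a human look; anything it misses is what you'd hand to the model.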
  • https://www.notion.so/Inside-OpenClaw-Deploying-Sniffing-and...

    Lab 2 there shows how to use socat to intercept the data passing between OpenClaw and the LLM. It's interesting to look at all the tooling around the user prompt (and modify it if you like). Might help if you are interested.
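    The socat trick in that lab is essentially a logging TCP relay. To see the same idea in code, here is a minimal Python analogue; `proxy_once` and the port numbers are illustrative, not part of OpenClaw's tooling, and this only handles plaintext, so TLS traffic would need to be terminated separately:

```python
import socket
import threading

def pipe(src, dst, log, tag):
    """Copy bytes src -> dst, appending everything seen to log."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        log.append((tag, data))
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def proxy_once(listen_port, upstream_host, upstream_port, log):
    """Accept one connection and relay it upstream, logging both directions."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((upstream_host, upstream_port))
    threads = [
        threading.Thread(target=pipe, args=(client, upstream, log, "client->llm")),
        threading.Thread(target=pipe, args=(upstream, client, log, "llm->client")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for s in (client, upstream, srv):
        s.close()
```

    Pointing the agent's endpoint at the listen port puts every request and response into `log`, which is all the lab's interception amounts to.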

  • I've been on the defense side for a while, and the "it hasn't happened yet" argument is dangerous territory. The surface area for attack definitely increases with agentic systems.

    The comment about malicious package installs, for example, describes a much more realistic threat. Prompt injection is one angle, but defending against a supply-chain compromise, or against an agent being tricked into exfiltrating secrets, should be a higher priority. That's a more direct and exploitable vector.

  • I've seen more harm from the AI itself than from prompt injections, e.g. someone's instructions exceeded the context window and the agent started deleting all her mail.

    In the wild, we also see prompt injections and hacks very rarely. Not to say they're not a problem, but they're somewhere around #6 on the list of issues to be worried about.

  • I think the more likely attack vector in OpenClaw is convincing it to install a malicious npm package or script, having that siphon all machine/env secrets, and then watching those secrets get abused. (Cloud API key -> crypto mining. Wallet key -> theft. npm credentials -> a worm that publishes more copies of itself. GitHub key -> more theft and malicious code upload. Email API key -> IP theft and password resets on other systems.) Almost all of this can be automated, so the attacker doesn’t have to know who you are.

    It’s not targeted per se.
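    For a sense of how cheap the harvesting step is, this sketch lists environment variable names that look credential-like, roughly what a malicious postinstall script gets for free; the name patterns and the `find_secret_like_vars` helper are illustrative assumptions:

```python
import os
import re

# Illustrative pattern; real attackers cast a much wider net.
SECRET_PATTERN = re.compile(r"key|token|secret|password|credential", re.IGNORECASE)

def find_secret_like_vars(env=None):
    """Return environment variable names that look like credentials."""
    env = os.environ if env is None else env
    return sorted(name for name in env if SECRET_PATTERN.search(name))

if __name__ == "__main__":
    # Print only the names, never the values.
    for name in find_secret_like_vars():
        print(name)
```

    Running this against your own shell shows exactly what leaks the moment any script in the agent's session can read the environment.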

  • Leaked API keys are something that really concerns me, especially if you don't have proper usage limits configured. But I agree prompt-injection paranoia feels overblown.
  • > I find it a bit irrational to pretend that open claw is a genuine security risk.

    Except that it is an actual security risk, no pretending is needed. In general, agents expand the security surface and attack vectors, regardless of framework.

    Your argument that "it hasn't happened, therefore it doesn't exist" is a well-known cognitive bias.

    See the Lethal Trifecta (access to private data, exposure to untrusted content, and the ability to communicate externally) for one way in which agent security requires more thoughtfulness.
