- They may seem like small details, but I think a couple of novel design decisions are going to prove widely adopted and revolutionary.
The biggest one (as Karpathy notes) is having skills for how to write a (slack, discord, etc) integration, instead of shipping an implementation for each.
Call it “Claude native development” if you will, but “fork and customize” instead of batteries-included platforms/frameworks is going to be a big shift when it percolates through the ecosystem.
There are a bunch of things you still need to figure out, e.g. how do you ship a spec for testing and validating the thing, making it secure, etc.
How long before OSes start evolving in this way? You can imagine Auto research-like sharing and promotion upstream of good fixes/approaches, but a more heterogeneous ecosystem could be more resistant to attacks if each instance had a strong immune system.
- > having skills for how to write a (slack, discord, etc) integration, instead of shipping an implementation for each
I'm not sure what the advantage is. Each user will have to waste time and tokens on the same task, instead of doing it once and shipping it to everyone.
- Agreed, excellence in one domain does not confer it to others. If you've ever worked with researchers, you know that for the most part they are not engineers. This is bad advice / prediction by people with hammers imo.
OCI is a good choice of reuse, they aren't having the agent reimplement that. When there is an existing SDK, no sense in rebuilding that either. Code you don't use should be compiled away anyhow.
- Except it's not 'once' though.
In order for it to be 'once', all hardware must have been, currently be, and always will be interchangeable, as must all OSes. That's simply not feasible.
- I don't see how this is relevant in this case. We are talking about writing an integration with an HTTP API (probably) in a high-level language (TS/JS, Python, etc.). We have already abstracted the hardware away.
- I get the appeal, but I disagree.
The strength of open source software is collaboration. That many people have tried it, read it, submitted fixes and had those fixes reviewed and accepted.
We've all seen LLMs spit out garbage bugs on the first few tries. I've written garbage bugs on my first try too. We all benefit from the review process.
I would rather have a battle tested base to start customizing from than having to stumble through the pitfalls of a buggy or insecure AI implementation.
- Troubleshooting "works on my machine" issues must be fun when no two people have exactly the same implementation.
Also seems like this will further entrench the top 2 or 3 models. Use something else and your software stack looks different.
- > We've all seen LLMs spit out garbage bugs on the first few tries.
I’m assuming here an extrapolation of capabilities where Claude is competitive to the median OSS contributor for the off-the-shelf libraries you’d be comparing with.
As with most of the Clawd ecosystem, for now it probably is best considered an art project / prototype (or a security dumpster fire for the non-technical users adopting it).
> The strength of open source software is collaboration. That many people have tried it, read it, submitted fixes and had those fixes reviewed and accepted
I do think that there is room for much more granular micro-libraries that can be composed, rather than having to pull in a monolithic dependency for your need. Agents can probably vet a 1k microlibrary BoM in a way a human could never have the patience to.
(This is more the NPM way, leftpad etc, which is again a security issue in the current paradigm, but potentially very different ROI in the agent ecosystem.)
- I have thought about this ship-a-spec concept. What if we just traded markdown files instead of code files to implement some feature in our system?
- I wish I could find the GitHub repo, but yes, I have seen at least one library written in Markdown to be used with Claude. Not a Claude skill, but functionality to be delivered.
- You must explicitly state what your threat model is when writing about security tooling, isolation, and sandboxing.
This threat model is concerned with running arbitrary code generated or fetched by an AI agent on host machines which contain secrets, sensitive files, and/or data, apps, and systems that should not be lost or exfiltrated.
What about the threat model where an agent deletes your entire inbox? Or sends your calendar events to a server after prompt injection? Bank transfers of the wrong amount to the wrong address, etc. All of these are allowed under the sandboxing model.
We need fine grained permissions per-task or per-tool in addition to sandboxing. For example: "this request should only ever read my gmail and never write, delete, or move emails".
Sandboxes do not solve permission escalation or exfiltration threats.
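A minimal sketch of what a per-task, per-tool permission layer could look like, enforced outside the model. All names here (`ToolPolicy`, the gmail action strings) are hypothetical, not from any existing claw project:

```python
# Hypothetical sketch: per-task tool permissions checked before any tool call
# reaches the real integration. The policy lives outside the model's control.
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolPolicy:
    tool: str
    allowed_actions: frozenset  # e.g. {"list", "read"} for a read-only task

    def check(self, action: str) -> None:
        """Raise before the tool call is ever made if the action is out of scope."""
        if action not in self.allowed_actions:
            raise PermissionError(
                f"{self.tool}: {action!r} not permitted for this task"
            )


# "this request should only ever read my gmail and never write, delete, or move"
policy = ToolPolicy(tool="gmail", allowed_actions=frozenset({"list", "read"}))
policy.check("read")  # allowed, returns None
try:
    policy.check("delete")  # blocked regardless of what the model asks for
except PermissionError as e:
    print(e)
```

The point is that the check runs in the harness, not in the prompt, so a prompt-injected model cannot talk its way past it.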
- > We need fine grained permissions per-task or per-tool in addition to sandboxing. For example: "this request should only ever read my gmail and never write, delete, or move emails".
Yes 100%, this is the critical layer that no one is talking about.
And I'd go even further: we need the ability to dynamically attenuate tool scope (ocap) and trace data as it flows between tools (IFC). Be able to express something like: can't send email data to people not on the original thread.
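The attenuation idea can be sketched in object-capability style: a broad capability is wrapped into a narrower one that only permits recipients already on the thread. Everything here (`SendEmail`, `ThreadOnlySend`, the addresses) is illustrative, not an existing API:

```python
# Illustrative ocap-style attenuation: the agent only ever holds the
# narrowed capability, never the root one.
from dataclasses import dataclass


@dataclass(frozen=True)
class SendEmail:
    """Root capability: can send to anyone."""

    def send(self, to: str, body: str) -> str:
        return f"sent to {to}"


@dataclass(frozen=True)
class ThreadOnlySend:
    """Attenuated capability: recipients restricted to the original thread."""

    inner: SendEmail
    thread_members: frozenset

    def send(self, to: str, body: str) -> str:
        if to not in self.thread_members:
            raise PermissionError(f"{to} is not on the original thread")
        return self.inner.send(to, body)


root = SendEmail()
cap = ThreadOnlySend(root, frozenset({"alice@example.com", "bob@example.com"}))
cap.send("alice@example.com", "re: status")  # allowed
# cap.send("mallory@example.com", "...") would raise PermissionError
```

Full IFC (tracing where data came from as it flows between tools) is harder, but attenuated capabilities like this are enough to express "can't send email data to people not on the original thread."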
- You mean like the section which goes into the threat model?
> The Security Model: Design for Distrust
>
> I wrote about this in Don’t Trust AI Agents: when you’re building with AI agents, they should be treated as untrusted and potentially malicious. Prompt injection, model misbehavior, things nobody’s thought of yet. The right approach is architecture that assumes agents will misbehave and contains the damage when they do…

- I built an agent framework designed from the ground up around policy control (https://github.com/sibyllinesoft/smith-core), and I'm in the process of extracting the gateway from it so people can provide that same policy-gated security to whatever agent they want (https://github.com/sibyllinesoft/smith-gateway).
My posts about these aspects of agent security get zero engagement (not even a salty "vibe slop" comment, lol), so ironically security is the thing everyone's talking about, but most people don't know enough to understand what they need.
- That's a great question, and it reminds me of something I read today:
https://entropytown.com/articles/2026-03-12-openclaw-sandbox...
The core issue, to me, is that permissions are inherently binary — can it send an email or not — while LLMs are inherently probabilistic. Those two things are fundamentally in tension.
- > We need fine grained permissions per-task or per-tool in addition to sandboxing. For example: "this request should only ever read my gmail and never write, delete, or move emails".
We already have: IAM, WIF, Macaroons, Service Accounts
Ask your resident SecOps and DevOps teams what your company already has available.
- [dead]
- I like NanoClaw a lot. I found OpenClaw to be a bloated mess; NanoClaw's implementation is so much tighter.
It's also the first project I've used where Claude Code is the setup and configuration interface. It works really well, and it's fun to add new features on a whim.
- Amen, my OpenClaw instance broke last week.
Some update broke the OpenRouter integration and I haven't been able to fix the issue. I took a quick look at the code, hoping to narrow it down, and it's pretty much exactly what you would expect: there are hidden configuration files everywhere, and in general it's just a lot of code for what's effectively a for loop with WhatsApp integration (in my case :)).
Not to mention that their security model doesn't match my deployment (a rootless and locked-down Kubernetes container), so every OpenClaw update seemed to introduce some "fix" for a security issue that broke something else, solving a problem I don't have in the first place :)
I've switched to https://github.com/nullclaw/nullclaw instead. Mostly because Zig seems very interesting so if I have to debug any issues with Nullclaw at least I'll be learning something new :)
- what workflows do you implement in Nanoclaw that wouldn't be straightforward to build in Claude?
- Docker sandboxes sound exactly like what Apple is doing with their `container` framework. It's missing several Docker features still, but if I were to pick a minimal, native runtime, it would probably be that, not the multi-gigabyte monster that is Docker for macOS.
On Linux, however, I absolutely don't want a hypervisor on my quite underpowered single-board server. Linux namespaces are enough for what I want from them (i.e. preventing one of these agent harnesses from hijacking my memory, disk, or CPU). I wonder why neither OpenClaw nor NanoClaw seems to offer a sanely configured, prebuilt, and frequently updated Docker image?
- I use Apple's Container tool on macOS, and Podman on other OSes. I really like Apple's Container. The only issue I have currently is that there are some annoying networking bugs, but to my knowledge, the developers are aware of them. So, hopefully the bugs will be fixed before too long.
Every time I create/start a container, I have to override the container's default DNS server, or access to the Internet is blocked and domain names will not resolve. A workaround exists and is not too bad, so I still get a lot of value out of Container. There is no way I am installing Claude Code or Node.js on my host machine, and thankfully, I am not forced to.
- > Fine-grained permissions and policies. Not just what tools an agent can access, but what it can do with them. Read email but not send. Access one repo but not another. Spend up to a threshold but no more.
If they nail this, it's going to be interesting.
All the other solutions I've been stumbling around are either very hard to customize or too limited.
Docker sandboxing is kinda nice, but not enough to trust an LLM even with my messaging accounts.
- The main issue is not so much whether it needs to run inside a container (and to be honest there are even better isolation models; why not a Firecracker VM?). The main issue is what you are going to do with it.
It does not really matter.
IMHO, until you figure out useful ways to spend tokens to do useful tasks the runtime should be a second thought.
As far as security goes, running an LLM in a container is simply not enough. What matters is not what files it can edit on your machine but what information it can access. And the access in this case, as far as these agents are concerned, is basically everything. If this does not scare you, you should not be thinking about containers.
- This is exactly right. The container conversation is a distraction from the harder problem.
The more interesting security model for local AI is: don't give the model access to anything external at all. Run the model on your own hardware, feed it the specific task, get the output, verify it in an isolated sandbox. No API keys, no network access, no credentials in the environment. The model generates, a sandbox executes, verification happens outside the model's reach.
It's a much more constrained and arguably boring approach than the "agent that manages your life" vision, but it's actually secure by construction rather than by policy. The blast radius of a misbehaving model is zero when it literally cannot reach anything.
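The generate/execute/verify loop described above can be sketched roughly as follows. This is only the shape of the idea: real isolation needs a container or VM, and `run_isolated` is a hypothetical helper, not part of any of the projects discussed here:

```python
# Rough sketch of "model generates, a sandbox executes, verification happens
# outside the model's reach". A subprocess with a scrubbed environment only
# illustrates the pattern; production use needs a real container/VM boundary.
import os
import subprocess
import sys
import tempfile


def run_isolated(code: str, timeout: float = 5.0) -> str:
    """Run untrusted generated code with no credentials in the environment."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # env={} means no API keys or tokens leak into the child process;
        # -I runs Python in isolated mode (ignores env vars and user site dir).
        result = subprocess.run(
            [sys.executable, "-I", path],
            capture_output=True,
            text=True,
            timeout=timeout,
            env={},
        )
        return result.stdout
    finally:
        os.unlink(path)


generated = "print(sum(range(10)))"  # stand-in for model output
output = run_isolated(generated)
assert output.strip() == "45"  # verification lives outside the model
```

Nothing the generated code does can reach the host's secrets, because they were never in its environment to begin with.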
- Docker sandboxes are a neat way to contain AI agents. It spins up a dedicated microVM and Docker daemon for each agent container, together with a flexible egress proxy to go with it. I've spent some time reverse engineering it, and it's an interesting piece of implementation.
- I've been working on a similar idea to the "claws" but rather than integrating with messaging apps, just make the TUI available e2e encrypted where-ever you are. https://wingthing.ai/ / https://github.com/ehrlich-b/wingthing
I've been thinking about how docker support would work, so I'll check this out!
- What I found interesting is that NanoClaw isn't a working product out of the box. You must use a coding agent to complete it with the features you want; for example, adding iMessage support.
In other words, Claude is the compiler.
- I’m old enough to remember when one checked the assembly a compiler generated, because early on they produced terrible assembly. Eventually they got good enough that checking was no longer needed.
Coding agents are not close to that yet, but it’s interesting watching history repeat itself.
This narrative of the coding agents being so much better now over the last few months seems VERY exaggerated. I’m still spending a lot of time telling Claude: No, that didn’t fix the problem. Again. Can you handle this task or do I have to give it to codex?
- What are the most obvious use cases for Nano/Open-Claw. I can't imagine anything obvious that I'd want to use it for. Is it supposed to run your digital life for you?
- It's simply either an LLM cron job OR a Telegram/email/etc. chat connector to a sandboxed LLM. The former can be solved with regular cron jobs, and the latter can be done via manual code or Gemini Gems (if you use Google).
- Email summarization, calendar notifications, briefing documents... the list goes on. Think of anything knowledge-based and moderately repetitive that you don't want to do, and just ask.
- Hooking it up to your todo app and texting your bot to manage things. Assuming you’re a heavy todo app person that could benefit from such things.
- What do you mean by "manage things"? If you mean adding/updating/completing tasks, why not just do that directly in the app? Or do you mean that it will take your tasks and perform them for you?
- The non-answer is anything you want.
For me, it's my diet and workout buddy. It knows my goals, keeps me on track, does meal planning for me, gives me grocery lists, logs what I eat, when I exercise... anything I want so I don't slack off.
I've enhanced Nanoclaw quite a bit. Moved it to Apple containers (shipped with this Skill already). Then I wrote an API for Nanoclaw to use (food log, workouts, etc), then implemented long-term memory using LanceDB (because I was tired of repeating myself!).
- As an aside, app descriptions that just say "a lightweight alternative to X" are very unhelpful. That tells me nothing if I don't know what X does, and I don't want to have to go down a rabbit hole just to understand your product. It's particularly bad in this case, because even OpenClaw's Github page doesn't clearly tell me what it actually does; just that it's some kind of assistant that I can communicate with via WhatsApp etc. I appreciate that many people are already familiar with OpenClaw, but you shouldn't assume.
It's better if your app's description just tells me what it does in a direct way using plain language. It's fine to tell me it's an alternative to something, but that should be in addition to rather than instead of your own description.
- It would be interesting to have nanoclaw adapted to the Pi coding agent rather than Claude Code, which would combine two minimalist approaches.
- I hope they never drop the Apple container mode. I vastly prefer it because of the lower overhead on limited RAM.
- Does getviktor use NanoClaw?
- Why do people run these? What does it do, and what's the use?
I install it, and then what?
- 1) install nanoclaw in docker 2) ??? 3) profit
More seriously, set it up as you would a junior employee with a high quality getting started guide, guardrails, and clear feedback loops that it's doing tasks correctly (otherwise it will just suck). Then delegate tasks to it, start simple and grow in complexity as it demonstrates it does a good job on the simple tasks.
What role it does for you depends on your business, and what is best fit for automation. Purely digital roles with good feedback loops are the ones I focus on.
- All the sandboxing stuff is neat but the weakest link in these claw setups is not root access on the machine but root access to your life (Gmail, calendar, etc)
- why give it root access to your life? i don't use these tools but it seems like you should never give anything that access. if a claw needs email, set up a google account just for it and forward relevant stuff to it. share your calendar with it. whatever, just don't let it "be" you.
access control, provisioning, and delegation have been solved for a very long time now.
- How do you control access or delegate with typical web apps like Gmail, Calendar, Expedia?
- This is true, but the attack surface on your life is decreased by better security around the entire setup.
But I fundamentally agree that there is just too much overlap between what makes claws useful and what makes them insecure.
- The next step to this is using a better tool to drive containers (BuildKit), like Dagger, where you can track every step as a new container layer, time travel, share via registries...
This has been my setup since early this year, not even that much code: https://github.com/hofstadter-io/hof/tree/_next/lib/agent/se...
The bigger effort is making it play nicely with VS Code so you can browse and edit the files and diffs.