- This looks really cool and I've definitely been feeling this pain. I've been building out a solution for myself on top of Docker. What are the advantages of using Coasts over Docker?
- Hey thanks! To be clear it does use docker. It's a docker-in-docker solution.
I think there are quite a few things:
1) You need a control plane to manage the host-side ports. Docker alone can't do that, so you're either going to write a special docker-compose for your development environment with hard-coded dynamic ports, or you're going to end up writing your own custom control plane.
2) You can preserve your regular Docker setup without needing to alter it around dynamic ports and parallelized runtimes. I like this a lot because I want to know that my docker-compose is an approximation of production.
3) Docker basically leaves you with one type of strategy... docker compose up and docker compose down. With Coasts you can decide on different strategies on a per-service basis when you switch worktrees.
4) This is sort of back to point 2, but more often than not you want things like shared services or volumes across parallelized runtimes, and Coasts makes that trivial (you can also have multiple coast configs, so you can easily create a coast type that has isolated volumes). If you go the pure-Docker route, you'll end up maintaining multiple docker-composes for different scenarios that Coasts abstracts away.
5) The UI you get out of the box for keeping track of your assigned worktrees is super useful.
6) There are a lot of built-in optimizations around switching worktrees in the inner bind mount that you'd otherwise have to code up yourself.
7) I think the ergonomics are just way better. I know that's kind of a vibes-y answer, but it was sort of the impetus for making Coasts in the first place.
8) There's a lot of stuff around secrets management that I think Coasts does particularly well but can get cumbersome if you're hand-rolling a docker solution.
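To make point 1 concrete, here's a minimal sketch of host-side port allocation (not Coasts' actual implementation, just an illustration): let the OS pick free ports and record the assignment per worktree, e.g. to template into a per-worktree compose override.

```python
import socket

def allocate_free_port() -> int:
    """Ask the OS for a currently-free port on the host.

    Binding to port 0 makes the kernel pick an unused port; we read it
    back and release the socket so a container can claim it. (There's a
    small race window between release and reuse, which is one reason a
    real control plane has to track its assignments.)
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

# Hand each parallel worktree its own web/db host ports.
assignments = {
    worktree: {"web": allocate_free_port(), "db": allocate_free_port()}
    for worktree in ["feature-a", "feature-b"]
}
print(assignments)
```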
- > docker-in-docker solution
Goodbye Mac users.
- Why do you say that?
It works fine on Mac (that's what we developed it on) and it's not nearly as much overhead as I was initially expecting. There's probably some added latency from the virtualization layer, but it hasn't been noticeable in our usage.
- The mount propagation approach is clever. Using umount -l + mount --bind + rshared to hot-swap worktrees inside a running DinD container is a nice trick that avoids the full teardown/rebuild cycle most people fall back on.
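For readers following along, that hot-swap sequence looks roughly like this (a sketch, not Coasts' code; dry_run=True just prints the commands, since a real run needs root inside the DinD container):

```python
import subprocess

def hot_swap(mountpoint: str, worktree: str, dry_run: bool = True):
    """Swap the worktree bind-mounted at `mountpoint` without tearing
    down the running container."""
    cmds = [
        # Lazily detach the old mount; open file handles stay valid
        # until closed, so running services keep working.
        ["umount", "-l", mountpoint],
        # Bind-mount the new worktree in its place.
        ["mount", "--bind", worktree, mountpoint],
        # Mark it rshared so the new mount propagates into the nested
        # Docker daemon's mount namespace.
        ["mount", "--make-rshared", mountpoint],
    ]
    for cmd in cmds:
        if dry_run:
            print("+", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
    return cmds

hot_swap("/workspace", "/repos/feature-x")
```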
One thing I'm curious about: how do you handle state drift when agents are working on the same service across different worktrees? For example, if two agents are both making schema changes to a shared database service, do you have any coordination primitives, or is that left to the orchestration layer above? In my experience the runtime isolation is the easy part - the hard part is when agents need to share state (like a test database) without stepping on each other.
Also, the per-service strategy config (none/hot/restart/rebuild) seems like the right abstraction. Most of the overhead in switching worktrees comes from unnecessary full restarts of services that don't actually care about the code change.
- >One thing I'm curious about: how do you handle state drift when agents are working on the same service across different worktrees? For example, if two agents are both making schema changes to a shared database service, do you have any coordination primitives, or is that left to the orchestration layer above? In my experience the runtime isolation is the easy part - the hard part is when agents need to share state (like a test database) without stepping on each other.
Great question! You can configure multiple coasts, so you could have a coast running with isolated dbs/state and also a shared version (you can either share the volume among the running coasts or move your db to run host-side as a singleton). So it's sort of left to the orchestration layer: you put rules in your md file about when to use each. There are trade-offs to each scenario. I've been using isolated dbs for integration tests, but for UI work I end up going with shared services.
>Re: For example, if two agents are both making schema changes to a shared database service
Obviously things can still go wrong here in the shared scenario, but it's worked fine for us and I haven't hit anything so far. It's just like having developers introducing schema changes across feature branches.
>Also, the per-service strategy config (none/hot/restart/rebuild) seems like the right abstraction. Most of the overhead in switching worktrees comes from unnecessary full restarts of services that don't actually care about the code change.
Totally. At first, switching worktrees in our 1M+ LOC repo took about 2 minutes; after we introduced the hot/none strategies we got it down to about 8 seconds. This is by far one of the best features we have.
- This is pretty cool, have personally felt this limitation many a time.
Basically been relying on spinning up cursor / niteshift / devin workflows since they have their own containers but this could be interesting to keep it all on your main machine.
- Thanks!
Yeah, I think there's a ton of great remote solutions right now. I think worktrees make the local stuff tricky but hopefully Coasts can help you out.
Let me know how it goes!
- HN questions we know are coming our way:
1) Could you run an agent in the coast?
You could... sort of. We started out with this in mind. We wanted Claude Max plans to work, so we built a way to inject OAuth secrets from the host into the containerized host. Unfortunately, because the Coast runtime doesn't match the host machine the OAuth token was created on, Anthropic rapidly invalidates the tokens. This would really only work for TUIs/CLIs, and you'd almost certainly have to bring a usage key (at least for Anthropic). You'd also need to figure out how to get a browser runtime into the containerized host if you wanted things like Playwright to work for your agent.
There's so many good host-side solutions for sandboxing. Coasts is not a sandboxing tool and we don't try to be. We should play well with all host-side sandboxing solutions though.
2) Why DinD and why not mount namespaces with unshare / nsenter?
Yes, DinD is heavy. A core principle of our design was to run the user's docker-compose unmodified. We wanted the full docker api inside the running containerized host. Raw mount namespaces can't provide image caches, network namespaces, and build layers without running against the host daemon or reimplementing Docker itself.
In practice, I've seen about 200MB of overhead for each containerized host running DinD. We have a Podman runtime in the works, which may cut that down some. But the bulk of utilization comes from the services you're running and how you decide to optimize your containerized hosts and Docker stack. We have a concept of "shared-services": if you don't need isolated Postgres or Redis, for example, you can declare those services as shared in your Coastfile, and they'll run once on the host Docker daemon instead of being duplicated inside each containerized host; Coasts will route to them.
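The thread doesn't show actual Coastfile syntax, so the following is purely an invented illustration combining the shared-services idea with the per-service strategies (none/hot/restart/rebuild) mentioned upthread; the real format may look quite different:

```yaml
# Hypothetical Coastfile sketch -- syntax is a guess, not the real tool's.
services:
  web:
    strategy: hot        # bind-mount swap only; no restart on worktree switch
  worker:
    strategy: restart    # restart the container when switching worktrees
  assets:
    strategy: rebuild    # rebuild the image when this service's code changes
  docs:
    strategy: none       # ignore worktree switches entirely

shared-services:
  # Run once on the host Docker daemon; each coast routes to these
  # instead of duplicating them per containerized host.
  - postgres
  - redis
```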