7 points by fnimick 23 hours ago | 9 comments
- Likely a good deal of test coverage. At the far end of this is something like Facebook, which has everything monitored by A/B tests: if a change regresses a serious metric, an alarm fires. "Move fast and break things" isn't a new way of doing things, so you might as well pick up a framework that works.
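The guardrail idea above can be sketched as a simple metric comparison. This is a hypothetical illustration (the function name, metric, and 2% threshold are made up, not Facebook's actual system): compare a key rate between control and treatment and alarm on a serious drop.

```python
# Hypothetical guardrail check for an A/B test: fire an alarm if the
# treatment group's metric regresses past a relative threshold.

def guardrail_alarm(control_rate: float, treatment_rate: float,
                    max_relative_drop: float = 0.02) -> bool:
    """Return True (fire the alarm) if treatment dropped by more than
    max_relative_drop relative to control."""
    if control_rate <= 0:
        return False  # nothing meaningful to compare against
    relative_change = (treatment_rate - control_rate) / control_rate
    return relative_change < -max_relative_drop

# A 5% drop in conversion rate trips a 2% guardrail; a 1% drop does not.
print(guardrail_alarm(0.100, 0.095))  # True
print(guardrail_alarm(0.100, 0.099))  # False
```

Real systems would add statistical significance tests on top, so that noise alone doesn't page anyone.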
- You get confidence in things by doing them. If you don't have experience doing something, you aren't going to be confident at it. Try vibe coding a few small projects. See how it works out. Try different ways of structuring your instructions to the 'agents'.
- Are there public examples of "good instructions" and an iteration process? I have tried, and have not been very successful at getting Claude Code to generate correct code for medium-sized projects or features.
- I had Claude write a piano webapp (https://webpiano.jcurcioconsulting.com) as a "let's see how this thing works" project. I was pleasantly surprised by the ease of it.
I actually just put together a write-up showing my prompts and explaining what was generated after each one, if you're interested at all: https://jcurcioconsulting.com/posts/how-i-used-claude-code-t...
- Anthropic has a short training course: https://www.coursera.org/learn/claude-code-in-action. There aren't really many established best practices at this point, because the technology has improved significantly over the course of 2025.
- Heads up, this is a paid course.
- Sometimes it feels like there is an awful lot of software out there that shipped without much review. This was happening long before AI arrived on the scene.
Hard to tell if anyone was 'comfortable' with that.
- You don't. Whoever's telling you those stories has a very long nose.
- 100% agree with the "you don't", but I wouldn't be surprised if young startups or highly stressed teams delivering low-risk products do just that and ship unreviewed code.