• jjcm
    This is entirely tangential to the article, but I’ve been coding in golang now going on 5 years.

    For four of those years, I was a reluctant user. In the last year I’ve grown to love golang for backend web work.

    I find it to be one of the most bulletproof languages for agentic coding. I have two main hypotheses as to why:

    - very solid corpus of well-written code for training data. Compare this to vanilla JS or PHP - I find agents do a very poor job with both of these due to what I suspect is poorly written code that they've been trained on.

    - extremely self-documenting, due to structs giving agents really solid context on what the shape of the data is
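
    On the second point, even a bare type declaration hands the agent the whole data shape. A sketch (names hypothetical):

      type CreateUserRequest struct {
          Email   string  `json:"email"`
          Name    string  `json:"name"`
          TeamIDs []int64 `json:"team_ids,omitempty"`
      }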

    In any file an agent is making edits in, it has all the context it needs in the file, and it has training data that shows how to edit it with great best practices.

    My main gripe with go used to be that it was overly verbose, but now I actually find that to be a benefit as it greatly helps agents. Would recommend trying it out for your next project if you haven’t given it a spin.

    • Interesting. I've only dipped my toe in the AI waters but my initial experience with a Go project wasn't good.

      I tried out the latest Claude model last weekend. As a test I asked it to identify areas for performance improvement in one of my projects. One of the areas looked significant and, truth be told, was an area I expected to see in the list.

      I asked it to implement the fix. It was a dozen or so lines and I could see straightaway that it had introduced a race condition. I tested it and sure enough, there was a race condition.

      I told it about the problem and it suggested a further fix that didn't solve the race condition at all. In fact, the second fix only tried to hide the problem.

      I don't doubt you can use these tools well, but it's far too easy to use them poorly. There are no guard rails. I also believe that they are marketed without any care that they can be used poorly.

      Whether Go is a better language for agentic programming or not, I don't know. But it may be to do with what the language is being used for. My example was a desktop GUI application and there'll be far fewer examples of those types of application written in Go.

      • You need to be telling it to create reproduction test cases first and iterate until it's truly solved. There's no need for you to manually be testing that sort of thing.

        The key to success with agents is tight, correct feedback loops so they can validate their own work. Go has great tooling for debugging race conditions. Tell it to leverage those properly and it shouldn't have any problems solving it unless you steer it off course.
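
        For example, a minimal sketch (package and names hypothetical): give it a failing reproduction like this and tell it to iterate until "go test -race ./..." comes back clean.

          // counter_test.go: a test the race detector reliably flags.
          package counter

          import (
              "sync"
              "testing"
          )

          func TestConcurrentIncrement(t *testing.T) {
              var n int
              var wg sync.WaitGroup
              for i := 0; i < 100; i++ {
                  wg.Add(1)
                  go func() {
                      defer wg.Done()
                      n++ // unsynchronized write shared across goroutines
                  }()
              }
              wg.Wait()
              if n != 100 {
                  t.Fatalf("got %d, want 100", n)
              }
          }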

        • +1. Half the time I see such posts, the answer is "harness".

          Put the LLM in a situation where it can test and reason about its results.

          • I do have a test harness. That's how I could show that the code suggested was poor.

            If you mean put the LLM in the test harness: sure, I accept that that's the best way to use the tools. The problem is that there's nothing requiring me or anyone else to do that.

        • If that’s what you have to do, that makes LLMs look more like advanced fuzzers that take textual descriptions as input (“find code that segfaults calling x from multiple threads”, followed by “find changes that make the tests succeed again”) than truly intelligent. Or maybe we should see them as diligent juniors who never get tired.
          • I don't see any problems with either of those framings.

            It really doesn't matter at all whether these things are "truly intelligent". They give me functioning code that meets my requirements. If standard fuzzers or search algorithms could do the same, I would use those too.

        • TDD and the coding agent: a match made in heaven.

          It is Valentine's Day after all.

        • I accept what you say about the best way to use these agents. But my worry is that there is nothing that requires people to use them in that way. I was deliberately vague and general in my test. I don't think how Claude responded under those conditions was good at all.

          I guess I just don't see what the point of these tools is. If I were to guide the tool in the way you describe, I don't see how that's better than just thinking about and writing the code myself.

          I'm prepared to be shown differently of course, but I remain highly sceptical.

          • Just want to say upfront: this mindset is completely baffling to me.

            Someone gives you a hammer. You've never seen one before. They tell you it's a great new tool with so many ways to use it. So you hook a bag on both ends and use it to carry your groceries home.

            You hear lots of people are using their own hammers to make furniture and fix things around the home.

            Your response is "I accept what you say about the best way to use these hammers. But my worry is that there is nothing that requires people to use them in that way."

            These things are not intelligent. They're just tools. If you don't use a guide with your band saw, you aren't going to get straight cuts. If you want straight cuts from your AI, you need the right structure around it to keep it on track.

            Incidentally, those structures are also the sorts of things that greatly benefit human programmers.

            • "These things are not intelligent. They're just tools."

              Correct. But they are being marketed as being intelligent and can easily convince a casual observer that they are through the confidence of their responses. I think that's a problem. I think AI companies are encouraging people to use these tools irresponsibly. I think the tools should be improved so they can't be misused.

              "Incidentally, those structures are also the sorts of things that greatly benefit human programmers."

              Correct. And that's why I have testing in place and why I used it to show that the race condition had been introduced.

          • Okay. If you’re being vague, you get vague results.

            Golang and Claude have worked well for me, on existing production codebases, because I tell it precisely what I want and it does it.

            I’ve never found a generic “find performance issues just by reading the code” request helpful.

            Write specifications, give it freedom to implement, and it can surprise you.

            Hell, once it thought of how to backfill existing data with the change I was making, completely unasked. And I’m like, that’s awesome.

            • "Okay. If you’re being vague, you get vague results."

              No. I was vague and got a concrete suggestion.

              I have no issue with people using Claude in an optimal way. The problem is that it's too easy to use in a poor way.

              My example was to test my own curiosity about whether these tools live up to the claims that they'll be replacing programmers. On the evidence I've seen I don't believe they will and I don't see how Go is any different to any other language in that regard.

              IMO, for tools like Claude to be truly useful, they need to understand their own limitations and refuse to work unless the conditions are correct. As you say, it works best when you tell it precisely what you want. So why doesn't Claude recognise when you're not being precise and refuse to work until you are?

              To reiterate, I think coding assistants are great when used in the optimal way.

        • If only there was a way to prevent race conditions by design as part of the language's type system, and in a way that provides rich and detailed error messages that allow coding agents to troubleshoot issues directly (without having to be prompted to write/run tests that just check for race conditions).
    • I don't believe the "corpus" argument that much.

      I have been extending the Elm language with effect semantics (à la ZIO/Rio/Effect-ts) for a new language called Eelm (extended-Elm, or effectful-Elm), and neither Haskell (the language the Elm compiler is written in) nor Eelm (the target language, now with some new fancy capabilities) should have a particularly relevant corpus of code.

      Yet, my experiments show that Opus 4.6 is terrific at understanding and authoring both Haskell and Eelm.

      Why? I think it stems from the properties of these languages themselves: no mutability makes the code easier to reason about, they're fully statically typed, and the compiler and diagnostics are excellent. On top of that, the syntax is rather small.

    • Go’s design philosophy actually aligns with AI’s current limitations very well.

      AI has trouble with deep complexity; Go is simple by design, with usually only one or two correct paths instruction-wise. Architecturally you can lay out your source however you like, but there’s a pretty well-established standard.

    • Two things make Go work so well with agents: the language is focused on simplicity, and gofmt and the standard Go coding style mean that almost all Go code looks familiar, because everyone writes it in a very consistent style. Those two things make the experience pleasant and the LLM's work easier.
    • I have had good experience with Go, but I've also had good results with TypeScript. Compile-time checks are very important to getting good results. I don't think the simplicity of the language matters as much as the LLM being generally aware of the language via training data and being able to validate the output via compilation.
    • Yeah, in my experience Claude is significantly better at writing Go than the other languages I’ve tried (Python, TypeScript).
      • Same goes for humans. There are some wild exceptions, but most Go projects look like they were written by the same person.
    • I wonder what the experience of writing Rust or Zig with LLMs is like. I suspect Zig might not have enough training data, and Rust might struggle with compile times and the extra context required by the borrow checker.
      • I found Opus 4.6 to be good at Zig.

        I got it to write me an rsync-like CLI for copying files to/from an Android device using MTP, all in a single ~45 min sitting. It works incredibly well. OpenMTP was the only other free option on macOS. After being frustrated by it, I decided to try out Opus 4.6 and was pleasantly surprised.

        I later discovered that I could plug in a USB-C hard drive directly into the phone, but the program was nonetheless very useful.

      • > I wonder what the experience of writing Rust or Zig with LLMs is like

        I've had no issues with Rust, mostly (99% of the time) using codex with gpt-5.2 xhigh, and it does as well as with any other language. Not sure why you think compile times would be an issue; the LLM doesn't really care if it takes 1 minute or 1 hour to compile, it's more of a "your hardware + project" issue than an LLM issue. I also haven't found it to struggle with the borrow checker: if it screws up, it sees the compilation errors and fixes them, just like in any other language I've tried to use with LLMs.

    • I'm having similarly good results with Go and agents. Another good fit is Flutter/Dart, in my experience.
  • Perfectly happy with Go, my "Go should do X" / "Go should have Y" days are over.

    But if I could have a little wish, "cargo check" would be it.

    • Enums is mine.

      Going on year 4 working at $DAY_JOB and just last week we had a case where enums and also union types would have made things simpler.
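
      For the curious, the usual workaround is typed constants via iota - a sketch below. It gets you names, but not a real enum: Status(42) is legal, and the compiler won't flag a switch that misses a case.

        type Status int

        const (
            StatusPending Status = iota // 0
            StatusActive                // 1
            StatusClosed                // 2
        )

        func (s Status) String() string {
            switch s {
            case StatusPending:
                return "pending"
            case StatusActive:
                return "active"
            case StatusClosed:
                return "closed"
            default:
                return "unknown" // Status(42) ends up here
            }
        }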

  • I always have the unfounded feeling that the Go compiler/linker does not remove dead code. Go binaries have a large minimum size. TinyGo, in contrast, can make awesomely small binaries.
    • It's pretty good at dead code elimination. The size of Go binaries is in large part because of the runtime implementation. Remove a bunch of the runtime's features (profiling, stacktraces, sysmon, optimizations that avoid allocations, maybe even multithreading...) and you'll end up with much smaller binaries. I would love if there was a build tag like "runtime_tiny", that provides such an implementation.
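
      In the meantime, stripping the symbol table and DWARF debug info at link time shaves a fair chunk (output name here is a placeholder), though the runtime itself stays:

        go build -ldflags="-s -w" -o app .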
      • There is TinyGo, which is similar to what you seek.
    • I think it depends on the codebase. There are some reflection calls you can make that cause dead code elimination to fail, though I believe it's harder to run into than it was a few years ago. One common dependency, at least in my line of work, is the Kubernetes API, and it manages both to be gigantic and to trigger this edge case (last I looked), so yeah, the binaries end up pretty big.

      Another thing that people run into is big binaries = slow container startup times. This time is mostly spent in gzip. If you use Zstandard layers instead of gzip layers, startup time is improved. gzip decompression is actually very slow, and the OCI spec no longer mandates it.
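
      With BuildKit that's roughly the following (image name is a placeholder, and your registry and runtime need zstd support):

        docker buildx build \
          --output type=image,name=registry.example.com/app:latest,compression=zstd,push=true .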

    • Go has a runtime. That alone is over a megabyte. TinyGo, on the other hand, has a very limited (smaller) runtime. In other words, you don't know what you're talking about.
  • I can see no difference from an ordinary linker. Anyone care to explain it to me?
    • Yes, it is not especially different from other linkers. It has some extra tasks in building the final binary, including emitting special sections, and it is more aware of the specifics of the Go language. But there is nothing extremely different from other linkers. The whole point of the series is to explain a real compiler, and in general most parts of the Go compiler are widely used in other languages: SSA, AST, escape analysis, inlining...
      • When does Go create the final dynamic dispatch tables? Isn't that the one thing in Go that needs real compute at final link time, beyond what a C linker would do? And C++ has all the information at compile time, while Go can only create the dispatch tables at link time?
        • Yes, there is some information written by the linker into the final data section of the binary: the itab, the interface table used for dynamic dispatch. AFAIK it is done there because you need to know the structs and interfaces of other packages to have the whole picture and build that table, and that happens using the build cache.
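
            Concretely, it's backing calls like this (a sketch; roughly, one method-pointer table per (concrete type, interface) pair the program uses):

              package main

              // Writer is satisfied implicitly; nothing ties buf to it at declaration time.
              type Writer interface{ Write(p []byte) (int, error) }

              type buf struct{ data []byte }

              func (b *buf) Write(p []byte) (int, error) {
                  b.data = append(b.data, p...)
                  return len(p), nil
              }

              func main() {
                  var w Writer = &buf{} // conversion that needs a (*buf, Writer) itab
                  w.Write([]byte("hi")) // dynamic dispatch: indirect call through that table
              }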
          • Yes, the interface tables! That was the word I didn't remember. And that is real computation going on there - not "just" merging sections and, as in a normal static linker, wiring exports to imports and leaving out unneeded definitions (dead code elimination).

            The interface table computation is a Go speciality, a fascinating one.

            And the implementation of that interface magic is, disturbingly, not mentioned in the article.

    • The difference is that Go has its own linker rather than using the system linker. Another article could explain the benefits of this tighter integration and the drawbacks of the approach. Having its own toolchain is, I assume, part of what enables Go's easy cross-compilation.
      • You can actually make go spit out .o files and link it with your favorite linker. Bazel does this, if you ask it to.

        I played a lot with experimental linkers when I was trying to get build time down for our (well, $JOB-1's) large Go binary, but they didn't help that much. The toolchain that comes with Go is quite good.

    • What is there to explain? The author did not claim there is a difference in the article.
    • Why should there be one?
    • The title is misleading
      • Misleading in what way? This is the linker part of a series of posts about understanding the Go compiler. I don't think there's much room to be misleading.
  • It's always fascinating to dive into the internals of the Go linker. One aspect I've found particularly clever is how it handles static linking by default, bundling everything into a single binary.
    • The Go tooling is heavily based on the Inferno toolchain, which was based on the highly portable Plan 9 toolchain. Plan 9 is statically linked by default: dynamic libraries are supported but not implemented anywhere. The idea was that libraries should instead be implemented as a service that runs locally or on a remote machine.
  • Why not skip the linker entirely and generate a single optimized exe file?
    • In fact it does generate a single optimized exe file, but it does so in multiple steps, for several reasons. One of them is separation of concerns, but one of the main ones is speed: the linker is (normally statically) linking different already-built, cached libraries, including the runtime. Without the linking step, you would need to compile everything every time. On top of that, the linker has other responsibilities, like building some metadata that goes into the binary - for example the dynamic dispatch table.
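
      You can watch that split happen: "go build -x" echoes the tool invocations, and on a warm build cache the compile steps mostly disappear while the final link step still runs.

        go build -x .   # with a warm cache, little beyond the link invocation remains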
  • I'm impressed with how approachable the explanation is!