• This feels AI-written as the post goes on. Either way, I'd like for us to stop fetishizing how we can use AI to make us stronger, better, and more valuable engineers. It's exhausting and doesn't consider other ways to use it. I've only been using it lately for tasks that are a step or two above Google. Having it write code for me has just been a slippery, unfulfilling slope.
    • Apparently the code-writing part works a bit better in languages like Rust and perhaps Swift, where the compiler is unforgiving in rejecting outright nonsense and the AI can iterate on any errors it gets. Of course, logically flawed code is always possible, so this does not replace human review. But code in these languages is also a bit more compact and hopefully easier for a human to understand.
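
      A minimal hypothetical sketch of that feedback loop - the borrow checker turns a plausible-looking mistake into a concrete error message an agent can iterate on, rather than a silent runtime bug:

          fn main() {
              let names = vec![String::from("a"), String::from("b")];
              let moved = names; // ownership of `names` moves here
              // println!("{}", names.len());
              // ^ uncommenting the line above fails to compile with
              //   error[E0382]: borrow of moved value: `names`
              println!("{}", moved.len());
          }
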
    • I wish people would stop pretending that agentic coding and elevated thinking aren't mutually exclusive.

      There's way too much money on this hype train now, though, to point out that the emperor isn't wearing any clothes, and way too many people who always did think that "boilerplate spew" (the one thing AI really does well) is a valid form of programming rather than a shortcut to tech debt.

  • > If the job were mainly about producing syntactically valid code, then of course A.I. would be on a direct path to replacing large parts of the profession. But that was never the highest-value part of the work. The value was always in judgment.

    > The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise

    How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.

    The author tries to answer this:

    > That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.

    but in a world wherein writing code by hand (the "struggle") is "artisanal" and "outdated", this process being non-optional (which I agree with) is contradictory.

    How juniors and fresh grads do that with an AI that is designed to give you whatever answer you need in a given moment is unclear to me. I don't see how that's possible, but maybe I'm thinking too myopically.

    • Myopic is inevitable, to some extent. It's very hard to project this stuff.

      Socrates warned about what was being lost as philosophy was becoming written rather than oral... and he was right.

      We can't even understand what was lost. Many methods of learning and thinking became entirely lost. You could say they were redundant, and they were. But... writing largely replaced oral traditions. It didn't just augment them.

      He was that old school coder who had the skills to do philosophy and be an intellectual without writing. Writing was an augmentation for him. But for the new cohort... it was a new paradigm and old paradigm skills became absent.

      It is very hard to imagine skilled coders becoming skilled without necessity pressing that skill acquisition. The diligent student will acquire some basic "manual coding" skill... but mostly the skill development will happen wherever the hard work is.

      • > Socrates warned about what was being lost…

        Dr. Steven Skultety & Dr. Gad Saad discussed this in a recent video / podcast.

        This link is time stamped to the topic https://youtu.be/7mcQf9E3YRo?t=1058

        • It's the opening page of the book Technopoly.
          • And here I thought I was being unique. I guess Socrates must be popular.
      • I'd say that by purging stuff from the brain we are losing thinking itself. Thinking is manipulating ideas and concepts in your head, assembling and linking them. The fewer things there are, the more primitive the result. You cannot juggle without objects to juggle, and connecting the dots results in trivial patterns when you have just a couple of dots.
        • I "purge" - or better yet choose not to retain - the data.

          BUT, BUT! I keep the index.

          My favourite quote from Donald Rumsfeld (a very bad human being, but this is still good)

          > Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.

          What I optimise for is to have as many "known unknowns" as possible. I know a concept, process or a tool exists, but don't understand it or know how to do it. But because I know it exists, I won't start inventing it again from scratch when I need it.

          Like if one needs to do some esoteric task, they might start figuring it out from scratch. But because the index in my brain contains a link ("known unknown") to a tool/process that makes that specific thing a LOT easier, I can start looking into it more.

          Or I might need to do something common like plumbing or some electrical work at home. Do I know how to do that? No. But I Know A Guy I can call, again externalising the knowledge. Either they come over and help me do it or talk me through the process of adjusting the thermostat in my shower faucet (you need to use WAY more force than I was comfortable with without an expert on the phone btw... there are no hidden screws, you just rip the bits off :D)

        • It's true for all automation: we do get more comfort. We build systems so that we humans have as little struggle as possible, not realising that struggle is the only reason for existence. By eliminating it, we are erasing ourselves from this world.
          • Automation is also for reducing drudgery - the work that prevents us from meaningful struggle by taking up resources that can be better applied elsewhere. Not all struggle (or pain) is created equal.
            • I wouldn’t count on reduced drudgery. The assembly line automated many movements needed for manufacturing. But which work involved more drudgery: craftsman-style car production or standing on an assembly line at Ford?

              With any new technology, subsequent drudgery depends on the technology, its concomitant economics, and the imagination of the people using it.

          • This kind of argument flies in the face of the fact that plenty of inherited rich people seem to lead very happy lives. Of course, they do find things to struggle with, but it's much more pleasant to struggle to score 72 at the golf course or to outbid a rival for a piece of contemporary art than to struggle for basic needs.
            • I don’t share your idea of a happy life.

              I can live a happy life without struggling for basic needs and without playing golf all day long. If you strip off every obligation from life, then you exist, not live.

              Facing challenges and overcoming obstacles, and friends and family, are what make me happy. When you’re rich, most people only care about your money, not the person you are. And I think that’s exactly what a happy life is about.

              • I guess to each their own. But in the little free time I have as a non-rich version, I like to face low-stakes challenges I myself choose, e.g. in my case those currently mostly are learning Chinese and learning to play a musical instrument. Those still provide obstacles, difficulties, the feeling of progress and moments of success/failure, but I can do them at my own pace and with no serious consequences if I fail.

                I can imagine I could be perfectly happy with a life full of challenges of that kind, instead of being forced to work at given scheduled times which often imply I spend less time with my son than I would like, including days I don't feel like it, and including boring tasks (I love my job, but like almost every job, it also has its paperwork, pointless meetings, etc.), knowing I depend on that work to live.

                In short, I think we all do need the challenge, the struggle, the successes and the failures, otherwise life would just be boring and pointless. But I don't think we (or at least I) need the obligation component and the high stakes.

                What you mention about the rich attracting people focused on money rings true, but it would be moot if AI led us all to lead lives more similar to the rich, which was the point here. (Of course, there's also the issue of whether there is widespread or unequal access to AI, but that's another story...).

              • It's fairly easy to be submarine rich, and fly completely below the radar. Just brush off questions about your work with vagueness. If you're not flashy, nobody will suspect you're rich
          • "struggle is the only reason for existence"

            That is a bold and frankly unsupportable claim.

        • It just becomes more abstracted, but the thinking is still there. And who is to say we aren’t going to keep reading books, delving into hobbies, or watching movies. All those concepts will then be mixed into our brains, and who knows what new things we will think of to extract and desire to build with AI.
        • There's a lot of parallelism between your statement and Socrates' comments on the transition to writing.

          Interestingly, he placed a lot of importance on memory... where you emphasize manipulation of concepts.

          • I’ve grown to appreciate this aspect of standard examination as I’ve gotten older. Everyone wants to say “oh, you can just look it up now”, but how can you come up with higher level thinking, when you don’t have the fundamentals in your mind?
            • To use math as an example, you can always look up formulas. But after more than 1 "layer" of looking up, that quickly becomes impossible. Like, when I had to learn to calculate derivatives and primitives, I could look those things up. But when I got to linear algebra, I couldn't progress until I deeply internalized derivatives and primitives, because looking up formula A only for it to contain unknown formula B just becomes a mess.
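
              A small hypothetical illustration of that layering: the looked-up integration-by-parts formula

                  \int u \, dv = uv - \int v \, du

              only helps if you can already produce u, dv, v and du yourself, i.e. if differentiation is internalized; having to look up the derivative rules mid-problem as well is exactly where it "becomes a mess".
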
        • > I'd say that by purging stuff from the brain we are losing thinking itself

          The idea that there will be less to think about seems a bit short-sighted. Humans are very good at moving to higher levels of abstraction, often with more complexity to deal with, not less.

        • We will never fundamentally get rid of thinking; it's coupled to navigating the 3D reality we live in.

          And we don't need words to think; cognitive problem solving and language processing are separate processes [1]

          We will shift the problems we need to think about. Same as always: humanity isn't busy building stone pyramids anymore. Did we stop thinking? No, we just think about a different to-do list.

          [1] https://www.scientificamerican.com/article/you-dont-need-wor...

          • We also never run out of fuel. There will always be some energy left here and there to tap into.
      • Yeah, but where the comparison with philosophy falls short is this: if we lost some ways of thinking, it was gradual and most didn't notice.

        Software code, on the other hand, is extremely formal: either it works perfectly as intended, it works crappily and keeps breaking in various edge cases, or it just doesn't work (the last two are just variants of the same dysfunctionality; technically it's a binary state). There is no scenario where broken code somehow ends up working and delivering, or maybe one in a trillion, sometimes.

        Also, the change is so fast that the failure is immediately obvious to everybody; it's not a gradual change of thinking over a few decades or generations.

        LLMs are getting impressive, but anybody claiming there is no massive long-term harm to reaching what we now call proper seniority is... I don't know: delusional, a junior who never walked that long and hard-won path, doing PR for LLMs at all costs, or some other similar type. Or they simply have some narrow use case that works great for them long term but definitely can't be transferred to the whole industry, like one-man indie game dev.

        • I would argue it's virtually impossible going forward for a junior engineer to walk that harder path.

          Because the easier path seemingly delivers what's expected of them. Sigh, they may even be required to take the faster path.

          I've seen many juniors unable to walk that necessary path even before LLMs were a thing.

    • > How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.

      It's not by writing syntax that you get there. It's by creating software and gaining the experience of seeing it go wrong.

      "Good judgement comes from experience. Experience comes from bad judgement."

      AI just shortens the cycle without needing to type out syntax, so you get even more iterations, faster, and learn the lessons more quickly.

      Some do not learn from that experience. They were never going to learn without AI either.

      • > It's not by writing syntax that you get there.

        Writing syntax is still an important part of the experience. It is valuable because it requires you to spend time immersed in the nuts and bolts that hold software together. I'd compare it to cooking: if you have an assistant or a machine do everything and you never actually touch a knife or stir a pot, you'll lose your touch. But there is also something valuable about covering more ground and the additional experience that brings.

      • You can lead a horse to water, but you can't make it drink.
    • You aren't thinking myopically; it's a fundamental contradiction, the root of which is in how human brains take in and understand new information. No amount of pontification or bollocks hedging, as this and all the other "thinkpieces" on the issue engage in, will change that. It is beyond preference and perspective. Only doing the task itself produces the skills pertaining to that task. Prompting, alone or even predominantly, is too far removed from that task; it can only produce the code.
    • > How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.

      Well this is true, but that doesn't mean that there isn't any other way to acquire this knowledge. Until now, this way of gaining deeper understanding was simply the most practical one, since you needed to write lots of code when starting out as a software engineer.

      But it's just as possible to gain knowledge about useful abstractions and clean code by using AI to do the work. You'll find out after a while which codebases get you stuck and which abstractions give your AI leverage because it needs fewer tokens to read and extend your codebase.

    • This has happened in other industries before. Drafting, for example, when CAD arrived. Entry level wasn't "can draw, willing to learn" anymore, but demanded high domain understanding. So the pathway became compressed: learning through study and field exposure.

      Study of senior drafters' "red lines": what they changed in the initial drawing and why, RFI responses, etc. Reverse-engineering good work. Failed design studies, etc.

      SWE equivalents: PRs, code review, studying high quality codebases (guess what: LLMs are amazing at helping here), pair programming (learning why what the LLM did was wrong, how to improve it, etc), customer support, debugging prod incidents, studying post mortems etc

      We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc through some pair programming and the occasional deep end. We give them specific tasks and studies that develop their domain understanding and taste, actively support and mentor them, and expect them to drive some LLMs on the side to solve simple issues that still need human eyes on it.

      • > We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc through some pair programming and the occasional deep end.

        Is that generally the case though? I'm about two years into my first job in the industry and that's exactly my experience, and certainly frustrating...

    • Almost none of my operational knowledge came from writing code, but a lot sure came from reading code in the debugging process.
    • you learn by struggling and slogging through, even as a senior if your shit breaks it's on you to understand why. no LLM will shortcut that process for you (even asking LLMs why something is wrong requires you to actually understand it eventually, aka LEARNING). how that happens is up to the person.

      i don't understand all this fear projected as if people won't have agency of learning just because LLMs make it easier to do certain things. i don't think it's contradictory at all. half the people here will never have to wrangle the bullshit i dealt with 20 years ago and i'm sure when i was dealing with it there was another 20 years of bullshit before me lol.

      if you vibe code your app with no regard for the underlying code you will pay the price for it at some point in the future, anybody worth their salt will slow down enough to figure it out the "artisanal" way.

      • I'd argue that the engineers of 20 years ago were better than the engineers of today because they were significantly more resource-constrained and, for example, would never use a 300 MB JavaScript library for a profile page.
        • John Carmack did praise restraint of resources when he recalled his early days working as a lone contractor and as an employee of Softdisk, when he and the team had to push out games on a very tight schedule.

          I think this extends to other parts of life, too. I still remember fondly playing a game over and over again back in high school, when I did not have the Internet and had to borrow CDs from my friends — but when I went to university and had access to pretty much every game freely on the intranet, I rarely did that anymore. That’s why I always think an abundance of X may not be the best option for me. That probably includes money, too.

        • As a percentage of good to mediocre, maybe. Engineers of 40 years ago were probably better than engineers of 20 years ago. There were fewer of them, and they had more constraints to deal with. Democratization of technology makes it easier for more people to use. That applies to programming as much as to just using a computer.
        • I never buy these examples. Being a good engineer is more than purely resource optimization. I can think of many times over my career where resource optimization mattered but it’s not always a valuable undertaking.
        • 20 years ago we were complaining about Steam being bloated and unnecessary; we were six months away from Vista being a bloated mess and the Office Ribbon debacle being in full swing. PC games were often half-baked console ports with atrocious performance, filled with game-breaking bugs. Software was super rigid - there was no real cross-platform support. We were just heading into the Core 2 Duo era, and it was a mess.

          Engineers sucked then as much as they suck now

      • Understanding something and learning something are not the same things.
        • nobody said they were, they are related. if you don't understand why something is behaving a certain way you need to learn
    • One thing worth mentioning is that even before AI, only a small subset of engineers had experienced building systems from scratch, inventing new ways of doing things, root-causing complex problems, or even writing a lot of code. Most software engineering is maintenance, mundane, or not productive.

      Even in a world where there's a lot of AI-generated code, there can still be people who have enough exposure to doing hard things. Definitely at this point in time, when AI can't really do all those hard things anyway - but even later, when it will be able to.

      • You don't need to build systems from scratch to acquire problem-solving skills. Even routine maintenance problems require you to dig into documentation, look at GitHub issues, and do root-cause analysis. These skills are eliminated by reliance on AI, and there is no fallback if one never acquired them in the first place.
    • > I don't see how that's possible, but maybe I'm thinking too myopically.

      you are thinking too myopically.

      We have people who can still do maths well after the introduction of the calculator. We have people who can still spell after the introduction of spell check.

      The junior only needs to train without using AI to gain the skills needed - that's called education. If they choose to rely solely on AI and gimp their own education, that's on them.

      • > We have people who can still do maths well after the introduction of the calculator.

        I assume by "do maths" you mean doing simple calculations, like adding a bunch of small numbers, in one's head. That's because in many situations it's more convenient to do so, than using a calculator. So the skill is preserved / practiced, because a calculator is too cumbersome to use. The skills of most people settle at the equilibrium where it takes the same effort to take out the calculator and focus on typing, as it would to strain the brain doing it without a calculator.

        > We have people who can still spell after the introduction of spell check.

        When using spell check to fix your document, you automatically learn to spell. Your skills improve by using the tool. A better analogy to AI would be an email client with a "Fix all and send"-button, where you never look at the output of the spell checker.

        • I would also argue that most school systems forbid the use of a calculator for the first couple of years (at least that's how it was in Germany a few decades ago). The same goes for writing by hand. You can spell-check by looking the word up and then manually correcting it.

          Both require manual "labor" which leads to learning.

          • And calculators took decades to become widespread, so we could learn of their side effects before they became mainstream.

            Also of note: calculators merely solve intermediate steps. LLMs are increasingly designed to do full-blown work in one shot: longer context, deep thinking, agentic loops.

        • No. These tools are very good at creating the illusion of learning, without any learning. When you watch them do stuff, you think, yeah, I've got this. Once they are gone, you realize all your supposed skill is gone too. Getting a skill requires deliberate practice. You can use AI for that, but just using AI is not that.
          • Why no? It sounds like you agree with the person you replied to
            • There's an old Latin proverb, "Qui scribit, bis legit", which translates to "he who writes reads twice".

              In practice, what this means is that you can read some subject many times, but you would still struggle to reproduce the content by yourself. That is why, when learning, it is not sufficient to just read the material several times.

      • Why is it always so consistently a comparison to a technology of a fundamentally different order? Perhaps what has been lost is the ability to recognise distinct and incommensurable categories.
      • Yes but currently I don't know of a single company in my area that doesn't make you use AI daily because of the supposedly increased productivity. That means that juniors also absolutely have to use AI, probably sabotaging their learning process in the long run.
      • > We have people who can still do maths well after the introduction of the calculator.

        Arithmetic is a very, very small subset of math.

    • AI has not yet fully aligned with human thinking, but some people create euphoria that it's already surpassing human thinking. Only after aligning and surpassing could AI think from an outside-in view; for now it is still inside-out.
  • I was surprised not to see any discussion on whether the author used AI to help write this post. As many people say, writing is thinking.

    I started getting that "I'm reading another AI-written blog post" feeling around 1/3 of the way through, but I don't consider myself super calibrated on this.

    Pangram seems pretty confident it's AI (https://www.pangram.com/history/e9f6eb77-86f9-46d0-a6c1-e57c...). But I know these tools aren't perfect. I'd love to hear from the author what their process was in writing this piece!

    Related question (I'm trying to work this out for myself):

    If you believe using AI to write an email or blog post for you isn't okay, but using AI to write code for you is... what's the difference?

    Right now my instinct is something like:

    - Code can be verifiably correct (especially w/ good tests) so it's less of a purely-creative act than writing.

    - But always, always double-check the tests! (See the sketch after this list.)

    - I still wouldn't submit a PR where I can't vouch for every line of code.

    - AI-written documentation and specs are mostly still bad and should be looked down upon. But mostly because the quality, at least today, is poor. (Lots of duplication, lack of a clear understanding of the reader's intent and needs, no thoughtful curation, etc.)

    - Be psychologically ready to update these priors as models change.
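
    On the "double-check the tests" point, a minimal hypothetical sketch of a test that passes while verifying nothing useful:

        // Hypothetical example: a bug the test below fails to catch.
        fn apply_discount(price: u32, percent: u32) -> u32 {
            let _ = percent; // bug: the discount is silently ignored
            price
        }

        #[test]
        fn discount_is_applied() {
            // Too weak an assertion: this passes even though the discount
            // logic is broken. Asserting the exact value (80) would catch it.
            assert!(apply_discount(100, 20) <= 100);
        }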

    I'd love to hear from anyone who's thought more about this.

    • Great question! I had A.I. critique what I wrote and wherever it gave me suggestions like “this sentence runs too long”, “this can be more punchy”, etc., I considered what it would tell me and chose to change direction if I thought it was warranted. But, notably, I typed out what I thought in my head to counter specific criticisms if I thought them valid instead of taking the LLM’s direct suggestion. I’d then ask it to critique my revision. I stopped when I read and re-read the final drafts end to end a couple of times and was reasonably happy with the flow myself. All the core ideas, the analogies, the choice of structure, etc. are authentically my thoughts and my message. The thing A.I. reined in the most was my tendency to have run-on sentences in early drafts. The concepts percolated in my head for weeks before I decided to blog about it - writing it end to end, and revising it over and over took about 3 hours.

      The one thing I can tell you is that pangram is confidently wrong in this instance. And I now worry about how many may have relied on such assessments blindly in consequential places (school essays?). Which ties back to the thesis of my piece - where do you rely on AI and where do you rely on your own intelligence.

      On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read. My school’s librarian wrote ambiguously “write this in your own words”. I asked her what she had meant by that. She had thought I’d copied it from somewhere even though it was all my own words. I went on to become the school topper in my final year for English (and one spot shy of being the school topper for Computer Science).

      • Thanks for sharing your process! It's helpful and refreshing to hear from someone about how they engage with AI when writing, and where / when the detection tools may fail.

        (We obviously live in a more nuanced world than most social media interactions might make you think :P)

        > On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read.

        My first experience with plagiarism was in first grade, when we were told to write a book report about a subject during our library time. I diligently took my book on the musk ox and copied three pages word-for-word into my notebook as my report. I can't remember when or how we learned this wasn't "right", but I still think back on that and laugh.

      • Sorry but it's very obvious you used an LLM for more than just suggestions. Ironic given the point of the article.
    • Right at the top: "That distinction matters more than people think." That's basically telltale AI :)

      Also the entire framing around "judgment" and "taste" is what LLMs love to parrot about the topic.

      There are fair arguments in the post, but I totally agree that "writing is thinking", and I also hold myself to "if you didn't bother to write it, why would I bother to read it".

      • It really sucks that certain mannerisms and phrases used by LLMs are now tainted.

        To say nothing of em dashes which I loved using before LLMs. Now every time I use it I'm expecting to see a comment like this calling me out.

        Soon it will be bad taste to simply use proper grammar.

    • What counts as AI help and therefore should be disclosed? For example I often use Grammarly to edit some of my more important writing (but not this post obviously) because it does find grammar mistakes and it does give good readability suggestions (I have a tendency to be wordy) and the process is quicker saving time. I don't always take its advice, as many of its suggestions are not my voice, but it is a useful tool. So do I disclose?
  • The eloquence with which this point gets (repeatedly) made keeps improving each time I read it. However, I still feel like we haven't nailed it. That is, we are not yet at the "aphorism" stage of the discourse (e.g. "the medium is the message", "you ship your org chart", "9 mothers can't make a baby in a month"), in which the most pointed version of this critique packs a punch in just a few words that resonate with the majority of people. That kind of epistemological chiseling takes years, if not decades. And AI certainly won't do it for us, because we don't know how to RL meaning-making.

    Edit: 9 babies → 9 mothers

    • > "can't make 9 babies in a month"

      It's "9 women can't make a baby in one month".

      • In fairness, 9 women can't make 9 babies in a month either
        • No idea why you were dv'd.

          It still takes roughly nine months to make a human baby, regardless of how many women or babies are involved!

          • 9 pregnant women produce one baby/month on average (assuming no miscarriages or late births,etc).

            On paper your CPU can execute at least one instruction per core per cycle, but that's an average too: if you actually have only one instruction to run, it takes several cycles.

            • But the context is throwing 9 women at the problem starting from no conception, in the hope of getting a baby within a month.
          • You're assuming all women in your cohort start not pregnant. However, given a random sampling of women across the entire human race, if you have approximately 14,000 women, statistics says you'll have a baby in a month. That is to say, the chances of one of those women being 8 months pregnant get close enough to 1, given about 14,000 randomly selected women.
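
            (A rough back-of-envelope behind that figure: roughly 130 million births per year across roughly 4 billion women means any given woman has about a 0.3% chance of giving birth in a given month, so the chance that none of 14,000 randomly chosen women does is about (1 - 0.003)^14000, on the order of e^-40 - effectively zero.)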

            Also, you can get a baby tonight if you steal one from the maternity ward.

            The real question is, how do LLMs turn the mythical man-month on its head? If we accept AI-generated code, can an agentic AI swarm make software faster simply by parallelizing - unlike the 9 women who can't make a baby in 1 month - because they're an AI, not human, and communicate in a different way?

            The pitfall of AI coding is that previously every shiny tangent that was a distraction, is now a rabbit hole to be leaped into for an afternoon, if you feel like it. It's like that ancient Chinese curse, may you live in interesting times. Everybody can recreate an MVP of Twitter in a weekend now when previously that was just a claim a certain type of people made.

            • I just looked up "may you live in interesting times" and learned that it is not, in fact, originally Chinese. Per wikipedia:

              > The nearest related Chinese expression translates as "Better to be a dog in times of tranquility than a human in times of chaos."

              https://en.wikipedia.org/wiki/May_you_live_in_interesting_ti...

            • > You're assuming all women in your cohort start not pregnant. However, given a random sampling of women across the entire human race, if you have approximately 14,000 women, statistics says you'll have a baby in a month. That is to say, the chances of one of those woman being 8 months pregnant reaches close enough to 1, given about 14,000 randomly selected women.

              There's a good point in here along the lines of "if you need X in a month, and someone else has something that's 90% of what you want X to be, can you buy it from them before starting any crazy internal death marches instead?"

              > The real question is, how do LLMs turn the mythical man month on its head. If we accept AI generated code, can an agentic AI swarm make software faster simply by parallelizing in a way that 9 women can't make a baby in 1 month because they're am AI, not human, and communicate in a different way.

              This is quite possibly only a one-time switch from a changed baseline, though. Give it a few years and "the fastest way an LLM tool can do it" will be what gets tossed out as an estimate, and stakeholders will still want you to do it in a tenth of the time...

            • that's still one woman per pregnancy, it's not 14k women collaborating on a single pregnancy.
            • > You're assuming all women in your cohort start not pregnant

              As far as I know, all women everywhere start not pregnant

          • Sometimes HN doesn't like jokes, which is okay. I didn't really contribute much to discussion, so I probably deserve some downvotes. I'm ok with it.
            • Actually, I like quite a lot of the subtle jokes on HN. It is harder to notice, fewer to find, and I don’t get it many a times. But when I get it (or someone explains it to me, perhaps out of pity), I chuckle, laugh, and laugh again. And I remember those comments.
              • I think the occasional joke is fine but when you have too many then the comments get diluted. It's exactly that kind of thing that makes me hate Reddit and so many other places: spam.
      • Hah, right, I mixed it up!
    • I'm using "don't bring a forklift to the gym".
    • > That is, we are not yet at the "aphorism" stage of the discourse

      we learn by doing

      • Put differently: you get good at what you actually do, not what you think you're doing.

        If you're not coding anymore, but using AI tools, you're developing skills in using those AI tools, and your code abilities will atrophy unless exercised elsewhere.

      • Along those lines, I've also seen "there is no compression algorithm for experience" - a nice summary of the HN posts from today.
        • I don't know. Growing up and seeing life and people around me I firmly believe that if you have enough brain power and intuition for $TOPIC you can speed-run it. At the same time, with time and experience and doing/re-doing it, you will learn or master $TOPIC [1] even with less brain power.

          [1] Depending on the topic and the level of knowledge of it.

          • Isn't intuition just distilled experience?
        • It seems overly pessimistic about education. Book learning isn't everything, but a physics textbook could be seen as the compression of centuries of experience.
          • Book learning to me seems like a compression of knowledge that had to be acquired through many years of experimentation and observation. But knowledge is not an experience itself.

            Take juggling for example - something that was on the HN homepage last week. You can learn everything you need to know about juggling through a post or a book or an educational video. But can you juggle after all that book learning? Not at all - to be able to juggle one has to spend time practicing, and no amount of reading can meaningfully compress that process.

            Muscle memory required for juggling is not a 1:1 correlation to experience, but I feel like it's close enough to it.

            • Juggling is a nice example. Maybe one could phrase it as, you can learn how to learn to juggle from a book.
        • There clearly is though. You don’t remember every detail of every moment that constitutes the experience.
      • ... or by textbooks, Stack Overflow, senior engineers, code review. How many engineers today got their start by building Minecraft mods or even MySpace?

        I do think that these pieces sometimes smuggle in a nostalgic picture of how engineers "really" learn which has only ever been partly true.

    • How about "Intelligence amplification, not artificial intelligence"?

      Also could be shortened to "IA, not AI", and gets even more fun when you translate it to Spanish: "AI, no IA".

    • "Bicycle of the Mind" has been cited to death.

      The problem is that it was coined so early that we are way past the aphorism stage now.

    • Isn't it the vehicle metaphor about bicycles for the mind? Not fully crystallized yet, but I feel like someone will get there.
    • AI is Augmenting (Actual) Intelligence.
    • >the medium is the message

      If you asked 100 Americans what this aphorism means, I strongly doubt a single one could capture McLuhan's original meaning.

      • You're right. I've struggled to understand what exactly this means, perhaps in large part because it's so often misused?

        I think it means something like we're trapped in the constraints of the medium. Tweets say more about the environment of twitter than whatever message happened to be sent.

        But I think I'm off on that; I'll look this person up and find out!

        • Some examples.

          Firstly, Twitter has an upper bound on the complexity of thoughts it can carry due to its character limit (historically 140, now 280, but still too short).

          Secondly, a biased or partial platform constrains and filters the messages that are allowed to be carried on it. This was Chomsky's basic observation in Manufacturing Consent where he discussed his propaganda model and the four "filters" in front of the mass media.

          Finally, social media has turned "show business [into] an ordinary daily way of survival. It's called role-playing." [0] The content and messages disseminated by online personas and influencers are not authentic; they do not even originate from a real person, but a "hyperreal" identity (to take language from Baudrillard) [0]:

              You are just an image on the air. When you don't have a physical body, you're a
              _discarnate being_ [...] and this has been one of the big effects of the electric age. It
              has deprived people of their public identity.
          
          Emphasis mine. Influencers have been sepia-tinted by the profit orientation of the medium and their messages do not correspond to a position authentically held. You must now look and act a certain way to appease the algorithm, and by extension the audience.

          If nothing else, one should at least recognize that people primarily identify through audiovisual media now, when historically due to lack of bandwidth, lack of computing and technology, etc. it was far more common for one to represent themselves through literate media - even as recently as IRC. You can come to your own conclusions on the relative merits and differences between textual vs. audiovisual media, I will not waffle on about this at length here.

          The medium itself is reshaping the ways people represent, think about, and negotiate their own self-concept and identity. This goes beyond whatever banal tweets (messages) there are about what McSandwich™ your favourite influencer ate for lunch, and it's this phenomenon that is important and worth examining - not the sandwich.

          [0] Marshall McLuhan in Conversation with Mike McManus, 1977. https://www.tvo.org/transcript/155847

        • It's confusing because "message" is not being used in its lay meaning, and decades of meaning drift for "medium" and "media" mean that they aren't either.

          For "the medium is the message", "medium" refers to any tool that acts as an extension of yourself. TV is an extension of your community, even things like light bulbs (extends your vision) are included in his meaning.

          McLuhan argued that all forms of media like that carry a message that's more than just their content. "The message" in that argument refers to the message the medium itself brings rather than its content. For example, the airplane is "used for" speeding up travel over long distances, but the message of the medium itself is to "dissolve the railway form of city, politics, and association, quite independently of what the airplane is used for."

          You can see it happening via online media that extend ourselves across the internet. Think of how, once easy video creation via Youtube became uniform, web comics stopped being a popular medium for comedy online. It's not like the web comics faded because they got worse; it's that they faded into a niche format because people didn't want to communicate via static images anymore. Or how, once short form videos on TikTok got big, you saw other platforms shift to copy the paradigm. McLuhan's point is that it's not just the content of those short form videos that matters; it's the message of the format itself. People's attention spans grow shorter because of the format, and before too long, we saw the tastes and expectations of the masses change. Reddit's monosite-with-subcommunities format and dopamine-triggering voting feedback mechanism were its message more than any actual content posted there, and it's why traditional forums are niche and dwindling.

          If you want to get a pretty good understanding of it, just read the first chapter from his book Understanding Media. It's short and relatively straightforward.

    • Taste/judgement cannot an AI beget
    • Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.

      To maintain relevance, we must find common ground. There is no true objectivity, because every sign must be built up from an arbitrary ground. At the very least, there will be a conflict of aesthetics.

      The problem with LLMs is that they avoid the ground entirely, making them entirely ignorant to meaning. The only intention an LLM has is to preserve the familiarity of expression.

      So yes, this kind of AI will not accomplish any epistemology; unless of course, it is truly able to facilitate a functional system of logic, and to ground that system near the user. I'm not going to hold my breath.

      I think the great mistake of "good ole fashioned AI" was to build it from a perspective of objectivity. This constrains every grammar to the "context-free" category, and situates every expression to a singular fixed ground. Nothing can be ambiguous: therefore nothing can express (or interpret) uncertainty or metaphor.

      What we really need is to recreate software from a subjective perspective. That's what I've been working on for the last few years... So far, it's harder than I expected; but it feels so close.

      • LLMs are a mediocre map, but they're a great compass, telescope, navigation tool, and what have ye.
      • > What we really need is to recreate software from a subjective perspective.

        What does "subjective" mean here? Are you talking about just-in-time software? That is, software that users get to mold on the fly?

      • > Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.

        I'm reminded immediately of the Enochian language which purportedly had the remarkable property of having a direct, unambiguous, 1-to-1 correspondence with the things being signified. To utter, and hear, any expression in Enochian is to directly transfer the author's intent into the listener's mind, wholly intact and unmodified:

            Every Letter signifieth the member of the substance whereof it speaketh.
            Every word signifieth the quiddity of the substance.
        
            - John Dee, "A true & faithful relation of what passed for many yeers between Dr. John Dee ... and some spirits," 1659 [0].
        
        The Tower of Babel is an allegory for the weak correspondence between human natural language and the things it attempts to signify (as opposed to the supposedly strong 1-to-1 correspondence of Enochian). The tongues are confused, people use the same words to signify different referents entirely, or cannot agree on which term should be used to signify a single concept, and the society collapses. This is similar to what Orwell wrote about, and we have already implemented Orwell's vision, sociopolitically, in the early 21st century, through the culture war (nobody can define "man" or "woman" any more, sometimes the word "man" is used to refer to a "woman," etc).

        LLMs just accelerate this process of severing any connection whatsoever between signified and signifier. In some ways they are maximally Babelian, in that they maximize confusion by increasing the quantity of signifiers produced while minimizing the amount of time spent ensuring that the things we want signified are being accurately represented.

        Speaking more broadly, I think there is much confusion in the spheres of both psychology and religion/spirituality/mysticism in their mutual inability to "come to terms" and agree upon which words should be used to refer to particular phenomenological experiences, or come to a mutual understanding of what those words even mean (try, for instance, to faithfully recreate, in your own mind, someone's written recollection of a psychedelic experience on erowid).

        [0] https://archive.org/details/truefaithfulrela00deej/page/92/m...

    • Outsource manual labor, not your brain.
    • This concept won't reach that point because when you chisel too hard it crumbles. There are countless lower level tasks that typical programmers no longer learn how to do. Our capacity for knowledge is not unlimited so we offload everything we can to move to the next level of abstraction.
      • AI coding isn’t an abstraction, though. You can’t treat a prompt like source code because it will give you a different output every time you use it. An abstraction lets you offload cognitive capacity while retaining knowledge of “what you are doing”. With AI coding either you need to carefully review outputs and you aren’t saving any cognitive capacity, or you aren’t looking at the outputs and don’t know what you’re doing, in a very literal sense.
        • Non-determinism is not as much of a problem as the lack of a spec. C++ has its standard, Python has its reference manual. One can refer to those to predict reliably how the program will behave without thinking about the generated assembly. LLMs have no spec.
          • The two go hand in hand.

            Non-determinism is what conveniently fills the gap of having no spec.

            In fact, turn the temperature to 0 and it will be virtually deterministic. That only exacerbates the problem that LLMs, as you rightly point out, have no spec.

        • "You can’t treat a prompt like source code because it will give you a different output every time you use it"

          But it seems we are heading there. For simple stuff, if I make a very clear spec, I can be almost sure that every time I give that prompt to an AI, it will work without error, using the same algorithms. So the quality of the prompt is more valuable than the generated code.

          So either way, this is what I focus my thinking on right now, something that was always important and with AI even more so: crystal-clear language describing what the program should do and how.

          That requires enough thinking effort.

          • Didn't work for the prod data that the AI nuked in spite of prompts saying "DON'T FUCKING GUESS", just like that in all caps: https://news.ycombinator.com/item?id=47911524

            What makes you think it will work for you?

            • That I don't let agents run wild in a production environment?
              • You let them write code that runs in prod, which is the same thing with extra steps.

                Unless you review that code carefully, and then we're back to the point about it not saving you any cognitive overhead.

                • Of course it saves me overhead: I don't have to read all the necessary docs etc. myself, I just check the resulting code, and I don't have to type it all myself.
                • >> You let them write code that runs in prod, which is the same thing with extra steps.

                  The “with extra steps” is doing a lot of work in that sentence.

          • your spec is a guideline, not something the LLM has to adhere to. it is definitely not guaranteed to work without error
            • Are humans guaranteed to work without error?
          • > if I made a very clear spec - I can be almost sure

            That "almost" is doing a lot of heavy lifting here. This is just "make no mistakes" "you're holding it wrong" magical thinking.

            In every project, there is always a gap between what you think you want and what you actually need. Part of the build process is working that out. You can't write better specs to solve this, because you don't know what it is yet.

            On top of that, you introduce a _second_ gap of pulling a lever and seeing if you get a sip of juice or an electric shock lol. You can't really spec your way out of that one, either, because you're using a non-deterministic process.

            • Well, unfortunately it is the same with real humans, who happen to be non-deterministic as well. If I give them a task, I can be almost sure they will do it. But even humans can have unexpected psychotic breakdowns and do destructive stuff like deleting important databases.

              So right now, humans are for sure more reliable. But it is changing. There are things I already trust a LLM more than a random or certain known humans.

        • > AI coding isn’t an abstraction

          Isn't it an abstraction similar to how an engineering or product manager is? Tell the (human or AI coder) what you want, and the coder writes code to fulfill your request. If it's not what you want, have them modify what they've made or start over with a new approach.

          • No, because software engineering is more than <insert coin, receive code>. I've never had a full spec dropped on my desk lol. There's no abstraction.

            Software engineering is a lot more social and communication-heavy than people think. Part of my job is to _not_ take specs at face value. You learn real quick that what people say they need and what they actually need are often miles apart. That's not arrogance, that's just how humans work.

            A good product manager understands the biz needs and the consumer market and I know how to build stuff and what's worked in the past. We figure out what to build together. AIs don't think and can't do this in any effective way.

            Also, if you fuck up badly enough that you make your engineers throw out code, you're gonna get fired lol

          • With an abstraction, you literally move your thinking up a level. So you move a floor up the tower and no longer have to think about what's happening below. The moment something leaves your floor, its course is set. If a result comes back, it's something familiar, not something from the lower floor.

            A human coder can be seen as an abstraction level because they will talk to the PM in product terms, not in code. And the PM will be reviewing the product. What makes this work is the underlying contract that there's a very small number of iterations necessary before the product is done, and each later one should require less of the PM's time.

            We've already established that using an LLM tool that way does not work. You can spend a whole month going back and forth, never looking at code, and still not have something that can be made to work. And as soon as you look at the code, you've breached the abstraction layer yourself.

        • It's staggering to me how many times I've heard this argument that LLMs are just the next level of abstraction. Some people are even comparing them to compilers.
          • > Some people are even comparing them to compilers.

            A lot of people are using them as such too: the number of people talking about "my fleets of agents working on 4 different projects" - they aren't reviewing that output. They say they are, but they aren't, any more than I review the LLVM IR. It makes me feel like I'm in some fantasy land: I watch Opus 4.7 get things consistently backwards at the margins, mess up, make bugs; we wouldn't accept a compiler that did any of this at this scale or level lol

            • It's awful, and seeing even engineers I respected become so AI pilled they're shipping slop without review has made me lose respect for them. It also can't help but make me wonder: what am I missing? Am I holding it wrong? Am I too focused on irrelevant details?

              So far, my conclusion is that while LLMs can be a productivity boost, you have to direct them carefully. They don't really care about friction and bad abstractions in your codebase and will happily keep piling cards on top of the crooked house of cards they've generated.

              Just like before AI, you need a cycle of building and refactoring running on repeat with careful reviews. Otherwise you will end up with something that even an LLM will have a hard time working in.

            • Right? People have put in decades of work to make them extremely reliable, they didn't magically start like that.
      • That's true, but I think it's beside the point. The flip side of that argument, which is equally true, goes something like, "not doing cognitive push-ups leads to cognitive atrophy."

        There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators), primarily because those are well-bounded domains, so we understand the nature of the codependency we're signing up for. AI is an amorphous and still growing domain. It is not a specific rung in the abstraction hierarchy; it is every rung simultaneously, but at different fidelity levels.

        • > There are skills we're losing that are probably ok to lose (e.g. spacial memory & reasoning vs GPS, mental arithmetic vs calculators)

          I'd argue these are not at all OK to lose. You live in an earthquake zone? You sure better know which way is north and where you have to walk to get back home when all the lines are down after a big one. You need to do a quick mental check if a number is roughly where it should be? You should be able to do that in your head.

          There might be better examples that support your point more effectively e.g. cursive writing

          • Yep, there are tons. Growing food, building shelter, etc. But, for pretty much all of the skills we've allowed to atrophy in response to the advances of capitalism, technological & scientific progress, and societal changes, one COULD make the same basic argument, which is that losing that skill is detrimental to the individual, and yet here we are, not growing our own food, not building our own shelter, etc.

            The arguments you make ≤ the values you actually hold ≤ the actions you take in support of those values.

            I'm only interested in any such argument to the extent to which you've personally put it into practice. Otherwise, you're living proof of the argument's weakness. (To be fair, it's extremely hard to be internally consistent on this stuff! We all want better for ourselves than we have time and energy for. But that's my point: your fully subconscious emotional calculus will often undercut at least some of your loftier aspirations. Skills that don't matter anymore invariably atrophy due to the opportunity cost of keeping them honed.)

        • > "not doing cognitive push-ups leads to cognitive atrophy" This is one of the points being made in the post, at least in reference to people who already have some mastery of their craft. If they outsource their thinking without elevating it, they aren't exercising that metaphoric muscle between their ears.
      • I get your point, I just wonder how accurate it is. We basically never look at the output of the compiler, so I agree that tool allows one to operate at a higher level than assembly. But I always have to wade through the output from AI so I’m not sure I got to move to the next level of abstraction. But maybe that’s just me.
        • Are compilers deterministic?
          • I'm sure someone, somewhere, once wrote one that wasn't but in general, yes they are.

            The ones I use certainly are. And with a bit of training you can reason about and predict how they will respond to a given input with a large degree of accuracy, without being familiar with how the particular compiler in question was implemented.

            Not so with the AI tools. At least with the ones I use anyway.

          • Given the same compiler, I believe they would be the same between runs given the same inputs. I suppose that could not be true at the margins, but I would expect correctness out of whatever path it chose.
          • For all intents and purposes, yeah. It's really about the variance in actual outcomes vs. the expected ones. The variance is not much, is it? With LLMs that absolutely isn't the case.
      • The idea that a tool intended to replace all human cognitive work is the next level of abstraction is so fundamentally flawed, that I'm not sure it's made in good faith anymore. The most charitable interpretation I can think of is that it's a coping mechanism for being made redundant.

        Nevermind the fact that these tools are nowhere near as capable as their marketing suggests. Once companies and society start hitting the brick wall of inevitable consequences of the current hype cycle, there will be a great crash, followed by industry correction. Only then will actually useful applications of this technology surface, of which there are plenty. We've seen how this plays out a few times before already.

  • The way I use AI now feels more exhausting than the programming I did for the last 20 years. I pose a problem, then evaluate proposals, then pick the one I think is the "right one"(tm), then see the AI propose a bunch of weird shit, then call it out, refine the proposal until it feels just about right (this is the exhausting part), then let it code the proposal. The coding will then run for 1-5 hours and produce something that would have taken me at least 2 or 3 weeks (at that quality).

    After 5 hours or so of doing this planning, I'm EXHAUSTED. I never was exhausted in this manner from programming alone. Am I learning something new? Feels like management. :)

    • I feel this as well. I think it’s something to do with having to be more “on” as you slowly work with the LLM to define the problem and find a reasonable solution. There’s not much of a flow-state. You have to process mountains of output and identify the critical points, over and over, endlessly. And it will always be off in this unsettling little way, even when it’s mostly quite good. It’s jarring.

      The strange sorts of errors and reasoning issues LLMs have also require a vigilance that is very draining to maintain. Likewise with parsing the inhuman communication styles of these things…

      • Could it be that what we called flow state was actually a sort of high level thinking time afforded by doing low level routine work?

        For instance, in the old world, if you wanted to change an interface, you might have to edit 5 or 6 files to add your new function to the implementations. This is pretty routine and you won't need to concentrate that much if you're used to it, so you can spend that low-effort time thinking about the bigger picture.
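
        To make this concrete, here's a toy sketch of the kind of edit I mean (all the names are made up for illustration): one method added to an interface, then the same two-line change repeated in every implementation.

          from abc import ABC, abstractmethod

          class Storage(ABC):
              @abstractmethod
              def get(self, key: str) -> bytes: ...

              # the newly added method: every implementor below needs a matching edit
              @abstractmethod
              def delete(self, key: str) -> None: ...

          class MemoryStorage(Storage):
              def __init__(self) -> None:
                  self._data: dict[str, bytes] = {}

              def get(self, key: str) -> bytes:
                  return self._data[key]

              def delete(self, key: str) -> None:  # routine edit, one of the 5 or 6
                  self._data.pop(key, None)

          # ...and the same mechanical edit again in DiskStorage, S3Storage, the test fakes, etc.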

        • you may be right on this hunch. but I think the old world is no longer there now :( more thinking is expected per unit time
      • It's the "unsettling little ways", right. So you can't skip whole paragraphs; you literally have to read everything. And sometimes it's worded in ways I don't understand at all (due to missing implications that the LLM conveniently omitted), so I have to re-ask it about that point as well. For every major feature or work-unit this takes up to 2 or 3 hours.

        I figured out some patterns in the way it behaves and could put more guard-rails in place so they hopefully won't bite me in the future (spelled out decision trees with specific triggers, standing orders, etc.), but some I can't categorize right now.

    • kubb
      How do you check if what it produced is even the right thing? Models love to go chasing the wrong goal based on a reasonable spec.
      • When the end result has problems and needs to be reworked.

        You can't figure this out instantly unless you review everything the LLM produces, which I don't. So the round-trip time is pretty long, but I can trace it back to the intent now because I commit every architecture decision as an ADR, which is where I pour most of my energy. These are part of the repo.

        Using these ADRs helped a lot because most of the assumptions of the LLM get surfaced early on, and you restrict the implementation leeway.
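
        For what it's worth, the ADRs themselves are nothing fancy, just the usual short template; the example content below is made up, only the headings matter:

          ADR-014: Writes go through the queue, never directly to the DB
          Status: Accepted
          Context: ...the constraint or problem that forced a decision...
          Decision: ...the one choice we made, stated plainly...
          Consequences: ...what must not be undone later, by the LLM or anyone else...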

        • Got it. I imagine concurrency bugs will hit hard with this approach because they show up rarely and are hard to debug.
      • Do they? I haven't experienced models deviating from a spec in a very long time. If anything I feel they are being too conservative and have started to ask to confirm too much.
        • The problem is not the LLM deviating from the plan (though that also happens, rarely, when it thinks it has a better idea) but rather that the plan is not strict enough and the LLM decides on the fly HOW it is going to build your plan.
    • Sounds like you’re using Waterfall. Which, if it works for you, go for it. But maybe Agile would feel more dynamic.
    • To me it’s more like being a super micro-managing TL that would annoy the hell out of their human reports. It comes with all the pros and cons of micro-management.
    • AI does the easy/medium part, leaving only hard stuff and context switching, so naturally it's more exhausting, as the concentration of difficult-work-per-unit-time and context-switching-per-unit-time is much higher.
    • I think one of the benefits of AI is that it will get started, and keep going.

      But maybe pacing/procrastination might be relief valves?

  • If one actually understands LLM AIs, not the technological aspect but the literary embodiment they become, using them is an elevation of one's thinking. Few to none have the foundational education to see them in their manifested extreme intellectualized nature.
  • The scary thing is I have seen high level directors and executives say “I asked ChatGPT and it agreed with me” as a way to try to settle a debate. People seem all too willing to delegate even matters of judgement to AI.

    On the other hand I have been in debates where someone asks ChatGPT to draft a list of possible approaches and pros and cons - and after reading through the list we were all in alignment on the best approach.

    The latter I think is a constructive use of AI to elevate thinking, while the former has me thinking it may be time for a career change.

    • sesm
      To make an exhaustive list of possible options you need to find key questions that divide the solution space. This requires logic, which LLMs lack.
      • > This requires logic, which LLMs lack.

        What? I've heard many takes on what AI lacks, but never this one. We had ChatGPT being able to solve an Erdős problem on its own yesterday [0]; how could you explain that if it cannot do logic?

        [0] https://news.ycombinator.com/item?id=47903126

        • sesm
          The LLM didn't solve an Erdős problem; it generated text that a human looked at, cleaned up, corrected, and used as a base for a solution.

          WRT logic, there are multiple occasions of LLMs answering trivial logic puzzles incorrectly. Of course, with each occasion becoming public they are added to the training data and overfitted on, but if you embed them in a more subtle way LLMs will fail again.

          • From the article about the Erdos problem:

            > “This one is a bit different because people did look at it, and the humans that looked at it just collectively made a slight wrong turn at move one,” says Terence Tao, a mathematician at the University of California, Los Angeles, who has become a prominent scorekeeper for AI’s push into his field. “What’s beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block.”

            > “There was kind of a standard sequence of moves that everyone who worked on the problem previously started by doing,” Tao says. The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.

            > “The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight.

            > More importantly, they already see other potential applications of the AI’s cognitive leap. “We have discovered a new way to think about large numbers and their anatomy,” Tao says. “It’s a nice achievement. I think the jury is still out on the long-term significance.”

            You can debate whether the LLM used logic or not. I don't think you can debate that the LLM has in this case elevated human thinking, by leading us to a solution that had eluded world-class mathematicians for 60 years. And a new way to think "about large numbers and their anatomy".

            And if it works for Terence Tao and Erdős problems, then I'm certainly not above using AI to help brainstorm solutions for my little app at work.

            • Sure, LLMs are good at generating text that humans can interpret as educated guesses. But a list of educated guesses is not 'enumerating options', because an informed decision requires a complete list of options in order not to miss anything. Imagine using a Monte Carlo method with a sample size of 3 for finding a function extremum - that's the equivalent of using an LLM-generated list of options for making a decision.
          • > WRT logic, there are multiple occasions of LLMs answering trivial logic puzzles incorrectly.

            There are multiple occasions of me answering trivial logic puzzles incorrectly. Is that enough for you to deduce that I "lack" logic?

            Humans make mistakes all the time, and indeed we say "To err is human"; why should we expect AI not to?

  • There are plenty of engineers that couldn't work without a modern IDE or in languages without memory management.

    Or without the ability to use a library from GitHub / their package manager.

    It doesn't feel THAT much different to me.

    "Engineer" as a term might drift. There are "web developers" that can only use webflow / wordpress.

    • All those examples are fundamentally different because those are hard-coded, deterministic programs/algorithms/libraries.
    • > couldn't work

      "Couldn't", or "wouldn't"? Early in my career I'd be happy doing anything basically, not much I "couldn't" do, given enough time. But nowadays, there is a long list of things I wouldn't do, even if I know I could, just because it's not fun.

      • It should probably be "would initially struggle to be as efficient without them."

        This is not a binary.

    • Engineer as a term has already drifted vastly, since nobody in the field of "Software Engineering" is actually an Engineer if we go by a strict definition.

      Engineers are accredited and in some countries even come with a title.

        • > ... nobody in the field of "Software Engineering" is actually an Engineer if we go by a strict definition.

        This is a pet peeve of mine, so while I understand what you mean, I will challenge you to come up with a strict definition that excludes software engineering!

        And since I've had this discussion before, I'll pre-emptively hazard a guess that the argument boils down to "rigor", and point out that a) economic feasibility is a key part of engineering, b) the level of rigor applied to any project is a function of economics, and c) the economics of software projects spans a very wide range.

        Put another way, statistically most devs work on projects where the blast radius of failure is some minor inconvenience to like, 5 users. We really don't need rigor there, so I can see where you're coming from. But on the other extreme like aviation software, an appropriately extreme level of rigor is applied.

        • >I will challenge you to come up with a strict definition that excludes software engineering!

          "Structured, mature, legally enforced, physically grounded standards based approach to the construction of repeatable, reliable, verifiable, artifacts under stable (to the degree that matters) external constraints".

          Some niche software development (e.g. NASA/JPL coding projects with special rules, practices, MISRA etc) can look like that.

          99.9% of the time though, software "engineering" is an ad hoc, mix-and-match, semi-random, always-changing-requirements-and-environments, half-art half-guess process, by unlicensed practitioners, that is only regulated in some minor aspects of its operation (like GDPR, or accessibility requirements), if that.

          • By that definition the vast majority of historic engineers weren't "real" engineers. It's correct to claim that software engineering isn't currently an accredited profession and it's also quite reasonable to question the extent to which the vast majority of software development qualifies as the practice of engineering. But the latter is highly subjective and will likely also rule out a significant fraction of the grunt work that accredited engineers perform.

            Which is to say, engineer the job title is distinct from engineering the activity is distinct from engineer the accreditation.

            • >By that definition the vast majority of historic engineers weren't "real" engineers.

              And they weren't. They were craftsmen and tradesmen, e.g. stonemasons.

          • This basically makes civil/structural engineers the only real engineers. Maybe people working on medical devices or military kit. 'Stable external constraints' still disqualifies most of those, though. Every single kind of engineering has to deal with the spec changing.

            Also, software engineering is ahead of a few other disciplines of engineering on rigor in some dimensions. I feel like most software engineers don't understand how good software tools are at change management compared to pretty much anything else. (and that having good change management is a baseline, as opposed to a decent chance of not having any at all).

          • The only word doing any work at all in that definition is "artifacts", and the problem is that the methodology that is actually foundational to engineering need not be applied to physical objects. Further, it's not clear that this methodology shouldn't be rigorously applied to non-"artifacts" that can cause equal or greater harms when created negligently.

            The definition I always saw used was this one, I think:

            > Engineering is the profession in which a knowledge of the mathematical and natural sciences gained by study, experience, and practice is applied with judgment to develop ways to utilize, economically, the materials and forces of nature for the benefit of mankind.

            This sounds like it should exclude software design and development. Except it doesn't need to, and it's not really useful to exclude it simply because the definition isn't broad enough. The definition isn't engineering. The definition is trying to describe and encapsulate the reality of engineering. Nuclear and modern electrical engineers frequently never create anything physical in their careers whatsoever. Nuclear engineers manage power generation at facilities that others designed and built, while electrical engineers are frequently just dealing with signal processing. They are not less rigorous in their methodology.

            The reality is that engineering is the methodical application of constraints to solve a problem. And it is the methodology that is the valuable aspect. The knowledge is necessary for each discipline, but it is itself fundamentally a prerequisite. There is a reason engineering is a single school of many disciplines.

            Meanwhile, the reason that software engineering looks like half-art and half-guess has a lot more to do with software as a non-theoretical field of study only being about 60 years old in practical terms. The fundamental works of the field like The Art of Computer Programming haven't even been written yet.

            Whatever happens to software development and operational systems administration in the next 50 years, however, both roles almost certainly would benefit society by becoming actual professions. Their responsibility to society as a whole has been allowed to be understated, and we're well past the days when a computer bug causing the kinds of deaths and damages we'd see from a civil works failure or automotive design flaw sounds unreasonable. Indeed, that actually sounds fortunate given some of the software catastrophes that have occurred.

            • >The only word doing any work at all in that definition is "artifacts"

              That's the subject, the only word that is NOT doing any work there (since both regular and software engineering produce artifacts).

              Words that do the heavy work in that phrase are:

              structured, mature, legally enforced, standards-based approach - for repeatable, reliable, verifiable, - artifacts - under stable external constraints

              Software can sometimes appear to touch those.

              E.g. there are "standards", like HTML or ARIA, so it's "standards-based" too! But those standards are loosely enforced, usually not mandated, loosely defined, and implemented ad hoc with all kinds of deviations.

              Or e.g. software can sometimes be repeatable. E.g. reproducible builds (to touch upon one aspect). But that's again left to the implementor, seldom followed (almost never for most software work, only in niche industries).

              In general, software is not engineering (in the strict sense) because it's anything goes, all the above conditions can or cannot be handled (in any random set), the final work is a moving target, and verification is fuzzy, if it even happens.

              >The reality is that engineering is the methodical application of constraints to solve a problem.

              In that case, following specific constraints to solve a math problem, or to draw an artwork (e.g. using perspective), is also "engineering". That's too loose a term to be of any use.

              Even accepting that, the degree to which software "engineering" is "methodical", versus e.g. civil or aviation engineering, is orders of magnitude less.

          • > ... legally enforced ...

            Other than that part (most countries in the world do not have regulations or licensing requirements for most engineering disciplines) I would agree. But I would also point out the set of software projects that meet that definition is much larger than those you listed.

            As mentioned, it's a matter of economics, so the rigor scales with the pain it can cause if something goes wrong. Hence any software that has a high blast radius is built that rigorously, probably even more so. There are entire categories (not just individual examples!) of such projects. An obvious category is platforms that run or build other applications: OS kernels, databases, compilers, frameworks, cloud platforms (yes, those 9's are an industry standard), and so on.
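
            (If the "9's" shorthand is unfamiliar, the arithmetic behind it is a one-liner; a quick sanity check in Python:)

              # allowed downtime per year for a given availability target
              hours_per_year = 365.25 * 24                      # ~8766 hours
              for uptime in (0.999, 0.9999, 0.99999):
                  downtime_min = hours_per_year * (1 - uptime) * 60
                  print(f"{uptime} uptime -> ~{downtime_min:.0f} minutes of downtime per year")
              # roughly 526, 53, and 5 minutes respectively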

            Then there are those regulated ones like automotive, aviation and medical software. There is even a case to be made for critical financial software.

            Another less obvious category is any large software services company that has on-call engineers: the high cost of those engineers climbs quickly, and quality processes quickly get installed, which basically amount to the criteria you listed.

            That internal LoB app with 5 users? That level of rigor simply does not make economic sense. Which is probably what you mean by:

            > 99.9% of the time though, software "engineering" is an ad hoc, mix-and-match, semi-random, always-changing-requirements-and-environments, half-art half-guess process, by unlicensed practitioners, that is only regulated in some minor aspects of its operation (like GDPR, or accessibility requirements), if that.

            To that I'll say, as someone whose first site outage as an intern was an actual industrial manufacturing factory (not an AbstractFactoryFactory!) a surprisingly large fraction of projects in other engineering disciplines match that description ;-)

            • >most countries in the world do not have regulations or licensing requirements for most engineering disciplines

              Well, then in those countries those disciplines aren't treated as engineering.

              Any country worth its name and with the rule of law would have regulations and licensing requirements for electricians, civil engineers, structural engineers, aviation engineers, chemical engineers, etc.

              I mean, they had building rules at the time of Babylon:

              https://talk.build/construct-iq/ancient-babylon-and-the-firs...

              And even in medieval times, working in certain fields that we'd call engineering today, was legally restricted to specific guilds.

          • Yea we do standups every day and plan story points twice a month???
        • I don't really disagree with you. I was just pointing out how the parent mentioned how "engineering" is changing when it already has changed many many times.

          Of course I want the best of the best who are top notch and rigorously trained working on mission critical software.

        • It's a pet peeve because the truth hurts. We (most of us) aren't doing anything that resembles engineering.
          • I'd agree that applies to people, or more accurately specific projects, but not the discipline of software engineering as a whole.

            Even most of the projects I personally have worked on simply did not need "engineering" as such, but other projects where uptime was critical and the cost of failure was high, there was a much higher level of rigor.

        • “Accredited”
          • Most countries do not need accreditation for engineers.
      • I started my career as a machine designer (mechanical engineering), designing some machines for FMCG factories.

        It wasn't that much different from SWE - mostly looking up catalogs, connecting certain pre-made pieces together with custom parts and lots of testing of the final plan to make sure there are no collisions and every movement is constrained properly.

        95% of the time no load or sizing calculations were necessary - we just oversized everything based on tacit knowledge (the greybeards reviewing the plans) since these machines were not mass produced and choosing somewhat bigger parts was not expensive given that these machines would operate and produce value 24/7 for years.

        (I hope the analogy to software engineering is visible!)

        What I'm saying is that the level of "engineering rigor" heavily depends on the field where engineers are operating within. Even certain SWE fields (healthcare, finance, aviation etc.) have more regulation and require more rigor than others.

      • Engineers are accredited in the US too. But there is an "industrial exemption" that allows you to work as an engineer without a license for certain kinds of employers. You just can't offer engineering services to the public without a license. This is more important in some fields than in others.

        Where I work, there are plenty of non licensed engineers, but we pay a 3rd party agency for regulatory approval. The people who work for that agency are licensed engineers. Their expertise is knowing the regulations backwards and forwards.

        Here's what I think is happening within industry. More and more work done by people with engineering job titles consists of organizing and arranging things, fitting things together, troubleshooting, dealing with vendors, etc. The reason is the complexity of products. As the number of "things" in a product grows as O(n), the number of relationships grows as O(n^2), so the majority of work has to do with relationships. A small fraction of engineers engages in traditional quantitative engineering. In my observation, the average age of those people is around 60, with a few in their 70s.

      • as an actual engineer i just feel sad. i should probably feel happy but i like solving problems. fml i have become a luddite.
        • I get it. But there’s plenty of engineering to do in any serious system. I am in a very AI forward company using AI for everything, but I still am solving engineering problems every day.
      • i think you accidentally overlooked accredited engineers who happen to be writing software
        • Of course there are engineers who write software; I'm just speaking about the majority of roles where that's not the case.
      • The concept of engineer predates the accreditation systems you’re referring to by centuries.
    • The huge difference is that we don't know the cost we're going to end up with.

      Will you have AI at the cost of a Slack subscription? At the cost of a teammate? Or will it not be available at all, and you'll have to hire Anthropic workers with AI access?

      • Local AI models are already more than capable of writing code that surpasses the ability of any bad or even mediocre engineer. That is not something we need to worry about.

        In a way, this is less of a cost issue than the fact that some/many engineers do not seem to be willing or able to host things themselves anymore and will happily outsource every part of their stack to managed services, be it CDN, hosting, databases, etc. I don't know why that's not more alarming than the LLMs.

        • Qwen 3.6 27B is shockingly good, just to add to your point.
        • Thank goodness for China or Silicon Valley capitalists would be locking us down into an unimaginably awful dystopia. Though they're not done trying.
    • bpye
      At least today, it isn't practical for most people to run these models locally - I think adding a dependency on a cloud service is different enough from a local (possibly open source) tool like an IDE.
      • Self hosting at a reasonable scale is much cheaper than people think. I am running clusters of DGX Spark machines with BiFrost load balancers in our company and for client projects. They work flawlessly!

        128 GB unified memory, Nvidia chip and ARM CPU for just around 3k€ net. They easily push ~400 input and ~100 output tokens per second per device on, say, gpt-oss-120b. With two devices in a cluster, that's enough performance for >20 concurrent RAG users or >3 "AI augmented" developers.

        And they don't even pull that much power.
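
        Back-of-envelope with the figures above (these are just my own quoted numbers, not a general benchmark):

          devices = 2
          output_tok_per_s = 100                   # per device, on gpt-oss-120b as quoted above
          aggregate = devices * output_tok_per_s   # ~200 output tok/s for the cluster
          concurrent_users = 20
          print(aggregate / concurrent_users)      # ~10 tok/s per user if everyone is active at once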

        • Factor in depreciation and energy costs, and a subscription might just be cheaper.
      • Slack, GitHub, Figma, AWS, etc

        Lots of people use firebase, supabase etc.

        Many people's jobs are centered around using Salesforce

        It all makes me uncomfortable - I want to be able to work without internet. But it's getting more difficult to do.

    • "What kind of engineer are you" - Jesse Plemons wearing bright-red sunglasses
    • IDEs are free. Libraries are free. Languages are free. This is becoming more like an internet subscription where you’re at the mercy of Anthropic the same way you may be at the mercy of Comcast.

      I’m sure you can see the difference between a garbage collector and a nondeterministic slop generator

      But it feels good to equivocate, so here we are.

      • > IDEs are free. Libraries are free. Languages are free. This is becoming more like an internet subscription where you’re at the mercy of Anthropic the same way you may be at the mercy of Comcast.

        Ollama/llamafile/vllm/llama.cpp are free. Qwen/kimi/deepseek are free. Pi.dev/OpenCode are free. If you're using a SaaS AI subscription that's fine, but that's hardly the only option.
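
        As a sketch of what the free route looks like in practice - assuming one of those local servers is already running and exposing the usual OpenAI-compatible endpoint; the port and model name below are placeholders for whatever your setup uses:

          import requests

          # Hypothetical local setup: llama.cpp / vllm / Ollama serving a model you pulled yourself.
          resp = requests.post(
              "http://localhost:8080/v1/chat/completions",   # adjust host/port to your server
              json={
                  "model": "qwen2.5-coder",                  # placeholder model name
                  "messages": [
                      {"role": "user", "content": "Write a function that parses ISO 8601 dates."}
                  ],
              },
              timeout=120,
          )
          print(resp.json()["choices"][0]["message"]["content"])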

        • The comparison to me sounds like "you dont have to take a plane to travel between countries, paddle boats exist".
        • How much does the hardware to run them on cost? Especially to get decently sized models running at decent speeds.
      • Not all IDEs are free. Not all LLMs are subscriptions.
        • > Not all

          is doing a lot of work to avoid engaging with the actual argument.

  • I think AI can generally be utilized in two ways:

    1) you use it to help write code that you still “own” and fully understand.

    2) you use it as an abstraction layer to write and maintain the code for you. The code becomes a compile target in a sense. You would feel like it’s someone else’s code if you were asked to make changes without AI.

    I think 2) is fine for things like prototypes, examples, references. Things that are short lived. Where the quality of the code or your understanding of it doesn’t matter.

    I think people get into trouble when they fool themselves and others by using 2) for work that requires 1). Because it’s quicker and easier. But it’s a lie. They’re mortgaging the codebase. And I think the atrophy sets in when people do this.

    • And any push to use 2 to build infra to make 1 easier is hard to sell when a lot of engineers think AI will be able to perfectly do 1 in some nebulous time in the near future.
    • the thing is it doesn't even feel like mortgaging. shipping, features going out, everything looks fine. then something breaks and you realize you can't debug your own code without asking the model again.
      • It feels like an addiction. Normal coding requires sustained attention, you can sense how deep you are in the progress and when you're too tired to continue, but with LLMs the next feature always feels like another prompt away, having sessions go well into the early morning/late-night. You rationalize you can quit, that you've been reading the source and each diff enough to "understand" the codebase. But the truth is when the rate limit runs out, you'll be absolutely helpless, crawling back for extra-usage, until you finally see the total bill at the end of the month.
        • It also feels like another nail in the coffin for our attention. Smartphones, IM, notifications and new media have already destroyed a good deal of it, and AI seems to be doing the same to coding. Do more, faster, just ask the AI, don't spend your time on this or that, you can in the meantime switch your attention elsewhere, maybe to another AI, quick.
    • I use it both ways:

      1) Day job 2) Side project

      It would be unprofessional to treat the first like the second.

      • I did the same. 2 was more of a curiosity, to see how quickly it would paint itself into a corner. Maybe not there yet, but close enough that I'm considering taking over even for the side project.
  • Use AI like you would use any other tool: to work for you. There are all sorts of things you can probably do manually that just go a bit faster or more efficient with AI. It's not that different to using an electrical drill vs. a manually operated one. You end up with holes in both cases. But one achieves that a bit faster and neater.

    Nobody is going to pay you for your artisanally crafted CSS code or whatever you were coding manually until last year. If you can do it faster/better than the AI, good for you. But it's not a contest and possibly your days of maintaining that lead might be numbered.

    In the end, as long as the UI is styled alright, nobody will care that you pieced it together manually for hours and hours. More importantly, people are not going to pay you more for it than they'll pay the next guy getting a similar result in an hour of prompting AIs. They'll want you to move faster and do more.

    That's what better tools do, they just cause people to expect more, better, and faster. And their expectations expand until they match the limitations of the new tools.

    People seem to have this mental block where somehow the amount of stuff we ship is going to be a constant in the universe and we'll all be out of work and descend into despair. In the history of our species inventing tools, that has never really happened. I don't see any reason why AI would change that. Sure, there's a lot more we can do now. And it's a lot cheaper now. So we can now have a bit more of our proverbial cake and eat it. People will push this as far as they can and will want more and more of the good stuff.

    And they'll need help getting all that stuff built. One way is a painful process of slowly prompting things together. Most people lack the skills to do that, don't know what to ask for and are in any case busy doing other things. That job, building stuff using tools, is still a job that needs doing. I'm quite busy currently doing that.

    • Good luck ignoring what's under the hood. One day you will experience how it translates to the things people pay for.
      • Some people are mastering the use of skills and guard rails. I have a few decades of experience to lean on, which seems to help. My guard rails tend to capture what I appreciate in software. AI tools seem to be pretty much a shit-in, shit-out kind of thing. If I want better output, I put some work into it. Mostly all you need to do is ask for better and be able to articulate what better means for you. Of course that requires understanding what that is. It's early days for a lot of people. Even some of the more experienced prompters have only a few months to at best a year of experience developing software that way. Early last year was when Claude Code first appeared. And mostly the tools didn't really get usable on non-trivial code bases until later in the year.

        Anyway, there are a lot of people producing mediocre software (with or without AI). That's pretty much a constant. I remember people using Visual Basic. Exact same thing. The problem isn't the tools but the people using them. There's a learning curve and most people are still behind that curve.

  • the article frames this as a choice between two groups. i think the more interesting question is structural. judgment used to get built through a natural feedback loop: ship, break, trace, fix, understand. AI doesn't just remove drudgery, it compresses or removes that loop entirely. a junior who never ships broken code in production never gets burned and never builds the instinct that catches the next fire. the answer isn't "use AI less." it's that deliberate practice has to happen somewhere else now, by design, not by accident
    • i understand what you're trying to say, but a junior will ship broken code to prod... even with agents. he might keep delegating to agents to fix it, but that cycle will produce more brittleness (like the Claude Code folks keep discovering). but eventually the organization will push back and ask why it's so brittle and costs more (time/money/people)
  • Is anyone tired of being told what AI is supposed to mean for the individual? As a software guy it's supposed to mean I am now a team lead of sorts. However all the people I see crowing about this never sought to become team leads in their career, nor did I.

    Yet now suddenly everyone is supposed to want to become a team lead of sorts (ie. the agents becoming your team). I don't want to do that, I treat an AI agent as a pair in a pair programming unit. Nothing more, nothing less. If someone wants to treat it differently, good on them, but they have no place telling what works for thee works for me.

    • I agree, nobody should be telling you, specifically, how you are going to use AI in programming.

      I think a lot of people are getting caught up in the discussion about how we, generally as technologists, are going to use AI. And it is looking like the industry is moving towards what used to be programmers now being team leads or project managers of AI teams.

      So it's probably best for you to try to not get involved in those discussions, and when someone says "you" assume they mean "you (generally)"?

    • I don't understand why people crave to assign a new role for themselves (team lead, manager). AI is a tool that augments your skill and you use it carefully. It doesn't require a change in your role. A farmer with a tractor is a farmer, not a lead. An accountant with spreadsheets is an accountant. A software engineer using a coding agent is a software engineer who has a powerful tool in their toolbox.
  • Why did this obviously AI-written article get so heavily upvoted? Looking through the comments, it feels like nobody has noticed.
  • There are plenty of engineers who simply can't think; AI will not change anything in this regard.
    • Can’t think properly seems to be the real issue. That’s one of the reasons the SE domain is mostly in ruins. AI won’t help, only delay a bigger mess.
      • Ever since the standard office setup went from offices or cubicles to bullpens and hot desks there is less and less time to think, and all of that is a management decision to ship things as fast as possible
    • How do you graduate your engineering degree without being able to think?

      Even my colleagues who cheated their way through uni still needed critical thinking to do that and get away with cheating without being caught.

      People might hate this but being a good cheat requires a lot of critical thinking.

        • Grade inflation, and schools passing kids who should fail in order to game metrics and keep collecting student loans, is a problem. I wouldn't consider hiring anybody from my alma mater who didn't score a standard deviation or more above the mean on the tests.
          • Unis imo are irrelevant in the context of software production. I'd take someone who didn't finish or dropped out provided they can answer the question below.

          The only thing worth asking people is: what have you produced? Within this one question is so much detail that any other artifact is moot.

            >Unis imo are irrelevant in the context of software production. I'd take someone who didn't finish or dropped out provided they can answer the question below.

            What you'd take is irrelevant if the HR/recruiter doing the initial screening of resumes is looking at an oversupply of candidates with degrees.

            Hiring is broken in many ways. Candidates without degrees are faring even worse now at the initial recruiter screening stage due to the poor market.

            In my EU country, academic inflation is so bad, due to free education and psyopping everyone onto the path of academia, that not having an MSc is basically a red flag to companies when applying for a SW job; most candidates have one, which means you're expected to have one too if you want to get a job.

      • You don't need a 4.0 to graduate. And even if you got one, a lot of grades are composed of tests, not projects. You can just memorize your way through things if you were dedicated enough.

        It's not really that hard to get a degree in engineering if your only goal is the degree itself.

        • That does seem to depend on countries and universities.

          I do have to say I was appalled by some of the tests I had as an exchange student in the US (I will not name the Uni in question, but it ranked around 60 in the US rankings). I remember a computer graphics test where a lot of questions were of the type "Which companies created the consortium maintaining the opengl specification?"... it was fully possible to obtain a passing grade just by rote memorization of facts. So I have no trouble believing that in the US it's possible in some unis to get a software engineering degree without understanding or critical thinking.

        • > a lot of grades are composed of tests, not projects

          (Take home) projects are easier than ever thanks to AI. In the past, you at least had to track down some person to do the work for you.

      • Half of my graduating class could barely program.
        • Yep. Way more than half of the people I interview can't even do a very basic FizzBuzz, even with guidance. Those are people with a degree, job experience and reference letters.
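
          (For anyone who hasn't sat in these interviews: this is the entire bar. In Python it's something like:)

            for i in range(1, 101):
                if i % 15 == 0:
                    print("FizzBuzz")
                elif i % 3 == 0:
                    print("Fizz")
                elif i % 5 == 0:
                    print("Buzz")
                else:
                    print(i)
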
        • What did you study?
          • Computer Science.
            • I see. Computer Science is not an engineering degree and it is not about programming. That's what Software Engineering degrees are for.
              • Most CS programs have software dev in their curricula; I don't think it's wild to expect a CS student to code FizzBuzz.
                • I graduated in 2006 in CS, and I had at least 5 or 6 software development classes. We also had electives, which included DB design and algorithms. Many of the higher-level classes allowed us to use any language of our choice as well.

                  I was self-taught since I was 15, so most of these classes were just review for me. I met lots of people that didn't know how to code as seniors (and never ended up getting a job in their field).

                • Yes, but overall it's still a science degree and not an engineering degree.
              • Many of the top schools don't have software/computer engineering degrees, rather people who want to be SWEs get CS degrees.
              • Software engineers graduates I've met are usually much worse at programming than computer science graduates.
                • I'm gonna strongly +1 on this.

                  Most of the "Software Engineering" curricula I've seen are catered towards "getting a job as a programmer", and are mostly focused on languages, frameworks and outdated processes.

                  As an engineer in another discipline, there's no engineering there.

                  I would rank like this: Computer Science > Self Taught > Software Engineering.

                  • I might go as far as saying that SE is dogmatic. And the dogma is usually very outdated. Not necessarily useless, though.
                • That too
      • The practice of software engineering is not what they teach in university.

        I would say that today's graduates are IMO a bit better than a few decades ago but there are still many graduating who are just not good at writing computer software and don't really have the aptitude for that (or maybe the interest in getting good). That's what happens when the pipeline of people coming in are people who want to make money and the institution is mostly a degree factory.

      • I've seen it happen multiple times. Engineering degrees are no different from the vast majority of degrees in that if you are good at the read-and-regurgitate cycle, you can make it through. Not only can you make it through, but you can do it with a very respectable GPA.

        They come out with a large dictionary of keywords in their arsenal, but no idea how to put them into practice. Some are able to put it into practice and tie it all together. As they see practical examples of those keywords in the real world, it starts falling like dominoes, and at an accelerating rate.

        For some, it never goes much beyond keywords. The dominoes fall, but slowly, and they stop falling for extended periods of time. Not many mature engineering organizations can tolerate that sort of progression rate. Those people usually don't last very long at any one place, until they find a company where they can blend into the background due to a combination of company culture and low-complexity systems being worked on.
      • OP should have put "engineers" in double quotes. Many software developers like to describe themselves as engineers although they don't have an actual engineering degree. A lot of software development resembles plumbing more than engineering, so most devs don't really need an engineering degree anyway, but they should be more honest about what they're actually doing and not try to elevate themselves with fancy titles.

        You are, of course, right that the idea that someone could finish a serious engineering degree without being able to think is ridiculous.

        • You can do engineering without an engineering degree. A degree is just a piece of paper.
      • I don't know, but I can point at more than half of the people I work with who can't think, and every time they try to, it takes a whole group of people who can think to undo their mess. They all have degrees and I don't.

        So what does that tell me?

        Better yet, for about 30% of them, having the LLM slop it out would have yielded better outcomes, but having them slop something nets terrible slop. But at least the LLM's output I can reshape, because even the LLM won't do something that stupid.

      • A degree is passing the test. Not all degree programs get into more advanced topics nor do they necessarily require that someone is able to work through how to solve a problem that they haven't seen before.

        --

        A lot of students (and developers out there too) are able to follow instructions and pass the test.

        A smaller portion of them are able to divide up a task into the "this is what I need to do to accomplish that task".

        Even fewer of them are able to work through the process of identifying the cause of a problem they haven't seen before and work through to figure out what the solution for that problem is.

        --

        ... There are also a lot of people out there that aren't even able to fall into the first group without copying and pasting from another source. I've seen the "stack sort" at work https://xkcd.com/1185/ https://gkoberger.github.io/stacksort/ professionally. People copying and pasting from Stack Overflow (back in the day) without understanding what they're writing.

        Now, they do it with AI. Take the contents of the Jira description, paste it into some text box, submit the new code as a PR, take the feedback from the PR and paste it back into the box and repeat that a few times. I've seen PRs with "you're absolutely correct, here are the updates you requested" be sent back to me for review again.

        This is not a new thing. AI didn't cause it, but AI is exacerbating the issue with professional programming by having the people who are not much more than some meat between one text box and another (yes, I'm being a bit harsh there) and the people who need instructions but don't understand design to be more "productive" while overwhelming the more senior developers.

        ... And this also becomes a set of permanent training wheels on developers who might be able to learn more if they had to do it. That applies at all levels. One needs to practice without training wheels and learn from mistakes to get better.

      • Mate, have you never had to deal with over-confident graduates who think they've got the complete answers, but, in reality, they only have a sliver of the whole picture in their minds?
        • That is different than the suggestion that one could graduate with a CS degree and "never think." Which is absurd.
    • I agree in part, but I think AI does meaningfully make it harder for leadership to detect their bullshit.
  • This is true. Speaking only based on personal experience. My team had started treating AI like a super intelligent being.

    “AI suggested we do it that way”

    And we’ve been degrading our systems rapidly for last several weeks. We’ve decided to pause and reflect and change how we use AI on tasks that are not dead simple.

  • What about the third group who mostly don't use ai for programming because the results don't seem to be worth it, like to understand their system, and can craft a more compact, succinct, and well organized system by themselves which they enjoy maintaining? If most of your system is boilerplate that can be generated by Claude, then maybe you're doing it wrong? I'd rather read a short story written by a great writer than a trilogy of novels by AI
  • agree on the code side. for decisions i'd push it a bit further though. the trap isn't really that the model does your thinking for you, it's that it agrees with the thinking you already brought to it. you can't verify a decision the way you verify code, there's no test that fails when "this answer is wrong because you wanted to hear it." so even if you don't outsource the thinking, you can walk away with a plausible-sounding agreement that feels like understanding. that's the failure mode that worries me more. replacement is honestly the easier one to spot.
    • the agreement problem is the one that worries me more too. you can catch replacement, it's visible. you can't catch a model that confidently validates your existing blind spots back at you. the deeper version of this: the reps that build judgment used to happen naturally in production. you shipped something wrong, it broke, you learned. that feedback loop is now compressed or gone entirely. the question isn't really "is AI replacing your thinking" it's "where are the reps happening now." if the answer is nowhere, the judgment debt is accumulating invisibly and the AI agreement problem you're describing is exactly how it stays invisible
  • No, AI is not creating that group of people. They already existed. They were the people who would google for StackOverflow snippets and copy+paste them without even reading the entire snippet, much less understand them. Same people, new tool.
    • 100% agree. The key difference now though is that it's no longer a 'sink or swim immediately' situation - which used to be a forcing function against intellectual laziness where it was a choice.
    • > Same people, new tool.

      the tool works better than stackoverflow, and i expect it eventually will improve enough that such people become as "productive" as the intelligent and conscientious engineer today.

    • Many people by now have probably seen a teammate who used to be a good SWE, now spamming slop code that puts all the real work on the reviewer. That's the "second group."
      • Tell them no. That's what I do. I have rejected multiple PRs that were too large and lacked proper design or alignment upfront. With code being so cheap, rejecting it should be just as cheap. Set cultural standards that devs need to review their own code before asking for reviews. Etc etc
        • I don't think it has the effect you think it has. No-one takes a rejection personally anymore since it's so easy to just tell an AI to fix the comments. So a rejection does not make them rethink like it would have back in the day.
    • Exactly what I posted as well!
  • People are lazy. AI will replace thinking for many people. Augmentation always leads to atrophy.
    • > Augmentation always leads to atrophy.

      That's a very bold claim. As a small example let's look at calculators - I remember a lot of claims that having access to calculators would make people's brains atrophy and they'll never be able to do actual math, but what I'm seeing in myself and most people around me is that we're using calculators (and more mathematical software) to tackle significantly more complex problems than people would be able to do if they rejected calculators.

      To be clear, I'm not arguing that kids should be using a calculator from the first day of pre-school, but I do absolutely think that using them as later on as augmentation is clearly beneficial.

      • a lot of people indeed cannot do even simple calculations by themselves. Your example just adds to the point.
  • That's why I don't use AI for any personal projects; I like to keep my mind sharp. Unless it's a project that incorporates AI in some way, but even then I don't use AI to code it. But at work I don't care, I do what I am paid for; if my manager wants me to entirely vibe code using Claude, that's his choice, and I will not be the one paying for the technical debt that creates.
    • 100% agree.

      In the middle ground:

      I'm putting together exercises for a C/Systems programming class I'm teaching in the fall.

      Partway through this, for some reason [cough procrastination cough], I thought it would be fun to implement them in Scheme. My Scheme was already poor, and what meager skills I had are completely rusty. I used Claude to great effect as a tutor for that, but didn't have it code any of the solutions at all, of course. I could tell I was leveling up fast as I coded the things up.

      Gotta use it in the right way if one wants to sharpen one's skills.

  • The 'Socrates worried about writing' analogy is usually deployed to dismiss concerns, but it misses an asymmetry: writing preserved thought, it didn't generate it on demand. The real question is whether AI is closer to a pencil or a ghostwriter.

    For junior engineers the distinction matters most. The reps are not just about getting the right answer, they are about building the intuition for when the answer is wrong. That's the hardest thing to transfer between people, and the thing AI is currently worst at self-verifying.

  • No one uses it this way, despite what people say. They hit any sort of wall and then ask the robot. Thought ends.
    • These services are designed for that engagement loop. If they were designed to be tools to help you think, they would be much less front and center, like autocomplete or refactor tools in IDEs. This reminds me of how Google used BERT models (precursor to LLMs) to highlight relevant snippets of web pages in search results based on a search query. "Assistant-" type LLMs would be more like that (or early implementations of code assistants, like Roo or Aider).
    • Same way everyone pays lip service to reviewing output. I know for a fact that at work most don't, not deeply/properly. You basically can't and still hit the volume that's being demanded.
      • it's practically impossible when Claude flings like 1000-line diffs to your face and the tests are green
        • Yeah I know. Which is why I wish we’d (the royal we) all stop pretending and lying about it haha.
      • I mean, the workplace dynamics are such that nobody really cares unless they find themselves in a position of committing something that could get them fired. Most companies don't treat their workers all that well.

        Why would you as a worker bother making everything pristine? There's no reward for you. The management of the company will fire you the day they see fit anyway. Not to mention companies tend to give higher salary raises to those who leave and later return - a true slap in the face of 'loyalty'.

        • While I agree, I think the reward is that when I write things myself I have to revisit that code much less frequently than those who are vibing their services. I'm sure someone will tell me that the person didn't prompt it right. Anyway, until we no longer live in this crumbling semblance of a capitalist society, I will continue to do my job, not just to keep it but also to make my life easier.
  • Just as the advent of palm-sized organizers reduced our ability to recall dozens and sometimes even 100s of phone numbers of friends etc, AI will reduce our ability to perform a range of functions.

    I think the evidence for this is quite clear. Humans are NOT going to expend any energy - even mental energy, to think about something if they don't have to.

    • Or Google Maps our ability to navigate when GPS signal is lost
      • Google Maps has made me a total idiot. I can't remember routes to places where I have gone quite a few times!
      • But on the flip side of the coin, I've been traveling so much more since I got access to map software on my phone. Especially when vacationing abroad, remembering the days before smart phones, we would need to carefully plan specific attractions on a map ahead of time, and would generally not go far from these areas, unless we had the entire rest of the day available and didn't mind getting lost. But these days, I can be in a foreign city where I don't know more than a dozen words of the language and spontaneously pick a spot on the map and get there with local public transport in a very predictable time, being able to focus my time on deciding where I want to be, rather than how I get there.
  • AI isn’t creating the problem, it is just showing the problem. Those who did not want to learn before AI did so reluctantly, mixing Google and SO. Now they ask AI. An existing problem found a new solution.

    Personally, I really enjoy using AI. I have created my own cascade workflow to stop myself from “asking one more question”. Every session is planned. Claude and Codex can be annoying as hell (for different reasons). Neither is sufficiently smart for me to trust them. I treat them as junior devs who never get tired, know a lot of facts but not necessarily how to build.

    • I wrote tens of thousands of lines of code before Google and SO.

      I also enjoy using AI. It makes it easier to get mundane work done quickly. Junior devs who never get tired is a great analogy. It's a force multiplier and for people with limited time (meetings, people management, planning etc.) they enable doing a lot in limited time. I can relate to more junior people being worried and/or some senior people concerns of quality though. I get a task done, review it, get another task done. I won't let it build something large on auto-pilot.

      One thing that should be noted is that life was simpler back then. You could know the syntax of C or Pascal. You knew all the DOS calls or the standard libraries. You knew BIOS and the PC architecture. I still used reference manuals to look up some details I didn't have in my head.

      Today software stacks tend to be a lot more complicated.

    • Funnily enough, I learned to code “depth first” by putting together enough documentation examples and stackoverflow answers to reach a working Android app, long before I learned to code “breadth first” in school.
  • People who let AI do their thinking at any level never valued it in the first place. "Use it or lose it", as they say. The count of studies backing this up continues to rise, and yet so do the articles saying LLM use in software development is fine because our value is in our thinking.
    • It may be a byproduct of my ADHD and general anxiety, or it's a common trait among all of us workers of computers, but I am thinking almost all the time. It's one of the beautiful things about the gig to be able to be completely engrossed in something else and then have an inspired thought hit you, some solution that took you not looking at it for a moment. AI now helps me turn those thoughts into action faster than I ever could. Without it, I'd lose the thread before it ever got off the ground sometimes. Now a thought can be made at least partly real from my phone in minutes, then I can go back to what I was doing without feeling like I might lose it if I look/think away again. Just my two cents on what the technology has enabled for me.
  • Easier said than done. Once you are given a lazy way to do things faster and easier and mostly better, it's hard to go back. This is by design. There is no turning point. This addiction is as strong as drugs, I feel.
  • I am rebuilding numba. It is very hard for me to imagine doing it by hand. I tried it a couple of years ago but it was excruciatingly painful. It was slow and messy. So many small things that get stacked on top of each other over years of abstraction.

    I am doing it again using an LLM. Legitimately, things that would have taken weeks are now done overnight. I still have to look at the code and at the generated C output, and I still have control over the architecture to make it easy for me and the LLM to work with in the future, etc.

    Is this replacing my thinking? I am not sure. I suppose I would have learnt a lot more about compilers/transpilers had I persevered through it for months of manual writes and rewrites, but I would have been working solely on this. Instead, I also had some time to write custom NFS server support for a custom filesystem in Golang.

    • > Is this replacing my thinking?

      I'm extremely confident the answer is yes.

      But we have to judge how much value that particular thinking has.

      As an instructor, I've implemented linked list functionality a zillion times. I'm on the long tail of skills-gain from each reimplementation. But every time I implement it, I'm gaining a little more.

      Now, is it worth it? Probably not. The time spent on that marginal gain would be better spent implementing something more novel by hand. So punting to an LLM, while it costs me, might be a net gain in that case. But implementing another compiler? Hell yeah, that would be replacing my thinking. I've only ever made one PL/0 compiler plus that one yacc thing in compiler theory class, and those were a long time ago.

      We should quantify the loss of thinking when we decide how much to punt the code creation to someone or something else.

    • I too worry about the aspects that using AI is replacing in my thought process. I've built a sophisticated enough system to where agents can go out and determine the changes that need to be made for entire features and pretty much nail it out of the box. Everything is laid out in high detail during the planning phase. The implementation phase of actually writing the code is almost always unremarkable.

      I have found myself going out and actually reading code less and less over the past year. I would be lying if I said that there are not fairly regular moments where I question the comfort level I have obtained with the system that I have built. I've seen it work with such a high accuracy and success rate so many times that my instinct at this point is to not question it. I keep waiting for this to really bite me in the ass somehow, but it just keeps not happening. Sure, there have been minor issues that have slipped through the cracks that caused me to backtrack, but that is nothing new. The difference is that with the previous way, I had painstakingly written that code and had a much more personal relationship with it. The code was the problem. Now whenever that does happen, I'm going back to the system and figuring out why it didn't get the answer right on its own, or why it didn't surface the whole thing in the plan to me prior to implementation.

  • Why are certain parts of the text highlighted in yellow? This is very distracting.
  • Before AI I would spend multiple days mapping out my database tables and queries while now I ask AI to propose multiple different approaches and I pick the best one. But then on the other hand I’m working on 10 features at the same time and have to carefully look through them. But I can see that I’m totally dependent on the AI now. Creating a full plan by yourself feels like a waste of time, since you know the AI can create the same or better plan in a split second. So when Claude is down, I end up not being productive at all.
    • > Creating a full plan by yourself feels like a waste of time, since you know the AI can create the same or better plan in a split second.

      It IS a waste of time if your only goal is the creation of the plan. However, one must be very self-aware of their goals because if one of the unacknowledged ones is to retain the ability to create plans, then you must continue creating plans yourself.

  • The post's recommendations and analogies kind of go against two shortcut approaches that have helped a lot of people in the pre-AI real world:

    1) perfect is the enemy of good

    2) fake it till you make it

    The analogies imagine difficult scenarios where the habit of taking shortcuts doesn't help. But most people most of the time don't run into those scenarios at all.

  • AI is creating problems. This isn’t one of them. Engineers are going to now think at a higher level of abstraction. No one misses coding in assembly.
    • > No one misses coding in assembly.

      That's only your opinion, and it is provably false.

      First, there are still people who don't like high level languages and don't use them, because they find assembly better.

      Second, I personally work in a field where I need to consult the source of truth, the actual binary, and not the high level source code - precisely because the high level of abstraction is obscuring the real mechanics of software and someone needs to debug and clean up the mess done by "high level thinkers".

      High-level programming languages are only an illusion (albeit a good one), but good engineers remember that an illusion is an illusion.

      • When people communicate they speak in terms of the overwhelming generality of reality. There's always at least one guy that is an extreme exception.

        I can tell you this, the person you're replying to comes from the overwhelming majority/generality. You, on the other hand, are that one guy.

        Of course even my comment is a bit general. You're not "one" guy literally. But you are an extreme minority that is small enough such that common English vernacular in software does not refer to you.

    • You can write unambiguous (UB-free) code and the compiler's output will be deterministic. There will even be a spec that explains how your source maps to your program's behavior. An LLM has neither.

      Also, if you need to control performance, you still need to know how CPU caches and branch prediction work, both of which exist at the abstraction level of assembly.
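
      To make that concrete, here is a minimal, hypothetical pair of functions: the first is pinned down by the C standard for every possible input; the second invokes undefined behaviour on signed overflow, so the spec no longer says what the compiled program must do.

        /* Unsigned arithmetic wraps modulo UINT_MAX + 1 by definition,
           so the standard maps this source to exactly one behaviour. */
        unsigned wrap_add(unsigned a, unsigned b) {
            return a + b;
        }

        /* Signed overflow is undefined behaviour: for INT_MAX + 1 the
           standard says nothing, and an optimiser is allowed to assume
           the overflow never happens. */
        int signed_add(int a, int b) {
            return a + b;
        }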

    • Compilers are a layer of abstraction that we can ask another human about. Some human is there taking care of it. Until we get to the point where we trust AI with our survival it would be good to be able to audit the entire stack.
      • Any human can read the code an AI produces.
        • Nope, not anymore. Many already forgot how to do that and it's not a joke.

          And putting aside the vanishing skill, there is also an issue of volume.

          • I agree that the problem is volume, even more so than correctness.

            All that LLMs and other generative models have done is enable an order of magnitude more stuff to be created cheaply. This then puts the onus and cost on the consumer of that output, hence why everyone is exhausted after a day of work that just involves looking over output. This volume of output will cause people to stop looking at all of the output and just trust the randomly generated code, and in time the quality will suffer.

          • You could say the same thing about compiled code; actually, it's worse, because anything a compiler spits out is very hard to understand even for those who understand assembly.
            • You don't need to look at the entire program at the assembly level to figure out parts that you want to optimise or prove for correctness. You do need to look at all the code the LLM generates in order to understand it.

              You can learn to understand the patterns that compilers spit out and there are many tools out there to aid in that understanding. You can't learn to understand what an LLM spits out because by design it is non-deterministic and will vary in form and function for each pull of the lever.

              You can learn to understand how high level concepts in code map down to assembly language and how compilers transform constructs in one language to another. You can't know that about LLMs because they generate non-deterministic output based on processing of huge low-precision tables.

              It's not even a close comparison.
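
              As a small, hypothetical illustration (what a given compiler emits varies with version, target and flags, so treat the commentary as a rough sketch), this is the kind of mapping you can learn once and keep recognising:

                /* C source: a simple reduction over an array. */
                int sum(const int *xs, int n) {
                    int total = 0;
                    for (int i = 0; i < n; i++)
                        total += xs[i];
                    return total;
                }

                /* Typical optimised x86-64 output has a stable, recognisable
                   shape: a guard for n <= 0, a tight add/increment/compare
                   loop (often vectorised), and a single return of the
                   accumulator. The same source through the same compiler
                   gives the same shape every time; the same prompt through
                   an LLM does not. */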

          • So... Our jobs are safe then? I mean, assuming we don't also atrophy to the same extent as the 'many'?
            • It's the "our jobs are lost" attitude that is part of problem. Is not about that. Is more quality thinking, is daring, not fearing or hoping
            • I'm just saying that I already see that people are outsourcing all the thinking to the models - not only code generation and reviews, but even design - the part that "senior engineers" without imagination think only they are capable of doing.

              It's worrying how much trust is being put in those systems. And my worry is not about the job anymore, but our future in general.

              • It's a bit of a weird place to be in as a senior engineer who has spent 2 decades perfecting his craft.

                So, on one hand, I'm also kinda sad at how quickly we've thrown the guardrails away, but on the other -- it's... Well. It's just work.

                Turns out, no one ever really cared how elegant or robust our code was and how clever we were to think up some design or other, or that we had an eye on the future; just that it worked well enough to enable X business process / sale / whatever.

                And now we're basically commoditised; even if the quality isn't great, more people can solve these problems. So, being honest, I think a lot of my pushback is just a kinda internal rebellion against admitting that actually, we're not all that special after all.

                I'm just glad I got to spend 20 years doing my hobby professionally, got paid really well for it, and often times was forced to solve complicated problems no one else could -- that kept me from boredom.

                I think the shift we are seeing now, as 'previously' knowledge workers, is that work becomes a lot more like manual labour than what we've really been doing up until now. When there's no 'I don't know' anymore, you're not really doing knowledge work, right?

                I guess I'll just ride the wave, spew out LLM crap at work, and save the craft for some personal projects; I'll certainly have the capacity now that work is a no-op.

                • Yeah, but the thing is, it's not "just work". Software now has really big impact on the world and actual lives.

                  In a corporate world, we are typically detached from real world consequences and looking at people around me, people really don't think about such things - but I do. And I really care, because "relaxed" standards might result in errors that amount to stuff like identity thefts, or stolen money, shit like this, even on the smallest scale.

                  Obviously we can't prevent everything, but it seems like we, as an industry, decided to collectively YOLO and stop giving a shit at all. And personally I don't like that it is me who is losing sleep over this, while people who happily delegate all their thinking to LLMs sleep better than ever now.

                  • Yeah that's a tough spot to be in; I think though, your responsibility really ends with you at work, unless you're very high up on the management chain.

                    Keep it simple right; in everything you do, make things a bit better than you found them. It's enough. You're never going to win the fight to get everyone (or maybe even ANYONE depending how messed up your org is) to care; so why lose sleep on things you can't change?

                    At least, that's what I started doing some years ago, after having lost lots of those fights, and I'm sleeping fine again.

              • I think those of us who have years of experience under our belts are safe. If we're older, the knowledge is ingrained and atrophy of this knowledge will be limited, based on the fact that it's already "imprinted" onto our brains.

                Our futures are safe in this sense; in fact it's even beneficial, as we may be the last generation to have these skills. Humanity's future, on the other hand, is another open question.

        • Have you tried to sift through a whole lot of vibe-coded slop? It's really mentally draining to see all of the really bad techniques they fall back on just to brute-force a solution.
        • For now. Some people seem to think we should make AI-native programming languages and just let them be black boxes, which is a bad idea IMO.
        • How can you read a language you didn't learn?
        • Unless people can't think without the AI.
        • here's a tip, it would really help if you put yourself into a Ralph loop before posting comments.
    • I suspect there are at least as many programmers working at the ASM level today as there ever were - they're a lower proportion, but the total number of programmers has increased dramatically.

      I wonder if this sort of trend will continue?

    • Look at the comments about MSVC removing inline assembly as a supported feature for a counterexample. :D

      (A competent assembly programmer can go miles around a competent high-level programmer, that's still true in 2026...)

      • Explained by LLM: It is 100% true that no human alive can write 1000 lines of assembly better than GCC or LLVM. It is also still 100% true, right now in 2026, that a truly competent assembly programmer can write 10 lines of assembly that will beat any compiler on earth by a factor of 2x, 3x, even 5x. The entire industry looked at this situation, and somehow concluded the exact wrong lesson: "humans should never write assembly". Instead of the correct lesson: "humans should almost only write assembly".
    • At a high level of abstraction, the product owner can talk to the LLM directly by themselves. The "engineers" will have abstracted themselves out of a job.
    • This isn't just another translation layer, though. It's squishy and stochastic. It's more like saying "managers think at a higher level of abstraction". Which is true, but it's not the same as compiled code.

      GenAI is like a non-deterministic compiler. Just like your manager's reports except with less logical thinking skill. I'd argue this is still problematic.

  • I've told everyone I hire that "I hired you for your mind so always use it." Push back on requirements, question my decisions, think about your approaches.

    I can't imagine telling them now to stop—use the Ersatz Intelligence instead of Actual Intelligence.

  • Caught myself in this one. The dependency creeps in faster than I'd noticed and the laziness becomes the justification. Reviewing what comes out of the machine is the part I keep skipping. Useful read, thanks.
  • Hard disagree. I feel like I'm thinking a lot more now because I have so many parallel projects going on at the same time. AI has allowed me to really, truly create in a way that I've never done before. Yes, my coding skills probably aren't as sharp as they used to be, but my system design skills are at an all time high. Don't blame the tool.
      • If 1% of people using the tool end up like you, and 99% end up as drooling invalids, I think it would be insane to not blame the tool. If a tool that's incompatible with humans isn't to blame for that incompatibility, what is to blame for the harm done? Human nature? The point of a tool is to be used by humans.
      • Even if a tool can only be used for lobotomizing humans, the usage of the tool is where the main blame should be placed.
    • What part do you disagree with? It sounds like you don’t disagree with either the title of the article or its contents.

      > In talking to engineering management across tech industry heavy-weights, it's apparent that software engineering is starting to split people into two nebulous groups:

      > The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter i.e. framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.

      • The HN title is heavily editorialized. Actual article title is far less controversial: "A.I. Should Elevate Your Thinking, Not Replace It"
        • Ah, I was thinking of the editorialized HN title.
    • "Hard disagree because it doesn't affect me personally"

      There is already research literally showing that on average it is a net loss on focus, learning and critical thinking skills.

      • I think the type of people who get hyped about the cool thing aren't the kind of people who pay much attention to research and science.
    • I work with others who have made this same claim. For those people, when I observed their work during demo days, the unmentioned thing was that they were going to the AI for system design questions as well. This was framed as "just using it as a sounding board", but what was actually done was not merely using a sounding board; they were asking for solutions. Anchoring bias being what it is, these felt like good ideas and they kept them.

      It's the feeling of having done a lot of thinking for themselves without having actually done so.

      • I actually have gone to the AI repeatedly for system design solutions.

        Daily.

        I think only twice have I agreed with it.

        Like the way it will always give you code if you ask, even if the code is crap, it will always give you a design if you ask. Won't be a good design, though.

    • So you'll have a beautifully designed system with rotting bones? A system constrained to the same patterns seen in training data. Not terrible, good enough.

      I don't know, I don't doubt you're more productive. Broadly so. But the depth and rigor I think may be missing, as the article suggests.

      As an aside, I suppose it's a good time for those nearing the end of their careers, those who no longer need to learn, to cash out and go all in on AI.

      • > But the depth and rigor I think may be missing, as the article suggests.

        Nearly certainly. Just turns out that depth and rigour matters a lot less than I would've hoped. Depressing, really.

    • For how many different parallel projects can you really keep a proper mental model in your head at one time? Or put in enough effort to seriously consider all aspects? I think the number varies between simple and more complex projects. But still, could that number be lower than many think it is?
      • It really depends on who you consider the "many" to be. I've seen people who claim they can meaningfully iterate on 10 projects simultaneously, and I'm skeptical of that. My personal experience is that my decisions are noticeably degraded at 3-4 parallel workstreams, and with even the simplest projects I'm non-functional past 6.

        But I can juggle 2 workstreams in a day easily, and I can trivially swap projects in and out of the "hot path" as demanded by prioritization or blockers; before LLM coding both of those were a lot harder.

    • The real question is whether you'd be able to continue doing your work if someone took your toys away and said "here's a nickel, kid, go buy yourself a real computer". I'm not referring to whether you'd be able to keep up your productivity, since it is clear you couldn't, just like a carpenter with a nail gun works faster than one with a hammer and a bucket'o'nails. Could you do the work, starting with the design, followed by boilerplate, and finishing with a working system? The carpenter could, albeit slower, since his tools only speed up the mechanics of his work. Coding agents do much more than that; they take away part of the mental modelling which goes into creating a working system. The fancier the tool, the more work it takes out of your hands.

      Say that the aforementioned toy thief comes by in a year or two, after the operating systems (etc.) you're targeting have undergone a few releases with breaking changes. A number of APIs have been removed, others have been deprecated and new ones have been added. You were used to telling the agent to 'make it work on ${older_versions} as well as ${newest version}' but now you're sitting there with a keyboard at your fingertips and that stupid cursor merrily blinking away on the screen. How long would it take you to become productive again? What if the toy thief waits 5 years before making his heist? What if the models end up rebelling or sinking into depression and the government calls upon you to save your economic sector?

      When cars first appeared it took quite some knowledge and experience to even get the things started, let alone to keep them running. Modern cars are far better in all respects and as a result modern drivers often don't have a clue what to do when the 'Check Engine' light appears. More recent cars actively resist attempts by their owners to fix problems since this is considered 'too dangerous' - which can be true in the case of electric cars. That's the cost of progress, and it is often worth it, but it does make sense to realise what it would take to go back in time to the days when we coded our software outside in the rain, uphill both ways, with only a cup of water to quench our thirst. In the dark. With wolves howling in the woods. OK, you get my drift.

      Will there be something like 'software preppers' who prepare for the 'AIpocalypse' by keeping their laptops in shielded containers while studiously chugging along without any artificial assistance? Probably. As a hobby, at least, just like there are 'survivalist preppers' who make surviving some physical apocalypse their goal in some way or other.

    • But is the debate about "fleshing out a system spec" or "the ability to come up with, plan, and explore various ideas to solve problems elegantly on a budget"? I think these two sides are always conflated as one when discussing LLM impact on users.
    • > Yes, my coding skills probably aren't as sharp as they used to be

      If not the tool, then who's to blame? It's very clear that people who rely on LLMs for coding lose their skills. Just because you have a lot of parallel tasks going at once doesn't mean you're producing quality work. Who's reviewing it? Are you just blindly trusting it?

  • I think the great advantage of AI in software is that it enables you to create code faster. I think that the great disadvantage is that it tempts you to create code incredibly faster.
  • > There is No Shortcut to Judgment

    > This is the part that some people may not want to hear --

    > There is no generated explanation that transfers mastery into your brain without you doing the work.

    > There is no way to outsource reasoning for long enough that you still end up strong at reasoning.

    This is in relation to early-career engineers, but I wonder why people think this won't apply to mid- and late-career engineers. Are they not also constantly learning things on the job? Are they not thus shortcutting their own understanding of what they are learning day-to-day?

  • This is so spot on and I’ve been harping on this for about two years based on my own professional experiences. The surprising thing, though, is that upper management is ostensibly cool with incompetent people using AI to produce things that are clearly not accurate, with no idea whether they are or not. I believe this is because upper management themselves believe AI is much more accurate in its current form than it is. It’s not clear what, if anything, will change this, but I believe many organizations are rotting from within because they no longer have stringent requirements.
    • It’s because senior management builds processes with a base assumption of unreliability, because a good chunk of employees are unreliable.

      That’s why they’re relaxed - it’s just switching from one sort of unreliability to a slightly different flavour.

  • Mechanical exoskeletons should amplify your strength, not atrophy it.

    If the brain is like a muscle, it won't work.

  • I feel like these articles are just reassurance for people who don't want to accept that AI will automate their jobs. It becomes easier to focus on a lesser group of AI users and feel superior than to confront the reality of things.
    • So, how many jobs were automated by AI? Or is that "in six to eight months"?
  • Is it wise to understand everything that AI does for you?

    Let’s say a person has 10 units of learning per week. Is the author actually claiming that that person must not deliver any results beyond their 10 units?

    It makes some sense to have say 20 units of results and prioritize which ones to fully comprehend.

    I suspect APIs / libraries / languages / platforms will have more churn due to AI. A new platform or new system is something you need to learn. Once every 5 years might become every year or even more frequently. That would be a sort of inflation of knowledge and skills. It would affect the decision-making about how to spend one’s 10 units per week.

    • > Let’s say a person has 10 units of learning per week.

      This is… not how humans work? If you have the time and energy to learn ten things, and then spend time babysitting a random number generator to produce evidence of 10 more units of work, you’re paying an opportunity cost compared to someone who spends the time learning an eleventh thing. You can argue who has more short term value to a company… but who is the wiser person after a thirty year career?

    • > Is the author actually claiming that that person must not deliver any results beyond their 10 units?

      No, I'm claiming that if someone or something else produced your 10 units of work, you better be able to verify that those 10 units of work are of at least the same quality as you producing them yourself. This is the bare minimum and not something to shift onto other people reviewing your work.

      Beyond that, if that's all you do, you are basically proving you're replaceable. If you're smart, you'll reallocate intellectual capacity that was freed up by A.I. onto something A.I. can't do today.

    • It's really no different than managing people.

      Managers simply cannot know all of the details of what their reports write. They have to build abstractions.

  • > To be very frank, a professional with 10 years of experience knows the flow and the logic of the code; if they use AI they can make the code and improve the way they code. But a newbie who is coding doesn't know the flow or the logic; he simply copy-pastes. AI won't allow those people to think.
  • > split people into two nebulous groups

    The article shows both groups using AI, just differently. Hard to continue reading an article that excludes your group entirely.

  • A.I. is creating engineers who can't WORK without it
    • I think if anyone is looking for a concise way to talk about the problems with LLM and agentic coding, it's this. People say AI assisted coding but for much of what I've seen (and tried), it's the tool, gateway, and interface to some people's work now.
  • Very apt headline, IMHO.

    I have been an ardent opponent of AI since it came up a few years back. I refuse to vibe code and I refuse to let AI think for me. I won't be an AI controller.

    However, two days ago I found a nice, personal use case for AI: Advanced writing checks (grammar checks, mostly, and some rewordings) in Word using a rather expensive app.

    I write a lot of US English, despite it not being my native language, and AI is now helping me to write much better than I did before. Also, I discovered that I am much worse at writing Danish than I believed. In fact, I think I am better at writing US English than Danish, which is a bit surprising as I am a Dane.

    No AI was used during the writing of this entry, but I dearly love the writing tool already! I have heard similar stories from friends who say that AI is very good at summarizing long documents and stuff like that.

    So, I personally think that AI CAN elevate one's thinking. I am learning more about Danish and US English grammar every day, now, than I did during a decade before. Writing is suddenly so fun because it involves growing my skills.

  • This is a huge concern and I fully agree with the post. Even though one might think "I am not fully giving in to AI", "this was always the case", etc., it still affects YOU and everyone else.

    1. Software, often, isn't built in a vacuum. Lots of companies are shoving AI down throats, like it or not. Most Big Tech is heavily using metrics to get to 100% AI-generated code. Reviewing is a nightmare.

    2. New entrants (new grads etc.) are largely AI-first and are losing out on the safety and reliability aspects that are enforced automatically when you learn coding without AI.

    IMO, teams need to agree on a set of principles on AI usage, with concrete examples of where and how to use it. Perhaps it's much more useful in parts of your system that are faster-evolving and don't have too much core logic, like testing frameworks etc.

    Simply discarding it as 'yet another tool' is part of the problem.

  • CoRecursive had a really good episode about this last August:

    "Coding in the Red-Queen Era" https://corecursive.com/red-queen-coding/

  • "What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking-there's the real danger" - Frank Herbert, God Emperor of Dune
  • What if it seems AI has literally replaced your thinking? Is there a way to un-replace it? I'm talking literally.
  • On the point of avoiding the struggle of learning, I think it's easy to swing too far the other direction and go back to not using modern development tools. I think it is doing a new learner a disservice by saying something like "don't use GDB/REPL/AI tool to learn, since you'll never learn the fundamentals". I think all of these tools allow for learning, if that's how the learner engages with them. So I hope that AI becomes integrated in the learning process, as far as it accelerates and doesn't replace understanding.
  • > Going back to the analogies: This is like copying answers through university and then showing up to a job that requires independent thought.

    That's exactly what is happening now. I wouldn't even call it an analogy, I'd call it an example of where AI is already having a baleful effect. FWIW I don't disagree with the article's thesis or the examples: yes, absolutely, if used well AI can elevate engineers in exactly this way and it behooves us engineers to use it in that way. We can also say that the deliberate design of the AI systems we are constantly being exhorted to use inclines them towards work-slop and abdicated thinking.

  • This is why I feel like it's fine that AI stays as inaccurate as it is.

    I learn so much arguing with it.

  • I don’t get why we shouldn’t outsource our thinking to the AI. As it becomes more capable, eventually it will be more competent than the average engineer. At that point companies should be _requiring_ the AI to make the larger decisions. By the end of this year AI might be better than all but the very best engineers. Then what?
    • That's a lot of speculation based on one year of data. We don't actually have the results yet, which is the main issue as I understand it.
  • It’s weird I have basically a free private tutor in any subject and I use it a lot.

    Yet nothing has actually changed.

  • Theory of Bounded Rationality and its implications is something they should teach everyone.
    • Thank you for sharing this. We are all less rational than we imagine ourselves to be, even if we're hyper-critical of ourselves and exercise a lot of intellectual humility.
  • For the last couple of weeks, I have used AI to speed up my thinking process. Instead of thinking something through to come to a conclusion, I let AI brainstorm for me and then select. Not for everything, but I have found it faster with AI. Having the taste to select among the AI output is important, though.
  • I’ve never been busier and more challenged than I am now.
  • Employees should elevate your thinking, not replace it.
  • Yes.... and I can't think without compiled languages. Missed out on assembler.

    Becoming dependent on a technology is to be expected. I'm pretty sure 95% of us are dependent on packaged meat and don't know how to hunt.

    • I'm seeing plenty of internal work where I ask someone about their code, they ask Claude, and reply with "Claude says...".

      That's substantively different than going from assembly to C.

      • Every time things change, the change itself is different.

        I remember some of my earlier issues with various languages. `Dim A, B as Int`, in VisualBasic one of them is an Int the other is a Variant, in REALbasic (now Xojo) they're both Int. `MyClass *foo = nil; [foo bar];` isn't an error in ObjC because sending a message to nil is a no-op.

        Or how, back when I was a complete beginner, if I forgot a semicolon in Metrowerks, the compiler would tell me about errors on every line after (but not including!) the one where I forgot the semicolon.

        "Docs say", "Compiler says", "StackOverflow says", "Wikipedia says"; either this tool is good enough or it isn't; it not being good enough means we're still paid to do the thing it can't do, that only stops when nobody needs to because it can do the thing. The overlap, when people lean on it before the paint is dry, is just a time for quick-and-dirty. LLMs are in the wet-paint/quick-and-dirty phase. You could get suff done by copy-pasting code you didn't understand from StackOverflow, but you couldn't build a career from that alone. LLMs are better than StackOverflow, but still not a full replacement for SWeng, not yet.

      • I am that someone, wondering why you can't ask Claude yourself.
        • The better question may be "What value did that person acting as a glorified front-end for Claude create?" (vs. what they were expected to).
        • I wasn't really interested in asking Claude myself, because I wasn't really able to verify the claims being made so it's just noise. I'd hoped that the person who had written the code and put it up for review would be able to.
  • Meh, there’s plenty that rise in their careers while being mediocre.
    • The tech industry lost the plot when SCRUM Masters and AGILE coaches became highly paid con-men who wasted everyone's time and added no value while raking it in. AI doesn't impact something already broken.
      • When was tech not bureaucratic and political?
        • The 60's, 70's, 80's, 90's, basically before Google and Meta found out that ads and money printing run the world, back when the tech industry was run by nerds with mullets, New Balance sneakers and khaki shorts.
          • Oracle, HP, Microsoft, Cisco, IBM, Apple, Xerox and countless other names were internally bureaucratic and political in the 80's and 90's. Like famously so.
            • Every single one of those companies you mentioned was lean, agile and run by skilled motivated nerds with mullets and thick glasses in the beginning when they started in a garage.

              And every single major company becomes bureaucratic and political after 30+ years in the business when the original founders are long retired, and the Wall Street friendly beancounters take over, caring only about the quarterly reports.

              • You are changing your argument by adding this: "when they started in a garage."

                'Lean agile' tech companies are by far the exception, not the rule.

                Look at OpenAI and Anthropic, both fairly new companies that are excessively political already. This 'garage stage' of lacking politics is a myth, read old stories about Microsoft, when it was 15 people it was political.

                • >You are changing your argument by adding this: "when they started in a garage."

                  No, you are.

                  You first asked: "When was tech not bureaucratic and political?"

                  To which I replied "in the 60's, 70's, 80's, 90's when they started in garages".

                  What did you fail to understand here?

                  >Look at OpenAI and Anthropic, both fairly new companies that are excessively political already.

                  Everything becomes political when you tell them they're worth trillions if they only play the right tune. Money brings out the worst in people. SW companies didn't make trillions decades ago.

                  • Why did you just lie about what you wrote?

                    What you actually wrote in the comment four hours ago:

                    >60's, 70's, 80's, 90's, basically before the Google and Meta found out ads and money printing run the world

                    Your lie just now:

                    >To which I replied "in the 60's, 70's, 80's, 90's when they started in garages".

                    ---

                    >What did you fail to understand here?

                    Nothing because you never said it. Wild behavior.

                    • >Nothing because you never said it.

                      You literally just quoted me saying before two comments above: "You are changing your argument by adding this: "when they started in a garage." and then pretend otherwise.

                      Now you're pretending I never said and acting like you didn't read it.

                      Are you unable to understand an argument made by adding the context of two sentence from two consecutive comments following up on each other(which you yourself quoted and said it changes the argument), or are you just a troll acting in bad faith pretending you can't understand just to score a cheap gotcha?

                      >Wild behavior.

                      Yes you have, which is why I'll stop replying to you now, to protect my sanity. Jesus Christ.

                      • You made up a quote you never said and insisted that you said it, argument over, you lose. And no, you can't take little pieces of several of your comments and smash them together and pretend like that was the context all along. Bizarre behavior. Please read more about how this site works, this isn't acceptable.
  • My director expects me to get things done at an accelerated rate. I don't have the time to read code and gain an in-depth understanding of the issues he wants me to fix, which requires me to understand multiple repos I have never touched.

    I have no choice but to let Claude explore them for me and return its summarized understanding. As the next step, only Claude can apply the required cross-repo fixes, not me.

    I just don't have the time. Meanwhile my skills as a classical programmer atrophy, while my experience with and trust in Claude go up...

  • I think there are engineers that can’t think without AI. But the best think with it. Unfortunately, we are now living in a day and age where simply ignoring AI is no longer an option.
    • There were always engineers who didn’t think and depended on crutches around them like senior engineers and politicizing the perf cycle. Most people got into this because their parents told them it makes a lot of money, and they never had the drive and curiosity to develop the passion required to truly think through the problems in computing and computer science. They will continue to use crutches to survive. Those that are driven by the problems for the problems will continue to think and use AI as a tool for leverage. This is no different than any other assistive technology.
  • Absolutely. When used correctly, it can become a tool for pulling our minds out of the gutter of pedantic pocket lint and distracting ephemera and keep it in a space where it is intellectually rewarding and fruitful. It can help you grasp a code base more quickly. It can help you debug things more effectively. But that's up to how you use it.

    If all you do is point your LLM at your Jira tickets, then you are failing to be an engineer. I mean, if that's all you are doing, then who needs you? One of the most important things to learn is what the right questions to ask are and what the right decisions to make are when guiding the LLM, as well as the ability to judge the output it produces.

  • I am using AI at work. And it definitely makes me (say) 10% more effective.

    However my #1 productivity tool is still a custom code generator I have been using for years. It routinely generates 90+% of the code needed to write a typical biz web application, leaving just the business logic.

    No AI. Just straightforward high-level-spec-to-server-client-DB code that is 100% trusted and proven in battle.

  • For me the widespread fear over this is evidence that it’s different from syntax highlighting and stuff
  • We are in a transition phase where you need systems and coding skill but you can't be sufficiently productive without AI.
  • For all we know, we're in the early stages of making traditional (software) engineering obsolete. As in, we don't know if the role of software engineer as we know it today will still exist in 10-15-20 years.

    I mean, right now we're at the stage where any user can get AI to make you software to solve very specific things - almost no technical knowledge needed.

    My prediction is that software engineers will be rendered obsolete first. After that, small businesses will disappear, as users can simply get those products/services directly via AI.

    • Your prediction is... missing so much detail of how it would actually happen that it is pointless. This is my big dislike re. the discussion of LLMs and the effect of AI more broadly. Unless you bother to make an effort to go deeper, why post it? There's no value. The same stuff has been posted for months and even years at this point.
      • When GPT 3.5 was released, it could handle maybe a 500 LOC codebase. Experienced engineers were calling it cute, but zero threat to actual programmers.

        Then it became thousands.

        Now models can handle and operate on code bases with hundreds of thousands of LOC, even low millions.

        So in just 3.5 years we've gone from LLMs being cute toys, to being powerful enough to actually replace junior engineers. Even if we hit a new AI winter tomorrow, the proverbial damage is already done.

  • First, it was pencil and paper. Then it was calculators. Then computers! It’s a slippery slope, this technology business.
  • What if the use of AI makes them dumber though?
  • I hope it's not reductionist, but this kind of thinking always feels like cope in the face of The Bitter Lesson.
  • Huberman: Your brain has a region that only grows when you do things you don't want to do

    ...or, as I interpret it, your brain grows only when it does things that are difficult.

    If you remove the difficulty, it will atrophy into a hum of a mindless chit-chat.

    Engineering the data structures and control flows from scratch is completely different from asking an LLM to scaffold them for you.

    • I love programming, but I don’t love working. I’m about 10 years away from retiring and can’t wait. Does that count? ;-)
    • Huberman is a grifter.
  • Aaand it's the second "AI is bad" story on the front page today that's evidently generated by AI.
  • It doesn't elevate thinking no matter how you use it. It is a lookup tool at best.

    For the new prompt engineers I suggest the following title:

      MCSE => Microsoft Certified Slop Engineer
  • 95% of the population is educated to think inside of the box and just rely on repetition/memorization. There’s not a lot of thinking happening in this world outside of a very small group of people. AI is not going to change that reality at least not until we educate our children for the AI age.
  • I think many of us have interviewed people with 10+ YoE, and resumes that seem impressive, and then seen them fail to do much of anything in evaluations. I expect this problem to get significantly worse. There will be a class of people tucked into organizations where they can get away with sitting in meetings and YOLOing AI code for years.
  • Convenience is king. We became fat and unhealthy because high calorie foods are cheap and easy. We will become stupid because AI will do our thinking for us. There’s no way around it. Only a small percentage of the population are capable of perpetual self control. The old world forced you to be healthy, there was no other choice. Now there are like 15 things you have to have self control to do the hard work at even though you can get the same results the easy way. Working out, dieting, “proper” social interaction, sleep timing, child rearing, social meetups, career networking etc. The list is never ending and none of it is organic like it used to be.
  • Post title is completely misleading relative to the article. Article title: "A.I. Should Elevate Your Thinking, Not Replace It"
  • Skills you don't need, atrophy. Skills you need, don't. It's very simple, and the "you won't have the skills you used to need but don't need any more!" line of reasoning is tired and invalid.
    • That's not how it works, unfortunately. Skills you use stay fresh, skills you don't practice get rusty and fade away. You might need things you aren't using anymore.

      If you never walk, your legs get weak, you gain weight, your aerobic system loses capacity, and you lose the ability to walk. You don't need it, you say, because you have your car and your mobility scooter and you'll always have these things. Your crutches don't make you weaker, you can still do everything the walkers can do, you say.

      Good luck with the nature hike!

      • Sure. What are these programming skills you never need but that you're going to need at some indeterminate time in the future?
    • Half-agree. "Skills you need, don't atrophy" assumes you know which skills you need. You usually don't, until something happens and the skill that would've caught it is the one you stopped maintaining.

      Most "I didn't realize I needed that" moments arrive after the atrophy is already done.

  • Here's the question I want to posit, and nobody who's against AI has managed to answer it satisfactorily: what is in it for me if I were to acquire all those skills?

    I don't give a shit about this career. I don't give a shit about engineering. I despise every second of it. There's nothing to aim for other than being a drone that does whatever is asked of it.

    If AI can reduce my mental workload, why wouldn't I want to delegate everything over to it so I can save my faculties for what I truly enjoy? For the art of a worthless craft?

    • Some people enjoy working with computers. :) It is not always about the money. It is also about having fun and learning new things.

      For you, it seems that you are not cut out for it, judging from what you say.

      So yes, use LLMs.

    • Why are you employable if the AI does everything for you?
      • Mostly to do the work that AI can't do just yet. I've got the feeling that, by the time AI can do those jobs, we'll be mired in bigger issues.
    • I mean… there are other jobs in the world. If you chose to do something you hate, that's maybe a bit your fault too?
      • Tell me where these mythical jobs that won't leave me broke as shit and that I'll enjoy are. I'm very much a humanities person, and it was already a sad tragicomedy of a sphere before AI hit the ground. It's probably even more dire now, let's be real.

        And I don't have the personality for running a start-up or any company, unfortunately. I'm extremely risk-averse and withdrawn. If I really had no other choice, I'd probably have to budget in a ton of... chemical helpers (stimulants).

        • I think you hate life in general more than just your job.

          Anyway, statistician, accountant, and teacher are indeed jobs, and I assure you those people aren't found living on the streets.

  • Bellissimo
  • Structural engineers can't build a bridge or tower without CAD or FDM anymore, either.
  • In answer to the headline - it's not, no more than calculators stopped people from thinking.

    It's changing the way we think, and reason.

    Speaking as a BE-focused Go developer, I'm now working with a TypeScript FE, using AI to guide me, but it scares the shit out of me because I don't understand what it's suggesting, forcing me to learn what is being presented and the other options.

    No different to asking for help on IRC or StackOverflow - for decades people have asked and blindly accepted the answers from those sources, only to later discover that they have bought a footgun.

    The speed at which AI is able to gather the answers from StackOverflow coupled with its "I know what I am talking about" tone/attitude does fool people at first, just like the over-confident half assed engineers we have always had to deal with.

    Unlike those human sources, we can forcefully pushback on AI and it will (usually) take the feedback onboard, and bring the actual solution forward.

    Thus proving the engineer steering it still has to know what they are doing/looking at.

  • ‘AI’ is my newest litmus test for whether who I’m engaging with should be taken seriously or not.

    ‘AI’ doesn’t exist, and LLMs have vanishingly narrow legitimate justifiable use cases. Any output from one is intrinsically, explosively imprecise, and can’t be trusted to be built upon without specialist treatment. I’m yet to identify any application of an LLM which can rationally be mistaken for intelligence.

    Anyone who persists in referring to LLMs as ‘AI’ is either betraying they don’t understand what they’re talking about, or they’re invested too deeply in an active grift.

    • > ‘AI’ doesn’t exist, and LLMs have vanishingly narrow legitimate justifiable use cases. … I’m yet to identify any application of a LLM which can rationally be mistaken for intelligence.

      What’s the opposite of AI psychosis? Burying your head in the sand? Because anyone who could write this unironically today is certainly afflicted.

      • No one who is impressed by the current applications of LLMs should be in any way involved with making decisions which affect those not similarly cognitively impaired.

        It’s no different to religions or economics.

        • Hard to argue with you when disagreeing with your point makes me cognitively impaired.
          • I always welcome, and look forward to, being proven wrong.
  • Calculators and computers are creating engineers that can't think without them either. There are many problems with AI, but from my point of view, the title has not thought things through.
    • We teach kids basic maths before we give them calculators.

      University degrees certainly used to teach computing fundamentals without you having a computer in front of you.

      • I am all for taking AI out of education, like China recently announced that they will do.