• Whenever I get worried about this I comb through our ticket tracker and see that ~0% of them can be implemented by AI as it exists today. Once somebody cracks the memory problem and ships an agent that progressively understands the business and the codebase, then I'll start worrying. But context limitation is fundamental to the technology in its current form and the value of SWEs is to turn the bigger picture into a functioning product.
    • While true, my personal fear is that the higher-ups will overlook this fact and just assume that AI can do everything because of some cherry-picked simple examples, leading to one of those situations where a bunch of people get fired for no reason and then re-hired again after some time.
      • > leading to one of those situations where a bunch of people get fired for no reason and then re-hired again after some time.

        More likely they get fired for no reason, never rehired, and the people left get burned out trying to hold it all together.

    • A lot of this can be provided or built up by better documentation in the codebase, or functional requirements that can also be created, reviewed, and then used for additional context. In our current codebase it's definitely an issue to get an AI "onboarded", but I've seen a lot less hand-holding needed in projects where you have the AI building from the beginning and leaving notes for itself to read later.
      • Curious to hear if you've seen this work with 100k+ LoC codebases (i.e. what you could expect at a job). I've had some good experiences with high autonomy agents in smaller codebases and simpler systems but the coherency starts to fizzle out when the system gets complicated enough that thinking it through is the hard part as opposed to hammering out the code.
      • We have this in some of our projects too but I always wonder how long it's going to take until it just fails. Nobody reads all those memory files for accuracy. And knowing what kind of BS the AI spews regularly in day to day use I bet this simply doesn't scale.
    • It's not binary. Jobs will be lost because management will expect the fewer developers to accomplish more by leveraging AI.
      • Big tech might be ahead of the rest of the economy in this experiment. Microsoft grew headcount by ~3% from June 2022 to June 2025 while revenue grew by >40%. This is admittedly weak anecdata, but my subjective experience is that their products seem to be crumbling (GitHub problems around the Azure migration, for instance), and worse than they even were before. We'll see how they handle hiring over the next few years and whether that reveals anything.
    • Can you give an example to help us understand?

      I look at my ticket tracker and I see that basically 100% of it can be done by AI. Some of it with assistance, because the business logic is more complex or less well factored than it should be, but AI is perfectly capable of doing most of the work given a well-defined prompt.

      • Here's an example ticket that I'll probably work on next week:

            Live stream validation results as they come in
        
        The body doesn't give much other than the high-level motivation from the person who filed the ticket. In order to implement this, you need to have a lot of context, some of which can be discovered by grepping through the code base and some of which can't:

        - What is the validation system and how does it work today?

        - What sort of UX do we want? What are the specific deficiencies in the current UX that we're trying to fix?

        - What prior art exists on the backend and frontend, and how much of that can/should be reused?

        - Are there any scaling or load considerations that need to be accounted for?

        I'll probably implement this as 2-3 PRs in a chain touching different parts of the codebase. GPT via Codex will write 80% of the code, and I'll cover the last 20% of polish. Throughout the process I'll prompt it in the right direction when it runs up against questions it can't answer, and check its assumptions about the right way to push this out. I'll make sure that the tests cover what we need them to and that the resultant UX feels good. I'll own the responsibility for covering load considerations and be on the line if anything falls over.

        Does it look like software engineering from 3 years ago? Absolutely not. But it's software engineering all the same even if I'm not writing most of the code anymore.

        • This right here is my view on the future as well. Will the AI write the entire feature in one go? No. Will the AI be involved in writing a large proportion of the code that will be carefully studied and adjusted by a human before being used? Absolutely yes.

          This cyborg process is exactly how we're using AI in our organisation as well. The human in the loop understands the full context of what the feature is and what we're trying to achieve.

        • But planning like this is absolutely something AI can do. In fact, this is exactly the kind of thing we start with on our team when it comes to using AI agents. We take a ticket with just a simple title that somebody threw in there, and we ask the AI to spin up a bunch of research agents to understand, plan, and ask itself those questions.

          Funnily enough, all the questions that you posed are things the agent asks itself right away, and then it goes and tries to understand and validate an answer, sometimes with input from the user. But I think this planning mechanism is really critical to being able to have an AI generate an understanding, and then have it be validated by a human before beginning implementation.

          And by planning I don't necessarily mean plan mode in your agent harness of choice. We use a custom /plan skill in Claude Code that orchestrates all of this using multiple agents, validation loops, and specific prompts to weed out ambiguities by asking clarifying questions using the ask user question tool.

          This results in taking really fuzzy requirements and making them clear, and we automate all of this through Linear, but you could use your ticket tracker of choice.
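
          To give a flavor of the shape of that loop (a stripped-down sketch, not the actual skill; ask_llm, ask_user, and extract_questions are hypothetical stand-ins for the model call, the ask-user-question tool, and a parsing helper):

              # Hypothetical stand-ins: swap in real model calls and tool calls.
              def ask_llm(prompt: str) -> str: ...
              def ask_user(question: str) -> str: ...
              def extract_questions(text: str) -> list[str]: ...

              def plan_ticket(ticket_title: str, ticket_body: str) -> str:
                  """Turn a fuzzy ticket into a reviewed plan via research plus clarifying questions."""
                  # 1. Research pass: read the repo/docs and surface open questions.
                  research = ask_llm(
                      f"Ticket: {ticket_title}\n{ticket_body}\n"
                      "Investigate the codebase and list unresolved questions "
                      "(current behavior, desired UX, prior art, scale concerns)."
                  )

                  # 2. Clarification loop: push genuinely ambiguous questions back to a human.
                  answers = {q: ask_user(q) for q in extract_questions(research)}

                  # 3. Draft, critique, and revise the plan before a human reviews it.
                  plan = ask_llm(f"Research notes:\n{research}\nAnswers:\n{answers}\n"
                                 "Write a phased implementation plan (2-3 PRs).")
                  critique = ask_llm(f"Find gaps or contradictions in this plan:\n{plan}")
                  return ask_llm(f"Revise the plan given this critique:\n{critique}\n\n{plan}")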

        • I mean, what is the validation system? Either it exists in code, and thus can be discovered if you point the AI at the repo, or... what, it doesn't exist?

          For the UX, have it explore your existing repos and copy prior art from there and industry standards to come up with something workable.

          Web-scale issues can be inferred from the rest of the codebase. If your Terraform repo has one RDS server versus a multi-region fleet of them, then the AI, just as well as a human, can figure out whether it needs Google Spanner-level engineering or not (probably not).

          Bigger picture though, what's the process when a human logs an under-specified ticket and someone else picks it up and has no clue what to do with it? They're gonna go ask the person who logged the bug for their thoughts and some details beyond "hurr durr something something validation". If we're at the point where AI is able to make a public blog post shaming the open source developer for not accepting a patch, throwing questions back to you in JIRA about the details of the streaming validation system is well within its capabilities, given the right set of tools.

          • Honestly curious, have you seen agents succeed at this sort of long-trajectory wide breadth task, or is it theoretical? Because I haven't seen them come close (and not for lack of trying)
            • Yeah I absolutely see it every day. I think it’s useful to separate the research/planning phase from the building/validation/review phase.

              Ticket trackers are perfect for this. Just start with asking AI to take this unclear, ambiguous ticket and come up with a real plan for how to accomplish it. Review the plan, update your ticket system with the plan, have coworkers review it if you want.

              Then when ready, kick off a session for that first phase, first PR, or the whole thing if you want.

      • Then why isn't it? Just offload it to the clankers and go enjoy a margarita at the beach or something.
        • There are plenty of people who are enjoying margaritas by the beach while you, the laborer, are working for them.
          • Preach. That's always been the case though, AI just makes it slightly worse.
      • Why do you have a backlog then? If a current AI can do 100% of it then just run it over the weekend and close everything
        • As always, the limit is human bandwidth. But that's basically what AI-forward companies are doing now. I would be curious which tasks OP commenter has that couldn't be done by an agent (assuming they're a SWE)
          • This sounds bogus to me: if AI really could close 100% of your backlog with just a couple more humans in the loop, you’d hire a bunch of temps/contractors to do that, then declare the product done and lay off everybody. How come that isn’t happening?
            • Because there's an unlimited amount of work to do. This is the same reason you are not fired once you complete a feature :-) The point of hiring a FTE is to continue to create work that provides business value. For your analogy, FTEs often do that by hiring temps, and you can think of the agent as the new temp in this case - the human drives an infinite number of them.
      • I think the "well defined prompt" is precisely what the person you responded to is alluring to. They are saying they don't get worried because AI doesn't get the job done without someone behind it that knows exactly what to prompt.
      • >>I look at my ticket tracker and I see basically 100% of it that can be done by AI.

        That's a sign that you have spurious problems under those tickets or you have a PM problem.

        Also, a job is not a task - if your company has jobs that consist of a single task, then those jobs would definitely be gone.

    • We're all slowly but surely lowering our standards as AI bombards us with low-quality slop. AI doesn't need to get better, we all just need to keep collectively lowering our expectations until they finally meet what AI can currently do, and then pink-slips away.
    • Apparently you haven't seen ChatGPT enterprise and codex. I have bad news for you ...
      • Codex with their flagship model (currently GPT-5.3-Codex) is my daily driver. I still end up doing a lot of steering!
  • Labor substitution is extremely difficult and almost everybody hand waves it away.

    Take even the most unskilled labor that people can think of, such as flipping a burger at a restaurant like McDonald's. In reality that job is multiple different roles mixed into one that are constantly changing. Multiple companies have experimented with machines and robots to perform this task, all with very limited success and none with any proper economics.

    Let's be charitable and assume that this type of fast food worker gets paid $50,000 a year. For that job to be displaced it needs to be performed by a robot that can be acquired for a reasonable capital expenditure such as $200,000 and requires no maintenance, upkeep, or subscription fees.

    This is a complete non-reality in the restaurant industry. Every piece of equipment they have costs them a significant amount up front and in ongoing maintenance, even if it's the most basic equipment such as a grill or a fryer. The reality is that they pay service technicians and professionals a lot of money to keep that equipment barely working.
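
    To make the break-even explicit, rough payback math with the hypothetical numbers above; the 15% yearly maintenance figure is an assumption for illustration, not an industry number:

        # Rough payback math using the thread's hypothetical figures.
        annual_wage = 50_000     # assumed fully-loaded pay for the worker
        robot_capex = 200_000    # assumed acquisition cost of the robot
        maintenance_rate = 0.15  # assumed yearly upkeep as a fraction of capex

        payback_ignoring_upkeep = robot_capex / annual_wage                                 # 4.0 years
        payback_with_upkeep = robot_capex / (annual_wage - maintenance_rate * robot_capex)  # 10.0 years
        print(payback_ignoring_upkeep, payback_with_upkeep)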

    • I lost my job as a software developer some time ago.

      Flipping burgers is WAY more demanding than I ever imagined. That's the danger of AI:

      It takes jobs faster than it creates new ones, PLUS for some fields (like software development) downshifting to just about anything else is brutal and sometimes simply not doable.

      Forget becoming a manager at McDonald's or even being good at flipping burgers at the age of 40: you are competing with 20-year-olds who do sports and have amazing coordination, etc.

      • I have worked in the restaurant industry within the last 5 years and I'm probably older than you.
        • Ugh.. sorry to hear :( I am myself unemployed right now. It's really hard to land a job in tech.. Luckily, I don't need to flip burgers for now...
          • Who's gonna pay you to flip burgers with no experience doing it, and with everyone else needing a job as well?
            • There is a huge demand for low-skill labor in other industries. Stuff like plumbing, HVAC, and a ton of other traditionally unsexy jobs where towns can barely keep enough people around to do the work, even at higher-than-normal rates.
    • >the most unskilled labor

      People are worried about white-collar not blue-collar jobs being replaced. Robotics is obviously a whole different field from AI.

      • > Robotics is obviously a whole different field from AI

        I agree, but people are conflating the two. We have seen a lot of advancements in robotics, but as of now that only makes the economics worse. We're not seeing the complexity of robots going down, and we're seeing R&D costs going up, etc.

        If it didn't make sense a few years ago to buy a crappy robot that can barely do the task because your business will never make money doing it, it probably doesn't make sense this year to buy a robot that still can't accomplish the tasks and is more expensive.

      • Yeah, although in "Something big is happening" Shumer did say at the end: "Eventually, robots will handle physical work too. They're not quite there yet. But "not quite there yet" in AI terms has a way of becoming "here" faster than anyone expects."

        Being the hype-man that he is I assume he meant humanoid robots - I think he's being silly here, and the sentence made me roll my eyes.

    • Jobs that require physical effort will be fine for the reasons you state

      Any job that is predominantly done on a computer though is at risk IMO. AI might not completely take over everything, but I think we'll see way fewer humans managing/orchestrating larger and larger fleets of agents.

      Instead of say 20 people doing some function, you'll have 3 or 4 prompting away to manage the agents to get the same amount of work done as 20 people did before.

      So the people flipping the burgers and serving the customers will be safe, but the accountants and marketing folks won't be.

      • > So the people flipping the burgers and serving the customers will be safe, but the accountants and marketing folks won't be.

        And that's probably something most people are okay with. Work that can be automated should be and humans should be spending their time on novel things instead of labor if possible.

    • Can you walk me through this argument for a customer service agent? Jobs where the nuance and variety aren't there and that don't involve physical interaction are completely different from flipping burgers.
      • A customer service agent that can be automated should be, but it's not working right now. Most support systems are designed to offload as much work as possible to the automated funnel, which almost always has gaps, loops, etc. The result is customers who want to pay for something or use something that get "stuck" being unable to throw money at a company. Right now the cost of fraud is much greater than the cost of these uncaptured sales or lost customers.

        Eventually that will change and the role of a customer service agent will be redefined.

    • Funny, I go to South Korea and the fast food burger joints literally operate exactly as you say they couldn't. I've had the best burger in my life from a McDonalds in South Korea operated practically by robots.

      It's a non-reality in America's extremely piss-poor restaurant industry. We have a competency crisis (the big key here) and a worker shortage that SK doesn't, and they have far higher trust in their society.

      • > McDonald’s global CEO has famously stated that while they invest in "advanced kitchen equipment," full robotic kitchens aren't a broad reality yet because "the economics don't pencil out" for their massive scale.

        > While a highly automated McDonald’s in South Korea (or the experimental "small format" store in Texas) might look empty, the total headcount remains surprisingly similar to a standard restaurant

    • The burger cook job has already been displaced and continues to be. Pre-1940s those burger restaurants relied on skilled cooks that got their meat from a butcher and cut fresh lettuce every day. Post-1940s the cooking process has increasingly become assembly-lined and cooks have been replaced by unskilled labor. Much of the cooking process _is_ now done by robots in factories at a massive scale and the on-premise employees do little more than heat it up. In the past 10 years, automation has further increased and the cashiers have largely been replaced by self-order terminals, so that employees no longer even need to speak rudimentary English. In conclusion, both the required skill level and the amount of labor needed in restaurants have been reduced drastically by automation, and in fact many higher-skilled trade jobs have been hit even harder: cabinetmakers, coachbuilders and such have been almost eradicated by mass production.

      It will happen to you.

  • You don't need AI to replace whole jobs 1:1 to have massive displacement.

    If AI can do 80% of your tasks but fails miserably on the remaining 20%, that doesn't mean your job is safe. It means that 80% of the people in your department can be fired and the remaining 20% handle the parts the AI can't do yet.

    • That's exactly the point of the essay though. The way that you're implicitly modeling labor and collaboration is linear and parallelizable, but reality is messier than that:

      > The most important thing to know about labor substitution...is this: labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process... AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process.

    • Also, you don’t need AI to replace your job, you need someone higher up in leadership who thinks AI could replace your job.

      It might all wash out eventually, but eventually could be a long time with respect to anybody’s personal finances.

      • Right, it doesn't help pay the bills to be right in the long run if you are discarded in the present.

        There exists some fact about the true value of AI, and then there is the capitalist reaction to new things. I'm more wary of a lemming effect by leaders than I am of AI itself.

        Which is pretty much true of everything I guess. It's the short sighted and greedy humans that screw us over, not the tech itself.

    • The problem is, you won’t necessarily know which 20% it did wrong until it’s too late. They will happily solve advanced math problems and tell you to put glue on your pizza with the same level of confidence.
    • In reality that would probably mean that something like 60% of the developer positions would be eliminated (and, frankly, those 60% are rarely very good developers in a large company).

      The remaining "surplus" 20% roles retained will then be devoted to developing features and implementing fixes using AI where those features and fixes would previously not have been high enough priority to implement or fix.

      When the price of implementing a feature drops, it becomes economically viable (and perhaps competitively essential) to do so -- but in this scenario, AI couldn't do _all_ the work to implement such features so that's why 40% rather than 20% of the developer roles would be retained.

      The 40% of developer roles that remain will, in theory, be more efficient also because they won't be spending as much time babysitting the "lesser" developers in the 60% of the roles that were eliminated. As well, "N" in the Mythical Man Month is reduced leading to increased efficiency.

      (No, I have no idea what the actual percentages would be overall, let alone in a particular environment - for example, requirements for Spotify are quite different than for Airbus/Boeing avionics software.)

    • We are already in a low-hire, low-fire job market where, while there aren't massive layoffs to spike unemployment, there also aren't many vacancies.
    • What happens if you lay off 80% of your department while your competitors don't? If AI multiplies each developer's capabilities, there's a good chance you'll be outcompeted sooner or later.
  • (In the semiconductor industry) We experienced brutal layoffs, arguably due to over-investment into AI products that produce no revenue. So we've had brutal job loss due to AI, just not in the way people expected.

    Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate since they're writing themselves now, so the limitations of human ability are no longer a bottleneck.

    • Maybe I'm being naive here, but for AI (heck, for any good algorithm) to work well, you need at least somewhat clearly defined objectives. I assume it's much more straightforward in semi, but in many industries, once you get into the details, all kinds of incentives start to misalign and I doubt AI could understand all the nuances.

      E.g. once I was tasked with building a new matching algorithm for a trading platform, and upon fully understanding the specs I realized it could be interpreted as a mixed-integer programming problem; the idea got shot down right away because the PM didn't understand it. There are all kinds of limiting factors once you get into the details.

      • I think those conversations occur due to changes in the timeline of deliverables or the certainty of the result; would that not be an implementation detail?
          • Well, like I said, there are hidden incentives behind the scenes; in my case, the hidden incentive is that the requester/client is one of the company's subpar brokers, and the PM probably decided to just offer an average level of commitment, not going above and beyond. Hence the plan was to do exactly what the broker wanted, even though that was messy and inferior. You can't just write down that kind of motivation on paper anywhere.

          --- I said it because I did the analysis and realized that if I implemented the original version, which is basically a crazy way to iteratively solve the MIP problem, it would be much harder to reason about internally and much harder to code correctly. But obviously it keeps the broker happy (the developer is doing exactly what I said).
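
          To give a flavor of why the MIP framing is easier to reason about: a toy matching problem fits in a few lines of PuLP. The orders and the objective (maximize matched quantity) here are made up for illustration and are nothing like the platform's actual spec:

              from pulp import LpProblem, LpMaximize, LpVariable, lpSum

              # Hypothetical toy data: buy and sell orders with price limits and quantities.
              buys  = [{"px": 101, "qty": 10}, {"px": 100, "qty": 5}]
              sells = [{"px": 99,  "qty": 8},  {"px": 100, "qty": 6}]

              prob = LpProblem("toy_matching", LpMaximize)

              # x[i][j] = integer quantity of buy i matched against sell j.
              x = [[LpVariable(f"x_{i}_{j}", lowBound=0, cat="Integer")
                    for j in range(len(sells))] for i in range(len(buys))]

              # Objective: maximize total matched quantity.
              prob += lpSum(x[i][j] for i in range(len(buys)) for j in range(len(sells)))

              # Can't match more than either side's quantity...
              for i, b in enumerate(buys):
                  prob += lpSum(x[i]) <= b["qty"]
              for j, s in enumerate(sells):
                  prob += lpSum(x[i][j] for i in range(len(buys))) <= s["qty"]

              # ...and only at compatible prices (buy limit >= sell limit).
              for i, b in enumerate(buys):
                  for j, s in enumerate(sells):
                      if b["px"] < s["px"]:
                          prob += x[i][j] == 0

              prob.solve()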

  • I was with the author on everything except one point: increasing automation will not leave us with such abundance that we never have to work again. We have heard that lie for over a century. The steam engine didn't do it, electricity didn't do it, computers didn't do it, the Internet didn't do it, and AI won't either. The truth is that as input costs drop, sales prices drop and demand increases - just like the paradox they referred to. However, it also tends to come with a major shift in wealth, since in the short term the owners of the machines are producing more with less. As it becomes more commonplace and prices change they lose much of that advantage, but the workers never get that.
  • Bottlenecks. Yes. Company structures these days are not compatible with efficient use of these new AI models.

    Software engineers work on Jira tickets, created by product managers and several layers of middle managers.

    But the power of recent models is not in working on cogs, their true power is in working on the entire mechanism.

    When talking about a piece of software that a company produces, I'll use the analogy of a puzzle.

    A human hierarchy (read: company) works on designing the big puzzle at the top and delegating the individual pieces to human engineers. This process goes back and forth between levels in the hierarchy until the whole puzzle slowly emerges. Until recently, AI could only help on improving the pieces of the puzzle.

    Latest models got really good at working on the entire puzzle - big picture and pieces.

    This makes the human hierarchy a bottleneck, and ultimately obsolete.

    The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.

    Of course, it's not just about the software, but about streams of information - customer support, bug tickets, testing, changing customer requirements... all of these can be handled by AI even today. And it will only get better.

    This means different things depending on which angle you look at it - yes, it will mean companies will become obsolete, but also that each employee can become a company.

  • Unfortunately, one of the struggles in old high tech (that's the only thing I know, are you also experiencing this?) is that the C-level people don't look at AI and say "LLMs can make an individual 10x more productive, therefore (and this is the part they miss) we can make our tool 10x better." They think: therefore we can lay off 9 people.
    • There aren't 10x revenue gains in most businesses if their workers become 10x more productive. Some markets grow very slowly and/or have capped growth.

      Therefore, the best way to increase profit is to lower cost.

  • My view is that we spend a lot of time thinking about what AI can't do, when the wider problem is the short-to-medium-term redirection of capital to tech rather than labour.

    AI might not replace current work, but it's already replacing future hypothetical work. Whether it can actually do the job is beside the point in the short term. The way business models work, if there's an option to reduce your biggest cost (labour), you'd very much give it a go first. We might see a resurgence of labour if it turns out to be all hype, but in the short to medium term there'll be a lot of disruption.

    I think we're already seeing that in employment data in the US, as new hiring and job creation slows. A lot of that will for sure be the current economic environment, but I suspect (more so in tech-focused industries) that it will also be due to tech capex in place of headcount growth.

  • You are not worried for one of two reasons:

    1. You are not affected somehow (you've got savings, connections, you're not living paycheck to paycheck, and you have food on the table).

    2. You prefer not to pursue trouble in matters of this complexity.

    Time will tell; it's showing already.

    • Agree. I feel like most of the people sounding the alarm have been in the software-focused job hunting market for 6+ months.

      Those who downplay it are either business owners themselves or have been employed for 2+ years.

      I think a lot of software engineers who _haven't_ looked for jobs in the past few years don't quite realize what the current market feels like.

      • Alternatively: this is an America problem. I'm outside of America and I've been fielding more interviews than ever in the past 3 months. YMMV but the leading indicator of slowed down hiring can come from so many things. Including companies just waiting to see how much LLMs affect SWE positions.
        • Alternatively, it's a loud minority.

          As an American I found a new job last year (Staff SW), and it was falling off a log easy, for a 26% pay bump.

    • Even people in category #1 should be concerned. Even if their income is not directly affected, the potential for disruption is clearly brewing: mass unemployment, social and civil unrest.

      I know smart and capable people that have been unemployed for 6+ months now, and a few much longer. Some have been through multiple layoffs.

      I am presently employed, but have looked for a job. The market is the worst I've seen in my almost 30 year career. I feel deeply for anyone who needs a new job right now. It is really bad out there.

    • For 1, unless you already have a self-sustaining underground bunker or island, you will be affected, no matter how much savings and total compensation you have. If you went out to get groceries in the last week, it will affect you.
  • The article frames the premise that "everything will be fine" around people with "regular jobs", which I assume means non-knowledge work, but most of the public concern is about cognitive tasks being automated.

    It also argues that models have existed for years and we're yet to see significant job loss. That's true, but AI is only now crossing the threshold of being both capable and reliable enough to automate common tasks.

    It's better to prepare for the disruption than the sink or swim approach we're taking now in hopes that things will sort themselves out.

    • There is no “preparing for the disruption” at an individual level, aside from maybe trying to 100x a polymarket bet to boost your savings.
  • Ordinary people aren't even ok now.

    Lest we forget, software engineers aren't exactly ordinary people: they make quite a bit above the median wage.

    AI taking our jobs is scary because it will turn us into "ordinary people". And ordinary people are not ok. They're barely surviving.

  • > Bottlenecks rule everything around me

    The self-setup here is too obvious.

    This is exactly why man + machine can be much worse than just machine. A strong argument needs to address what we can do as an extremely slow operating, slow learning, and slow adapting species, that machines that improve in ability and efficiency monthly and annually will find they cannot do well or without.

    It is clear that we are going through a disruptive change, but COVID is not comparable. Job loss is likely to have statistics more comparable to the Black Plague. And sensible people are concerned it could get much worse.

    I don’t have the answers, but acknowledging and facing the uncertainty head on won’t make things worse.

    • I believe the Black Plague actually caused a massive labor shortage and wages increased. When a huge number of people die and you still need people to build bridges and be soldiers and finish building the damn cathedral that's been under construction for the last 400 years, then that is what will happen.

      Here's an article:

      https://history.wustl.edu/news/how-black-death-made-life-bet...

      • I meant the jobs die. So I am not sure what would stand in for "labor shortage" in a situation of sustained net job losses. Perhaps a growth opportunity for mannequins to visually fill the offices/shops of the fired, and maintain appearances?

        But yes, if lots of people deathed by AI, the remaining humans might have more job security! Could that be called a "soft landing"?

    • The black plague's capital-concentration aftermath supposedly fueled the renaissance and the city-state ascensions, and ultimately the great land discoveries of the 14th and 15th centuries.

      Not sure if there's an analogy to make somewhere though

    • > Job loss is likely to have statistics more comparable to the Black Plague.

      Maybe this is overly optimistic, but if AI starts to have negative impacts on average people comparable to the plague, it seems like there's a lot more that people can do. In medieval Europe, nobody knew what was causing the plague and nobody knew how to stop it.

      On the other hand, if AI quickly replaces half of all jobs, it will be very obvious what and who caused the job loss and associated decrease in living standards. Everybody will have someone they care about affected. AI job loss would quickly eclipse all other political concerns. And at the end of the day, AI can be unplugged (barring robot armies or Elon's space-based data centers I suppose).

      • It is very obvious what and who caused the low living standards in North Korea, and yet here we are decades later with no end in sight.
  • i am somewhat worried in the short term about ai job displacement for a subsection of the population

    for me the 2 main factors are:

    1. whether your company's priority is growing or saving

    - growing companies especially in steep competition fight for talent and ai productivity results in more hiring to outcompete

    - saving companies are happy to cut jobs to save on margin due to their monopoly or pressure from investors

    2. how 'sequence of tasks-like' your job is

    - SOTA models can easily automate long running sequences of tasks with minimal oversight

    - the more your job resembles this the more in-danger you are (customer service diffusion is just starting, but i predict this will be one of the first to be heavily disrupted)

    - i'm less worried about jobs where your job is a 'role' that comes with accountability and requires you to think big picture on what tasks to do in the first place

  • Maybe you should be a little worried. A healthy fear never killed anyone.
    • I mean - anxiety definitely kills people, right?
      • Is it "healthy fear" if it turns out to be a fatal dose?
    • "For quality of life, it is better to err on the side of being an optimist and wrong, rather than a pessimist and right." -Elon Musk
      • Profound quotes are only profound when said by someone who's widely respected.
      • Is that true? I’m not so sure. In the 1950s I could have been optimistic that asbestos won’t give people cancer.

        “Some of you may die, but that’s a risk I’m willing to make” -also Elon Mush probably

      • Optimism is a luxury for those who won't be the ones paying for the mistake.
        • I'm optimistic that my favorite team will play well this season.

          I ain't paying for shit.

  • The take that I am increasingly believing is that Software Engineers should broadly be worried, because while there will always be demand for people who can create software products, whatever the tools may be, the skills necessary to do it well are changing rapidly. Most Software Engineers are going to wake up one day and realize their skills aren't just irrelevant, but actively detrimental, to delivering value out of software.

    There will also be far fewer positions demanding these skills. Easy access to generating code has moved the bottleneck in companies to positions & skills that are substantially harder to hire for (basically: Good Judgement); so while adding Agentic Sorcerers would increase a team's code output, it might be the wrong code. Corporate profit will keep scaling with slower-growing team sizes as companies navigate the correct next thing to build.

    • Is AI filling in for all those COBOL programmers they needed yet?
  • I don't worry about it because worrying about it just seems like a waste of time and an unproductive, negative way to think about things. Instead I spend my time and thought not in worry but in adapting to the changing landscape.
  • No it's not a February 2020 moment for sure. In February 2020, most people had heard of COVID and a few scattered outbreaks happened, but people generally viewed the topic as more of a curiosity (like major world news but not necessarily something that will deeply impact them). This is more like start of March 2020 for general awareness.
  • I read that essay on Twitter the other day and thought that it was a mildly interesting expression of one end of the "AI is coming for our jobs" thing but a little slop-adjacent and not worth sharing further.

    And it's now at 80 million views! https://x.com/mattshumer_/status/2021256989876109403

    It appears to have really caught the zeitgeist.

    • I just skimmed this and the so-called zeitgeist here is fear. People are scared, it's a material concern, and he effectively stoked it.

      I work on this technology for my job, and while I'm very bullish, pieces like that are, as you said, slop-ish, and, as I'll add, breathless, because there are so many practical challenges standing between what is being said there and where we are now.

      Capability is not evenly distributed, and it's getting people into loopy ideas of just how close we are to certain milestones. Not that it's wrong to think about those potential milestones, but I'm wary of timelines.

      • Are you ever concerned about the consequences of what you are making? No one really knows how this will play out and the odds of this leading to disaster are significant.

        I just don't understand people working on improving ai. It just isn't worth the risk.

        • Of course, I think about this at least once a week maybe more often. I think that the technology overall will be a great net benefit to humanity or I wouldn't touch it.
          • Genuine question: how?

            I’m younger than most on this site. I see the next decades of my life being defined by a multi-generational dark age via a collapse in literacy (“you use a calculator right?”), median prosperity (the only truly functional distribution system we have figured out is labor), and loss of agency (kinda obvious). This outcome is now, as of 2026, essentially priced into the public markets and accepted as fact by most media outlets.

            “It’s inevitable” is at least a hard point to argue with. “Well I’M so productive, I’m having the time of my life”, the dominant position in many online tech spaces, seems short-sighted at best.

            I miss being a techno optimist, it’s much more fun. But it’s increasingly hard.

        • >I just don't understand people working on improving ai. It just isn't worth the risk.

          A cynical/accelerationist perspective would be: it enables you to rake in huge amounts of money, so no matter what comes next, you will be set up to endure it better than most.

    • Let me get something straight: That essay was completely fake, right? He/It was lying about everything, and it was some sort of... what?

      Did the 80 million people believe what they were reading?

      Have we now transitioned to a point where we gaslight everyone for the hell of it just because we can, and call it, what, thought-provoking?

      • What was fake? I don't see anything controversial or factually wrong. I question the prediction but that's his opinion.
      • Yes. It’s an ad for his product, which nobody had heard of before. I’m not on twitter but I’m seeing it pretty much everywhere now.
      • > Did the 80 million people believe what they were reading?

        Those numbers are likely greatly exaggerated. Twitter is nowhere near where it was at its peak. You could almost call it a ghost town. Linkedin but for unhinged crypto- and AI bros.

        I'm sure the metrics report 80 million views, but that's not 80 million actual individuals that cared about it. The narrative just needs these numbers to get people to buy into the hype.

    • Well, the zeitgeist is that our brains are so fried that such a piece of mediocre writing, penned by a GPT-container startupper, can surge to the top.
    • This is what they get for not reading our antislop paper (ICLR 2026) and using our anti-slopped sampler/models, or Kimi (which is remarkably non-sloppy, relatively speaking).

      https://arxiv.org/abs/2510.15061

      I thought normies would have caught on to the em dash, overuse of semicolons, overuse of fancy quotes, lack of exclamation marks, "It's not X, it's Y", etc. Clearly I was wrong.
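
      Those tells are literally countable; a throwaway sketch that just counts the surface markers listed above (the markers and regex are illustrative only, and this is obviously not the method from the paper):

          import re

          def slop_markers(text: str) -> dict:
              """Count the surface-level tells listed above. Purely illustrative."""
              return {
                  "em_dashes":    text.count("\u2014"),
                  "semicolons":   text.count(";"),
                  "fancy_quotes": len(re.findall(r"[\u201c\u201d\u2018\u2019]", text)),
                  "exclamations": text.count("!"),
                  "not_x_its_y":  len(re.findall(r"\bnot\b[^.;]*,\s*(?:but\b|it[\u2019']s\b)",
                                                 text, re.IGNORECASE)),
              }

          sample = "It\u2019s not hype, it\u2019s a shift \u2014 quietly; inevitably; everywhere."
          print(slop_markers(sample))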

  • The advent of AI may shape up to be just like the automobile.

    At first, it's a pretty big energy hog and if you don't know how to work it, it might crash and burn.

    After some time, the novelty wears off. More and more people begin using it because it is a massive convenience that does real work. Luddites who still walk or ride their bike out of principle will be mocked and scoffed at.

    Then the mandatory compliance will come. A government-issued license will be required to use it and track its use. This license will be tied to your identity and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.

    Last will come the AI-integrated brain computer interface. You won't have any choice when machine-gun-wielding Optimus robots corral you into a self-driving Tesla bus to the nearest FEMA camp to receive your Starlink-connected Neuralink N1 command and control chip. You will be decapitated if you refuse the mark of the beast. Rev 20:4

    • > This license will be tied to your identity and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.

      That's just an American thing; I've never owned a car, and most people of my age I know haven't either.

      • That's fair. The public infrastructure in other places around the world is a lot more hospitable to other methods of transportation.
    • > Last will come the AI-integrated brain computer interface. You won't have any choice

      Choose to die

  • I’m not worried about job loss as a result of being replaced by AI, because if we get AI that is actually better than humans - which I imagine must be AGI - then I don’t see why that AI would be interested in working for humans.

    I’m definitely worried about job loss as a result of the AI bubble bursting, though.

  • I'm one of those developers who is now writing probably ~80% of my code via Claude. For context, I have >15 years experience and former AWS so I'm not a bright-eyed junior or former product manager who now believes themselves a master developer.

    I'm not worried about AI job loss in the programming space. I can use Claude to generate ~80% of my code precisely because I have so much experience as a developer. I intuitively know what is a simple mechanical change (that is to say, uninteresting editing of lines of code) as opposed to a major architectural decision. Claude is great at doing uninteresting things. I love it because that leaves me free to do interesting things.

    You might think I'm being cocky. But I've been strongly encouraging juniors to use Claude as well, and they're not nearly as successful. When Claude suggests they do something dumb--and it DOES still suggest dumb things--they can't recognize that it's dumb. So they accept the change, then bang their head on the wall as things don't work, and Claude can't figure it out to help them. Then there are bad developers who are really fucked by Claude. The ones who really don't understand anything. They will absolutely get destroyed as Claude leads them down rabbit holes. I have specific anecdotes about this from people I've spoken to. One had Claude delete a critical line in an nginx config for some reason and the dev spent a week trying to resolve it. Another was tasked with doing a simple database maintenance script, and came back two weeks later (after constant prodding by teammates for a status update) with a Claude-written reimplementation of an ORM. That developer just thought they would need another day of churning through Claude tokens to dig themselves out of an existential hole. If you can't think like a developer, these tools won't help you.

    I have enough experience to review Claude's output and say "no, this doesn't make sense." Having that experience is critical, especially in what I call the "anti-Goldilocks" zone. If you're doing something precise and small-scoped, Claude will do it without issues. If you try to do something too large ("write a Facebook for dogs app") Claude will ask for more details about what you're trying to do. It's the middle ground where things are a problem: Claude tries to fill in the details when there's something just fundamentally wrong with what it's being asked.

    As a concrete example, I was working on a new project and I asked Claude to implement an RPC to update a database table. It did so swimmingly, but also added a "session.commit()" line... just kind of in the middle of somewhere. It was right to do so, of course, since the transaction needed to be committed. And if this app were meant to be a prototype, sure. But anyone with experience knows that randomly doing commits in the middle of business logic code is a recipe for disaster. The issue, of course, was not having any consistent session management patterns. But a non-developer isn't going to recognize that that's an issue in the first place.
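
    For anyone who hasn't been bitten by this: the usual fix is to own the transaction at the RPC boundary and keep business logic commit-free. A minimal SQLAlchemy-flavored sketch (the Widget model and names are made up for illustration, not from the actual project):

        from contextlib import contextmanager
        from sqlalchemy import create_engine
        from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, sessionmaker

        class Base(DeclarativeBase):
            pass

        class Widget(Base):  # hypothetical table, purely for illustration
            __tablename__ = "widgets"
            id: Mapped[int] = mapped_column(primary_key=True)
            name: Mapped[str] = mapped_column()

        engine = create_engine("sqlite:///example.db")  # placeholder DSN
        Base.metadata.create_all(engine)
        SessionLocal = sessionmaker(bind=engine)

        @contextmanager
        def unit_of_work():
            """One transaction per RPC: commit on success, roll back on error."""
            session = SessionLocal()
            try:
                yield session
                session.commit()
            except Exception:
                session.rollback()
                raise
            finally:
                session.close()

        def rename_widget(session, widget_id: int, name: str) -> None:
            """Business logic only mutates objects; it never commits."""
            widget = session.get(Widget, widget_id)
            widget.name = name

        # The RPC handler (the boundary) owns the transaction:
        # with unit_of_work() as session:
        #     rename_widget(session, widget_id=1, name="renamed")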

    Or a more silly example from the same RPC: the gRPC API didn't include a database key to update. A mistake on my part. So Claude's initial implementation of the update RPC was to look at every row in the table and find ones where the non-edited fields matched. Makes... sense, in a weird roundabout way? But God help whoever ends up vibe coding something like that.

    The type of AI fears are coming from things like this in the original article:

    > I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. [...] when I test it, it's usually perfect.

    Which is great. How many developers are getting paid full-time to make new apps on a regular basis? Most companies, I assume, only build one app. And then they spend years and many millions of dollars working on that app. "Making a new app from scratch" is the easy part! What's hard is adding new features to that app while not breaking others, when your lines of code go from those initial tens of thousands to tens of millions.

    There's something to be said about the cheapness of making new software, though. I do think one-off internal tools will become more frequent thanks to AI support. But developers are still going to be the ones driving the AI, as the article says.

    • This. At this point AI/LLM/Claude Code is still a power user tool; the more you know about your domain + the more you're willing to reasonably use it, the more gain you have.

      That being said the real danger is not coming from AI today, it's more C-suites believing AI can just zero shot any problem you throw at it.

  • > it’s been viewed about 100 million times and counting

    That's a weird way of saying 80 million times.
