• I enjoyed reading these perspectives; they are reasoned and insightful.

    I'm undecided about my stance on gen AI in code. We can't just look at the first-order and immediate effects; we also have to consider the social, architectural, power, and responsibility aspects.

    For other areas (prose, literature, emails), I am firm in my rejection of gen AI. I read to connect with other humans; the price of admission is spending the time.

    For code, I am not as certain. Nowadays I don't regularly see it as artwork or human expression; it is a technical artifact where craftsmanship can be visible.

    Will gen AI be the equivalent of a compiler and in 20 years everyone depends on their proprietary compiler/IDE company?

    Can it even advance beyond patterns/approaches that we have built until then?

    I have many more questions and few answers and both embracing and rejecting feels foolish.

    • > For code, I am not as certain. Nowadays I don't regularly see it as artwork or human expression; it is a technical artifact where craftsmanship can be visible.

      Humans are vital for non-craftsmanship reasons. Human curiosity and the ability to grok the big picture were vital in detecting the XZ backdoor attempt. If there is a wholesale AI takeover, I don't think such attacks would be detected five years from now.

      AI will make future attacks much easier for several reasons: changes can ostensibly come from multiple personas while actually being controlled by the same entity; maintainers who are open to AI-assisted contributions will accept drive-by contributions, will likely have less time to review each contribution in depth, and will have a narrower context than the attacker on each PR.

      AI-generated code fucks with trust and reputation: I trust the code I generate [1] with or without AI, but I trust AI-generated code by others far less than their manually written code. I'm not sure what the repercussions are yet.

      1. I am biased and likely over-optimistic about the security and number of bugs.

    • I'm worried about a few big companies owning the means of production for software and tightening the screws.
      • Given how fast the open-source models have been able to catch up with their closed-source counterparts, I think at least on the model/software side this will be a non-issue. The hardware situation is a bit grimmer, especially with the recent RAM prices. Time will tell: if in 2–3 years we can get to a situation where a 512GB–1TB VRAM / unified-memory rig with good fp8 support costs a few thousand dollars rather than tens of thousands, we'll probably be good.
        • A few thousand dollars plus the energy to run the system is unaffordable to most of the world's developers. Not that this would be the first way in which the Global South is kept from closing the gap.
      • This has already happened, or is happening quite fast, with the cloud, where setting up your own data center, or even a few servers, is treated as a crime against humanity if it doesn't use the whole Kubernetes/DevOps/observability stack.
      • This is my immediate concern as well. Sam said in an interview that he sees "intelligence" as a utility that companies like OpenAI would own and rent out.
        • The problem is the cat is already out of the bag on the technology. Anyone can go over to Hugging Face, follow a cookbook [0], and build their own models from the ground up. He cannot prevent that from taking place, or prevent other organizations from releasing full open-weight/open-training-data models on permissive licenses, which give individuals the access to modify those models as they see fit. Sam wishes he had control over that, but he doesn't, nor will he ever.

          [0] https://huggingface.co/docs/transformers/index

            • I'm thinking mainly of whether they manage to get some kind of regulations that make open source impractical for commercial use, or hardware gets too expensive for small hobbyists and bootstrapped startups, or the large data-center models wildly outclass open-source models. I love using open-source models, but I can't do what I can do with 1M-context Opus, and that gap could get worse. Or maybe not; it could close. I don't know for sure, and how long will Chinese companies keep giving out their open-source models? Lots of unknowns.
              • I know someone who just spent 10 days of GPU time on an RTX 3060 to build a DSLM [0] that runs on sub-$500 consumer hardware and outperforms existing VC-backed (including by Sam himself) frontier-model wrappers, delivering 100% accurate work product, which those wrappers cannot do. The fact that a two-man team in a backwater flyover town can do this speaks to how badly out of the bag the tech is. The money isn't going to be in building the biggest models possible with all of the data; it's going to be in building models that solve specific problems and can run affordably within enterprise environments, built on proprietary data, since that's the differentiator for most businesses. Anthropic/OAI just do not have the business model to support this mode of model development for customers who will reliably pay.

              [0] https://www.gartner.com/en/articles/domain-specific-language...

        • Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it, since the end product ("intelligence") can be swapped out with little concern over who is providing it.
          • > Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it

            I believe this is the natural end-state for LLM-based AI, but the danger of these companies even briefly being worth trillions of dollars is that they are likely to start caring about (and throwing lobbying money at) AI-related intellectual-property concerns that they never extended to anyone else while building their models. I don't think it is far-fetched to assume they will attempt all manner of underhanded regulatory capture in the window before commoditization would otherwise occur naturally.

            All three of OpenAI, Google and Anthropic have already complained about their LLMs being ripped off.

            https://www.latimes.com/business/story/2026-02-13/openai-acc...

            https://cloud.google.com/blog/topics/threat-intelligence/dis...

            https://fortune.com/2026/02/24/anthropic-china-deepseek-thef...

            • Which is a wildly hypocritical tack for them to take considering how all their models were created, but I certainly wouldn’t be surprised if they did.
            • In other words, it is an existential question for them. And given that some of the people running these companies have no moral convictions, expect a complete shitshow. Regulation. National security classifications. Endless lawfare. Outright bribery. Anything and everything to retain their valuations.
  • The industry and the wider world are full steam ahead with AI, but the following takes (from the article) are the ones that resonate with me. I don't use AI directly in my work, for reasons similar to those expressed here [1].

    For the record, I'll use it as a better web search or as an intro to a set of ideas or a topic. But I no longer use it to generate code or solutions.

    1. https://nikomatsakis.github.io/rust-project-perspectives-on-...

    • I just completely shifted my mind on that as well. I used to think I could just AI-code everything, but it only worked because I started with a good codebase that I had built. After a while it was the AI's codebase, and neither it nor I could really work in it, until I untangled it.
  • >It takes care and careful engineering to produce good results. One must work to keep the models within the flight envelope. One has to carefully structure the problem, provide the right context and guidance, and give appropriate tools and a good environment. One must think about optimizing the context window; one must be aware of its limitations.

    In other words, one has to lean into the exact opposite tendencies of those which generally make people reach for AI ;)

    • I'm not sure there is a "normal" tendency to reach for AI. But there is certainly a parallel in that, say, JavaScript and PHP have a reputation of being preferred by barely able people who make interesting and useful things with atrocious code.
      • I've seen Rust codebases that would make you cry, along with perfectly well-architected applications written in both Perl and PHP. You're just playing into common language-silo stereotypes. A competent developer can author code in their language of choice, whatever that may be. I'm not sure "reaching for AI" implies anything besides that some folks prefer that tool for their work. I personally don't have a tendency to reach for AI, but that doesn't somehow imply they or I are "lesser" because of it.
        • > You're just playing into common language silo stereotypes.

          Yes, the stereotype is what I brought up on purpose.

          > A competent developer can author code in their language of choice whatever that may be. I'm not sure "reaching for AI" implies anything besides that some folk prefer that tool for their work.

          More relevantly, a competent developer can use AI just like one can use PHP. It buys enormous value in the short term.

          > I personally don't have a tendency to reach for AI, but that doesn't somehow imply they or I are "lesser" because of it.

          Yes, just like people who use PHP can make excellent programs. Nobody in this conversation implied anyone was lesser than another.

          • So you're saying the reputations of "atrociousness" in both cases (AI users and the implied poor-quality software devs) aren't warranted? That wasn't clear in your post (at least to me). Simply pointing out a correlation of negative stereotypes without refuting evidence just helps reinforce them.
        • It does to executives who sign the checks for AI usage contracts.
          • The implication being that execs want folks who "reach for AI" to meet some arbitrary contract targets? Sounds like optimizing for the wrong things but I've seen crazier schemes.

            In my opinion the end goal of those execs pushing AI is the age old goal of seizing the means of production (of software in this case) by reducing the worker to a machine. It'll likely play out in their favor honestly, as it has many times in the past.

          • I don't know what an AI usage contract is but it sounds like corporate suicide.
      • I made a similar point on a different thread, saying of course it's possible to make reliable, performant code in just about any language.

        Of course it's possible, (computers are very fast in this century!), it's just that the kind of people who prioritize that, don't tend to use those languages!

        (There's lunatics like me who take pride in shipping eight kilobyte browser games. But not many, I suspect ;)

  • AI ultimately breaks the social contract.

    Sure, people are not perfect, but there are established common values that we don't need to convey in a prompt.

    With AI, despite its usefulness, you are never sure if it understands these values. That might be somewhat embedded in the training data, but we all know these properties are much more swayable and unpredictable than those of a human.

    It was never about the LLM to begin with.

    If Linus Torvalds makes a contribution to the Linux kernel without actually writing the code himself but assigns it to a coding assistant, for better or worse I will 100% accept it at face value. This is because I trust his judgment (while accepting that he is as fallible as any other human). But if an unknown contributor does the same, even if the code produced is ultimately high quality, you would think twice before merging.

    I mean, we already see this in various GitHub projects. There are open-source solutions that whitelist known contributors, and it appears that GitHub might let you control this too.

    https://github.com/orgs/community/discussions/185387
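    A minimal sketch of what such a whitelist check could look like. The contributor names, the function, and the CI wiring below are all hypothetical, for illustration only; this is not an actual GitHub feature:

```python
# Hypothetical allowlist gate for incoming PRs. The names here are made up;
# a real setup would load the allowlist from a file in the repository and
# run this check in CI against the PR author's login.

KNOWN_CONTRIBUTORS = {"alice", "bob", "carol"}

def requires_extra_review(pr_author: str, known=KNOWN_CONTRIBUTORS) -> bool:
    """Return True when the PR author is not a known contributor."""
    return pr_author not in known

# A CI step could call this and, for example, apply a
# "needs-human-review" label whenever it returns True.
```

    The point is only that the gate keys on *who* submitted, not on the content of the diff.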

    • Prioritizing or deferring to existing contributors happens in pretty much every human endeavor.

      As you point out this of course predates the age of LLM, in many ways it's basic human tribal behavior.

      This does have its own set of costs and limitations, however. Judgement is hard to measure. Humans create social bonds that may optimize for prestige or personal ties over strict qualifications or ability. The tribe is useful, but it can also be ugly. Perhaps in a not-too-distant future, in some domains or projects, these sorts of instincts will be rendered obsolete by projects willing to accept any contribution that satisfies enough constraints, thereby trading human judgement for the desired mix of velocity and safety. Perhaps as the agents themselves improve, this tension becomes less an act of external constraint and more an internal guide. And what would this be, if not a simulation of judgement itself?

      You could also do it in stages, i.e., have a delegated agent promote people to some purgatory where there is at least some hope of human intervention to attain the same rights and privileges as pre-existing contributors, that is, if said agent deems your attempt worthy enough. Or, to fight spam, an earnest contributor might have to fork over some digital currency, essentially paying the cost of requesting admission.

      All of these scenarios are rather familiar in terms of the history of human social arrangements.

      That is just to say, there is no destruction of the social contract here. Only another incremental evolution.

    • An agent is still attached to an accountable human. If it is not, ignore it.
      • How do you figure out which is the case, at scale?
      • The problem is that it acts as an accountability sink even when it is attached.

        I've had multiple coworkers over the past few months tell me obvious, verifiable untruths. Six months ago, I would have had a clear term for this: they lied to me. They told me something that wasn't true, that they could not possibly have thought was true, and they did it to manipulate me into doing what they want. I would have demanded and their manager would have agreed that they need to be given a severe talking to.

        But now I can't call it a lie, both in the sense that I've been instructed not to and in the sense that it subjectively wasn't. They honestly represented what the agent told them was the truth, and they honestly thought that asking an agent to do some exploration was the best way to give me accurate information.

        What's the replacement norm that will prevent people from "flooding the zone" with false AI-generated claims shaped to get people to do what they want? Even if AI detection tools worked, which I emphasize that they do not, they wouldn't have stopped the incidents that involved human-generated summaries of false AI information.

    • I forgot to mention why I brought up the idea of who is making the contribution rather than how (i.e., through an LLM).

      Right now, the biggest issue open-source maintainers are facing is an ever-increasing supply of PRs. Before coding assistants, those PRs didn't get pushed, not because they were never written (although obviously there were fewer of them) but because contributors were conscious of how their contributions might be perceived. In many cases, the changes never saw the light of day outside of the fork.

      LLMs don't second-guess whether a change is worth submitting, and they certainly don't feel the social pressure of how their contribution might be received. The filter is completely absent.

      So I don't think the question is whether machine-generated code is low quality at all, because that is hard to judge, and frankly coding assistants can certainly produce high-quality code (with guidance). The question is who made the contribution. With rising volumes, we will see an increasing amount of rejections.

      By the way, we do this too internally. We have a script that deletes LLM-generated PRs automatically after some time. It is just easier and more cost-effective than reviewing the contribution. Also, PRs get rejected for the smallest of reasons.

      If it doesn't pass the smell test moments after the link is opened, it gets deleted.
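      For what it's worth, the core of such a cleanup script can be a pure filter over PR metadata. This is only a sketch of the idea: the "llm-generated" label and the 14-day window are assumptions, and the actual GitHub API calls are left as comments:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical filter for the kind of cleanup script described above.
# A PR is considered stale if it carries an "llm-generated" label
# (an assumed labeling convention) and is older than the cutoff.

def is_stale_llm_pr(pr: dict, max_age_days: int = 14) -> bool:
    """True if the PR is labeled LLM-generated and older than the cutoff."""
    labels = {label["name"] for label in pr.get("labels", [])}
    created = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    age = datetime.now(timezone.utc) - created
    return "llm-generated" in labels and age > timedelta(days=max_age_days)

# A real script would fetch open PRs via GET /repos/{owner}/{repo}/pulls,
# filter them with is_stale_llm_pr, and close each match via
# PATCH /repos/{owner}/{repo}/pulls/{number} with {"state": "closed"}.
```

      Keeping the staleness check pure makes the policy easy to test and tune separately from the API plumbing.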

      • > LLMs don't second-guess whether a change is worth submitting, and they certainly don't feel the social pressure of how their contribution might be received. The filter is completely absent.

        Of course you could have an agent on your side do this, so I take you to mean that an LLM that submits a PR, and is not instructed to make such a reflection, will not intrinsically make it as a human would, that is, as a necessary side effect of submitting in the first place (though one might be surprised).

        It would be interesting to have an API that attempts to validate some attestation about how the submitting LLM's contribution was derived, i.e., force that reflection at submission time with some reasonable guarantees of veracity, even if it had yet to be considered. Perhaps some future API can enforce such a contract among the various LLMs.

    • > AI ultimately breaks the social contract

      Business schools teach that breaking the social contract is a disruption opportunity for growth, not a negative.

      The "Hacker" in Hacker News refers to "growth hacking" now, not hacking code.

      • It depends who you ask.

        You cannot say that breaking the social contract (the fabric of society, if you will) is generally a good thing, although I am sure some will find opportunities for growth.

        After all, the phoenix must burn to emerge, but let's not romanticise the fire.

        • > You cannot say that breaking the social contract (the fabric of society, if you will) is generally a good thing

          I am not saying it's a good thing, just that it's a common attitude here

          I suppose it didn't come through in my original post, but I was trying to be critical

    • Generational churn breaks the social contract.

      Are you all still using Latin and believing in the old Greek gods to honor the dead?

      Muricans still owning slaves from Africa?

      All ways in which old social contracts were broken at one point.

      We are not VHS cassettes with an obligation to play out a fuzzy memory of history.

  • What a sober read. Hey, one of my old colleagues is on there. They are also one of the best engineers I have ever encountered, period. Nice person too.

    I don't know what most people are doing day to day with AI, but this is the closest to reality I have seen thus far. I have seen posts on here about how they have 4 agents producing 50 kloc a day or something, and I can't reliably get a complete spec to output 50 lines of commit-worthy code. Emphasis on reliable and commit-worthy. I won't go into pros and cons, but I just don't see how everyone is operating like this, especially with a team of people and any semblance of legacy code. Note, by legacy I do not mean old bad code; I mean pre-existing code from various contributors.

    For research on crates/integrations, I see some benefits, but sometimes I think that is because search engines have been enshittified, with the top 100 results of nearly every query being AI slop. 10 out of 10 times I would rather ping a person. Lately I have been assuming most pro-AI articles are written by LLMs or are advertising campaigns, and I ignore them. It's working well so far. Still a top performer on my team...

  • I feel bad for people who reject LLMs on moral grounds. They'll likely fall behind, while also having to live in a world increasingly built around something they see as immoral.
    • On the falling behind:

      I strongly doubt that is going to be the case; picking up these tools is not rocket science, even if you want to use them fairly effectively. In addition, there is so much churn in AI tooling these days that an early investment might not be worth a lot in the longer run.

      On the other hand, hands-on experience in programming and architecture is currently a must-have to use the tools effectively - and continuing without AI in the short term might just buy an inexperienced engineer some time to learn, and postpone skill atrophy for an experienced engineer.

      Of course, who can know what the future looks like, but I doubt a "wait and see" approach is that dangerous to anyone's career.

      • Why would anybody who rejects them on moral grounds pick them up later? It isn't a discussion of lateness, it's a discussion of opting out.
        • Asking it to do something isn't exactly complicated. At the very least, it's way easier than actually coding so why would you expect people to struggle with writing? There's no skill required in using LLMs, that's kinda the point.
          • The point is that people who reject them on moral grounds won't be using them, irrespective of whether they are easy to use.
        • Someone might feel different about a (future) community owned and managed LLM than one controlled by Altman, Musk, and similar. It would be nice to feel like we're building something together instead of funding the oligarchy and accelerating the collapse of civilization.
    • I don't necessarily agree with the LLM moral objection, but this point of view is unconvincing. Change the topic to say, slavery, and the "I feel bad for those who reject slavery on moral grounds, they'll fall behind..." argument becomes fairly absurd.

      You're essentially saying the very concept of a moral objection is to be pitied. Maybe you believe that's true but I'd say that reflects poorly on our values today.

      • No, he's saying this specific moral objection is to be pitied.

        When I say "I feel bad for people who feel a need to own guns", I'm not saying I feel bad for people who feel a need to lock their doors at night.

    • LLMs are very easy to pick up; the point of them, for their makers, is to commoditize skill and knowledge. You can't be left behind in learning to use them, and AI providers don't have economic incentives to make them into anything other than appliances.

      The people more at risk of being left behind are the ones that don't learn when not to trust their output.

      • > The people more at risk of being left behind are the ones that don't learn when not to trust their output.

        Or the ones who fall out of practice writing software themselves because they've been relying on AI to do all the work.

        (Or the same, but with "long-form English text" instead of "software".)

      • They'll get left behind in the same sense that 1980s professionals who refused to touch computers got left behind.
        • Not using a computer eventually meant failing to use the basic medium of modern work. Not using an LLM does not yet imply the same thing.
    • The point of those pushing AI at the top is precisely to leave all human devs "behind", as it were. Anyone who thinks otherwise is not paying attention. Whether or not they succeed in their endeavors, time will tell. Either way, whether or not their towers of money deliver on the promise (as with the last 3 AI winters I've lived through), we will still have a bunch of new useful tools at our disposal in the end.
    • I feel bad for people who reject Windows 11 on moral grounds. They'll likely fall behind, while also having to live in a world increasingly built around something they see as immoral.

      https://shkspr.mobi/blog/2026/03/im-ok-being-left-behind-tha...

    • This is just the typical FOMO nonsense pushed by AI fans.

      It's the exact same as seen with many past hypes, and every time the result is a lot more nuanced than those fans claim. It wasn't that long ago that people were claiming MongoDB was going to revolutionize the world and make relational databases obsolete, or how cryptocurrencies were going to change the world, or NFTs, and the list goes on.

      • For every MongoDB and NFT, there is also the personal computer, the Internet, the web, the smartphone, etc. If you think LLMs are comparable to NFTs, well I really don't know what to say... It's genuinely shocking to me that there are smart people on HN who believe this.
      • [dead]
    • > They'll likely fall behind

      So far this doesn't seem to be the case, despite it being repeated endlessly over the last few years.

      >while also having to live in a world increasingly built around something they see as immoral

      Should people just decide that things they think are immoral are actually fine and get over it? Doesn't really seem coherent...

      • > So far this doesn't seem to be the case, despite it being repeated endlessly over the last few years.

        For most professional roles today, using LLMs is a must. It's probably as important as using Google, if not more.

        > Should people just decide that things they think are immoral are actually fine and get over it? Doesn't really seem coherent...

        No, hence why I feel bad for them.

      • When the moral perspective isn't that sound and isn't that important, yeah, they usually do. Everyone gets tired of complaining.
    • I feel bad for people who reject lying/stealing/cheating/corruption/backstabbing on moral grounds. They'll likely fall behind, while also having to live in a world increasingly built around something they see as immoral.
      • People who reject those things don't get left behind... In fact, they tend to live more successful lives and spend less time in prison.
        • I would argue that leading AI companies have engaged in some or all of those :)
    • This is total FUD.

      The goal that AI-Megacorp CEOs have been pushing lately is "super intelligence" and so if that's where you truly think we are rapidly heading, what's the risk for those of us not hyper-invested in AI? This "super intelligence" (by definition) will be able to understand us both equally well, so all these "prompting skills" people claim sets them apart from people who don't use AI that much will be utterly pointless.

      • We're not there yet and we don't know when that will happen.
    • Are the people who aren’t born or haven’t even entered a workforce also falling behind?
      • Yeah, that's why you go to school, learn, get trained, etc.
    • I feel bad for people who accept AI. They're going to wind up just as replaced by it as I will, but it will somehow come as a surprise to them despite the writing being on the wall for ages

      I imagine there will be a lot of regret in the future from early adopters who eventually get pushed out by the AI they love so much.

      • Regret? Of what? The tech is here. You won't slow it down by not using it. People need to either adapt by moving to more and more niche areas, or become the person to be retained when the efficiency gains materialize. We still don't have the proper methodology figured out, but people are working on it.

        That said, I'd agree that people who currently claim 20x speedups will indeed be replaced.

        • If enough people refuse to use it then we can absolutely slow it down

          So I'm doing that. Even if I don't expect to "win" in the end, I'm doing what I think is right

          Maybe one day I'll be vindicated

          • At the very least, when my kids are working age, I can look them in the eye and say I didn't happily help bring about their bleak futures.
        • > You won't slow it down by not using it.

          Then why is it forced into everywhere and everyone and everything?

          • It's simple FOMO on the part of companies: if they don't "invest" in it, they will be left behind. Which is true. However, the way they invest is equally (if not more) important. E.g., MS is a good example of how not to do it.
          • Because they don't want you to realize that you have the power to reject garbage and then have the government punish them for creating such waste.
      • There must be plenty of people who "accept" it in a fatalistic manner, where the final result will not be a surprise.
  • The title is misleading. It says in one of the first sentences:

    > The comments within do not represent “the Rust project’s view” but rather the views of the individuals who made them. The Rust project does not, at present, have a coherent view or position around the usage of AI tools; this document is one step towards hopefully forming one.

    So calling this "Rust Project Perspectives on AI" is not quite right.

    • Correct. This is one internal draft by someone quoting some other people's positions, not speaking for anyone else.
      • Not speaking for others, but Niko's writing is IMO strongly shading the wording used to describe positions that do or don't align with his own views.
    • Maybe "Rust maintainers' perspectives on AI" or "Rust contributors' perspectives on AI" would be better?
    • I took it as meaning "perspectives of people in the Rust Project about AI."
  • [dead]
  • Anything that uses the phrase "diverse perspectives" is not worth reading.
  • It seems like a lot of people's problems with AI come from talking to the dumber models and having them not provide sufficient proof that they fixed a bug. Maybe instead of banning AI, projects should set a minimum smarts level; e.g., to contribute, you must use gpt-5.4-codex high or better for either writing or reviewing the code.
    • doesn’t matter if you use the best model

      I use Opus 4.6 almost exclusively and it still generates nonsense if I don’t guide it.