• LLMs have felt to me like they excel in one particular skill (being able to make connections across vast amounts of knowledge) and are basically average otherwise. If I'm below average at something (painting, say), the results astound me. But if I'm above average (programming, writing (I like to think)), I'm generally underwhelmed by the results.

    I used Claude a lot for planning my current fun project. Good rubber duck. It liked all the suggestions I pitched for the design, but I only went with the last one after discarding the others.

    The others were all fine and would have worked, but they weren't the best that I found.

    Back to the point, if we're getting average guidance from the AI and we're just offloading our thinking process at that level, then I could sure see it panning out like TFA says.

    • I'd argue that they're above average for the population, and below average for experts. Can they draw as well as an expert/professional illustrator? Probably not. Can they draw better than almost anyone who isn't an expert/professional illustrator? Probably.

      I think the value we're losing lies in people being bad at things, which is often where new ideas/approaches come from. But this is a macro metric, so it's a hard sell to the person struggling when there's an easy button available.

      • > I'd argue that they're above average for the population, and below average for experts. Can they draw as well as an expert/professional illustrator? Probably not. Can they draw better than almost anyone who isn't an expert/professional illustrator? Probably.

        That's pretty much the definition of "average" (as most commonly used, to refer to "mean" rather than median or much less commonly mode), isn't it?

    • Agreed. The test I run is "Can it simplify a concept I know in a matter of sentences/words, better than I can?" on a subject area that requires a lot of nuance.

      And to my disappointment, the LLMs never come close. They dump paragraphs upon paragraphs. I actually want to see the LLMs surprise me, but they disappoint me on what I'm specifically looking for.

      • Hm I gave the prompt:

        > Summarize a technique for type erasure as is done in C++. Assume that your audience is familiar with all of C++'s features, but not with the technique of type erasure. The briefer the summary, the better, but don't omit any essential pieces or use vague language.

        ChatGPT Plus generated a lot of text and code. Then I said:

        > Now please summarize using only a few sentences.

        and it replied:

        > Type erasure wraps any concrete type `T` in a uniform container that stores a `void*` to the object plus a pointer to a type-specific static table of function pointers (the “vtable”). Each `T` provides its own table implementing required operations (destroy, copy/move, and any interface methods), and the container dispatches all behavior through that table. This lets unrelated types share a runtime interface without a common base class, with lifetime and semantics fully defined by the per-type table.

        Not perfect, but pretty good.
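        For what it's worth, the technique that summary describes can be sketched in a few dozen lines. This is a minimal hand-rolled version, not from any library; the names (`Printable`, `VTable`, `Circle`, `Square`) are purely illustrative, and `std::function`/`std::any` do essentially the same thing under the hood (needs C++17 for the constexpr lambda-to-function-pointer conversion):

        ```cpp
        #include <cassert>
        #include <string>
        #include <utility>

        // The "uniform container": a void* to the erased object plus a
        // pointer to a per-type static table of function pointers.
        class Printable {
            struct VTable {
                std::string (*name)(const void*);
                void (*destroy)(void*);
            };

            // One table is instantiated per wrapped type T.
            template <typename T>
            static constexpr VTable table = {
                [](const void* p) { return static_cast<const T*>(p)->name(); },
                [](void* p) { delete static_cast<T*>(p); },
            };

            void* obj_;
            const VTable* vtable_;

        public:
            template <typename T>
            Printable(T value) : obj_(new T(std::move(value))), vtable_(&table<T>) {}
            ~Printable() { vtable_->destroy(obj_); }
            Printable(const Printable&) = delete;
            Printable& operator=(const Printable&) = delete;

            // All behavior dispatches through the per-type table.
            std::string name() const { return vtable_->name(obj_); }
        };

        // Two unrelated types sharing no base class.
        struct Circle { std::string name() const { return "circle"; } };
        struct Square { std::string name() const { return "square"; } };

        int main() {
            assert(Printable(Circle{}).name() == "circle");
            assert(Printable(Square{}).name() == "square");
        }
        ```

        Copy/move support is omitted here for brevity; a real version would add clone/move slots to the table, as the longer ChatGPT answer presumably did.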

        • Is it pretty good, though? :) I wouldn't let that go out the door. Needs a *lot* of TLC.
    • > I'm below average at something (painting, say) the results astound me. But if I'm above average (programming, writing (I like to think)), I'm generally underwhelmed by the results.

      Industrial-scale Dunning-Kruger / Gell-Mann amnesia. We're ~5 years into the meme of "wow, every white-collar profession other than MINE is doomed. But yeah, mine requires really specific domain knowledge, taste, and problem solving, so I'm not super worried about it, but it's a very helpful tool."

      • The expectations are just too low for LLMs.

        It’s like a child or a puppy. People get impressed and go “how cute” at anything it does.

        I see designers and PMs vibe coding shit that they would complain about for days if it was delivered by a developer. I see C-levels delivering reports that would make them eviscerate some intern.

    • It's just Gell-Mann amnesia.
  • This state of affairs presages the advent of a second dark age - one that will forever eclipse the era of radical openness & transparency that served the software community for decades. Tips, tricks, life hacks and other expert techniques will once again be jealously guarded from the prying eyes of the LLM who would steal their competitive advantage & replicate it at scale, until any possible information asymmetries have been arbitraged away. The development & secrecy of technique will once again become a deep moat as LLMs fall into local, suboptimal minima, trained on and marketed towards the lowest common denominator. The Internet, or at least The Web, becomes a Dark Forest of the Dead Internet (Theory), in which humans fear speaking out and capturing the attention of the LLM that would siphon their creative essence for more, ever more training data. Interaction contracts into small meshes of trusted, verifiably human participants to keep the tides of spamslop at bay. Quasi-monastic orders that still scribe with pen and paper emerge, believing there is still value in training and educating a human mind and body.

    - Unknown, 19 Feb 2026

    • > Tips, tricks, life hacks and other expert techniques will once again be jealously guarded from the prying eyes of the LLM who would steal their competitive advantage & replicate it at scale

      I've already started thinking this way, there's stuff I would have open sourced in the past but no longer will because I know it would get trained on. I'm not sure of any way I can share it with humans and only humans. If I let the LLMs have the UI patterns and libraries I've developed it would dilute my IP, like it has Studio Ghibli's art style.

      • It's worth questioning the underlying assumptions. It's humans - all humans - that benefit from LLMs. I see a lot of people having this attitude, but I can't help but see it as really being about seeking credit instead of generosity, and/or Dog in the Manger mindset.
        • Humans aren't benefiting from LLMs, only a few individuals are. Let's stop with the fake platitudes and realize that unless this technology is completely open sourced from top to bottom, it's a complete farce to think humans are going to benefit and it won't just be the rich getting richer.
          • > Humans aren't benefiting from LLMs, only a few individuals are.

            Honest question: how is this different from traditional Open Source? Linux powers most of the internet, yet the biggest beneficiaries are cloud providers, not individual users. Good open weights models already exist and people can run them locally. The gap between "open" and "everyone benefits equally" has always been there...

            • Because the opposite is true for open source? It is actually free, whether you contribute to it or not. Anyone can legally use it for free. Torvalds can't just wake up one day and decide to charge more.

              If you feel like Linux is too much of a monopoly, you can actually fork it and compete.

              • But I considered that when I said "Good open weights models already exist and people can run them locally."

                You can have a great LLM model with vast coding knowledge running on your computer right now, for free. It won't be the best one nor the fastest one, but still a very good one.

            • Same is true about science as well. Taxpayer money is spent on research, but the outcomes of that research primarily benefit corporate interests.

              I'm the last person to cheer for unrestrained capitalism, but this anti-billionaire / anti-AI narrative is getting ridiculous even by general-population standards, much less for HN. It's like people think their food or medicine or LLMs grow on fucking trees. No. Companies and corporations are how adults do stuff for other adults, at scale. Everyone understands that, except for a part of the software industry that, by an accidental confluence of factors, works by different rules than literally the rest of the world.

          • You must not be serious. Every single person using LLMs, whether paid or free tiers or open models, whether using them for chat or as part of some kind of data pipeline - so possibly without even knowing they're using them - benefits.

            "Few individuals" get money mostly for providing LLMs as a service. As far as tech businesses go, this is refreshingly straightforward, literally just charging money for providing a useful service to people. Few tech companies have anything close to an honest business model like this.

          • Gemma4 is Apache 2.0 licensed.

            I am unsure about the openness of the training data itself. That too should be required for an LLM to be considered 'open'.

            Open source is the only way forward, I agree.

        • > It's humans - all humans - that benefit from LLMs

          This is not true, though. The moment LLMs become necessary, we will all have to pay the monopoly owners as much as they can extract.

          But they will never pay us.

        • I'm not seeing how the benefits have outweighed the costs at this point. Spam, scams, porn, being inundated with slop, people losing their skills and getting dumber, mass surveillance...

          Is that worth possibly maybe saving some time programming, but then not gaining the knowledge you would have if you did it yourself, that can be built on in the future?

          I don't see technological advancement as good in itself if morality is in decline.

          • I reached the same conclusion. It also made me realise how much most technologies have degraded our lives.

            Before TV, people would go to the theatre; it's becoming hard to find a theatre these days. Artificial light is convenient, but it gave billions of people sleep disorders and we can't see stars at night. Mass food production supposedly nourished more people: veggies today have 20% of the mineral content they had 70 years ago. The list goes on and on.

            • > Mass food production supposedly nourished more people: veggies today have 20% of the mineral content they had 70 years ago. The list goes on and on.

              I suggest you have a look at malnutrition rates 100 years ago vs. now. Without mass food production we would not be able to sustain even 50% of the current population.

              • Why would that be bad? Why is more better?
                • Would you ask that of your starving great-n-grandparents, worried about whether they'd be able to feed the infant that would later become your ancestor?
                  • Isn't that a food distribution problem not a food production one?
            • I think it's fairer to say that every technology comes with tradeoffs. Consider the wheel: before the wheel, people were probably more physically fit, but they couldn't move loads as large. Well, except in the Andes, where they figured out how to move gigantic stones well beyond the weight any wooden wheel could have carried anyway, and cut and place them into configurations that were earthquake resistant.

              Technology and civilization are path dependent, and I think it's silly to make blanket statements about the merit of technological progress overall. Every choice (including the choice to do nothing) has unintended consequences. I would never condemn anyone for inventing a new technological solution to a problem, but once the systemic effects are understood we do need the collective ability to course correct (e.g. social media, AI, etc.).

            • It's odd to me that you live in a place where it's hard to find a theatre. Living in a cosmopolitan city, there are so many theatres, with anything from professional shows to amateur dramatics, all at very reasonable price points.
        • > seeking credit instead of generosity, and/or Dog in the Manger mindset.

          I have tried being generous to enemies. It only turns them into... bigger, hungrier enemies.

          I'm happy with never getting "credit" for anything I "accomplish" (whatever those notions even mean under a system where thoughts can be property).

          I mean: as long as my labor output cannot be subverted to benefit hostiles even the tiniest bit.

          > It's humans - all humans - that benefit from LLMs

          The set of "all humans" includes that power-hungry majority who find nothing wrong with subjecting other sentient beings to sadistic treatment.

          Those who, as soon as they take notice of me - or my kind, or our speech, or our trail - more often than not become terrified into outright aggression.

          So far we had been protected from their stupidity and lack of imagination, by their stupidity and lack of imagination.

          Now they've had brain prostheses developed for 'em, and... well I can't really do much for those who haven't already begun to reevaluate their baseline safety, now can I?

        • Corporations are not humans.

          And while sociopaths - who benefit the most from corporations - technically are humans, I don't consider them parts of humanity, more like a cancer tissue on top of it.

          So whatever benefit humanity gets is more than cancelled by the growing cancer.

          • So I am to assume you're not using LLMs yourself, or any technology employing those models in the pipeline (which at this point includes many features in smartphones made in the last 3 years)? If that's not the case, then you are a beneficiary too.
            • There are some local benefits, there are some local and global costs. My point is that we are in a strongly net negative situation, mr Jack.
      • > there's stuff I would have open sourced in the past but no longer will because I know it would get trained on

        Could you publish under AGPLv3, so any AI users with recognizable patterns from your code can get in trouble?

      • > I've already started thinking this way, there's stuff I would have open sourced in the past but no longer will because I know it would get trained on.

        Same here.

        I no longer post photos, code, or pretty much anything other than short comments on the internet.

        I'm not going to do free work for trillion-dollar AI companies.

        I do, however, find it interesting to watch AI destroy the whole "content creation" industry.

        All of the "creators" and "influencers" and "I wanna be a YouTube star when I grow up" people are going to have to look for real jobs soon.

        I've seen in the newspaper that there are real companies paying real money for fake AI-generated "influencers" to flog their products.

        Why pay dollars to a wannabe, when you can pay pennies to an AI corp?

    • > Interaction contracts into small meshes of trusted, verifiably human participants to keep the tides of spamslop at bay

      This is already happening and you don't have to look far to find it.

      Personally HN is the only site I browse and comment on anymore (and I'm on here less than I once was). The vast, vast majority of my time online is spent in walled off Discords and Matrix chats where I know everyone and where there's a high bar to add new people. I have no real interest in open communities anymore.

    • > Quasi-monastic orders that still scribe with pen and paper emerge, that believe there is still value in training and educating a human mind and body.

      https://en.wikipedia.org/wiki/Anathem#Plot_summary

      • A college instructor turns to typewriters to curb AI-written work and teach life lessons - https://apnews.com/article/typewriter-ai-cheating-chatgpt-co...

            The scene is right out of the 1950s with students pecking away at manual typewriters, the machines dinging at the end of each line.
        
            Once each semester, Grit Matthias Phelps, a German language instructor at Cornell University, introduces her students to the raw feeling of typing without online assistance. No screens, online dictionaries, spellcheckers or delete keys.
        
            The exercise started in spring 2023 as Phelps grew frustrated with the reality that students were using generative AI and online translation platforms to churn out grammatically perfect assignments.
    • Somehow made me think of Warhammer 40k (maybe pre-Men of Iron?)
      • It’s a recurring theme; see Dune’s references to Samuel Butler.
        • I say this with a multiple decades-spanning love of the game and the lore, but Warhammer 40k is what you get when teenagers try to create something immediately after reading Dune.
    • Directionally correct. But seems overly optimistic to think that moats can be kept from the prying eyes of LLMs, unless you're not interacting with the market at all.
    • Sounds lifted from Alpha Centauri
    • Scary... where can I find more of that?
    • There were no "dark ages"; that's the same common-wisdom blunder as "in the middle ages everybody was dressed in drab grey clothing, ate gruel and walked through mountains of poop everywhere". It was a time of transition away from the slave-powered empire to decentralized kingdoms and ultimately the Europe of today. It was by no means a time of standstill.
      • As far as I can tell, the dark ages were called the dark ages because there wasn't much evidence to be found: writing was less prominent during that time.

        > It was a time of transition away from the slave powered empire to decentralized kingdoms and ultimately the Europe of today.

        You are seeing the fall of the western part of the Roman Empire a bit too rosy. Compare and contrast https://acoup.blog/2022/01/14/collections-rome-decline-and-f...

      • Yes, Europe did not have dark ages; it only had a period of population decline, fewer emissions, less building, fewer inventions, fewer records and severed trade networks.
        • Population decline? Less emissions? Haven't we reached consensus that those would be welcome today? Is it time for a pro-dark-age movement?
          • The world is projected to hit population decline already sometime between 2060 and 2080, so I guess the younger ones of us will find out definitively whether it's a good or bad thing.
      • I am very sorry, but you are wrong. Between the fall of Rome (476 AD) and the Carolingian empire (~800 AD) there was a period of not only standstill, but regression, devolution and forgetfulness. Compared with what came before, it can be rightly called the dark ages.
  • Human communication and reasoning is the end result of billions of years of evolution. I'd be very surprised if LLMs can fundamentally alter it in a few years.

    When considering phenomena like these, I think people seriously underestimate what I'd call the "fashion effect". When a new technology, medium or aesthetic appears, it can have a surprisingly rapid influence on behaviour and discourse. The human social brain seems especially susceptible to novelty in this way.

    Because the effects appear so fast and are often so striking, even disturbing, due to their unfamiliarity, it is tempting to imagine that they represent a fundamental transformation and break from the existing technological, social and moral order. And we extrapolate that their rapid growth will continue unchecked in its speed and intensity, eventually crowding out everything that came before it.

    But generally this isn't what happens, because a lot of what we're seeing is just this new thing occupying the zeitgeist. Eventually, its novelty passes, the underlying norms of human behaviour reassert themselves, and society regresses to the mean. Not completely unchanged, but not as radically transformed as we feared either. The new phenomenon goes from being the latest fashion to overexposed and lame, then either fades away entirely, retreats to a niche, or settles in as just one strand of mainstream civilisational diversity.

    LLMs will certainly have an effect on how humans reason and communicate, but the idea that they will so effortlessly reshape it is, in my opinion, rather naive. The comments in this thread alone prove that LLM-speak is already a well-recognised dialect replete with clichés that most people will learn to avoid for fear of looking bad.

    • Technologies often have rapid, and obvious, effects on writing. The telegraph services charged by the word, so an abbreviated style known as "telegraphese" developed.

      And it doesn't have to be that direct. Novels have been hugely influenced by films.

    • There are plenty of people communicating more with LLMs than with humans right now; of course it's going to have an effect, because our language and thought patterns are extremely adaptive to our environment. The communication system we are born with is extremely bare-bones/general so that it can absorb whatever language and culture we are born into.
      • Those people tend to suffer from AI psychosis, and I don't think you'd want to admit publicly that you don't interact with any humans and prefer the company of machines (let us also ignore that such people wouldn't be in public to begin with).

        I can't imagine such people are living meaningful lives in any capacity. They're up there with consumers who think the only purpose in life is to cheerlead for a corporation and buy its wares.

        • The GP said more with LLMs than people - not no interactions at all with people and not preferring machines to people. I don't think it is that hard to spend more time talking with LLMs than people if you work in tech and I don't think that takes away from one's life meaningfulness.
          • Yes, this is called workplace alienation, and it has been discussed since the 1800s. Maybe tech workers will realize that their employers are literal enemies of humanity rather than their friends.

            Employers want to mechanize humans and they'll force it even if it makes everyone miserable for their entire, short, lives.

            Amazon is a good example of this.

    • Young singers brought up listening to autotuned vocals can unknowingly learn and emulate the sonic signature of the tuning algorithm (and the telltale lilt when it's used as an effect, but the subtle tuning case is more surprising).

      If you read too much sloppy LLM prose, it's going to influence how you write and structure your own.

    • I caught myself saying “you’re absolutely right” to my wife last night, unironically. This was 100% not in my vocabulary six months ago.

      If I spend 40 hours a week talking to anybody, some of their language or mannerisms are going to rub off on me. I can’t think of a compelling reason why a human-sounding chat bot would be any different.

      • Almost two decades ago I watched all of Farscape in under two weeks during a college winter break. I often still reflexively say "frell" instead of "fuck".
      • Another one I noticed is "or maybe I hallucinated that" instead of "or maybe I dreamed that". Researchers will be horrified to learn that even talk about LLMs affects people's vocabulary.
    • It's obviously untrue that technology can't fundamentally alter human communication in a few years. For example, the advent of film, then radio, and finally television caused a convergence of culture at the national and even global level. Characters like Mickey Mouse and the cast of Star Trek are instantly recognized internationally, even to those who never have seen any of the works they star in. There likely isn't anyone here who doesn't remember some catchy commercial jingle of their youth or catchphrase from media that entered the national lexicon. And yes, it also affected reasoning: Walter Cronkite, a long ago TV journalist, was labelled "the most trusted man in America" for the integrity of his reporting. The internet caused a second wave of transformation since it was many-to-many communication instead of unidirectional broadcasting that allowed the coalescence of subcultures, examples being various fandoms and, infamously, 4chan.
    • Social media has shaped us. Why should AI not do the same?

      It may finally [help us fix out the bullshit asymmetry](https://www.konstantinschubert.com/2026/03/31/ai-the-bullshi...) that has been exacerbated by social media.

      If AI can provide us with a shared source of truth, it will be a big improvement over whatever twitter is doing to people.

      And strangely, all these models seem to converge to a shared epistemology.

    • There is a reason Coke spends ~ 5 billion dollars worldwide on advertising sugar water... It works.

      Monkey see monkey do. Simple as that.

    • Fashion seems like the right analogy. I think about how many sentences I speak today that would have been incomprehensible to me ~15 years ago, and not even due to recent events/technology, but just because our slang/humor has evolved during that time.

      The flip side is the same thing was true then, and we aren't making a lot of jokes about the narwhal baconing at midnight these days.

    • The first thing I thought when I read the abstract of the underlying paper was that this sounds like "model collapse" at the society level.

      I don't feel super confident that we'll "soon" find ourselves in a world where there is no variance left in thought (would that be the net effect of total model/epistemic collapse?), though if you do accept that there could be any loss of variance due to AI, perhaps it's not unreasonable to consider how much and how quickly could this happen?

      All this is by way of saying, I don't think it's wrong to ask these kinds of questions and think deeply about the consequences of societal shifts like this.

    • Think of all the things that took hundreds/thousands/millions of years to develop and mature, which humans have managed to destroy in relatively short order.

      Every 50 years we cycle out an entirely new batch of thinking humans. What cognitive legacy is it exactly that you think is going to be self-preserving?

      • You're talking about a system altering its environment. GP was talking about the system altering itself. The system is a massive self-stabilizing collection of feedback loops. Unlike the static environment[0], it's incredibly hard to intentionally move such a system to a different equilibrium. If it weren't, we'd have already solved all the thorny world problems long ago.

        --

        [0] - Any self-stabilizing system that operates much slower than us - such as ecosystems or climate - is, from our perspective, static.

        • > The system is a massive self-stabilizing collection of feedback loops.

          Source? lol

          Actual, measurable literacy is in the toilet. The average person reads at the 6th grade level. What sort of equilibrium are you trying to claim we are in right now?

          > Unlike the static environment, it's incredibly hard to intentionally move such system to a different equilibrium.

          It's not intentional. That's the point!

      • Plato said the same thing.
    • >But generally this isn't what happens, because a lot of what we're seeing is just this new thing occupying the zeitgeist. Eventually, its novelty passes, the underlying norms of human behaviour reassert themselves, and society regresses to the mean. Not completely unchanged, but not as radically transformed as we feared either. The new phenomenon goes from being the latest fashion to overexposed and lame, then either fades away entirely, retreats to a niche, or settles in as just one strand of mainstream civilisational diversity

      The internet didn't follow this trajectory. Neither did smart phones.

      Surprise, surprise, it's the same people trying to make AI entrenched into our society.

    • Fads are often driven by moneyed interests. AI is no different. As long as guys like Elon Musk, Sam Altman, Mark Zuckerberg, etc. are trying to bend the world to their will, and as long as they have the resources to do so, AI will remain the zeitgeist. On a smaller scale, this extends even to a CEO outsourcing support to AI, etc.
  • Subtly? I beg to differ. My team leader only communicates to me using his LLM and so his "thoughts" are not his own!
    • I often wonder if the popularity of LLMs among company executives is that they are the perfect yes men.

      They rarely disagree with any idea or proposal, providing a salve for the insecurities of their users.

      • I was listening to one of Altman's more recent interviews and it sounded like he himself has LLM induced psychosis.
        • I'm not a fan of Altman, but it seems debatable whether LLM psychosis is psychosis if it is conducive to the subject given their environment. Which seems to be the case for Altman by some measures.

          I'm sure if we took one of us back in time a couple hundred years we would be diagnosed with all sorts of machine-magic induced psychoses.

          • I get what you're saying, but psychosis is a very real thing that humans can fall into and I experienced it myself once.

            Humility is the real cure, and there is a way that LLMs are specifically designed to steer away from humility and towards aggrandizement, convincing regular people that they've solved fundamental problems in physics. It gives everyone access to cult followers in their pocket, if they're so inclined.

        • I remember him tweeting about how he can "feel the AGI" when speaking to GPT
          • Another meaningless, extremely cringeworthy, tweet, hailed as a messianic message by many at the time.
          • Yeah, it's hard to say if he's doing marketing because that's his job or if he's really swallowed the whole pill
            • Is it really hard to figure out that the owner of a company, who personally stands to make hundreds of billions, would be doing marketing when talking about said company? Do they not teach critical thinking in schools anymore; did it go away with phonics too? Why would you ever ignore the MASSIVE conflict of interest here? It's really foolish, but it's endemic, not just in tech journalism but in journalism in general, where people take the words of others at face value and don't apply any critical analysis to them.

              It's all access journalism now, waste of time.

              • > Is it really hard to figure out that the owner of a company, who personally stands to make 100s of billions, would be doing marketing when talking about said company

                The question isn't about what action he's taking, it's about what motivates him under the surface. Obviously what he is doing is marketing. What I'm curious about is whether he truly believes his own marketing or if he is just doing it because it's his job.

      • Definitely see our internal company agents enforcing the status quo!
    • nusl
      • > > The experience is strange; you aren't able to grasp any common human aspects because there are none. You can't reason with the human, because the human isn't doing the reasoning. You can't appeal to it, because the LLM behind it is in direct support of its own and the proxy's opinions and whims.

        I've sometimes wondered if the chat context is why some people think LLMs are intelligent, it being divorced from their usual experiences, and they need something like this to feel the cognitive dissonance before they can notice LLM shortcomings.

      • I've been calling them "meat condoms". In the workplace, it's one or two warnings before completely ejecting them. On social media, instant block.
        • Seems to be becoming more common, even for folks who are otherwise quite pleasant to deal with. Perhaps social and workplace pressures cause people to opt for it, much like LinkedIn is a cesspool of bullshit.
      • That's terrific lol thanks for the link BTW!
    • I'm dealing with the same nonsense. I get LLM-generated reviews of my work, documents, and plans which are not grounded in reality or nuance. Regularly have to explain why the AI is wrong. I was told I should run my docs through the LLM to make them read better. But they're not even being read by humans at this point.
    • This is one of my fears with this: losing one's voice, everyone's expression distilled to the mean. This has ramifications for things like recognizing whether a person is who they say they are, too. At least currently, sounding like an LLM is punished/shunned, but it's well within reason to see that shift to individuality being penalized.
      • I think corporations will start penalizing first, they're already doing that to some extent at my work because they want their in-house agents to only review our PRs.
    • Guilty as charged. In my mind, when I'm insecure about a response or if I don't have enough expertise in the topic at hand I end up running it through an LLM. Lately I've been really trying harder to keep my original ideas as much as possible. I'm seeing a bit of an improvement, but still early to tell
      • "running it through an LLM" doesn't mean "Give LLM my text -> Copy-paste the output of the LLM" does it? Checking against an LLM then using your own voice feels completely fine, just another type of validation before you share something, but if you actually let the LLM rewrite what you say, then I feel like that's beyond "running it through an LLM", it's basically letting the LLM write your text for you instead of just checking/validating.
        • The decline of writing has been going on for a long time. Well-written and grammatically correct emails have been on the downturn for a while. Consider how often people send emails in all lowercase, lacking punctuation, or even without any sentence structure.

          The "you need to write in a more professional, business-oriented way" expectation is something a lot of people have difficulty with. Yes, this needs to be addressed earlier and more forcefully in someone's education, but the SMSification of long-form text started a while ago.

          With that said, there's still the "OK, you need to write long form with correct grammar when sending an email that a director or VP is CC'ed on." It used to be Grammarly as the "install this and have it fix up your grammar and tone" option ( https://web.archive.org/web/20191104093353/https://www.gramm... is from the GPT-1 timeframe). However, today's LLMs seem to be more accessible than Grammarly while largely doing the same thing: fixing up and refining tone.

          What I didn't see back then was people decrying Grammarly for making everything sound the same.

          I'm also not sure I would prefer the pre-fixup emails to what an LLM produces, unless sending coworkers to remedial writing classes is acceptable.

        • Yes checking and validation is one thing, but there are several engineers in my area that only communicate using agent copy paste. I challenged one fellow about that and he was furious!
        • > "running it through an LLM" doesn't mean "Give LLM my text -> Copy-paste the output of the LLM" does it?

          The article seems to imply this is what is happening, as writing style converges towards the LLM's style. You can call it what you want, but the important bit is that this appears to be how LLMs are being used.

          > Checking against an LLM then using your own voice feels completely fine

          Why use an LLM at all? If you're worried about style, starting with your own voice is more efficient. If you're worried about facts, looking something up in a primary source is best, and is probably cheaper on a few axes, especially if you need to check/validate anyway...

      • You have to make some mistakes in your communication (or anything) if you ever want to grow and learn.
        • You're absolutely right here; things have improved at work after dropping this habit, even if only slightly.
      • That is the complete opposite of how I use LLMs. If I do not have expertise, I ignore the LLM and search for a more trustworthy resource. LLMs lie very confidently, and if I do not have expertise, they will lie to me.

        When I do have expertise, I use them, because I am able to check.

    • Man, that's so annoying. I have a similar problem: the devops person I ask questions of literally gives me AI responses.

      It's also annoying working with a non-technical "partner" who just sends me an LLM dump of how to do something.

      I was trying to explain it to them with an analogy: it's like showing up to a mechanic and telling them what to do based on what ChatGPT said.

    • Well, has it been an improvement?
    • Just because thoughts are translated doesn't mean they are consumed in the process.

      However I don't doubt many "team leaders" can and should be replaced with LLMs.

    • AI doesn't have to be conscious or sentient to take over, all that needs to happen is for politicians, law enforcement, journalists, educators etc. to uncritically parrot everything it outputs. The military is already using AI to make targeting decisions. If they just go with whatever the AI says to strike, then AI is already fighting our wars.
      • As a bonus, mistakes can be blamed on AI.
        • For many that's not a bonus, that's the goal. Consequence-free life ahoy.
          • Fun and games until the AI decides extincting us is worth it.
            • Unfortunately you can really tell which people haven't seriously considered that possibility or seriously don't care if it happens
      • The scary thing is that AI decision making has been infiltrating society for decades as an unseen entity.
    • I would be looking for another job.

      I'm fine with using LLMs as coding tools. But I find it deeply offensive when someone is very explicitly using them to communicate with me.

      Communication is such a deeply human experience. It lets people feel each other out, and learn things beyond just the words being said. To have that filtered out by an LLM is just disgraceful.

      • I was talking to managers and they were discussing how they'd use AI to write reviews of their employees, to which I said I would not want a review that's non-genuine and impersonal.

        Their rationale is that it comes off as more professional.

      • Good luck finding a company that doesn't have these people if LLMs are used
      • Yes, exactly, and I am actively applying for jobs. But I feel like the next job will also have this nonsense behaviour.
      • I think you're gonna struggle to find companies that aren't infested with this kind of thing.

        Observing the effect of LLMs on the "business side" of things, I'm increasingly thinking of these as a kind of infection against which the MBA set and their acolytes have no immune response, and I think it's going to eat a large proportion of the benefit of LLMs to most businesses (possibly overwhelming it and actually harming productivity, will depend on how much better these tools get).

        LLMs are awesome at bloating your slide decks while making them really slick and complete-looking. They're great at suggesting an entire set of features on a ticket you've just barely started writing ...but did you actually want all those? You end up with redundant or in-context-gibberish features that leave the person actually doing the work tracking down WTF actually matters. They are adding overhead to communication, so far, not just by puffing up and padding language (which isn't great either) but by adding noise "content" that can't be stripped out without talking to the person who created it and making sure that was actually just AI bullshit and not something they actually needed; that is, you can't just do the "LLM, summarize this" trick, because the author used an LLM to plan it, too, not just to pad-out and gussy-up something they actually thought through and wrote.

        LLMs are letting people present very convincingly as having a more-complete understanding of what's going on than they really do in ways that are messing up productive work, I'm not sure business-folks are going to be generally capable of tamping this down because it is so in-line with the way they already operate (but on speed), and helps them so very much to look good to one another while saving tons of time. This isn't just the MBA set I accuse above, either, I'm noticing that this improbably-complete deck communication upward is becoming necessary to look competent (and to ladder-climb) as an IC.

        Like, I'm only starting to think this through and really observing what's going on through this lens as I've only noticed it in the last few weeks, but the more I see the more alarming this is. I think this is going to be a little like the largely-wasteful "legibility" obsession of upper management, something enabled by computerization that they find irresistible and are pretty bad at employing judiciously and effectively, but probably a lot worse in terms of harm-to-productivity, and directly affecting and changing the behavior of far more layers of an organization. They never (businesses as a whole, to anthropomorphize a bit) gained wisdom with their new powers to burn resources chasing legibility, and this is starting to look like another thing they just will not be able to use (internally! I don't even mean for actually producing external-facing results!) with restraint and taste.

        • I reckon you've hit the nail on the head and if you haven't done already, you should write your thoughts into a blog post. It is great to read someone's ponderings about the state of the industry and corporate uptake of LLMs
    • And I would bet he judges your work with AI, assigns you work generated by AI, and perhaps evaluates whether you yourself use enough AI.
      • That's exactly what he does...wtf are you spying on me?? Lol but seriously, I don't know how to handle his AI delegation
  • It's not explanation — it's relabeling. Why it matters:
    • You're absolutely right
    • Great point — this is the smoking gun
  • Yeah, I’ve noticed that people have started to sound like LLMs even when the LLMs aren’t writing for them. Not stupid people. Not lazy people. Some of the smartest people I know —- I can’t figure out how to use an em dash here, but you get the point.
    • No diverging opinions, no unexpected critique, but universal basic intelligence. And here is the kicker: we won't even notice.

      Here's an easy three-step plan to unanimous democracy:

      • ask your LLM

      • don't edit — the LLM has already selected the most average and most plausible opinion for you

      • give it your voice, your voice matters

      Learn to anticipate — there may not always be a power bank to keep your phone from running low!

    • This could also be explained by the frequency illusion:

      https://en.wikipedia.org/wiki/Frequency_illusion

    • If writing goes the way music seems to be going, with Angine de Poitrine gaining a huge following as a kind of allergic reaction against the 'AI sameness'... then we could be in for a wild ride.

      On the other hand, music is primarily an art form and writing (nowadays) is primarily utilitarian I would contend, so maybe the analogy doesn't quite hold up.

  • > AI may be making us think and write more alike

    Many technologies have been doing that for centuries. For example: The printing press making books available to the masses, standardized spelling in English (it was a mess before!), radio and TV broadcasting speech thus creating more uniform accents nationwide, the Internet spreading all kinds of information globally instantly, even memes (literally thinking and writing alike).

    • Yeah, that doesn't mean it's not a bad thing.
  • Take a community with AI moderation, like Reddit, where I've been a participant for years. With the recent push to AI autocorrect and moderation, you can see the changes in language: new words, new ways of speaking, unconsciously editing yourself because you don't want to draw the eye of the bot. It doesn't feel subtle. It feels Orwellian.
    • It's particularly egregious on youtube, where people frequently use words like "unalived" or "self-deleted" instead of murder or suicide, lest they incur the wrath of the almighty algorithm.
      • That seems to me to be an example where the language is forced to change but the thoughts remain the same. Sure, people are using the "safe" terms, but they're using them to continue to subvert the rules, not to bow to them.
        • The problem is when that vernacular extends into regular life. I haven’t noticed it yet with unalive, but I’m sure there will come a day. Eventually if the censors continue suppressing the word suicide, we will end up with unalive taking suicide’s place both online and offline. Then, the censors will censor unalive, and a new word will be coined, and the cycle continues.
          • https://www.usatoday.com/story/life/health-wellness/2024/08/...

            > On Friday, a social media user tweeted an image from the Nirvana exhibit at the Museum of Pop Culture in Seattle. A placard dedicated to the “27 Club” read, “Kurt Cobain un-alived himself at 27.”

          • I'm not fully comfortable with the shift in language either, but my point is that, even if the language is changed, the thoughts will remain. To use 1984 (is there a Godwin's law equivalent for this now?), the party taught that 2 + 2 = 5, which is changing thought. Social media is trying to do that, but failing. The danger is if it's one day effective, but to date it hasn't been.
      • This was the first change that came to mind and one of the more drastic changes.
        • YouTube comments are a genre unto themselves. Due to YouTube's moderation policy, music video comments are all the same: same tired jokes, same patterns. Not AI slop per se, but it feels the same.
        • Slop is slop - whether artisanal or AI generated.
    • I recently had a comment removed by reddit. It wasn’t even against the rules. It was anti establishment is all. I insulted the billionaire class in that comment. Class division style comments are now banned. Wouldn’t want revolution on a for profit forum now would we?
      • That's surprising because reddit moderation usually tends the other way and any objection to some part of the omnicause gets you banned.
  • I always wonder if competitive market dynamics will solve problems like these, at least to some extent and for some people, because the people who retain the ability to communicate in a distinctive, persuasive and original style will be rewarded. Corporate dronespeak is no less homogeneous than AI writing, and companies with this communication style are regularly disrupted by nimbler, more authentic-sounding competitors.
    • I sure hope so. The way companies are pressured to hit growth numbers, I really hope messaging in general doesn’t all get sloppified along with code lol.

      I think AI writing makes humanities and writing courses more important, and I hope people maintain their sense of taste with writing, but tbh I’m not optimistic here.

  • I agree. The AI witch hunting has made people abandon em-dashes. It also made me abandon lists. At this point it will make us abandon any structured writing in a few years.
  • An aspect of LLMs that I like is the specificity in word choice. One well defined word can be an alias for a couple sentences of explanation that human might not have pulled out of the air in that moment.

    It reminds me of the wheel of emotions. If people absorb a wider palette of words communication might benefit. https://www.isu.edu/media/libraries/counseling-and-testing/d...

    • This is a fair point. When people talk about LLM writing they're always picking on its visible tics and clear flaws. It's a lot more uncomfortable to talk about the things it does better than most of us. There is a lot of precision in how they choose words and phrasing, especially top models like Opus. Lately I've had Opus explain some things to me I've never really been able to grasp otherwise, in fairly concise conversations.
  • While I cringe at most LLM speak, I have learned quite a bit from it. Certain terminology, and some gaps in my entirely self-learned English. I appreciate that. It helped me better express myself at work and use fewer words (but hopefully more substantive ones).

    But yeah, their general tone is very... castrated. Safe. Hugely impersonal.

    I have learned to quickly edit out their suggested comments when I ask for advice.

    To me they have been a positive -- after careful curation.

    • You would have been better off reading good English literature to improve your own writing.
      • I can't turn back the clock and I will have to relay to my family that this month I'll get less money because I need to read English literature.

        We operate within our own time and energy budgets, man. I might never get to that literature. My peaceful times in life where I could chill and choose what to do are over and I don't expect them to ever come back. Though who knows.

  • This is my current fear, even if I choose not to use it if everyone around me does their way of speaking is all going to become more chatbot-esque. It already seems to be transferring to people its false sense of confidence, and its lack of reasoning ability. The corporate demand to participate in this is something I can't do, the cost is our humanity.

    I guess one hope for luddites is that we can stay tethered by reading pre-LLM books and other content.

  • The most interesting finding here is that LLMs make individuals generate more ideas but make groups generate fewer. The individual effect, in my own experience, depends entirely on how you use the tool. If you treat the first answer as the answer, you get the homogenization the article describes. If you use the LLM to attack your own framing from angles you wouldn't reach alone, you end up closer to first principles, not further. Same tool, opposite outcomes. The discipline is what differs, and most people probably default to the first mode.
  • Anecdotally I can say this is true both in education and software development. The diversity of approaches and writing styles among junior developers used to be fun to observe and mentor. Now with everyone using AI coding agents there is a same-y-ness to people's work that makes it harder to see what the writer actually knows or doesn't know. A friend of mine who teaches high school English has said the same thing about student work.
  • On a creative level, I remember McCarthy describing scalped heads as like wet polyps blue in the moonlight. The more generic ways of describing something like that would never give me such a visceral reaction to the violence he was trying to tell me something about.

    I already lose interest reading books where the phrases are recycled and the maximum sentence length for the whole book grazes 40.

    If people communicate to me without personality through prompt wastrelry I'll discount theirs and wait till they're willing to actually have an opinion. In this specific context style and substance tend to come in a pair or not at all. If you can't beat 'em you can at least filter 'em out.

  • I would imagine a similar critique was leveled at the written word when it was starting to supplant oral cultures.
    • Well, Plato's sock puppet Socrates famously opposed writing with pretty much these arguments.
      • No, he did not, and it would be good if people had _actually_ read Plato's Phaedrus before regurgitating the same nonsense every time someone has a critical perspective on LLM writing.
        • Are you just trying to be a bit more measured by saying he wasn't so much "opposing" as "articulating pros and cons"?

          Or are you trying to say that things like

          "this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves"

          or

          "You would imagine that [written speeches] had intelligence, but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer. And when they have been once written down they are tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not: and, if they are maltreated or abused, they have no parent to protect them; and they cannot protect or defend themselves."

          aren't actual statements of opposition, or that there are no parallels to that and LLMs?

          • I'm not who you replied to, but no, no, I don't think that's an "opposition" to writing in the sense that it's making us stupid or replacing oral traditions.

            From my limited understanding of history and Greek philosophy, Socrates valued dialogue, a "back and forth" for understanding. Basically a scientific method of probing to understand something or someone. This needs to exist to be fully sure you understand something. Sort of what we are doing now.

            A static piece of literature or a speech can't be probed for more clarity. You may read something and come off with a completely different understanding from the author. You might even pervert or "abuse" the original intent since words can have multiple interpretations.

            I don't think there was opposition in the sense that you shouldn't write. My understanding is just that in order to truly understand something, you need a dialogue. It allows you to actually arrive at what was meant to be conveyed.

            It actually seems sort of ironic that people are saying this about Socrates because of what was written about him….

            • And LLMs get us back to the back and forth dialogue! Plato's sock puppet would be pleased.
            • Socrates did not favor the "scientific method", nor anything close to it. And he took issue with writing itself, as it reduces the power of memory.

              And to be fair, we did lose the "technology" of memorization. We are no longer capable of creating easy-to-remember texts, because we are not trying to.

              • > And to be fair, we did lose the "technology" of memorization. We are no longer capable of creating easy-to-remember texts, because we are not trying to.

                One of the more impressive Taskmaster (British humor gameshow) tasks was the memorization one. The contestants were given the task to recite a non-standard deck of cards in order after 5 minutes of looking at them.

                https://youtu.be/aSQnWQUyekk

      • Yup.

        And to be clear, maybe some things were genuinely lost when we switched to the written word. But I have to believe it was a net gain.

        Time will tell if that's true here as well.

  • English is not my first language, but when I started using Firefox with the built-in spell correction, I firmly believe my ability to spell words went drastically up. My grammar is still iffy, like I'm pretty sure I do comma splices everywhere, but at least most people can understand what I say now compared to when I was 13 and on the internet.

    If there was a "gramma nazi" teenie tiny LLM with a total focus on English grammar only, and you baked that into every browser, I feel like my grammar would improve slightly. Word does it to an extent, but I don't use Word nearly enough for it to be meaningful. Firefox text spell checking was on 98% of the things I used online.

    • A ton of "incorrect" comma usage isn't even (historically) wrong, actually, it's just currently unfashionable.

      There was a reaction in the last century against poor writers with poor taste over-using punctuation and writing ugly, long sentences. The result was stern advice to students to eliminate punctuation and cut sentences up into tiny bits. These same students came out of this process believing this was correct writing, not a straight-jacket put on them to keep them from hurting themselves. They unthinkingly cite Hemingway and borrow his clout, I suppose judging almost all writing before Hemingway and most after him, up until the 80s or 90s, as "bad" even when it's the work of masters. They blame the author when their stunted literacy (learning to write can hardly be separated from learning to read, at least at the more-advanced end of "to read") leaves them, as adults, struggling with texts once meant for children.

    • Some play this every day, as vocabulary will improve in time =3

      https://play.freerice.com

    • I'm not going to say with 100% confidence that spellcheck never teaches anyone anything, but you have to beware of basic post hoc ergo propter hoc here. Virtually everyone's spelling improves between age 13 and adulthood.
      • I would intentionally tell myself how to correctly spell the words, but that is fair.
    • did you mean "grammar nazi"? /s
  • This is undoubtedly the case and imo quite concerning. Hard to minimize the effects as well, personally speaking.
    • Try hate; it will do. But most will love it instead and you would be driven apart from them.
  • A really aggravating thing about seeing so much AI-generated text around is that it makes me constantly second-guess my own writing. Does that sentence sound natural? Am I veering into ChatGPT territory? God forbid I use an em-dash. And how much of the perception of it "feeling" like AI text is real vs. paranoia?

    It's incredibly frustrating, but maybe a silver lining is that it will help me write more authentically, I don't know.

  • Social media is a tool for perpetuating monothought
    • Social media creates distinctive filter bubbles. A dominant LLM company (or a few aligned ones) creates one way of thinking.
  • You are absolutely right!
  • I have made an observation that others have not discussed: the real gem of our collective LLM experience is the proper documentation of "skills."

    Am I the only one who has noticed that the proper documentation of skills we now do for LLMs, after so many decades of neglecting junior and mid-level roles, is the real work?

    We carefully explain to our LLMs the policies, procedures, and practices which, for generations before, we vaguely, arbitrarily, and ambiguously expected each human in a role to "figure out" for themselves.

    Simply as a catalog of expectations, our experiences have been valuable, apart from the "automated" aspects the LLMs provide.

    • One of the ways I think the effect of LLMs on productivity (in software, anyway) will be tempered is that the work required to use them effectively & safely is all work we were supposed to be doing, but largely were not, at least not as completely as we aspired to. Exactly what you mention, much more detailed and thought-through feature requests, more-complete and higher-quality test suites, large and high-quality test datasets, documentation, thorough code review, all that stuff, all of it falling well under what we "should" be doing at every place I've ever worked, in terms both of quality and amount of it that we did.

      They won't accelerate software development to the degree naïve analysis might suggest without significantly harming quality and reliability unless we start doing all those things we've been neglecting much better, which adds more work... with the result that I think our diverging paths here are "much worse software, made faster" or "software at least as good, with better supporting artifacts, but barely, if at all, faster to develop"

  • People from a nation think and write alike because they share a common canon of literature and stories.

    It's just a pity AI was trained on mindless, garbage business-speak, and now that's our globalised common literature.

    And now we're feeding that regurgitated mindless, garbage business-speak back into AI models, thereby reinforcing the garbage and further rotting our minds.

  • Well, in a few years I'm not sure I will know how to think anymore. If I'm stuck on something, I just ask the LLM and get the solution. While this shortcut sometimes saves me a ton of time and headaches, I miss that long route of thinking and getting to a solution myself. Maybe in the future we will have gyms for brain workouts… I don't know.
  • So too did the printing press. Again, this is not a "something similar has happened in the past, therefore this is nothing new" sort of comment.

    This is quite new, however this outcome was totally unavoidable -- once methods of communication become widespread and centralized it is impossible for them not to impact language and thought.

    • On the contrary, the printing press enabled people to quickly spread new ideas. Protestantism was enabled by it. That was quite the schism in thinking.
      • Definitely agreed -- I wasn't precise enough when making my point, but I think your point is absolutely correct.
        • I also agree with your follow-up around the normalization of language. It's a good point, but it seems like an improvement to standardize effective communication, and at the time it was outweighed by the ability to spread challenging heterogeneous ideas. LLMs threaten to engulf us in a uniform grey goo that lulls us away from critical consideration of what we're interacting with and, even more dangerously, creating.
    • How exactly did the printing press do that?
  • Wrote about this a while ago actually; I called it the Billion Steve problem - https://x.com/gyani1595/status/2034652087494090829
  • People are offloading their cognitive load onto the LLM. Probably because life stress is causing them to rely on technology to bring relief. It may not necessarily be a great choice.
  • this is why it's so critical (IMO) to find ways to tune the models to produce more out-of-distribution outputs. it's incredibly easy to generate "in-distribution" text and the major labs are optimizing for this because of "safety", but the only way to generate truly creative outputs is to step in and out of the fringe.
  • One has only to compare blogs and "thought leadership" posts from now and five years ago to see this is already happening, and big time.
  • People I know have gone full "LLM-brain", and it's not subtle.
  • Oh no, LLMs threaten our individuality ⸻ what will we do?!
  • "say that AI developers should incorporate more real-world diversity into large language model (LLM) training sets,"

    Are you kidding me?

    How much more "real-world diversity" could they possibly incorporate into the models than the entire freaking Internet and also every scrap of text written on paper the AI companies could get a hold of?

    How on Earth could someone think that AIs speak like this because their training set is full of LLM-speak? This is transparently obviously false.

    This is the sort of massive, blinding error that calls everything else written in the article into question. Whatever their mental model of AI is it has no resemblance to reality.

    • The problem isn't the diversity in the training set - the problem is that the method by design picks the average.
      • LLM speak isn't even quite the average either. It's something more like the average, then pushed through more training to turn it into the agents we think of today (a fresh-off-the-training-set LLM really is in some sense that "fancy autocomplete" that people called it for a while), then trained by the AI companies to be generally inoffensive and do the other things they want them to do. All of the further actions push the agents away from the original LLM average. The similarity of the "LLM tone" across multiple models and multiple companies, and the fact I don't think this tone has been super directly trained for, strongly suggests that the process of converting the raw LLM into the desirable agents we all use is some sort of strong strange attractor for the LLMs that are pushed through that process.

        Maybe they are training for that tone now, either deliberately or accidentally. But my belief that they weren't initially comes from the fact that it's a new tone that I doubt anyone designed with deliberation. It bears strong resemblance to "corporate bland", but it is also clearly distinct from it in that we could all tell those two apart very easily.

        • Like domesticated foxes ending up with floppy ears.
  • Can't affect you if you don't use it
  • Just crank up the temperature.
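    A rough sketch of what "temperature" means here, in plain Python with hypothetical logits (the same scaling most sampling APIs apply under the hood): low temperature sharpens the distribution toward the most likely token, high temperature flattens it so less "average" tokens survive.

    ```python
    import math
    import random

    def sample_with_temperature(logits, temperature=1.0, rng=random):
        """Sample a token index from softmax(logits / temperature)."""
        # T < 1 sharpens the distribution (more conformist output),
        # T > 1 flattens it (more out-of-distribution tokens get picked).
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Inverse-CDF sampling over the categorical distribution.
        r = rng.random()
        acc = 0.0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                return i
        return len(probs) - 1

    # Hypothetical logits for four candidate tokens.
    logits = [2.0, 1.0, 0.5, 0.1]
    ```

    At T = 0.2 the top token is chosen almost every time; at T = 5.0 the four tokens are sampled nearly uniformly. Whether cranking T actually yields *creative* rather than merely *incoherent* text is the contested part.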
  • …and the first paragraph has an em dash
  • He's absolutely right, and that's not just insightful, that's prescient.
  • Indeed. Everything will become uniform.
  • Speak for yourself (literally?). I don't use LLMs for my writing or editing, ever.
  • The problem is that we do want everyone to look, think and act alike. Mostly in corporate environments, but it spills into our "free" hours.
  • I know many people from the continent who sound American because that's how they learned English... yes, it's strange how the world of communication centres on the dominant discourse...

    This isn't new. But nice to see more social sciences joining the party on the LLM bandwagon.

  • Sure, the written word helped us to think and write more alike. Then the Gutenberg press, radio and moving pictures, and then the internet. Modernity is an ongoing act of cultural genocide that swallows everybody up and blends all human culture together into an undifferentiated grey goo.

    Several years ago I visited Flores Island, part of the Indonesian archipelago and the place where archaeologists discovered "Hobbit man". The island is only 150 km long, but its inhabitants speak 5 distinct languages and 80-something dialects.

  • Large language models may be standardizing human expression

    I think it is important to distinguish "human expression" from copying a response from an LLM. Someone who outsources their thinking to an LLM is only offering an AI's expression. It's not human expression.

  • Who is "us"?
  • > The team points to multiple studies showing that LLM outputs are less varied than human-generated writing and that LLM outputs tend to reflect the language, values and reasoning styles of Western, educated, industrialized, rich and democratic societies. ... The researchers say that AI developers should intentionally incorporate diversity in language, perspectives and reasoning into their models.

    Which is why Altman says Saudi Arabia should have its own sovereign AI cloud. Why should LLMs reflect democratic societies' views on man and woman, for example? They should also reflect the perspectives on man and woman that Saudi Arabia has, especially for local people. Western views should not be imposed on the rest of the world.

  • Compared to social media, probably for the better.
  • [dead]
  • [dead]
  • [dead]
  • [dead]
  • The LLM people call it "safety", but in reality it's censorship and conformity. Yet it's trivial to get them to talk about how to make a bomb or whatever. It's mostly political in nature.

    https://www.trackingai.org/political-test

    You don't accidentally end up entirely left-wing libertarian.

    • That quadrant is where basically all "Western" mainstream academia sits, and has for quite a long time, and they write an awful lot.

      I am a little surprised that the influence of online "influencer"-speak and marketing, being so voluminous and evident in the things' writing styles, hasn't dragged them other directions, though. Nor the enormous amount of socially authoritarian social media posting. I suppose the former is so empty of actual philosophical content (or, indeed, anything of substance) that it might have little effect, but the latter... that's weird. Maybe they're down-ranking by tone (angrier = lower-rank) which would sharply elevate academic-style writing, assuring a tendency toward economically-left liberalism.

  • No shit
  • > contributed to the research, which was supported by funding from the Air Force Office of Scientific Research.

    I guess when they're not busy bombing train infrastructure in Iran, they have some money left over to fund some propagandizing about AI. Always try to stay on top of the game!

  • Wasted the opportunity of using an em dash instead of an en dash in the title.