• > For those looking for a more discreet way to spend quality time, my sources tell me the single-occupancy office rooms on the top floor are sometimes left unlocked.

    This was an interesting little rundown on all the different libraries at Harvard made all the more enjoyable by the author's humor and wit.

  • ghc
    The writing style seems a little unnatural, but the odd grammatical error convinced me that it wasn't the result of someone asking an LLM to review the libraries and write the reviews in the voice of an intellectual who went to Harvard.

    What a world we live in, that suspecting an LLM guided by a specific prompt would be my first instinct.

    • The trope of HN comments determining whether an article was written by AI is becoming extremely tiresome.
      • I firmly believe that the quality of HN comments is made worse by people complaining about LLM generated content than by the LLM generated content itself.

        At least the LLMs are contributing to the discussion.

        • If people generally thought the LLMs were contributing anything of value, then the high volume of comments against them that you're describing wouldn't exist. Instead, LLMs are contributing bad content and also the downstream criticism on top of it.
          • I'm of two minds, honestly.

            On the one hand, I agree that LLMs, wherever perceptibly used, do nothing to aid legibility and much to hamper it. That is legitimately irritating.

            On the other, it isn't at all new, is it? How LLMs write best, or at least how they write most, is just an outgrowth of the same methylphenidate-and-Adderall style that's characterized online writing broadly construed since the days of the original Buzzfeed, which might as well have been called "Sloptrough" if we were using those words that way then.

            I would certainly like less of the first, as much as anyone. On the other hand, it's surprising to me at this late date to encounter people who read a lot online, and have not become accustomed to the second - that is, accustomed to filleting a longform article on sight, skimming and glancing back and forth to identify what thesis may be present if any, and only actually settling in to read sequentially in the uncommon case where something initially mistaken for "content" has proven to be worth that level of effort.

            It's surprising to me because I expect people to respect the value of their own interested attention, and not permit it to be idly wasted. Sometimes someone has something worthwhile to say, but not the skill to do a competent job of actually saying it, and so the reader is required to meet the writer considerably more than halfway. I described above what that process looks like in practice. It isn't really something I tried to learn, just something I began doing out of frustration with having my time wasted. (Is that unusual? A little while back someone here had to explain to me, with obviously strained patience, that most people are unlike me inasmuch as they experience pleasure directly from the effect of opiates, and not only from the sudden surcease of pain. That clarified for me why so many people get hooked so easily, but it also suggests I may not be the best judge of what's "normal" in these matters, I suppose.)

            In terms of difference in practice, LLM output is a little wordier, a little more of a slurry, sure - but on the other hand, precisely because the results tend to exhibit such a strong, "pattern language" form of stereotypy, I find it's actually often simpler to dissect a large quantity of LLM output for the sentence or two of actual thought underlying it than to do the same with something of similar length written by a human, whose paragraphs will almost never be instantly dismissible en bloc, the way most LLM-output paragraphs are.

            I suppose that last may sound distasteful, but consider: the paragraphs we're discussing, wherever they originate, are filler, and that's why we don't like their presence. These paragraphs have been filler since this was The Atlantic's unique house style back when that was still a real magazine, and these paragraphs were never going to be anything but filler, so whether they were excreted by a human or a robot has nothing to say about the artistic quality of what we've already agreed, indeed taken as axiomatic, is not art. It's styrofoam! It's packing material, which we were never going to care more about than the minimal effort required to throw it away. So why care all that much whether it's hand-blown or machine-extruded?

      • Silver lining - it's a fun Turing Test. But yeah, I absolutely agree with you. It derails entire conversations.
    • The author says he is a visiting researcher from ETH in Switzerland. That is, he is not a native English speaker.
      • Oh, that actually tracks. I should have checked his bio.
        • I dare say that he has access to at least some libraries that a random person can’t just breeze into.
    • Ok Deckard
    • > ...the odd grammatical error convinced me that it wasn't the result of someone asking an LLM...

      That's easily solved by models intentionally introducing the odd grammatical error here and there, just enough to convince the sceptics, not so many as to give the impression of being unlettered. A bit like the mythical 'RHS button' (which stands for 'real human shitty' but in reality is called the 'Shuffle' or 'Swing' function) which is supposed to make mechanically-precise drum machines sound more like human drummers.
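      The "Shuffle"/"Swing" idea mentioned above can be sketched in a few lines: delay the off-beat hits by a fraction of the grid and add a little random timing jitter to every hit. This is an illustrative toy in Python, not any real drum machine's firmware; the function name, parameters, and defaults are all made up for the example:

      ```python
      import random

      def humanize(hit_times, grid=0.25, swing=0.12, jitter_ms=8.0, seed=42):
          """Offset quantized drum-hit times (in seconds) so they sound less
          mechanically precise: off-beat grid steps are delayed by a 'swing'
          fraction of the grid spacing, and every hit gets a small amount of
          Gaussian timing jitter. All names/defaults here are hypothetical."""
          rng = random.Random(seed)
          out = []
          for t in hit_times:
              # Which grid step does this hit fall on?
              step = round(t / grid)
              offset = 0.0
              if step % 2 == 1:  # off-beat steps get pushed late ("swing")
                  offset += swing * grid
              # A touch of random sloppiness, like a human drummer's timing
              offset += rng.gauss(0.0, jitter_ms / 1000.0)
              out.append(t + offset)
          return out

      # Straight eighth notes, one every 0.25 s (i.e. 120 BPM)
      hits = [i * 0.25 for i in range(8)]
      print(humanize(hits))
      ```

      With `jitter_ms=0.0` the output is pure swing (only off-beats move); the jitter term is what would mimic the proposed "odd grammatical error" trick, small enough to pass for human, not so large as to sound broken.
      
      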

    • "We?" I had no such trouble. You should spend less time with LLMs, if you can.
      • So you're basically telling me to get off HN (and the internet), where lazy writing co-authored by LLMs is increasingly becoming the norm. Great.
        • I was telling you to protect yourself. Now you're told. Good luck with the rest.