- We're going to at least restrict Show HNs for a while.
I do think this is relevant though: "HN can't be immune from macro trends" - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
- Please do so. And, forgive me if I speak heresy, but there has to be more proof of work (friction) to create accounts. I was shocked at how easy it is for something like ChatGPT Atlas to create new accounts on the fly.
- The problem is that we might lose some gold.
More than once I've seen the author or a significant party to a story chime in through a fresh green account, alerted one way or another that the story had been posted here. And usually when they do, it's very interesting.
As such, I would find it detrimental if they had to jump through too many hoops: either they wouldn't bother, or it would take so long that the thread would die before they could participate.
- Indeed. Here is a recent litmus test: https://news.ycombinator.com/item?id=47051852. How can we filter the lightweight stuff while still benefiting from posts like these?
(a bit more about this at https://news.ycombinator.com/item?id=47056384, with a reply from the OP)
- One thing we did at reddit for a while was put posts from new people in "jail". They would show up in a special yellow box at the top of the home page for accounts that tended to be early upvoters of things that became successful later (our Nostradamuses, so to speak), and then if a post got enough upvotes from that group it got out of jail and was placed on the regular /new page.
So maybe some sort of filter like that? Only show it to those kinds of accounts at first?
The downside is that if that group isn't big enough you get a lot of groupthink, but if your sample is wide enough, it can be avoided. To be honest, I don't recall why we stopped doing it.
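A minimal sketch of how such a jail gate could work. All names and thresholds here are hypothetical illustrations, not reddit's actual implementation:

```python
# Hypothetical sketch of the "jail" flow described above.

def is_predictive_upvoter(user, history, min_hits=10, hit_rate=0.3):
    """A user counts as a 'Nostradamus' if enough of their early
    upvotes went to posts that later became successful."""
    early_votes = [p for p in history.get(user, []) if p["voted_early"]]
    if len(early_votes) < min_hits:
        return False
    hits = sum(1 for p in early_votes if p["became_popular"])
    return hits / len(early_votes) >= hit_rate

def review_jailed_post(post, jail_votes, release_threshold=5):
    """Release a post from jail once enough predictive upvoters approve."""
    approvals = sum(1 for v in jail_votes if v == "up")
    return "released" if approvals >= release_threshold else "jailed"
```

The interesting design choice is that only the predictive-upvoter cohort sees jailed posts at all, so ordinary users never encounter the noise.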
- Maybe have a signup flow where you can skip the new-account restriction by putting some file on the website of a currently trending link. And then the restriction is lifted temporarily for the thread linking to it?
- I have often heard that vote rigging is detectable on HN because the site software penalizes voting from accounts at the same IP address.
Rumor had it that there is also some kind of social-network metric detecting when socially adjacent accounts (or alts) are engaged in astroturfing, the practice where a small cabal tries to pass themselves off as a broader grassroots campaign.
Flip that around though and the same metrics might allow new accounts to be meaningfully vouched for by existing ones.
- I think vote rigging detection might be based on the length of your session
- You would need, say, a StackExchange-like crowdsourced moderation system whereby users with relatively high karma are randomly selected to check posts from new accounts, by casting votes to reject or keep.
- >How can we filter the lightweight stuff while still benefiting from posts like these?
Well, the simplest automated method would be to run the post and comment together through an LLM with a prompt that's roughly:
"Is this person claiming to be the author or co-creator of the work discussed in this submission?"
Only green accounts would be subject to it. I predict you'd have very low false positive and false negative rates.
It's of course a terribly slippery slope. My perhaps overly cynical take is that once the infra is in place, some of your bosses would be prone to eventually abusing it.
Personally I'm here for it: Dang, moderator turned whistleblower—on the run from dark VC money—in a race against time to save freedom. Still working on a title for the film.
- Just sharing observations; it may help, it may not…
What I'm seeing is new or sleeper accounts that have been idle for over a decade with low (<99) karma getting into comment circles. Over the last couple of weeks I'll see several top comments on articles with back-and-forth between other similar accounts… it's got to the point that I habitually check a user before I even bother reading… and I have never hidden so many comments before getting to something substantive in the comments…
Like many here, I don't wish to limit new users, but from my armchair perspective this does seem to be a pattern to be on the lookout for.
- This is interesting. Can you link to some of these?
I've noticed this kind of behavior on Reddit but never on HN.
- Interesting litmus test, as the post isn't just green, it's riddled with LLM copyediting. Doesn't read as if originally composed by an LLM, so there's that.
Would seem to require some discernment to classify. Not all assistive use is slop.
- I guess you're right; I didn't notice it, because the community reaction to the project was so positive.
> Not all assistive use is slop.
That's right, and the key is to discern which posts/projects are interesting.
- The discussion about the LLM-assisted/written submission at the time, with replies by the author: https://news.ycombinator.com/item?id=47055300. The defence given was essentially "just reformatted it for better grammar".
It obviously read as LLM to me on first read-through.
I suspect that:
a) fewer people are willing to expend a bit of energy to notice LLM usage, given how much of it there is. ("we've lost" theory)
b) that people are losing the ability to detect LLM submissions. ("we're cooked" theory)
or c) that people don't care about the use of LLM. ("who cares" theory).
Personally I've been feeling less invested, because it seems as if most users don't care and even the main users of the site don't notice it.
- Responding from a new account is different from posting from a new account. You aren’t vetting people by making accounts have a minimum age to post articles. That’ll just cause people to make accounts before they need them.
Reddit has forums where you need a minimum karma to post to certain subreddits and that is typically upvotes on your comments, but it could also be upvotes on someone else’s moderated subreddit.
- I think the right people will stick around. There is a certain kind of individual that has the patience to understand that a system that restricts new accounts from posting is a good thing. Recently, there have been a lot of posters who come here from the open web just to try and slant opinion.
- But sticking around doesn't solve the scenario mentioned by parent.
1. some interesting project gets to the HN main page
2. the author of the project is not on HN, so they create a green account and interact
Even if that person had the patience to stick around, by the time they were able to respond it would be too late to be relevant to the (now stale) discussion.
- This is one of the best things about HN. The sheer number of times someone has posted a link and the author or someone significant to the project deep within some megacorp makes a green account and starts answering questions that you never thought would get answered. Some of the most golden replies come from greenies.
- Yes, and we've always gone out of our way to protect those. It's perhaps the thing I hate the most about our software that sometimes it kills such posts.
- These are some of the best interactions we have here.
For sure a problem worth considering.
I can't think of anything easy...
The only even remotely sensible thought I have at present:
We add a check box to replies created by new accounts. Maybe created by all accounts?
The prompt reads something to the effect of: I am mentioned in the article. And then they get to say how.
"This is my project", "I am mentioned by name", etc.
Whatever it is they wrote, appears somehow, maybe as a required line or something.
Others can see that and either flag the account or vouch.
This at least somewhat distributes the required attention load.
That said, I don't like it. Have nothing better, so here it is!
- > even if that person would have the patience to stick around, by the time they would be able to respond, it would be too late for it to be relevant to the (now stale) discussion
This is a fundamental part of how HN sees its own functioning; they refer to it as "rate limiting".
- The SA Forums model does accomplish the goals of filtering out noise, but then you’re stuck with a stagnant community of “the right people.”
- Unironically slashdot's moderating and meta-moderating is the best long-term system I've seen.
Everything else seems to eventually cause new blood to dry up.
- I remember reading slashdot but what is their system? Is there a separate set of mods that moderate the moderators?
- You get points to mod other people and other people can meta-mod your posts.
- The key is that both were randomly assigned to users - you’d never know if you’d open a thread and be a moderator. If you posted in the thread you couldn’t moderate.
And about the same frequency you’d be assigned to metamoderate, basically being asked if a moderator’s “vote” was a good one or not (you didn’t have to fully agree you’d do the same, just that it wasn’t bad).
Someone who scored low in meta moderation would get less or no moderator chances.
- I am only that kind of individual when I'm inclined to post unconstructively, not that I know it at the time. When I'm feeling constructive, friction is likely to make me take my constructive energies elsewhere.
- Seems like restricting posts but not comments from a fresh account would thread that needle pretty well?
- I'd suggest: new accounts are read-only for at least a week. Then they can comment (rate limited at first, gradually relaxed) and vote, and then after some additional amount of time and/or karma they can submit a post. Maybe some of these mechanisms are already in place? Bots can probably game this too but drive-by bots maybe won't be patient enough.
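The schedule above could be encoded as something like this; the 30-day and karma thresholds are invented illustrative numbers, not a real HN policy:

```python
# Hypothetical graduated-privilege schedule for new accounts.

def privileges(account_age_days, karma):
    """Return the set of actions a hypothetical account may take."""
    if account_age_days < 7:
        return set()                      # read-only for the first week
    perms = {"comment", "vote"}           # rate-limited at first
    if account_age_days >= 30 or karma >= 50:
        perms.add("submit")               # posting unlocks later
    return perms
```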
- It seems easy enough to circumvent: "We're launching our product in 2 weeks, so let the AI create and 'warm up' 20 new HN users so they're ready to shill".
It's really not a problem that can be solved easily :(
- If someone is going to put that much effort into it, let them. I think the ideas here are to try to grab some low-hanging fruit and see if that works "good enough". You'll never block all AI-generated accounts, but you may not have to in order to get the desired effect.
But if someone wants to plant 20 new accounts, grow them out with karma votes, so that they can game the voting, there are probably other ways to detect that.
- The issue is that it’s not that much effort anymore.
We rely on friction for most of our social norms.
- Any amount of friction reduces the amount of slop. What proportion of clankers are going to realize that they need to warm up their accounts two weeks in advance? Answer: a proportion you're never going to see with that barrier in place.
With a few layers of defense, you'll weed out almost all of the bad actors. Without strong monetary incentives for spamming, you also avoid most persistent actors.
- With enough layers you will also weed out almost all of the good actors. Normal people are busy and don't have the time or patience to jump through too many hoops to promote their cool new research, or to respond in a thread where someone linked it.
- Reddit has more friction to sign up or post while new or low karma.
The main subreddits will basically shadowban you until your account is aged and has more than X karma.
- This is why I don’t create a Reddit account or post there: there are so many rules that dissuade new accounts. I don’t even bother to try.
- Reddit is fantastic, to me. It's worth the struggle to get past the initial bullshit.
There are a lot of flaws, though. Their appeal system is very broken, for instance.
- Which in itself is annoying, IMO. It creates a whole separate set of problems. You need karma, so people post in karma-farming subs to get a few crumbs. Then you get auto-banned from a dozen of the top subreddits preemptively for farming.
Reddit hasn't been as overrun by bots yet, for the most part, although how long they can hold out I don't know.
- maybe not overrun by spam, but the amount of bots I see on popular subs is definitely not 0
- You don’t have a choice.
We live with GenAI, and the human to bot ratio is now leaning in a different direction. The old norms are dead, because the old structures that held them up are gone.
This idea in this thread that "more hoops means losing participation" keeps assuming that the community is unaffected by the macro trends.
It's weirdly positing that HN posts and users are somehow immune to, or unaffected by, those trends.
- Immediate comment privileges are really important. Lots of examples, but to give a silly one, someone pastes their clipboard without realizing it includes their API key or their email. Good Samaritans should be able to say, "Hey, I just caught something."
And, as another commenter mentions, if someone shares your work, you should be able to comment on that thread without delay.
- This is the only reason I got myself a HN account: someone posted a link to a blog post of mine, and I happened to see the increased traffic on my VPS.
(And I stuck around after, a few posts are interesting enough. All the AI stuff isn't, and there is too much of that unfortunately.)
- You reminded me how infuriating it was not to be able to post comments on StackOverflow. Felt like getting those few upvotes required was taking forever, and all without ability to ask for clarification.
- Requiring accounts to be a certain age does not help and will only affect legitimate users. The slopsters will simply create accounts, wait a bit and start posting then.
Actually, cross that "will" out. They are already doing this to avoid the green smell. This account replied to me today: 4 months old, but it only started posting today. https://news.ycombinator.com/user?id=BelVisgarra
Oh damn, that's the one who posted the Ask HN about the verified job portal on the frontpage today. Either this is some shilling still in build-up, or it's an actual human being with severe LLM-slop impersonation derangement syndrome.
- Yikes. That account is like the epitome of LLM posting. It's a shame, too, because it makes me feel less inclined to read discussion on this forum.
- Yeah, unfortunately there are bots here that are much better at hiding it, and some even make language mistakes on purpose.
It's still a small minority of comments, but it's definitely becoming a problem, and just the chance — even if it's a small one — of talking to a bot rather than a human causes inhibition. Finding out that one has been talking to a bot is like finding out you've been scammed. You invest time and human emotion into something for another human to read, even if it's just a quick HN comment, only to find out that it was all for nothing. It sucks the humanity out of the exchange, and thereby out of oneself. You get tricked into spending your valuable, limited human social energy on soulless machines with an infinite capacity for generating worthless slop, instead of on other humans.
- didn't even bother not using an em dash...
- If most people are like me on this topic, then they use HN without an account until they want to post or comment something, and only then try to find out how to create an account. If they can't post or comment at that point, they will simply not create or retain the account.
I have been able to have discussions where one party holds significantly unpopular opinions. Such discussions are unique to HN; please don't kill them.
- [flagged]
- If that were to happen, I'd also suggest that comments from fresh accounts should also have URLs deleted or disabled.
- Even something like…
Example[.]com
But don’t worry, HN has been thoughtful about links from new accounts for months and months (can’t speak for longer, but maybe/probably). Effort could well be duplicative unless I’m unaware of some more granular detail.
- I'm surprised posts aren't restricted a bit more. Maybe that's just my old school "lurk moar" mentality, but I feel like I really need to understand the vibes of a community before I start to contribute posts to it.
- True
Mm, balancing against “long-time lurker, made an account just to post this”…
- Yeah, exactly. Thirteen years ago, I was a lurker. No account, because why would I make an account just to read? But when I wanted to say something badly enough, I made an account. (I think the first thing I did is post an Ask HN about functional programming, so "no posting for X time" might have turned me away.)
- This problem can be solved by an invite/vouch for system.
A new account can be invited or vouched for by an old account with good karma. If an account that you vouched for starts spamming and/or slop-posting, you lose your vouching abilities for a period of time, or forever.
- I didn't know anybody here before I joined. (I have been here for a few years, and I still don't know anybody here.) How would a person like me get invited or vouched?
- Totally.
I don't think the solution is changing the dynamic but flagging; this site self-moderates quite well, on top of dang and tomhow's great work.
- Yes that is exactly what I just did, some of us are just getting around to having time to post
- These changes aren’t being suggested in a vacuum.
It's perhaps unintentional, but your framing makes it seem that this is baseless whimsy.
At this point, it appears that we will be talking to bots more than humans.
It’s a brave new world, and not adapting to it will see the humans leave.
- Honest question, what are the alternatives to HN?
Because if new account restrictions create enough friction, you lose legitimate users who periodically rotate accounts for privacy reasons.
At some point the annoyance tips toward just lurking, and a forum where only old accounts talk is a stagnant forum given enough time.
- Lobste.rs comes to mind. High enough friction that, even as a seasoned participant here, I haven’t tried over there yet.
- That looks interesting, but I feel like it’s likely to be close to impossible to join. Feels like it would be weird asking someone you know for an invite.
- Wow, I just noticed that they block access from the Brave browser.
- What's up with Lobste.rs blocking the Brave browser? - https://news.ycombinator.com/item?id=42353473 (93 comments, and linking to https://lobste.rs/s/iopw1d/what_s_up_with_lobste_rs_blocking... which is about that, though if you browse with Brave you might have trouble with it)
- I've wanted to join lobste.rs for several years but don't see any way to do so. I think that might be a bit too far in the other direction.
- Same here, I don't know anyone who might send me an invite unfortunately. It's unlikely for this topic to come up organically in a conversation as in "hey by the way are you on lobste.rs" so my previous attempts were by sending messages in my company's notice board asking if someone is there. But in the last few years I have worked in smaller startups so the sample size is too small for this strategy to succeed.
- FWIW, folks on lobste.rs are (mostly) friendly and willing to extend invites if you seem like a real person. My understanding is that the invite system is primarily in use to avoid drive-by spammers and the like.
Feel free to send me an email (findable via my HN profile) mentioning that you found it via this thread, and I’m happy to extend an invite.
- I think we've gone from the eternal September to the eternal December
- Perhaps more proof of work is necessary, but it makes me sad.
I still remember creating my HN account. It stands out in my memory, because it was the smoothest, simplest, easiest, and quickest account creation of my life.
I had lurked here for around a decade before finally creating an account. Any urge to participate was thwarted by my resistance toward creating accounts (I just hate account creation for some reason). But HN's account creation process was a breath of fresh air. "You mean it can be this easy? Why isn't it this easy everywhere? If I had known how simple it was, I would have created an HN account years earlier, lol."
It was especially stunning to me, because I think the discourse on HN is generally of a higher quality than most other sites (which I wouldn't naturally associate with such an easy account creation process).
It's my only fond memory of account creation (along with maybe when I created an account on America-Online back in the 90s, since that was my first ever account and it was all so novel). Just a few quick seconds, and then I'm already commenting on HN. It was beautiful. I remain delighted.
- My intuition is increasing the difficulty of account creation favors motivated actors and disincentivizes organic participation because:
1. Ideological and/or economically motivated actors will just see it as a cost of doing business.
2. Ordinary sign-up friction is more likely to make HN appear ordinary to anyone who stumbles upon it.
3. Sign-up friction is a moat. The strength of HN is moderation of what gets in.
- I rotate accounts on "social media" (mostly Reddit and Hacker News; the others don't interest me) every few weeks or months to make sure not too much of my post history accumulates in one account. I would dislike it very much if there were high friction to create new accounts. On the other hand, my behavior is probably a major outlier.
- Same, though I'm also surprised how easy I can make new accounts for this site. But I love that. Hope it doesn't require me to jump through a bunch of hoops in the future.
- You are aware of the guidelines? (You are not fostering community)
> Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.
- Technically, every HN account is a throw-away account. ;-)
https://web.archive.org/web/20260228135203/https://www.brain...
https://www.azquotes.com/quote/351103
https://web.archive.org/web/20250713080832/https://www.usmcm...
- Thanks, I was not aware. They do seem to be guidelines, though, not rules. I find my privacy, and preventing anyone from building a full profile of me (especially given how easy that is now in the age of LLMs), a bit more important than the vague concept of "fostering community". I am sorry.
- Just like how HN itself can't be immune from macro trends, neither can its users, and macro trends have unfortunately made this a necessity for many of them.
- Your behavior is only an outlier because we don't teach kids basic security practices and so they don't grow up into adults who think like that. We also don't teach kids how to avoid "Internet addiction" dopamine chasing, so seeing a number (eg: karma score) get smaller instead of bigger hurts feefees.
I'm well aware that the cyberlibertarian ethos endemic here opposes any form of regulation. But when the status quo clearly isn't working, something has to change. Parents have failed to step up and do their jobs. Somebody else has to.
- I think the problem is that you can be tracked by your email when you sign up for a new account. So I am not sure how helpful this can be.
- On Reddit and Hacker News, I don't need an email address to sign up. But also I use SimpleLogin to have a separate email address per website/account. Quite necessary these days when personal data is leaked by some company or other every day.
- This matters when you're hiding from the website. It doesn't matter if you're just trying to hide such things from the public.
- It also matters if you're trying to hide from subpoenas to the website.
- My HN account has no email. Not sure whether it would still be possible for a new account.
- > not too much of my post history accumulates in one account
I'm curious to hear what benefits you think can be gained from avoiding this.
- You can build quite an extensive profile of someone given enough post history. More post history means more details, and nowadays with LLMs it's trivial. This can lead to all sorts of issues. One is people I know in real life being able to identify me. Another is that through various means my account may be linked to my personal identity (e.g. through matching usernames or emails across platforms), and oppressive regimes (now or in the future) may use my post history to take action against me.
- I do the same. It simply means there's less accidental leakage / self-doxing that could be pieced together if you (or an LLM) read every comment on the account.
Suggestion: pick a long-term account, dump the comments, and see what an LLM could figure out about the target.
- I do it sometimes just to restrict my own pride in the account. I get a buzz from upvotes and that upsets me on a deeper level.
- Same, but also for the opposite reason: a new account gives me a chance to do better. If I post lame comments, I accept the lameness of the posts attached to a particular user name and the hesitation I feel to post more lame comments decreases. With a fresh identity, I am more likely to avoid lame posting sort of like how you avoid going out in the mud in brand new sneakers. A sort of repentance; being born again in the digital realm.
- Honestly, it's probably good if platforms disincentivize this. If you know creating a new account is high friction, you are more likely to take care of the account you have, and be a higher quality member.
If you intend your accounts to be thrown away, you will likely behave worse.
*I'm using "you" generically, I don't mean you specifically.
- I think yours might be an extreme case. But I think the anonymity here is widely appreciated, and frankly it necessarily relies on easy creation of accounts.
People share things that they often wouldn’t. And somehow the culture remains mostly civil. It’s a pretty fantastic forum IMHO.
Changing the rules would surely change the vibe, so to speak.
- I appreciate the anonymity. Posting as throwaway is often useful to distance the poster from $work or $ex or other situations yet contribute to a conversation.
But will it continue under all the login id surveillance laws coming up?
- Reddit didn't ban you? I got banned for that lmao
- Reddit didn't (yet). Another tech focused community site did though... So I stopped participating in the community.
- Never got banned for it, though my "rotations" tend to be "a few weeks every year".
Even if they did ban me: the account was going to be deleted shortly regardless, so that fear isn't present for what's essentially a longer-lasting throwaway.
- I really don't like that a newbie gets zero trust. So some proof of work makes more sense to me than limiting new users.
- What would this proof of work scenario, instead of restricting new user content, look like?
- I was going to suggest emotional leetcode, but LLMs do well on this.
When given a conversation in which Alice and Suzy engage in one-upmanship (my husband is rich, my kid is a genius) and asked what emotions they are feeling and what Suzy could have said instead to improve the conversation, an LLM gave accurate responses (e.g. they're feeling insecure, competitive, envious).
- That type of question could also turn people off. We already have too many discussions where people are quick to jump to conclusions and attribute intent, rather than asking basic questions.
- But is there a connection between the front page being full of "AI" slop and "AI" worship and these new accounts? Or are the old timers also upvoting those submissions to the detriment of other, more interesting topics?
- Wow! I might be witnessing the end of HN
- I echo this sentiment for all social media platforms today...
At least new accounts are more obvious here. This pattern has been increasingly used for scams, spam and AI slop on Instagram, X and Facebook for years.
- Seems to be a general problem, right?
The standard solution is requiring an email to register an account, maybe a Cloudflare captcha, and then using good network logging to group accounts by IP and chain-banning abusive accounts when they are caught by other mechanisms.
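The IP-grouping step could be sketched roughly like this; account names and event shapes are made up for illustration, and a real system would obviously weigh shared-IP evidence more carefully (NAT, public Wi-Fi, etc.):

```python
# Sketch of grouping accounts by login IP so a ban on one caught
# account can cascade to the accounts sharing its IPs.
from collections import defaultdict

def group_by_ip(login_events):
    """login_events: iterable of (account, ip) pairs."""
    by_ip = defaultdict(set)
    for account, ip in login_events:
        by_ip[ip].add(account)
    return by_ip

def chainban(caught_account, login_events):
    """Ban every account sharing an IP with the caught one."""
    by_ip = group_by_ip(login_events)
    banned = {caught_account}
    for accounts in by_ip.values():
        if caught_account in accounts:
            banned |= accounts
    return banned
```

Note this is only a one-hop cascade; it deliberately doesn't chase transitive clusters.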
- [dead]
- Agree, HN can't be immune to what happens in the programming world. It would be great, though, if we had a way to mute or hide accounts. That way each HN user would be able to clean their own feed of articles.
- That works for me so long as it's not the main solution. I personally don't want to curate; I'd rather just partake in a sanely moderated forum. That's my understanding of what HN has been; it's just facing a new challenge with AI spam.
- [dead]
- A site can't easily be immune to macro trends in authentic discussion, but it can be significantly immune to inauthentic uses.
- I was thinking of setting up a system to highlight sock-puppeters and other consistent-rule-violating accounts, as a 'fun project' that might improve the HN experience. But it strikes me that the HN staff probably already does something like this, they may not welcome a side-loaded project of this sort, and it would require some automated crawling of HN (which again may be unwelcomed). Finally, I don't actually have experience in this area. Is this something that would be welcomed, or unwanted?
My initial thought is to set up a devoted account like "sock_puppet_detector", and using the infrastructure from https://hackersmacker.org/, add any likely sock-puppets as 'foes'.
- It'd be pretty easy to spot too, because most people don't even bother trying to hide it (out of laziness, ineptitude, or both).
A lot of users don't seem to realize that anyone can click on the domain in a "Show HN", and Hacker News will show you all the times that domain has been submitted. So you'll see four or five different low-karma sock-puppet accounts that have all submitted the same site.
- Oooooh that’s a great idea!
- I'm wary about new accounts such as yours wanting to censor and shape discourse by antagonizing people who hold diverse views that differ from your own here.
The HN culture has shifted drastically over the past 5 years.
- "New account". Meanwhile, the account is 4.5 years old with 2600 karma and has hundreds of thoughtful comments.
- To be clear, I wouldn't filter people just because they have different views than me (the goal is to automate the detection, to avoid the effort of reading all the comments -- I should mostly not be in the loop). But I have come across accounts that openly admit to being sock-puppets (eg https://news.ycombinator.com/item?id=47242156). These sorts of accounts I would highlight.
Likewise for guideline-abusers. I don't really know what heuristic you would use to detect rules abuse, but I imagine there are at least some clear violations that could be detected.
Finally, I think I'd make one account for sock-puppets, another for guidelines-abusers, etc, so people can 'subscribe' to whatever degree of 'highlighting' that they want.
- user: pinkmuffinere
created: August 8, 2021
karma: 2686
- This is excellent. lol
- That's sad; there have been some really neat things shared that way, but you gotta do what ya gotta do.
- Why not let users choose in settings, like "showdead"?
- For all accounts or just new ones?
- Just new ones for now.
I don't want to make HN harder for legit new users, but I do think a bit of community participation is reasonable before posting a Show HN, so it isn't just a box on some "how to promote your project" checklist.
- It's really hurting the brand. I can't remember the last time I bothered to even check that index. I used to check it all the time.
- /newest is pretty grim, too. Go there and click any link, and odds are you won't even need to read the contents to know it's AI generated, because you'll immediately be met by one of:
- A landing page that looks exactly like every single AI generated landing page ever, I don't even need to describe it, you already know what it looks like
- An article or blog post headered by an image with the Gemini logo in the corner
- A Github repository with CLAUDE.md or AGENTS.md and/or 50 large commits made in the span of a day
I'd estimate that more than half of new submissions now fall into one of the above categories.
- There's almost no shot at getting hand-authored posts any views (I tried with one of mine recently). I felt like I submitted it, and a moment later there were like 20 new, very obviously AI-generated posts ahead of it.
- I recently had the same experience with a Show HN thread I posted.
- How new is new?
- It does seem, anecdotally, that the Show HN is being used less since the recent analytic posts that made it to the front page.
- Minimum karma perhaps?
It's easy for people to game but it's at least one more effort-based hurdle.
- Here's an idea: allow downvotes for green posts with published guidelines on when downvoting is and is not appropriate. We can collectively filter out the pure spam efficiently to make it less worthwhile to post.
- I welcome this. Lots of AI slop has been thrown on to this site and the drawbridge needs to be eventually raised a little.
Can't allow low-quality posting from new accounts here but thank you for listening to the concerns.
- [dead]
- [flagged]
- Reddit has tried this approach and, IMO, it's failed.
A new human user will spend actual time creating a thoughtful and helpful post, only to be greeted by "sorry, your post has been removed by automod because you don't meet criteria". They get disheartened and walk away forever.
The spammers, on the other hand, know how the rules work and so will just build their bots to work around this (waiting 30 days, farming karma).
The net result is that these rules ensure that a much greater proportion of new accounts come from bad actors - who else would jump through hoops just to participate on a web forum?
- It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way. Hacker News has three advantages. First, it is moderated by the same people who build the tooling, so the incentives are aligned. Second, it is an enormous source of soft power for a venture capital firm with the resources, incentives, and likely the competence and capacity to keep it running smoothly. Third, the scale is smaller and is not tied to hardline revenue constraints like CPM, user LTV and DAU-maximization which restrict what Reddit can do.
- > It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way.
Not to mention reddit mass removed experienced moderators when all the moderators had a protest about reddit removing their access to good third party tooling.
That's the day the site started its death spiral.
- I quit moderating because it was destroying my mental health.
Getting called a fascist and rehashing how “no, your libertarian politics are fine, but can you please just start your own sub” in a long, drawn out, hateful back and forth gets exhausting after the 200th person who comes to the bicycling subreddit and feels they should be allowed to endorse harming cyclists with their vehicles.
Everyone got mad at spez for having the audacity to fuck with these kids, and there is a point there, but after living with it, I could see myself doing the same damn thing.
- Moderating Reddit subs can be a huge money maker. I know people making $100K/year from it. There are cabals, especially in the adult sections. Reddit has tried to address this recently by limiting the number of subs a person can moderate, but that just causes these big accounts to create more user accounts and split all their subs up that way.
- I must be old and naive but you can make money with subreddits?
- Plenty of subs blatantly allow certain brands to advertise while banning anyone else. Kind of amazed Reddit themselves haven’t put more effort into to stopping it since it kinda sidesteps their in house advertising.
- At scale they will. For now, someone else puts the effort into growth marketing, eyeball capture. Reddit eventually changes the rules, seizing control, thereby acquiring users for less human cost (as opposed to missed revenue opportunity).
- Corruption
- > It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way.
And on top of that, some of said "volunteers" are power-hungry, petty, useless fucking morons. Especially the large subreddits tend to be run by people I wouldn't trust to boil some pasta without triggering a fire alert, and yes I know people who manage that.
- It’s worse than that. On r/news they shadow ban anybody who doesn’t have verified email. No message or anything. Just nobody sees your comments. I probably made 20 or more comments there over a few months before I figured it out. It felt humiliating.
- It's even worse than that. They preemptively ban you outright on lots of major subs for posting on other subs. For instance, I can't interact with r/pics because I once commented on r/redditachievements. And a housemate once upvoted a pic on there which got us both banned for a week because Reddit thought I was trying to do a run-around on the ban.
I still love Reddit for all its flaws though.
- There needs to be a distinction between creating a post and replying.
IMO New accounts should be restricted from creating new posts, or at least certain kinds of new posts.
Replying shouldn't be restricted. That is how users interact with each other and learn the etiquette of HN.
- I agree. I faced this in the psychology subreddit and was forced to quit. They wanted karma to post comments, but without posting comments, how am I supposed to get karma within that community?
- Literally me on a DIY sub. I needed some advice, got auto removed, never went back.
- Same. Not DIY, but my first post was rejected and I was banned. LOL. I guess that is moderation in action!
- "excessive moderation" is a fun concept to think about.
- > waiting 30days, farming karma
If "farming karma" is a thing, maybe that forum deserves what is coming. Either the karma mechanic is inappropriate given the demographic, or it is too hard for the users to avoid upvoting bots.
- 100%. Not sure what the solution is but I have lost interest in Show HNs these days. Part of it is because when someone posted before, it usually meant they spent a fair amount of time thinking, and found it worthwhile to spend energy on the project. This was a nice first filter for bad ideas and now no longer exists.
Even for posts that are interesting to me, I get the feeling that it's not worth looking at because it was probably made using LLMs. Nothing against them, but I personally thought of Show HNs as doing something for the love of it, the end result being a bonus.
- I certainly hope they do something.
I'm not opposed to AI automating away stuff no one liked doing, or even more utilitarian things in general, but robots posting on social media and discussion sites seems antithetical. I don't know what the point of talking to a robot would be when I could talk to Claude if I wanted to do that.
I'm not even 100% sure why people are doing Show HN for low-effort stuff that was done in 45 minutes in Claude. I guess it's trying to resume-pad or build a brand or something?
- > I'm not even 100% sure why people are doing Show HN for low-effort stuff that was done in 45 minutes in Claude. I guess it's trying to resume-pad or build a brand or something?
Github star farming, SEO, etc
- Someone telling you about their AI created project is like someone telling you their dream they had last night.
- I'm not sure LLM projects mean they weren't made with love. LLMs just make programming accessible to more people; essentially, it's still just a tool.
It does take the handcraft out of it, in that sense an LLM-made tool would be more akin to IKEA stuff compared to a handcrafted work of art (though I struggle to call even hand-made electron crap a work of art, lol).
But yeah I know what you mean, they are usually half-finished solutions.
- [flagged]
- Why do you keep posting here? Asking seriously. You open a new account, immediately get it banned, then move on to the next. Doesn’t that get boring?
- For your first ever comment, you are breaking multiple rules.
Please review the Guidelines and FAQ
- [dead]
- Some feedback and suggestions, in a somewhat rambling fashion:
I'm using a new account and will likely use one forever, as I don't want lots of posts linked together, nor do I care about points or karma or whatever it's called. My first few comments are always shadowbanned. I also see lots of dead posts for new accounts with "showdead" turned on. A lot of them are normal, useful comments, some are inflammatory or just plain stupid. I haven't seen many comments that seem to be AI generated. Maybe they are and I just don't see it, idk.
Anyway, if a comment passes some basic filter (doesn't post shady links or talk about VIAGRA or 11 INCH PENIS or something spammy), I hope they still show up, even as "dead". On this account I copied 1 dead comment to give it more visibility and I've done it before a few times, too. The comment is still dead, btw (id 47262467). And maybe instead of (shadow)banning new users/posts, just make a separate view for old/established account and another one for all posters.
I would also be glad if I could solve some CPU- or RAM-intensive task as PoW. If I really had to, I'd pay with Monero or something similar, as long as it's an anonymous currency with low fees so a payment equivalent to 25 cents wouldn't incur a big fee. I wouldn't pay more per account (especially when I rotate them), as I've been a lurker for years and only recently started posting, anyway (so I don't care that much if I can post).
Finally, thanks for letting us sign up over Tor. :)
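The CPU-intensive proof-of-work signup mentioned above is essentially hashcash: the server issues a random challenge, the client burns CPU finding a nonce, and verification costs a single hash. A minimal sketch, where the challenge format and difficulty are illustrative assumptions (not anything HN actually uses):

```python
import hashlib
import itertools
import os

def make_challenge(difficulty_bits: int = 20) -> tuple[str, int]:
    """Server side: issue a random challenge plus the required difficulty."""
    return os.urandom(8).hex(), difficulty_bits

def solve(challenge: str, difficulty_bits: int) -> int:
    """Client side: burn CPU until some nonce hashes below the target."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, difficulty_bits: int, nonce: int) -> bool:
    """Server side: a single hash to check, however long the client worked."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Raising `difficulty_bits` by one doubles the client's expected work while the server-side check stays constant-time, which is what makes this shape attractive for rate-limiting account creation.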
- I furthermore wish that "posting an LLM-generated comment (and passing it off as your own)" was worthy of an instant ban, because I see this sort of behavior from non-green accounts as well.
EDIT: I meant (but totally forgot) to qualify that my "proposal" would only apply when the LLM-ness is self-obvious—idk, make up a "reasonable person" standard or something. Presumably, the moderators would err on the side of letting things slide. Even so, many comments I've seen are simply impossible for any reasonable person to claim as "human-written"—the default ChatGPT style is simply too distinct.
- > I furthermore wish that "posting an LLM-generated comment (and passing it off as your own)" was worthy of an instant ban
It pretty much is. It’s not hard and fast (sometimes we’ll warn people or email them to ask if it’s not certain) and it takes time for us to see things and act, especially when people don’t email us when they see these comments.
But as a general rule, accounts that post generated comments get banned.
- I think your comment was generated by an LLM and hereby vote for your immediate and permanent instant ban.
- I think that your comment was generated by Eliza, and hereby vote for you to get a karma boost for being Legit Old School, then an immediate and permanent instant ban.
I'm joking, of course. If your comment was generated by Eliza it would have started with "How do you feel about 'I think your comment...'" :)
- Joke's on you, all of my comments have been written by Dr. Sbaitso[0] since forever =)
- Can you elaborate on that?
- Eliza was one of the first chatbots from the mid to late 60s: https://en.wikipedia.org/wiki/ELIZA
- that's interesting, tell me more about one of the first chatbots from the mid to late 60s
- BTW, what ELIZA implementation are y'all using? The Emacs Doctor?
- What would it mean to you if we were all using the Emacs Doctor?
- Emacs? Hah! I would appreciate it if you would continue.
- Whooosh! I think you missed the joke. :-)
(I didn't, and I thank everyone involved for the nostalgic moment. Also, shout out to Dr. Sbaitso!)
- (I think you missed the joke.)
- Many HNers strongly argue that it's absolutely impossible to distinguish between AI text and non-AI text. Some of it seems to be a knee-jerk reaction to some of the occasional, one-sided stories of people who were accused of using LLMs and fired from their jobs. And some of it seems to be just hedging so that we don't develop a culture that could penalize their LLM-generated posts or code.
We had people defending the fired Ars Technica guy, even though he admitted to using an LLM in some sort of a contrived non-apology along the lines of "I did it because I had a cold".
My main problem with that is that you can just generate an infinite supply of LLM op-eds about LLMs, and is this really what we want to read every day? If I want to know what ChatGPT thinks about the risks or benefits of vibecoding, I'll just ask it.
- Sure, it's obviously impossible to ID any single piece of writing as from an LLM without significant false positives.
But in practice, I frequently encounter a comment that either screams generic LLM slop or just gives off a vague, indefinable "vibe" due to one or more telltale signs, so that's red flag #1. Then I go to the comment history; if it's really a bot/claw/agent or a poster heavily using LLMs, I'll usually find page after page of cookie-cutter repeats of the exact same "LLM smell" (even if that account has been prompted to avoid em-dashes/lists/etc, they still trend toward repetition of their own style).
At that point a human moderator would have more than enough evidence to ban an account. It's not like we're talking about a death sentence or something. If no clear pattern of abuse from the long term commenting activity, then give them the benefit of the doubt and move on.
- Hmm, some LLM text is hard to detect, sure.
Some is also horribly easy. If the text is full of:
- Overly positive commentary and encouragement
- Constant use of bullet point lists, bolding and emoji
- This quaint forced 'funniness', like a misplaced attempt at being lighthearted
- A lot of blablah that just missed the point
- Not concise and to the point, but also not super long
Then that really screams ChatGPT to me.
I think it's because this seems to be the default styling of ChatGPT. When people tailor their prompt to be more specific about style it's a lot harder to detect but if they just dump a few lines of instructions about the content into it, this is what you'll get. So the low-effort slop is still pretty easy to detect IMO.
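The telltale signs listed above could, in principle, be turned into a crude score. A sketch of such a heuristic is below; the patterns and weights are made-up illustrations, not a validated classifier, and as the rest of the thread points out, anything like this would misfire on plenty of human writing:

```python
import re

# Hypothetical patterns and weights for the "default ChatGPT styling" signs
# described above. These are illustrative assumptions only.
SIGNALS = {
    "bullet_lists": (re.compile(r"^\s*[-*•] ", re.MULTILINE), 1.0),
    "bold_markup": (re.compile(r"\*\*[^*]+\*\*"), 1.0),
    "emoji": (re.compile("[\U0001F300-\U0001FAFF]"), 1.5),
    "not_x_but_y": (re.compile(r"\bnot just\b.*\bbut\b", re.IGNORECASE), 0.5),
    "hype_words": (re.compile(r"\b(delve|tapestry|game.changer|elevate)\b",
                              re.IGNORECASE), 1.0),
}

def slop_score(text: str) -> float:
    """Sum the weights of all matching signals; higher = more slop-like."""
    return sum(w for pattern, w in SIGNALS.values() if pattern.search(text))
```

Such a score could only ever be one weak input to a human moderator, never grounds for an automatic ban, for exactly the false-positive reasons discussed downthread.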
- > This quaint forced 'funniness', like a misplaced attempt at being lighthearted
HN always downvotes attempts at humour, be they chatbot- or brain-generated :)
- > Many HNers strongly argue that it's absolutely impossible to distinguish between AI text and non-AI text.
And it's becoming more and more difficult - not just by AI getting "better" (and training removing many of the telltale signs), but also because regular people "learn" to write like an AI does. We're seeing it with "algospeak" - young terminally online people literally say stuff like "unalived" in the meatspace nowadays.
We're living in a 1984 LARP.
- The moderators are supposed to just know it when they see it? It's that black and white to you? Or are lots of false positives a price we have to pay?
- Yeah it's weird, there was one case where I thought it was AI but wasn't sure. Several other comments pointed it out, too. Author claimed he wrote it manually. (Which is honestly even more concerning!)
Maybe there can be a dedicated 'flag botspam' button?
Then again it's a nuanced issue. I see AI used in a large percentage of writing now, so would this rule apply to the article as well?
- > Maybe there can be a dedicated 'flag botspam' button?
We already have flagging and downvoting?
- Abusing the flag button by reporting LLM generated posts and comments (which are not breaking any current guidelines) seems like a good way to get your flags ignored.
- Flagging isn’t only in case of breaking the guidelines. From the FAQ:
What does [flagged] mean?
Users flagged the post as breaking the guidelines or otherwise not belonging on HN.
In other words, submissions get flagged that users believe don’t belong on HN. LLM-written submissions can be one such case.
- "Not belonging on HN" is an open invitation to flag anything someone disagrees with. Many posts are flagged simply because they express an unpopular opinion.
Community moderation won't fix this problem. It can only be mitigated if the site owners invest significant resources in addressing it. And judging by how little YC actually invests in HN, I wouldn't hold my breath. This website will succumb to this problem just like most others.
- https://news.ycombinator.com/item?id=47290841
It is against the rules though
- I would be worried the reason for the flag wasn't _immediately_ obvious. Maybe if there was a drop-down for the rule being violated it would help.
- What a bizarre way to run a community. The guidelines make no mention of this "rule," does dang not have the ability to edit them?
- https://news.ycombinator.com/item?id=47261561 seems like a better source for the policy.
- > Yeah it's weird, there was one case where I thought it was AI but wasn't sure. Several other comments pointed it out, too. Author claimed he wrote it manually. (Which is honestly even more concerning!)
I find the above comment concerning, so I ask: to what degree is the above commenter calibrated to ground truth? How would they know? How would we know?
[1]: https://en.wikipedia.org/wiki/Calibrated_probability_assessm...
It seems to me comments like the above are overconfident in the worst ways.
He was using a dozen obvious ChatGPT-isms. So either he was lying about writing it manually (the comforting option), or he actually writes like that, which is what I meant by it being concerning.
But yeah, there isn't a way to prove it one way or the other, even when it's "obvious".
I saw in some schools they're using systems where you have to type the essay in a web app, and the web app analyzes your keystrokes to determine if you're human.
- It’s only going to get harder has people continue to model their writing on LLM style.
- You're absolutely right.
- I laughed so hard. It has been a long time. Thanks!
- I guess it's been fun but the internet is well and truly dead
If not already, then soon
- Something we need to remember is that AI was trained on every public internet comment, the vast majority of which are legit terrible. The biggest tell that someone is using AI is having multiple paragraphs saying the same point over and over again. Even trolls are more succinct.
- Huh, this is what specifically drove me to complain about LLM-generated tickets at work - multiple paragraphs rewording and emphasizing the same point, all of which was topically relevant, but not necessary.
(i.e. it was obvious in the first place, think along the lines of a ticket about a screen loading slowly, and then multiple paragraphs explaining the benefits of faster-loading screens.)
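That "same point over and over" tell lends itself to a rough mechanical check: split the text into sentences and measure how many pairs share most of their vocabulary. A minimal sketch, where the 0.6 overlap threshold is an arbitrary assumption:

```python
import re
from itertools import combinations

def redundancy(text: str) -> float:
    """Fraction of sentence pairs that share most of their (lowercased) words."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    word_sets = [set(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    pairs = list(combinations(word_sets, 2))
    if not pairs:
        return 0.0

    def overlap(a: set, b: set) -> float:
        # Share of the smaller sentence's vocabulary that also appears
        # in the other sentence.
        return len(a & b) / min(len(a), len(b)) if a and b else 0.0

    redundant = sum(1 for a, b in pairs if overlap(a, b) > 0.6)
    return redundant / len(pairs)
```

A ticket that restates "screens should load faster" in every paragraph scores near 1.0, while varied prose scores near 0.0; naturally, some human rambling would trip this too.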
- Dammit, am I going to get banned for rambling?
- In some fraction of cases, it's really obvious.
I would argue that those cases are really the ones that cause an LLM-specific harm, i.e., which make people feel like they aren't exclusively among fellow humans.
If someone posts something that doesn't clearly read LLM-ish, but is otherwise terrible, it's not really different from if the same terrible thing had been written by hand.
I don't think anyone who objects to LLM comments is really demanding a super-low false negative rate. Just get rid of the zero-effort stuff. For example, recently I've seen a lot of comments from new accounts that are just sycophantic towards TFA and try to highlight / summarize a specific idea or two, but don't really demonstrate any original thought (just, like, basic reading comprehension and an ability to express agreement). And they'll take a paragraph to do so, where a human with the same level of interest in the material might just say "good post" (granted, there's an argument to be made for excluding that, too).
- Sorry, updated my original comment—I meant to qualify it to only those cases where it's blatantly obvious. Obviously a lot of ambiguous comments will slip through as a result, but I agree with you that false negatives are better than false positives.
- Can you show an example of "blatantly obvious"?
- https://news.ycombinator.com/threads?id=naomi_kynes
https://news.ycombinator.com/threads?id=aplomb1026
- Oof. Some of those seemed reasonable at first. Ex: CloakHQ's comment on Compaq/DEC...
....until you start scrolling down the page and it becomes screamingly obvious that everything it says comes from the same template.
Maybe the problem isn't just that AI produces gobs of useless crap. Maybe what's worse is that it can produce even more mediocre crap that crowds out the good?
All oatmeal, no steak, leads to "starvation" by poor nutrition.
- Your comments use em dashes. Many would claim those are vastly overrepresented in AI language and thus an account overly using them is blatantly AI.
I don't think your account is AI just by these few comments, but I would like to point out that most rubrics one might use to determine what is obviously AI might end up including the way you talk.
If there was a truly accurate tell, some algorithm you could feed a few sentences in and it could tell you "yep, this is 100% AI", then yeah sure use that. I don't know you could realistically build that machine, especially when it comes to the generation of text.
- For what it's worth, there are modern LLM detectors with extremely low false-positive rates. The tech has advanced quite a bit since the ZeroGPT days. Personally I've gotten very good results from Pangram Labs. Still can't directly ban people though because false positives are always possible.
- Are they great at detecting normal prompts that don't try to make the LLM speak non-LLM-ishly? If you make the LLM not use em dashes, "it's not; it's" phrases and similar things, and if you make it make a few mistakes here and there, would it still be detected? My point is that if people aren't trying to hide their LLM use, it might work, otherwise it probably wouldn't. How would a detector tool work against output where the prompt tells the LLM to alter the way it writes? Or if the LLM output is being modified by another LLM specifically designed to mimic certain styles?
Like, why would my comment (or yours, or any other comment) pass or fail the LLM check if I/you/someone else used specific prompts or another LLM to edit the output? It seems like these tools would work on 99.9% of the outputs, but those outputs likely weren't created in an adversarial way.
- Is that false-positive rate from your own testing, or the author's claims? What is the source of ground truth?
- I will never, ever forgive these techbros for ruining emdashes. I will also never stop using them -- they are a permanent part of my writing style -- no matter the personal consequences.
- > Your comments use em dashes. Many would claim those are vastly overrepresented in AI language and thus an account overly using them is blatantly AI.
I've always found this funny. Doesn't macOS' default text substitution enable (annoying to me) things like em-dash, smart quotes, etc?
- Can use AI to detect that
- People accuse everything of being LLM generated these days. That'd be a tough rule to enforce.
- Do this with submissions, too. Or at least put some indicator that it's AI generated.
- I am more annoyed by the anti-AI luddites filling the comments with low value complaints than I am by quality content written partially by an LLM.
Those low value complaints add nothing to the conversation, and the content didn't make it to the front page because it was bad. If the sole objection is "AI bad", keep it to yourself... it's boring.
- In every single article's comments now, there's always someone coming out of the woodwork to post "This article is written by LLM." These comments are about as useless as "The website's color scheme is annoying" and "The website breaks the [back button | scrollbar]." (which, by the way, are not allowed per the HN guidelines[1])
If anything should be banned, it's low-effort "This is AI" commentary. It adds absolute zero to the conversation.
1: https://news.ycombinator.com/newsguidelines.html
I'd argue that whether or not the article (or reply) was written by AI is a tangential annoyance at this point:
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
- I have commented once or twice on articles being AI generated. I don't post them when I think the writer used AI to clean up some text. I add them when there are paragraphs of meaningless or incorrect content.
Formats, name collisions or back-button breakage are tangential to the content of the article. Being AI generated isn't. And it does add to the overall HN conversation by making it easier to focus on meaningful content and not AI generated text.
Basically, if the writer didn't do a good job checking and understanding the content we shouldn't bother to either.
- I very much agree.
The number of comments I see complaining about "it's not this, it's that" and other "LLMisms" definitely frustrates me more than the original content.
- It's much more than a "tangential annoyance" and it adds a lot to the conversation--among other things, it establishes a norm that AI-generated blogspam is, well, spam and unwelcome.
Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication. Farming any of these steps out to an LLM completely breaks down the social contract involved in participating in an online forum like this. What's the point?
It's the exact same effect that's playing out in many other areas where LLMs are encroaching: bypassing the "human effort" step has negative side effects that people who are only looking at the output are ignoring.
I actually find your opinion so infuriating that it's taking all my composure to not reply with something nastier. If you guys want to spend your time reading shitty LLM spam posts with shitty LLM comments, why don't you find another site to do it on instead of destroying this one.
- To give a heads up to others who feel similarly about whether something is worth spending time on: there isn't a problem with speculating that something was produced by AI if there are indicators of insufficient human authorship, but that's a big if. If incorrect, such comments themselves become noise.
In its worst form, I've now seen many times in other communities users claiming submissions are AI for things that provably are not, merely to dismiss points of view the poster disagrees with by invoking calls to action from knee-jerk voters who have a disdain for generative AI. I've also seen it from users who, I expect, feel intimidated by artwork from established traditional artists.
Thankfully on HN it hasn't reached that level, but I have seen some here, for instance, still treat em dashes with no surrounding spaces as definitive proof by pointing to a style guide, without realizing that other established style guides have always said to omit the spaces (eg: Chicago Manual of Style). This just leads to falsely confident assessments and more unnecessary comment chains responding to them.
What one hopes for with curated communities is that people have discriminating taste at the submission and voting level. In my own case, I'm looking for an experience curated by those who have seen a lot of things, only find particular things compelling, and are eager to share them. Compare that to a submission of, say, popular programming language docs that reaches the front page and just provides another basis for rehashed discussion (and, cynically, the poster knows such generalized submissions do this and grow karma).
- > it establishes a norm that AI-generated blogspam is, well, spam and unwelcome.
It is welcome though. Being on the front page regularly is evidence that people enjoy it or find it informative.
You may feel that others shouldn't be ALLOWED to enjoy it, but that's just your opinion and is almost always tangential to the actual topic.
Worse, you seem to believe that it needs to be labeled to help you identify it. Why? If it's good enough that you need help to spot it, then it's obviously of sufficiently high quality.
- > Being on the front page regularly is evidence that people enjoy it or find it informative.
What makes you think that it's people who get it to the front page anymore? Or that most people aren't simply fooled by technology designed to mimic humans?
> Worse, you seem to believe that it needs to be labeled to help you identify it. Why?
Why not? Would adding a label and providing filtering capabilities hurt anyone else's experience?
Some people object to this content based on principle, not on its quality, or on how closely it resembles content authored by humans.
- Hey, I'm not a fan of LLM slop articles and blogspam either and if I could hold back the tide, I'd try to. But I'm just saying that pointing it out each and every time is just going to become its own form of spam. We're quickly entering a world where 99+% of what is written online, be it blogs, amateur news, or actual professional journalism, is LLM generated. You hate it, I hate it, but it's coming. The state of journalism is already in shambles and line must go up, so "everything written by AI" is sadly inevitable. Posting every time to remind people of that? I mean by the end of 2026 you might as well have a bot commenting on every article that it's probably LLM generated. I argue it adds no signal to the conversation.
- I still think it has strong normative value. Maybe at some point when norms have become firmly established these comments will be pointless and spammy but I don't think we're anywhere close to that point yet.
A lot of blogging is essentially self-expression and that stuff won't be taken over by LLMs (it defeats the whole point). Other blogging is done with some kind of sales/promotional/brand purpose and the extent to which LLMs will dominate this will depend on how we as a society react to it (see the AI art battles) since if people react negatively to it it becomes counterproductive.
- Perhaps it would be better to have comments that praise apparently human-written text?
I understand where you're coming from. I've been posting complaints about LLM-written articles almost as long as I've been here. (My analysis is definitely more complex than a search for blacklisted Unicode characters or words.)
But I've let off on that, partly because I agree the guideline is meant to encompass that kind of criticism (same with my comments about initial page content not rendering with JavaScript, honestly) but largely because it just seems futile. It's better material for a blog post than HN comments (and would be less repetitive).
- I agree with you, but...
> Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication.
Not anymore. Bots are now the majority of producers and consumers of all content on the internet. The social contract you mention has been broken for years, and this new technology has further cemented that.
Those of us who value communication with humans will have to find other platforms where content authorship is strictly regulated, or, at the very least, where tools are provided to somewhat reliably filter out machine-generated content. Or retreat from public spaces altogether.
FWIW I have very little hope that this issue will be addressed on HN, considering [1].
- It's in a lot of people's interest to keep platforms like HN free of LLM spam, frankly. It's in our interest as people who want to keep our discussion site for actual human discussion (though from the other comments in this thread, this sentiment isn't universally shared, god knows why). It's also in the interest of AI companies since if they destroy internet spaces like this they lose valuable future training data. So I'm (perhaps foolishly) optimistic--or at least not completely pessimistic--that there's hope yet for us.
Incidentally I foresee similar issues to this training data pollution arising with LLM coding taking over software engineering--which it inevitably is going to continue to do, at least in the short term. If LLMs torpedo human engineering, who is going to create the new infrastructure (tools, frameworks, programming languages, etc) that LLMs are making such good use of today? It feels to me like we risk technological stagnation as our collective skills atrophy and the market value of our skills plummets. Kind of like airplane pilots forgetting how to debug planes or handle edge cases because they just rely on autopilot all the time.
- I think a steelman interpretation of the parent is that entirely LLM-generated projects should be disallowed. There's a lot of submissions on Show HN that seem completely vibe-coded to me (like, including the README), which is a very different situation IMO from someone who simply used Claude to write some—or even most—of the code. When even the human-facing portion of a submission is LLM-generated, it bothers a lot of people (myself included).
- Agreed. Having some level of human input makes a submission at least meaningful. If the entire repo and all text is generated by an LLM, does it really matter if the human is the one posting the link? It's functionally indistinguishable from automated spam.
- > I am more annoyed by the anti-AI luddites filling the comments with low value complaints than I am by quality content written partially by an LLM.
Low value content is still content, written by a human being with a specific point. I would argue that LLM written content is even worse than that, because what value does it add when you or I can just ask the LLM itself for it? Its existence is solely that of regurgitation.
- Without engaging in more ad hominems (which are wrong, by the way), what's the issue with labeling AI content as what it is?
- It's one thing to have an AI-label. It's another to completely derail a conversation with a likely false AI accusation.
Example: https://news.ycombinator.com/item?id=47122272
You have to scroll a few pages before the actual article is discussed.
"This was LLM generated" is likely to float to the top of a thread. That's where the best comments about the article deserve to go, not an off-topic accusation. An AI label should be much less obtrusive.
- > You have to scroll a few pages before the actual article is discussed.
Or you could collapse the one thread containing those comments.
- Join me and downvote them relentlessly.
- > what's the issue with labeling AI content with what it is
1. Your guess is not always correct
2. Over time, AI content will get harder to guess until it is indistinguishable from human content
3. You're not helping anyone by posting "this is AI". Maybe it is, maybe it isn't, but it's not helpful. It just adds to the noise.
- I'm not suggesting anyone post "this is AI"; the submitter should disclose that it's AI or eventually get banned for spamming.
Ideally there could be a label on the submission that states it's AI.
- > Ideally there could be a label on the submission that states it's AI
A lot of people tried for #politics and that didn't work. I doubt you'll get #ai.
- The guidelines haven't even been updated to say that AI-generated posts and submissions aren't permitted, even though it's been the policy for a couple of years now if one searches for postings by the moderators. So outsiders and new HN users have no reason to know that it's not allowed. I'm sure there are reasons for it, but the inaction is all very mysterious from an outsider's perspective.
- This obviously should have been done years ago. @dang is there a reason it hasn't?
- ..so updating the guidelines is beyond the pale and suggesting it is downvote worthy?
How very interesting.
- I disagree with this policy.
Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.
LLM-assisted writing doesn't have to be low effort; it can help people express themselves better in many cases. I'd argue that someone who spent their time doing multiple passes with an LLM to get their phrasing just right has obviously taken more care than the majority of people on HN take before commenting.
And if you don't like the way something is written? Just down vote it. That's true whether or not it's partially/wholly written by an LLM.
- Aren't downvotes on this forum restricted to accounts with 500+ karma? And how would those compare to flagging? I'd hate for people under 500 karma to think they need to flag a post for it to get any attention from moderation. And, given your point that LLMs help some folks write, wouldn't that make the community worse for them?
And what about users like this one, whose comments are very much entirely LLM-generated, and who is possibly even a bot? https://news.ycombinator.com/threads?id=BelVisgarra
- I should clarify — I disagree with disallowing any comments that used LLMs in the writing. I think comments should be judged on their quality, not on how they were written.
I might agree (don't know) with the idea of limiting new accounts more heavily.
- > I disagree with disallowing any comments that used LLMs in the writing.
I think the point here is that the community doesn't want to read AI slop, not that using an LLM to clean up your writing contains some inherent evil that prevents quality.
I don't want to accuse you of strawmanning the argument, but honestly, where did you ever see someone advocating the latter?
- Absolutely this:
> Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.
- > LLM-assisted-writing doesn't have to be low effort, it can help people express themselves better in many cases.
Hard disagree. I have been learning another language, and I wouldn't pass off posts an LLM rewrote as my own, because that is literally lower effort than learning the language properly.
Like definitionally, you are using a machine to offload effort. I don’t know how you could claim that is not “low effort” when that’s the point of the tool.
- I wasn't talking about someone learning the language and using this instead of learning it.
There are a lot of people who understand English fairly well, but are not actively learning the language, are not native speakers, and can use LLMs to catch grammar mistakes that they otherwise wouldn't notice. Or catch small nuances in what they are saying, small implications that could otherwise go unnoticed.
In general, I push back on people saying "I can't find a good/legitimate use for this technology, therefore there are no good/legitimate uses for it".
- > In general, I push back on people saying "I can't find a good/legitimate use for this technology, therefore there are no good/legitimate uses for it".
Is that genuinely what you think most of the complaints on HN are saying?
IMNSHO that's an absurd statement to make about the other side of the argument. I'm still giving the benefit of the doubt here but jeeze, this really smells like a strawman.
There are dozens of whole classes of criticism of these tools that I see made on HN, and none of them fall into the category you described.
Ex: Saying "juniors who rely on Copilot/Claude/etc become lazy and can create low quality code without learning how to do better" is night and day different from what you're saying. And that's a criticism that must be addressed, or the entire global software industry will destroy itself in two generations.
Surely the difference between that and "we don't want anybody to use Grammarly in their subs that show up here" is completely obvious, yes?
- I think all submissions to HN should be submitted via snail-mail, and must be handwritten. That would solve the problem.
/heavy sarcasm
That being said, my mother used to insist on hand-written cover letters from job applicants. Her rationale: it takes effort, so it weeds out all the applications from people who are just randomly spraying out applications for jobs they are not qualified for.
- Unfortunately I don’t think that it would solve the problem: https://www.google.com/search?q=handwritten+mail+service&udm...
- First interview question is to submit a handwriting sample.
- I taught myself to type because most people can't read my handwriting.
I would be so screwed. :-(
- Marking the sarcasm here really ruins your humour.
- Enforcement aside (it would probably be challenging to do fairly), I think I agree: if you had strong proof of an account largely or completely posting comments/stories/whatever adulterated by an LLM, that really is probably ban-worthy, like you said.
- When I read comments like this, I think about the average Joe who says: "Most people are terrible drivers." Then, someone asks them: "Are you a terrible driver?" They respond: "Of course not. I am an excellent driver." A few people roll their eyes.
First, it is not always possible to identify an LLM-generated comment. There are too many false positives. Imagine if this system was implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it?
- > worthy of an instant ban
Maybe we need a reverse Turing test and award -- humans write things that are indistinguishable from AI slop.
I have no idea what that could be useful for, but since the Turing test is now essentially beaten maybe its usefulness has come and gone too.
> Imagine if this system was implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it?
It sounds like a fast, efficient, inexpensive and foolproof recipe for destroying a community. Let's use that as a future test: anyone who advocates for it is undeniably trying to destroy HN, so they get downvoted to 1 karma and permanently blocked from voting on anything else.
- [dead]
- How ironic: a comment advocating for banning LLM comments, itself using em dashes.
What if someone used an LLM just to translate?
- For now there is already a pretty effective mechanism in place: downvote and/or flag the comments that you think cross the line in that sense.
But in principle I agree with you; the rule for me is 'if it wasn't worth your time to write, then it certainly isn't worth 1000x other people's time to read'.
- Exactly. If your LLM wrote it, then my LLM can read it. I don't want to.
- God help us if we get to the point where we need an LLM agent to do the reading and filtering of all our social content for us. I am completely certain that is a downward spiral that ends with the collapse of our society and I give it 50/50 odds for killing off the entire species.
- I think you need (at least) one exception to that rule. We have many people here whose first language is not English, and this is an English-only forum. For at least some of those people, an AI translation may give better clarity than their own attempt at writing in English.
So I would propose that, in the ideal world where we could perfectly enforce the rules that we chose, that the rule would be "AI for translation only". If it wrote your content, your comment is gone. If it translated content that you wrote, your comment is still welcome.
- There is an epistemic silver lining. This is in fact a Red Queen's race that cannot be won. So in the end the only solution is to evaluate the text on its own merits without reference to the writer's status, because that status can no longer be reliably detected. For a public feed like this one, the only alternative is to ignore it. The fire hose of data will inevitably become ever more fecal. We can only walk away from it or be more careful about the pearls we pluck out. It ends well only if we get better at pearl detection.
- One way that I could imagine a human-only HN could evolve in the coming AI wasteland: motivated individuals join small local groups and are validated face-to-face at meet-ups. Local trusted leads gatekeep their chapter’s posts, and this scalable moderation works up the tree. Bad leaves get culled out reasonably fast, maybe there’s some controls at the top level that let you see more content “lower down the tree” if you’re ok with lower SNR. Latency to get a post widely distributed grows but I don’t see that as a massive problem.
- > coming AI wasteland: motivated individuals join small local groups and are validated face-to-face at meet-ups. Local trusted leads gatekeep their chapter’s posts, and this scalable moderation works up the tree. Bad leaves get culled out reasonably fast,
Wow this is really cyberpunk.
I'll bring my Yubikey!
- You're giving me flashbacks to PGP key signing parties.
I do like your idea, though.
- In my recent experience, local meetups and groups are unexpectedly more prone to self promotion and low effort spamming.
Local groups have a problem where members admit their friends or pressure others into inviting their friends who are not a net positive, but it feels too impolite to refuse or to kick someone out. Meeting someone in person also develops a sense of a social bond that makes it harder to downvote or flag their posts.
Local groups have always been a haven for affinity fraud, too. Running a scam is easier when you can smile, be charismatic, and pretend to be a personal friend before springing your ask on to your victims.
- This sounds like failure of leadership. Our coding meetups are already implementing what the GP suggested [0] and we also enforce our written guidelines (in this case, politely removing the bad eggs.)
[0] https://handmadecities.com/memos/HMC-Memo-004-Meetup-Hosts.p...
- I've been thinking the same. One way to moderate is to bring back physical consequences.
I'd also like to see an "Order of the White Lotus" community (or Fight Club if you prefer) where people who collectively agree to not use AI against each other can come together. They can still use AI (i.e. out of necessity) just not with other members knowingly.
I suspect whatever form it takes the stakes will be very high to hack yourself into and pollute the space. So the more successful the community becomes, the harder it is to keep in order.
- Bring back the key signing parties!
p.s. @patrickmay: jinx!
- "Cannot be won", "only solution", "only alternative". Sorry, no, that's too black and white. There are other solutions, even if they will only work for a couple of days/months/years.
- Don't tell anyone, but I am secretly in charge and open to suggestions. Spill.
- We can relentlessly bully anyone using phrases like "Red Queen's race" unironically. Measly human resistance against the vapid strip-miners of semantic value.
- You mean that you don't believe that we are in co-evolution with AI? Because otherwise it is a Red Queen's race, and it is a useful frame for understanding. For example we can make it a race between symbiotes.
If you are Sisyphus, the fact that the hill is infinite is useful when planning your day.
- I don't believe you are competent enough to be making those assessments.
- Agreed. Merit is the only fair solution. If OP noticed a garbage post, that means they evaluated a post on merit and decided it was garbage. So it works.
We have genAI generating videos and the quality sucks compared to human produced and filmed content. People call it out and nobody is going to watch a genAI movie at the theater or binge a genAI TV show. Merit based filtering.
GenAI for music is not as good as human-generated music either. Not a single AI song from Suno or Udio has reached the top40. Not even one. 100% of the songs are human because they are evaluated on merit.
We have SWE and agentic benchmarks to evaluate coding LLMs on merit.
Disclaimer: I am a new account.
- > Disclaimer: I am a new account.
Welcome. Illegitimi non carborundum.
- The thing is, I can read something that's really terribly written and still extract useful information from it. (Suppose, for example, an LLM was directed to synthesize information from some sources that I wouldn't have thought of doing; or a submission simply makes me aware of a blind spot I had. Or I look up documentation and find something that's incredibly verbose and full of marketing-speak, but the code samples look reasonable and can be verified by testing and/or cross-reference.)
- > So in the end the only solution is to evaluate the text on its own merits
This falls apart as soon as you realize that evaluating the text requires far more effort than generating it. If you're spending 2 minutes reading text that took 2 seconds to generate, you already lost.
- That just means that you can only evaluate a smaller fraction of the data. If your goal is to do more than sample it, you've already lost.
- This comment uses a lot of big words but it’s full of fallacies.
The HN user base is not perfect at detecting LLM content but a lot of it does get flagged and downvoted eventually. About once a day I’ll click on a link, realize it’s AI slop, and go back to HN to flag it but discover that it’s already flagged.
If you turn on showdead you can see all of the comments from LLM bots that have been discovered and shadowbanned.
The fallacy in the comment above is simple: It’s taking the current situation and extrapolating to an extreme future, then applying the extrapolated future prediction on to the current situation. The current situation does not represent the extreme future predicted. A lot of the LLM content is easily spotted and a lot of it is a waste of time to read, therefore it’s right to police and ban it. Even if imperfect.
- Earlier today I found something that impressed me as awful slop, but I was hesitant to flag the submission because as far as I could tell it got the facts right (I didn't try to verify some details of who was involved with what, but I was familiar with the proposals the article was discussing).
- I'm somewhat keen to adopt ATProto's feed generators and/or labeller concepts to create an alternative /new and comment prioritizer
- > The fire hose of data will inevitably become ever more fecal. We can only walk away from it or be more careful about the pearls we pluck out. It ends well only if we get better at pearl detection.
I'm not sure we can. Imagine an AI that 1) creates multiple accounts, 2) spews huge numbers of comments, 3) has accounts cross-upvote, and then 4) gets enough karma on multiple accounts to get downvote privileges. That AI now controls the conversation. Anything it doesn't like, it can downvote to death.
I mean, I'm sure that HN has a "voting ring" detector, but an AI could do this on a sufficient scale to be too large to register as one cohesive group. And I think HN has a "downvote brigading" detector, but if the AI had enough different accounts, I'm not sure that would trigger, either.
The best chance to detect it is just on volume (or perhaps on too many accounts coming from the same IP address or block). But if the AI was patient, I'm not sure even that would work.
That's depressing. I don't want HN to become a bot playground, with humans crowded out. But I'm not sure we can stop it, if it was done on a large enough scale.
- [dead]
- I don't understand how this is supposed to solve anything, and I've seen it suggested as a solution multiple times. If you restrict comments to older accounts, all it's going to do is make the bot creators speculatively open and proactively age accounts for future use.
- I would argue that we shouldn't let the perfect be the enemy of the good. Requiring aged accounts adds a cost to commenting, which I think might discourage fly-by-night operations and "experiments".
- This already happens now. Go look through a few of the "Show HN" authors - you'll inevitably see several accounts that are 50-100 days old with a karma of 1, aged just enough to avoid a green label.
The OP is talking about posts, not comments. The simplest solution might be to prevent someone from posting a "Show HN" until they’ve earned twenty-five or fifty karma, to demonstrate that they’ve been actively participating on Hacker News rather than using it solely to promote themselves.
- This leads inevitably to karma farming bots who upvote each other’s submissions à la Reddit.
It’s a speed bump at best.
- Yeah, I considered that - but any friction is better than none. Maybe add an additional rule by which low karma accounts (below a threshold of, say, 5 karma) cannot upvote other low karma accounts.
Honestly, we don’t really have the same cold start problem that a brand new social media site would. We already have plenty of reputable active users here. So HN could restrict new accounts to only being able to comment initially. As they participate, their comments receive upvotes, allowing them to build up enough karma (even a small amount of 25) which unlocks the ability to upvote, and then, finally, the ability to create posts.
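The two ideas above (a low-karma upvote restriction plus a staged privilege ladder) could be sketched roughly like this. All thresholds here are hypothetical illustrations pulled from the comments, not HN's actual values:

```python
# Sketch of the staged-privilege ladder proposed above.
# Thresholds (25, 50) and the low-karma cutoff (5) are assumptions.

def privileges(karma: int) -> set:
    p = {"comment"}          # new accounts may only comment
    if karma >= 25:
        p.add("upvote")      # unlocked by earning comment upvotes
    if karma >= 50:
        p.add("submit")      # finally, the ability to create posts
    return p

def vote_counts(voter_karma: int, author_karma: int, low: int = 5) -> bool:
    # Separate anti-ring rule: low-karma accounts can't boost
    # other low-karma accounts.
    return not (voter_karma < low and author_karma < low)
```

The point of keeping the two checks separate is that the ladder gates what an account may do at all, while the anti-ring rule decides whether a particular vote should count.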
- Creating more friction can also lead to a higher percentage of bots. I for one immediately leave when I realize that I need to jump through several hoops before I'm actually allowed to participate on a site. Someone building a bot farm, on the other hand, is probably willing to tolerate quite a lot of friction before giving up.
- A speed bump might still be preferable to nothing.
- I have seen accounts that were dormant for years suddenly start posting frequently, all with slop. (I don't know if this represents people having an epiphany about AI use, or accounts being compromised or just what.)
- Yeah I've seen this too - like a weird equivalent of HN sleeper agents that suddenly get activated.
- I'd want karma-based filters too, if we ever get filters. I want to see posts only from accounts with {x}+ karma points.
- Would be fine as a personal filter, but if applied globally it would incentivize karma gaming. You can get high karma from reposts of past popular submissions (an author who had been in prison and reached the front page once half-joked/resented how many common Wikipedia articles land on the front page for the nth time).
- Have you taken a look at reddit recently? It's absolutely infested with bots farming karma, either by reposting old popular posts, or simply posting AI generated comments.
Actively encouraging this will only make things worse.
- You want other people to deal with the things you don't like and filter stuff for you, to improve your own experience and shield you from the filthy masses. God beware you have to endure a comment you don't like, your royal highness.
I'd rather see you gone than the people you complain about.
- The core function of the HN front page is based on "other people filtering stuff for others". Filtering by any criterion (karma, account color, first letter of the nickname, whatever) doesn't automatically make someone a jerk, as you have stated in comments nearby. It just means someone is selecting the information they consume, and that harms no one (except perhaps the selective person themselves, who might miss interesting info due to the selection).
- The filtering is supposed to be based on the quality of the content, and it's only useful to the extent that people filter either on quality directly or closely correlated metrics.
If everyone votes purely on basis of the first letter of the username, to use your example, then the votes provide no useful information and you may as well abolish it.
- Filtering is a valid way of improving signal. If there were a better heuristic for identifying users posting low effort content, the user would be considering that instead.
If someone in a chatroom, for example, is being spammy at the expense of posts one finds more relevant, then blocking them isn't about considering them some filthy pleb; it's about improving one's experience. If the filtered user never becomes aware of it, there's no reason to be offended, either.
Edit: also I wasn't the one to downvote you if that makes any difference.
- HN is already heavily moderated. Low-effort posters and spammers get downranked immediately, based on their behavior. OP is simply intolerant and unable to function in a social setting.
Minimum karma and account age filters are discriminatory, anti-social features that should not exist on any social site. The people asking for such features are intolerant jerks, no different from ageists or ableists. They are parasites, because they want the people who are not intolerant jerks to do their filtering for them, and keep the site alive by doing so.
What would happen if every single user enabled their minimum karma filter?
- This thread is evidence that some are unhappy with the state of a core HN feature due to users posting what they judge to be low effort content, so it does get through.
The comments here are about possible mitigations. Based on this feedback dang has apparently now restricted new accounts from posting Show HN threads, so globally now there is a form of filtering users from being seen by others based on a heuristic.
Your initial comment is written with the impression that the poster wanting to improve their chances of higher effort content is making some judgement on the posters themselves as though they're conceited ('filthy masses', 'your royal highness') when they're merely considering one approach to reducing noise from their feed.
I myself in this very comment chain have already posted that I disagree that filtering by karma would help due to gaming issues but I don't see the problem with the user's goal.
- >What would happen if every single user enabled their minimum karma filter?
Hacker News would be a much better place.
In fact, filter stories as well as users. I want to filter out any story with fewer than three upvotes and any flagged comments. That would improve quality tremendously.
- How would any new user earn karma in that system? How would any story get upvoted?
Again, this system can only work if there are at least _some_ people that are willing to upvote newbies and read new posts.
It sounds like what you want isn't a community with collaborative filtering, like Hacker News, but a newsletter with editors, like Slashdot for example.
- People will need to participate otherwise there won't be any new content. I see it as just like vouching, except someone has to vouch for green accounts. A slightly more equitable (and easier to implement) version of lobste.rs' invite tree.
What I want is for green accounts not to be abused as much as they are. The number of noxious, vitriolic troll alt accounts and bots is getting ridiculous. That is almost entirely the fault of established users of course, but there's no way to deal with them poisoning the well without affecting new users.
- I think you missed @sltkr's point. HN wouldn't just have less new content; it would fail to develop new users. That kind of stagnation is how sites like this die.
Aggressively filtering to raise the average post quality is a sugar rush and it has the metaphorical long term consequences of type-2 diabetes. Things start out feeling great but the acceleration of death is effectively guaranteed.
- My system has been working pretty well: using some extension or another that has mute functionality, if I see a person post an extremely low quality comment, I look at their comment history for two or three pages. If there is no comment of value in that set, I mute the user. The board gets better each day.
- Are you doing that here? What extension(s) do you use for it?
- And also invest more effort in karma farming. In other words, if we raise the bar for Show HNs we'll probably see more generated comments in the threads.
- I don't understand why we put locks on bicycles, a determined person can just saw them off.
- My prediction is that nothing short of human verification is going to solve this.
- I'm very wary of this request, though I understand it. I've been reading HN daily since around 2014. My involvement was purely passive (e.g., I have been a lurker) because I really didn't think I had much to contribute that wasn't already stated better by others.
I didn't actually create my account until 2021? 2022? I can't remember. And I didn't make my first post or even comment until just last week.
While I think a minimum post count or reputation metric could perhaps reduce the AI generated posts, introducing friction also makes it harder for real people to contribute anything meaningful.
Furthermore, what does it matter if it's "AI generated"? Is some AI content ok? What's the pass/fail threshold on human vs AI generated text?
I made a Show post last week where I heavily relied on AI. I'm sure there are some "tells." But even so, I spent more than three hours working on the content of my post and my first response. Would my post have been acceptable to you?
- > Furthermore, what does it matter if it's "AI generated"? Is some AI content ok? What's the pass/fail threshold on human vs AI generated text?
If a human put his effort into it, is proud of it and wants to show it to the world, i'm happy to invest some time to have a look at it and maybe provide some helpful feedback.
I'm not willing to invest my time into evaluating the more or less correct sounding ideas of a ML model.
- If you're going to spend 3 hours making a post, why not just write it yourself in the first place and avoid the issue and the reputational damage?
- This is awfully narrow-minded. I had Claude give me an initial framework, based on many hours of chat context across many different documents. It helped me organize my thoughts.
Some of us need assistance to communicate effectively. And for me, yes that took 3 hours even with this assistance.
- I don't care if the code is generated; I care if the content is. I don't want to read another "No complexity. No fuss. No buzzwords" or "It's not just a tool, it's a lifestyle". It's sooooo boring...
- Just write the text yourself, not many people enjoy reading AI-generated posts, even edited.
- I have long believed that whatever comes along to replace the reddit/HN etc type site will be based almost entirely on trust networks.
i.e. only surface stories posted by or upvoted by those you trust, and the inverse with those you distrust.
Then exponentially drop off trust transitively and it could be almost workable.
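A minimal sketch of that transitive-trust idea, attenuating trust exponentially per hop. The graph shape, decay factor, depth limit, and names are all hypothetical choices, not a known implementation:

```python
from collections import deque

def trust_scores(graph, me, decay=0.5, max_depth=3):
    """Propagate trust outward from `me` through who-trusts-whom edges,
    attenuating by `decay` at each hop. Negative edge weights model
    distrust. Keeps the strongest signal found per user."""
    scores = {me: 1.0}
    queue = deque([(me, 1.0, 0)])
    while queue:
        user, weight, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for friend, edge in graph.get(user, {}).items():
            s = weight * edge * decay
            if abs(s) > abs(scores.get(friend, 0.0)):
                scores[friend] = s
                queue.append((friend, s, depth + 1))
    return scores

def story_score(upvoters, scores):
    # Surface stories in proportion to the trust of those who upvoted them;
    # distrusted upvoters drag the score down.
    return sum(scores.get(u, 0.0) for u in upvoters)

# Toy usage: I trust alice, distrust mallory; alice trusts bob.
graph = {"me": {"alice": 1.0, "mallory": -1.0}, "alice": {"bob": 1.0}}
scores = trust_scores(graph, "me")
# alice gets 0.5, bob 0.25 (one hop further), mallory -0.5
```

With `decay=0.5` each extra hop halves the signal, which gives the exponential drop-off described above while still letting friend-of-a-friend content surface weakly.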
- The return of Advogato. If you weren't around for it, it had a certification system like what you describe, so the stuff on it was pretty good. After a while, spammers figured out that it had very high search engine placement because of its quality, and that pretty much ruined it. It's gone now.
- I sometimes feel like a paid newsletter that's curated by users would be fun. I'd happily pay €5 a month for a weekly/daily digest where the comments are on par with HN.
- A few paid and unpaid newsletters have quietly become very big. Traffic from them completely eclipses this place, and because everyone gets the email at once it is a really sudden and painful spike.
Most I have encountered (generally via referral tracking) are heavily curated centrally though, and not by users.
- The risk is to build very good echo chambers. One shouldn’t have to read AI slop or despicable opinions during their free time, but some exposure to alternative respectable and not idiotic views should be part of the design.
- Our starting position is the status quo, where site level echo chambers are near total.
X vs BlueSky is a thing after all. Reddit, wikipedia etc. are just farcical.
- >will be based almost entirely on trust networks
Like Facebook/Linkedin?
- Not close. Both those mistake knowing for trusting.
- I'd say that they don't make trust decisions, rather they give information to the user so that they can decide whether to trust and for what purpose.
Linkedin more so than facebook, facebook shows list of common contacts, linkedin shows that plus a literal resume.
- Eventually HN is going to need to charge people $1 to post, just for spam filtering. Maybe donate the money to open source or something.
- $1 is an incredibly low price to pay for advertising and an incredibly high price to pay for legitimately interacting with a community. This would have the exact opposite of the intended effect.
- Charging money does not seem like a very good idea on a site like this, where you expect users to upload all the content. Also, this would require credit card info, which is a massive barrier, even if you were to charge just 1 cent.
- No credit card. You have to send a $1 bill by snail mail, which is proof of "work" (mailing the bill) as well as $$. You enter the bill's serial number when you enroll the account, and the account activates when the bill arrives. You can be pretty anonymous this way.
I once proposed a scheme like this where you would donate to charities who would post lists of serial numbers they had received, for this purpose, but it never got anywhere. Maybe we need it more now than we did then.
I guess instead of mailing a $1 bill, if necessary it could be a hand drawn picture of a kitten (artistry not required). Authentication would involve checking the paper for pressure marks made by the pen. I wonder how many would take the trouble to fake that.
- Those of us old enough to remember Compuserve know that the cost of entry was exactly why the quality was so high. I was lucky enough that my employer paid for it. I was also active on various comp.os.* Usenet forums. Both were great sources of quality information but Compuserve stayed “high signal” for longer. Usenet - the birthplace of trolling - eventually degraded to the point of near uselessness. The signal was drowning in noise. Mainly because some people are just shitty. Which is worth remembering here. Behind every AI agent spamming HN (and everywhere else) is a human who thought this was a good idea. Why do they think that? Maybe that’s the line to pursue for how to deal with this issue.
- It worked for years for the SomethingAwful forums. A nominal charge for the ability to post, with plenty of 'timeout' chances for rehabilitation before an outright ban keeps out most of the junk.
It feels wrong at first to pay for commenting on a forum, but the alternative is almost always a gentle slide towards a trash dump. AI means that slide is almost a vertical slope.
- Pay with karma?
- That was Elon's idea for Twitter, but the X membership grew in scope. $1/m sounds better.
- $1 is not going to stop people from spamming. It's just $1 after all...
- Ooh, it's time to pull out the classics! Please feel free to check the boxes as you see fit, as I am currently too lazy to have Claude do it for me.
Your post advocates a ( ) technical ( ) legislative ( ) market-based ( ) vigilante approach to fighting spam. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)
( ) Spammers can easily use it to harvest email addresses
( ) Mailing lists and other legitimate email uses would be affected
( ) No one will be able to find the guy or collect the money
( ) It is defenseless against brute force attacks
( ) It will stop spam for two weeks and then we'll be stuck with it
( ) Users of email will not put up with it
( ) Microsoft will not put up with it
( ) The police will not put up with it
( ) Requires too much cooperation from spammers
( ) Requires immediate total cooperation from everybody at once
( ) Many email users cannot afford to lose business or alienate potential employers
( ) Spammers don't care about invalid addresses in their lists
( ) Anyone could anonymously destroy anyone else's career or business
Specifically, your plan fails to account for
( ) Laws expressly prohibiting it
( ) Lack of centrally controlling authority for email
( ) Open relays in foreign countries
( ) Ease of searching tiny alphanumeric address space of all email addresses
( ) Asshats
( ) Jurisdictional problems
( ) Unpopularity of weird new taxes
( ) Public reluctance to accept weird new forms of money
( ) Huge existing software investment in SMTP
( ) Susceptibility of protocols other than SMTP to attack
( ) Willingness of users to install OS patches received by email
( ) Armies of worm riddled broadband-connected Windows boxes
( ) Eternal arms race involved in all filtering approaches
( ) Extreme profitability of spam
( ) Joe jobs and/or identity theft
( ) Technically illiterate politicians
( ) Extreme stupidity on the part of people who do business with spammers
( ) Dishonesty on the part of spammers themselves
( ) Bandwidth costs that are unaffected by client filtering
( ) Outlook
and the following philosophical objections may also apply:
( ) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) SMTP headers should not be the subject of legislation
( ) Blacklists suck
( ) Whitelists suck
( ) We should be able to talk about Viagra without being censored
( ) Countermeasures should not involve wire fraud or credit card fraud
( ) Countermeasures should not involve sabotage of public networks
( ) Countermeasures must work if phased in gradually
( ) Sending email should be free
( ) Why should we have to trust you and your servers?
( ) Incompatiblity with open source or open source licenses
( ) Feel-good measures do nothing to solve the problem
( ) Temporary/one-time email addresses are cumbersome
( ) I don't want the government reading my email
( ) Killing them that way is not slow and painful enough
Furthermore, this is what I think about you:
( ) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out where you live and burn your house down!
- This is what I'm here for. Cheers! :-)
- Heh, I had not seen that one in a while.
This site is designed so that the wannabes are incentivized to lie and show off to get some of the sweet VC money the whales are sitting on. The cost of lying at volume is down to zero, and here be nerds trying to solve a human problem with technology. Maybe show first that you can solve spam or bot networks.
A somewhat lighthearted solution: employ Unix graybeard volunteers to weed out the garbage. I'd like to see HN showoff slop like "Distributed Kubernetes Package Manager using Blackwell-Hermann CRDTs in 500 lines of Go" get past Linus or Stallman.
- The top of my page reads:
345 comments | 64 hidden | 50 blocked | 15 green
So I don't see people who annoyed me for one reason or another in the past, I auto-hide the top 1000 accounts by word count, and I hide all green users. This was trivial to write for myself and I think more people should work on something like this for themselves.
- Devil's advocate take: I think the quality of the Show HN projects is in fact getting higher, at least the ones that land on the front page. The issue is that projects that used to take weeks, months, or even years of work can now be done in a weekend or so. It’s been democratizing, but it also means that when we look at these posts we (rightly) see that these new projects aren’t that much effort _with AI assistance_.
So maybe we should just be honest about this: our standards have risen. We want to see Show HN posts that require effort and dedication, that require more than a few hours of prompt flogging.
- I disagree in that the last few I can think of have involved things like services that do not really explain what they do properly and then ask for full permissions to your github account, or claim to be far more than they are (ie "I made this thing" but it's just a shim for someone else's stuff).
- But the issue is not only show HN, even generic posts are increasingly from new accounts, some of them are reaching front page too.
One example: https://news.ycombinator.com/item?id=46884481
- This made the frontpage two days ago: https://news.ycombinator.com/item?id=47275291
Read the comments and you'll see it took time and effort, from people who know at least a little about what they're saying, to point out that it's AI slop that doesn't live up to the claims written in its own docs.
- Well it's not just that... picture a community group talking among themselves and then some rando shows up, yells "I built this thing that you all might like", hangs out for an hour and then is never heard from again.
I think that's great in moderation as it stimulates ideas and discussions, shows us what folks are working on, etc... but this can't become Product Hunt. The reasons for posting here should be vastly different than posting on Product Hunt.
- > It’s been democratizing, but it also means that when we look at these posts we (rightly) see that these new projects aren’t that much effort _with AI assistance_.
This also appears to cause a serious shift in the kind of projects that are submitted (i.e.: towards things that are much more accelerated by AI assistance).
- I'd pose a different perspective, that Show HN in non-hype cycles tend to have a higher self-imposed bar before posting. With the democratizing, there are many posts where time from first commit to Show HN is on the order of hours, 25m being the shortest I have personally seen. I would contend that community standards have not changed meaningfully, but due to the underlying mix changing, the front page changes too.
That being said, there is an above-average sub-trend of low-quality submissions that are obviously trying to plant a money tree. This is largely driven by the "look ma, no hands" AI tools like OpenClaw, mixed (Venn) with the crypto crowd looking to make easy money with near-zero effort.
That said, I have definitely seen some real bangers that have large AI contributions. So I am generally in favor of minimally changing how HN works today. One small change would be adding to the Guidelines and FAQ, giving the agents something to read before posting (so that they know that automated submissions are not allowed[1]).
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
- Except the quality hasn't been getting higher. Most of those projects wouldn't be considered HN worthy if a human being had made them, they only get the praise they do because they were generated by an LLM and as such they aren't projects so much as demonstrations of the latest model's capabilities.
Also the purpose of Show HN along with HN in general is to spark intellectual curiosity and create interesting conversation, and nothing about LLM generated code does that, because the person who prompted the AI to make it doesn't understand it and can't discuss it in any depth.
- I was thinking about this the other day. If someone made TempleOS today, people wouldn't be as impressed, because they'd just assume they used AI.
They'd assume this even if they hadn't used AI, and even if AI didn't have the ability to pull it off.
- That dev made many videos about its creation and motivations, though, and along with their personality I think people would be understanding.
- Yeah, live streaming it would be a good option, I thought of that too.
Not sure I understand your 2nd argument though?
- > Not sure I understand your 2nd argument though?
Sorry, I meant that in the context of that original dev, their earnest fixation/obsession with their creation came across in their personality, which I think made people sympathetic.
- This might be well-intended to restrict bot posting, but it also silences dissent. HN is one of the few places left on the internet where dissenting voices can post. A dissenting voice already has to work against the hivemind, adding more restrictions will increase the echo chamber effect.
- The irony is that the same models generating spam Show HN posts are the ones people are building products with. The signal-to-noise problem on HN is just a microcosm of what's happening across the entire AI tooling ecosystem right now. Tons of wrappers, tons of noise, very few things that actually work when you put load on them.
- How about an opt-in toggle to display the year each account was created?
randusername_2022
I'm right on the boundary of the slopocene, not sure if in or out.
- So now we're going to create a black market for old HN accounts?
Am I too late to get ahead of the curve and stockpile some, while they're still relatively cheap?
- Or grade accounts by the logarithm of how many accounts were registered before them, like Slashdot. (This is tongue in cheek as I assume yours was.)
- I almost emailed dang this morning to offer to help out, though I'm not particularly technical. A few solutions I thought of:
1. Honeypot: hide some links LLMs can follow; if stuff gets posted to them, it's unlikely to be a human.
2. Make a captcha that only LLMs can answer. I recently made two social networks, one that humans couldn't join because the submission question was too difficult to figure out quickly.
3. Use an LLM to detect LLMs. On the other social network I did for fun (that a small number of people use), an LLM that looks for moderation issues does a good job of flagging them.
4. Invites, but vary the number you have to give out by account age + karma.
The first 3 seem like they'd stop some % for some time, but would eventually get old.
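Idea 1 could be sketched roughly like this. The field name and thresholds are invented for illustration; this is just the classic honeypot pattern, combined with the response-time check discussed elsewhere in the thread:

```python
# Hypothetical honeypot check: the form includes a field that is hidden
# from humans via CSS, so any submission that fills it is likely a bot.

HONEYPOT_FIELD = "website_url"  # invented name; rendered with display:none

def looks_like_bot(form_data, min_seconds=2.0, elapsed=None):
    """Flag submissions that filled the hidden field, or that arrived
    implausibly fast after page load."""
    if form_data.get(HONEYPOT_FIELD):
        return True
    if elapsed is not None and elapsed < min_seconds:
        return True
    return False
```

As the reply below notes, once the hidden field or the timing threshold is discovered, it is trivial to work around; honeypots mostly catch the lazy tail of automation.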
- You may have a point, i.e. some mechanism to invoke a behavior that only a bot or LLM would perform and a human would not, e.g. click on a button in a hidden div/transparent color, or measure response time within page load.
The problem is that once this is found out, the circumvention is easy enough to program into bots/LLMs.
Are we going to reinvent the Voight-Kampff test from Blade Runner?!
- Reverse captchas are fun. Click this button 10,000 times to prove that you're a robot!
- We need new ways to prove our humanness.
- In a world of smart LLMs, acting dumb is proof of being human.
- There's also been an extreme number of new accounts coming in and posting political content as their first post.
But then again, some of the most prolific, most upvoted accounts on this site constantly flood the site with political content and nothing is ever done about it and they get rewarded for it .. so yeah. I gave up hope a long time ago.
- I was thinking of setting up a system to highlight sock-puppeters and other consistent-rule-violating accounts, as a 'fun project' that might improve the HN experience. I've asked dang in another thread if he has any objections, but am curious to hear other input as well -- is this something people would want? Obviously it would not change the comments that are actually on HN, it would just call out 'bad' contributors more explicitly. I don't actually have experience in this area, so no promises that I'll be able to build it quickly, or take the best approach in the initial implementation.
My initial thought is to set up a devoted account like "sock_puppet_detector", use the infrastructure from https://hackersmacker.org/, and add any likely sock-puppets as 'foes'. Then anyone can install hackersmacker, and add "sock_puppet_detector" as a friend to see sock-puppets highlighted. Likewise for rules violators.
- There have been numerous stories on HN where someone directly involved with the story has created an account specifically to engage in discussion about whatever the story was about.
Losing that seems too high of a price to pay. Yes there are AI generated comments, in the past there has been script generated comments. You can report, downvote, or just ignore and move on. I am aware of posts like this existing, but I feel they are being effectively managed.
Try not to be too offended by the notion of these posts existing. Many of them are not malicious; they're just caused by users stepping outside what is considered appropriate. In a landscape where the footing is quite dynamic and everyone is making their own judgement calls without clear consensus, guidance seems more appropriate than punishment.
- > There have been numerous stories on HN where someone directly involved with the story has created an account specifically to engage in discussion about whatever the story was about.
Yes, and sometimes some of the HN automatic filters kills the comments. Remember to "vouch" the comments if they are interesting/relevant, a few "vouches" unkill the comments. And in extreme cases, send an email to hn@ycombinator.com so dang/tomhow can take a look and use some magic to fix the problem.
- >Losing that seems too high of a price to pay.
Assuming the mods just auto-ban new accounts and require them to be vouched and to earn minimum karma before being visible, those comments can be vouched up or approved by the moderators. The poster won't know that they've been banned, of course, because that's how shadowbanning works, so the approval process should be seamless for them.
But how often does that happen versus the AI comments and alt account trolling?
>but in a landscape where the footing is quite dynamic, everyone is making their own judgement calls in a field where the consensus is not clear, guidance seems more appropriate than punishment here.
The consensus is and has always been clear. Generated comments of any kind have never been allowed. People just don't care, and that's a problem.
And those comments are malicious in effect if not intent. We're here to have conversations with human beings, the intellectual and emotional connection is important. What is the point of having conversations with a machine, much less not knowing one is having a conversation with a machine? If nothing else, it's dehumanizing and a waste of time.
- >those comments are malicious in effect if not intent.
I don't believe that is possible. I think malice requires intent.
- Sufficiently advanced destructiveness is indistinguishable from malice.
- If that were not the case there would be no need for Hanlon's razor.
The fact that Hanlon's razor needs to be said at all is to warn people about attributing malice to instances where none exist.
"Never attribute to malice that which is adequately explained by stupidity."
It's not saying that actions attributable to malice never occur. It's saying it usually isn't malice.
- I vote against this (and this is coming from someone who believes HN contains a lot of shills).
Bots are recognizable and can be selectively ignored. But an echo chamber that would result from measures like this cannot be, because you cannot see the potential comments and posts that were snuffed out because someone didn't bother.
If you want HN to be a place to feel comfortable and your world view to be unchallenged, sure, go ahead. But then we already have reddit.
- Accounts have to start posting at some point.
Moderators don't have the capacity (and fairly, it is impossible) to check if they are bots or humans.
There are no good solutions, there are hundreds of thousands of intelligences out there, trained millions of hours on how to scam humans, capable of spitting out text tirelessly and shamelessly, and there will be only more of them, tens, hundreds, thousands times more.
- Comments should be allowed from day one, but submissions should require some experience and karma.
- https://news.ycombinator.com/newest - Scroll through there and there are a lot of [dead] submissions by green accounts. They aren't outright banned from submitting, but it often triggers auto moderation. It's like posting a link in one of your first few comments as a green user, which often results in automatic shadow banning.
- Proof of <insert here> seems to be a growing concept. I just wrote and erased a post because half way through I decided to go down a rabbit hole. Long story short it seems like Privacy-Preserving Reputation Systems is a thing. Maybe tech savvy sites like HN could start pioneering using them to drive adoption more widely? This is sort of like encrypted email and other tech areas. It takes adopters pushing things to get the general public to use them. If sites like HN agreed to make them available could that start a real trend?
- It's not like older accounts are necessarily any better.
If you look at the leader board (https://news.ycombinator.com/leaders), you'll find a few old accounts that pretty much do nothing but farm links, posting sometimes dozens of times a day, with a very low percentage of comments. Their high "score" isn't an indicator of quality; they just spam enough that a few get some good upvotes, but most of their submissions are low quality.
- The solution is for the users to be able to mute/hide accounts. It won't matter if an account has 10k points, once you mute it, you won't see what it posts.
- This has long been my biggest issue, much bigger than new accounts spamming slop. There are accounts with 10000x karma that do little more than feed links from the NY Times and similar publications, regardless of their relevance or value.
Each one gets 4-5 karma, a few crack double digits. Post 10 or 20 a day over a year or two and they're five figures. Pure farming.
- Let's turn HN into a place where we all grow old together until it slowly dies when we do.
- That's indeed the problem with restricting new users. Existing community members always want to do that, but it's a recipe for not surviving.
- I'm hoping to do a show HN soon on something I've been working on, but my account is currently only 6 days old. Tips?
Btw, restricting new accounts (based on karma/age/whatever) could be combined with the option to ask mods for permission somehow, although that'd have to be done in a way that that doesn't become too much work.
- Be an active participant. Engage in other discussions threads with curiosity and generosity. Ask good questions, share interesting perspectives. Show you’re human and thoughtful.
The system has long been that anyone can email the mods and ask us to review their project, but the volume has grown so much in recent weeks that it’s hard to get much else done.
- This post from 19 days ago is very close: https://news.ycombinator.com/item?id=47045804
Additionally, dang had replied on it: https://news.ycombinator.com/item?id=47050421
- I definitely think this is solvable via some basic honeypot-laden proof of work.
1. Exist for some time.
2. Vote on stuff that humans would vote for.
3. Avoid voting on traps.
4. Comment occasionally and productively.
5. Post to a limited existing audience, and receive upvotes.
6. Post limitedly to a general audience.
7. Post generally.
It’s basic earn a reputation behavior.
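The steps above form a ladder where each rung gates the next. A minimal sketch of how such gating might be enforced; the thresholds and field names are entirely invented (HN exposes no such API):

```python
# Hypothetical reputation ladder: each rung unlocks only if every
# earlier rung's check passed. Thresholds are made up for illustration.

LADDER = [
    ("vote",         lambda a: a["age_days"] >= 1),
    ("comment",      lambda a: a["age_days"] >= 7 and a["trap_votes"] == 0),
    ("post_limited", lambda a: a["karma"] >= 10 and a["comments"] >= 5),
    ("post_general", lambda a: a["karma"] >= 50),
]

def allowed_actions(account):
    """Walk the ladder in order, stopping at the first failed check."""
    unlocked = []
    for action, check in LADDER:
        if not check(account):
            break
        unlocked.append(action)
    return unlocked
```

The point of evaluating the rungs in order, rather than independently, is that a bot can't skip straight to posting by farming karma; it has to clear the honeypot and participation checks first.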
- I'd suggest instead a lower threshold for [dead]-ing posts and submissions by new accounts when flagged by HN users.
- I personally cycle accounts on this site for pseudo-privacy reasons. HN does not allow you to delete old comments you made, and thus the only way to maintain some semblance of control over my profile and privacy is to periodically switch to new accounts. I've been doing this for years now. The only real downside for me is that as a new account you don't have the ability to downvote, which is super annoying but something I've learned to live with.
I'm not saying your idea is bad necessarily but giving another perspective.
- I also do this. Pretty much every time I move.
- I really wish there was a setting whereby I could simply hide all comments from accounts less than a year old. The correlation with LLM slop is simply off the charts.
It almost feels like new accounts should be treated like new posts -- it is sort of a service that a select few are willing to undertake to upvote interesting stories early on.
I wish even more I could block specific users (there are some highly prolific, high karma users here who are extremely irritating), but that's harder and is probably best handled client side.
- I have a chrome plugin I made that gives me some personal social features (tagging people), it can block: https://s.h4x.club/yAuNoQDe
- I think this is another sign of the flood of slop to come. I really suspect SNR (for whatever definition of signal most use) will continue to drop, and mitigating it is going to be kind of like bailing out the ocean. Maybe a strange consequence of this might be that a real Show HN project would be easier to demo and find at something like a meetup now, if they weren't all kind of dying. Maybe we'll see a revival?
- No, Reddit is insufferable to use precisely because of this, try posting to any subreddit with a new account and your post gets removed because it’s too new or doesn’t have enough karma. Blanket moderation strategies like these make the UX horrible for new users and slows the platforms growth and reach.
- Yeah, some of the "Show HN" posts remind me of Reddit posts in r/javascript. Annoying, regardless of AI or not.
- It used to be so pleasant to read Show HN and find such interesting projects, but nowadays it's rare that anyone posting their GitHub project has even read their own source code, or that the project comes close to functioning in the way the OP claims.
Such a sad development.
- This is largely the same pattern that happened during the crypto hype cycle, spam posts and complaints. It will likely subside as reality sinks in.
There are still quality submissions by new accounts and HN is good at pulling those needles from the haystack.
- What if now or in the future people with assistive devices are using AI to share what they make?
I believe it's a policy or moderation enforcement issue. Such as banning incomprehensible / low value posts whether generated by AI or not.
- > I don’t want to see HN becoming twitter, which is full of bots
There are barely any bots on Twitter. There were thousands upon thousands of bots before 2023, because the API was free. These days running a bot on Twitter is expensive.
Fun fact: a company I worked for in the past had access to an undocumented partners-only API that allowed us to register an unlimited number of accounts. I was personally tasked with handling the integration.
- I'm honestly surprised HN isn't used to share more malware/githubs with new accounts too.
- The target audience for malware authors/distributors typically isn't a community full of technically literate software engineers, security practitioners, reverse engineers, malware analysts, etc.
Same reason that burglars don't typically target security camera stores and robbers don't typically target police departments - it's basically a fast-track to early detection, which disrupts the main objective of the adversary.
- some people says Im a robot account to brain wash HN users, how do you think? Am I really a bot?
- I think every moderator on every platform is struggling with this issue, and no one has succeeded so far, so it doesn’t seem that easy.
I think a simple solution (and one that eventually every content platform will have to adopt) is to allow users to tag AI-generated spam. I think that a few years from now this feature should be the norm, like existing basic features on forums such as upvote, downvote, favorites, hide, etc. I know this will require much more development effort than simply blocking new accounts from posting at all. But on the other hand, you can’t block new accounts forever.
- It’s going to end up like the flag button: disagree with an opinion in the post -> tag as AI-generated spam. Not that I disagree with the idea necessarily; a few safeguards, like only letting certain users tag and looking at patterns over time, would probably fix it.
- It’s getting really bad. New accounts hours old posting walls of AI-generated garbage comments across dozens of topics. Please restrict new posters, minimally, and perhaps add a little friction to new account sign ups.
- This will be the death knell for HN. You can’t have a modern club that restricts new members from engaging; people don’t have the patience to do the work and take the time anymore.
In addition, I’ve been here in HN since the late 2000s. Look- it’s a new profile. Also sometimes I use AI to help craft better responses. Do with that what you will.
- Way too much blocking on here already....
- After reading this article, I just created an account.
- Fully agree. I have the same impression. Especially, the last couple of days I've experienced an increase of submissions from accounts which were not even 1 hour old. All just promoting some fishy ai generated bs.
- I checked new show HN a couple days ago and it was shocking how most were “flagged dead”, unlike how it was before the AI invasion.
- Resistance is futile.
- can be fun though
- I created a Firefox plugin which takes HN commenters'/submitters' account creation date and sorts/scales the comment order/points based on it, going back to 2009 (older accounts get more weight). Optionally, the plugin just puts "spoiler" text over accounts created after a certain date (say, 2023 or so).
Unfortunately, I was not able to "reorganize" comments/posts in a manner that I felt was particularly better, and I didn't keep the plugin, for whatever that's worth.
I think it would be more prudent to overlay a web-of-trust, where accounts which submitted links/comments that you upvoted are then given significantly higher priority in other threads/feeds (unfortunately downvotes are not made apparent on HN, but factoring downvotes would also help.) Exposing your web-of-trust may also assist others in determining trusted content.
Perhaps this web-of-trust approach is dystopian on the order of MeowMeowBeenz, but I have not heard any other practical solutions to the disintegration of trust which is upon us.
Edit: Elsewhere in this thread HackerSmacker was mentioned, which is what I'm describing. That's exciting, I'll be trying it out later.
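A rough sketch of the upvote-weighted web-of-trust ranking described above. The data shapes (`comments`, `my_upvotes`) are invented for illustration, and the logarithmic boost is one arbitrary choice among many:

```python
# Hypothetical web-of-trust ranking: comments from authors I have
# previously upvoted get boosted ahead of equally-scored strangers.
import math
from collections import Counter

def rank_comments(comments, my_upvotes):
    """Boost comments from authors I've upvoted before; the boost grows
    logarithmically so prolific favorites don't drown out everyone else."""
    upvotes_by_author = Counter(my_upvotes)
    def weight(comment):
        boost = 1.0 + math.log1p(upvotes_by_author[comment["author"]])
        return comment["points"] * boost
    return sorted(comments, key=weight, reverse=True)
```

Factoring in downvotes (as a negative term in the boost) would handle the distrust side, if that data were exposed.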
- Can we also ban accounts that post racist stuff?
- My experience with reporting stuff to mods is that people who post racist stuff do get banned but I also think there's a difference between holding opinions I consider based in racism or having racist outcomes (which I don't report), and posting actually racist stuff (which I do report).
- Folks here can decide for themselves whether to check green accounts' "Show HN" these days. We are all aware of AI slop and creep in all shape and form.
Moderation is already taxing as it is.
- Yeah, turn this into another Reddit. Great idea!
- “shownew” : “no|yes” option would be nice.
- If everyone turned off new account visibility, we'd just see the same noise 30 days later.. not sure that helps.
- During that time, one would assume mod action would filter out the undesired, thereby “seasoning” accounts.
- Why can't we just introduce a "vouching for" system like lobste.rs
- I’ve been mulling over this for a couple of days too. I have a project I want to share with the HN community that I put a substantial amount of effort into but it was definitely AI assisted (as is literally everything today).
I’ve read all of the source and I drove the architecture but it would be a stretch to say I didn’t ask for assistance on things that felt fuzzy or foreign to me. I also have generally stopped typing code. I still don’t think the LLM made the project though, it feels like my decision making.
If the bar for Show HN becomes no AI whatsoever then you’re just going to see a bunch of people covering their AI tracks. I’m reluctant to post it because I’m afraid of getting blasted by the community for using AI. At the same time, it is work that I’ve poured hundreds of hours into, that I’m proud of and that I think would be of interest to HN.
I read the Obliteratus post that made it to the front page the other day and I agree that is pure slop. While it’s frustrating that it took up front page space, it’s evident that the whole community caught on to the sloppiness of it all immediately and called it out. I just don’t think HN wants to set the precedent that no AI code should be shared.
I also saw a week or two ago that someone open sourced a project of theirs that wasn’t open source in the first place. The reason they stated was that they had vibe coded and were embarrassed to be discovered. If you want to get a concept out quickly with AI, you’re now hesitant to open source because of the precedent set by the community. I think that’s a scary thought to me. I would rather know the tools I’m using are AI generated/assisted and make the value judgement on if I trust the code and project owners.
- I don't think people are blasted for using AI (mostly); I think people are blasted for low-effort work, just like pre-LLMs. LLMs just made it way easier to complete low-effort projects, so there are more of them.
- Yeah, I agree, but as someone in this thread said, if TempleOS came out today there is no way it wouldn't be immediately derided as AI slop. That's what worries me.
Blatant slop is obvious. Slop with a modicum of effort is harder. I’m still adjusting my slop-o-meter on other people’s work. It’s easy for me to identify my own slop, it’s not always so obvious when looking at someone else’s AI assisted work.
- Lots of social media platforms need better ways forward. Let's focus on things we can measure and enforce. Let's be honest with ourselves about what we know and what we don't.
Think back to prohibition. Just because we want less public drunkenness doesn't mean it is wisest to ban alcohol. One has to ask: what is the chance the ban is successful? What happens when it cuts the wrong way?
To what degree do we care about (1) "human" versus "AI"; (2) comment quality; (3) sensible methods for revealing social preferences? I care a lot more about the latter two than the first. It doesn't have to be a zero sum tradeoff, but I think it is a good starting question.
Let's have that discussion and not try to solve the human-vs-AI classification problem.
- I understand and appreciate your perspective. I do, however, disagree with your priorities. I mostly read here, but when I participate I want to interact with humans, not chatbots. I would much rather read a human comment with typos and poor grammar than another piece of anodyne LLM output that shows only that the responsible party doesn't value the human interaction that I do.
- This is long, but I put lots of purely human effort* into it, and I hope it clears some things up. Writing it cleared up a lot in my own head.
> I would much rather read a human comment with typos and poor grammar than another piece of anodyne LLM output that shows only that the responsible party doesn't value the human interaction that I do.
I take your meaning. However, that phrasing doesn't cut to the core of it. Rationalists would say "this doesn't carve reality at the joints". Here is my attempt to disentangle, decomplect**, and find common ground. Let me know which of these you disagree with, if any:
1. I care relatively little about typos and grammar, as long as the ideas are clear.
2. I enjoy human connection with people, in person and online. I would prefer to have a person on the other end.
3. Actually, I'd go further... I'd like to have more personable conversations and work past a lot of the common online discussion failure modes (but now I'm wandering off topic).
4. When chatting, I care a lot about the quality of the underlying thinking.
5. I personally don't want to read someone's first knee-jerk take.
6. I prefer to read a thoughtful and clear expression of a human being's experience.
7. On HN in general, I want curious conversation.
8. I understand everybody brings some point of view and sometimes what one would call an agenda.
9. Maybe the top criteria for doing #6 well is: are we interacting with each other per the guidelines? Charitably, in good faith, and with curiosity (#7).
10. As an example of an unwelcome agenda (#8), I don't want to be inundated with commercially-fueled marketing-speak. However, to be clear, in this regard I don't care whether it comes from a person or an LLM.
11. Speaking for myself, not for HN, I don't mind if #6 is assisted by LLM editing.
12. Why #11? Because I care more about having a human being in the loop than a human shaping every single aspect. (See also the next point.)
13. One key for me is: does the person stand behind what they post? Meaning: are they accountable to it? Do they own it?
14. In addition to "original thoughts" (as if humans ever really have them!), #13 applies to someone borrowing, remixing, or outright stealing phrases they've heard before.
15. If someone uses an LLM to edit their words, it feels not too different from #14. Except when ... (see #16)
16. Sometimes people use LLMs to not actually put in the work of reflecting and thinking. This is sad for them and sad for people who have to read it.
17. Unfortunately, even without LLMs, some people don't put in the work of thinking and reflecting. See #5.
18. Putting #16 and #17 together, it is NOT the part about the LLM that bothers me! It is the lack of reflecting and thinking!
19. Asking someone else to read your post before sending is totally ok.
20. #15 done well may be no different from (and may even be better than!) #19.
21. In a forum where a comment is read many more times than it is written, I consider it more respectful to put an appropriate amount of effort into writing.
22. If a person takes the time to write something out in their own words, that is a signal of respect for the audience. Especially in comparison with, say, just replying with a trope.
23. If a person uses an LLM to research and clarify their thinking AND is thoughtful about it (#6) AND stands behind it (#13), that is a signal for respect for the audience.
Fin.***
Here's what I'm driving at: I recommend that people put the effort in to figure out what aspects bother them about this moment in time with so much GenAI output. It is a real PITA to make sense of, but not doing so doesn't make that PITA go away.
* I hope you don't mind that I used an NCLM, a neuro-cognitive language model, to construct this... a.k.a. my brain. Snark aside, does the substrate matter in the long run?
** Rich Hickey is my home boy.
*** Why do I number my points? Well, I work hard to tease apart my ideas; it is part of my writing and thinking process. Sometimes I put them back into a paragraph, sometimes not. But I like trying it out: I think it makes it easier to refer back to ideas and build upon them.
- Amen. I think the purpose of the bots is to build up high-upvoted accounts for later flagging and downvoting of things they've been programmed to suppress.
- > I don’t want to see HN becoming twitter
I find it's worse here now than on X. Literally every discussion turns into meta-commentary and gets severely politicized. On certain topics you get flagged out by a mob for stating facts.
At least on X reply bots are not allowed anymore. Blue checks are useless though.
- > I find it's worse here now than X.
I disagree, but in any case the easy solution is to use X instead of HN.
> At least on X reply bots are not allowed anymore.
In theory, maybe.
- HN has mostly turned into a second Reddit since 2023, with tons of topics that have absolutely nothing to do with startups, tech, or programming but are taken directly from r/news ... I'll take bots spamming fake projects over petty, divisive partisan politics.
- Yep - and if you have an opinion on one particular side that isn’t favored here, it gets flagged.
- I'm honestly surprised how well it's going.
From the perspective of usually just swinging into a post from the front page, when I do see green, it's usually overtly political trolling, and dead from the start. So I had assumed new account = everyone sees your post in gray, at least for a week or two.
I don't envy the "Show HN:" case. It can be intractable, story time:
Last week, there was a "Show HN:" post for a GitHub link, made it all the way to #2. It was a Flutter app, written up as if it did all the stuff you'd want from an open source LLM client. I said to myself "geez, I knew I took too long to deliver the thing I've been working on for 2 years. the MVP version is insanely popular."
-- only after digging into the repo for 10 minutes, with domain expertise, did I realize it was a complete Potemkin village, built by Claude. And even then, I was afraid to post something pointing this out because it required domain expertise, and it could have read as negative rather than principled.
All that to say, some subsets of The AI Poster Problem now require having intimate domain expertise and 10 minutes to evaluate it. :/
Additionally, the Claude 4.6s and GPT-5.4s are better than me at posting on HN now. :/ And I've been here 16 years. The past couple days, any comment I write involving some sort of judgement or argument is by Opus 4.6 or GPT-5.4, via: 1) dump HN post into prompt 2) say "I feel $X about this, write me an HN post that communicates this but not negatively".
I'm a little ashamed to admit that if you look through my post history, you'll definitely see a repeated pattern over 16 years of someone who is very negative and has a hard time communicating it constructively. They're smart enough now to extrapolate observations the way I want to, while avoiding my own tarpits.
- My only problem with the last part is that your tarpits are you, and personally: I want to know you, not some version of you filtered or softened by AI. That to me is what makes HN great, how... jarring the reading experience can be. It's really fun and interesting to see how people communicate their ideas. I think it's admirable that you're making an effort to become kinder and communicate more positively, but fingers crossed you don't lose "your voice"! :)
- How about this: ask your LLM to review your post with questions like "does it follow the HN rules?", "how would others read it?", "if I were the other person, how would I feel about this reply?", "is it convincing to you?". That'll help, and it'll still be your voice.
And beware of what's already in context. Sometimes ideas that seem obvious given antecedents are not so obvious when taken in isolation.
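That self-review loop can be sketched in a few lines. Everything here is hypothetical and illustrative: the helper name, the exact question wording, and the idea of bundling the questions into a single critique prompt are assumptions, not an existing tool; the resulting prompt could be sent to whichever chat model you already use.

```python
# Hypothetical sketch: bundle the review questions above into one
# "critique, don't rewrite" prompt for a draft comment.
REVIEW_QUESTIONS = [
    "Does it follow the HN guidelines?",
    "How would others read it?",
    "If I were the other person, how would I feel about this reply?",
    "Is it convincing to you?",
]

def build_review_prompt(draft: str) -> str:
    """Return a prompt asking the model to critique the draft, not rewrite it."""
    questions = "\n".join(f"- {q}" for q in REVIEW_QUESTIONS)
    return (
        "Review the forum comment below. Do not rewrite it; "
        "answer these questions about it:\n"
        f"{questions}\n\n---\n{draft}"
    )

if __name__ == "__main__":
    print(build_review_prompt("Your framework is slow and you should feel bad."))
```

Keeping the model in a reviewer role (rather than asking it to rewrite) is what preserves your own voice: you read the critique, then edit the draft yourself.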
- [dead]
- [dead]
- [dead]
- [flagged]
- [flagged]
- I also like that idea. What are the criteria you had in mind?
- HN does a good job moderating and blocking spam from new accounts.
- Used to. The job has apparently gotten a lot harder now.
- That's one way to block those pesky young innovators from trampling our lawn.
- Please don't post snarky, shallow dismissals. That's been against the guidelines for a long time.
Genuine innovation is what we most want to encourage. That's what Show HN has always been about.
The problem now is that coding assistants have dramatically lowered the bar for getting a product or tool working, without the need for much innovation. We need new ways of identifying projects that are genuinely innovative so that their creators can be fairly rewarded, rather than being drowned out.
- I would actually expect Openclaw bots to be showing up here from time to time now, since there's no explicit documented policy against them.
(edit: And thus such bots can't easily discover that they shouldn't post, afaict)