- For me, practical knowledge comes from trying to figure things out. The more polished and "ELI5" the material is, the less I retain. I've played with quite a few LLM tools that promised to help me "understand anything", but I don't think they help with intuition all that much. For what it's worth, it's not an LLM-specific problem. I like YouTube content like 3Blue1Brown, but I don't think that I retained anything useful from any of it.
I don't question that LLMs are useful for answering questions about codebases, but this is closer to "turn a codebase into a curriculum", and... does that actually work?
- An important tenet of modern education is that true knowledge is that which the learner (re)constructs in their mind. Heuristic learning (i.e. "trying to figure things out") is often a great way to do this.
- Definitely. As instructors, we see this in action all the time. We describe stuff in writing and in lecture, discuss it with the students, and everybody seems to have good understanding. And then we have them implement it.
And that's when the shit hits the fan. :-D Only after concerted effort do the students actually gain understanding.
- In another post I argued that the Romans already had a proverb for this: "Scribere bis legere." Translated, this means "to write is to read twice".
In practice, what this means is that you have to know how to reproduce the knowledge you've read, in your own words. Only then can you be sure you've mastered what you've read. It's the reason for homework and all the other stuff we have to do. Reading something five or more times simply does not suffice for our brains.
- As I understand it, in teaching there's an idea of the "Zone of Proximal Development" (ZPD). Some things you can do without help, other things you can only do if others do it for you, and then in between there's all the stuff that you can do with some amount of assistance. Being in this zone is important for learning, at least in theory.
I suspect that's kind of what's happening here. If you're trying to learn something too abstract or too distant from what you currently know, you'll probably reach for more polished, ELI5-style material, because you don't yet have the skills to understand a more complex version. You're probably not in the ZPD. But if you can figure some things out by yourself, possibly with some amount of help, then you're in the learning zone and can meaningfully progress.
I have similar experiences to you with 3B1B - it's interesting, but I rarely retain anything meaningful after I've finished - and I think it's because he has to explain every part for me to understand what's going on. I'm not in the zone of proximal development because I can't do enough of the work myself. So the end result is an interesting video where someone explains a cool concept to me, but it's not learning, because I'm not also doing all the foundation work that would get me to the point where I can understand the video for myself.
- > I like YouTube content like 3Blue1Brown
You are the first one I know who said that. Thank you for saying that!
I think his videos are amazing but they are NOT meant to teach you the material.
They provide a high-level intuition, which I haven't found a use for yet.
Perhaps it’s just me, but I do NOT learn from intuition and analogies at all. I need to get lost in the details and rigor first, and then develop my own intuition second, and maybe look at someone else’s intuition third, maybe.
- I recommend watching "The AI Paradox" which speaks to this notion that knowledge comes from figuring things out.
- Doesn't this video make a strong case that LLMs can think?
- There's no shortcut to knowledge and wisdom. But there are a whole lot of sidepaths that don't lead to either.
- Yeah, I need a Feynman style explanation that makes me think rather than just commit facts to my memory.
- One thing I realized while working on large repos is that most “code graph” tools are still fundamentally navigation tools.
You can see structure, dependencies, call graphs, etc., but you still spend a lot of time manually building a mental model of why things exist and how concepts connect across the codebase.
What I’m trying to explore with Understand Anything is whether LLMs + structured graphs can help generate higher-level semantic understanding instead of only visualization.
For example:
1. tracing how a business concept propagates through services/modules
2. mapping requirements ↔ implementation ↔ data flow
3. surfacing architectural patterns automatically
4. helping new contributors build a mental model faster
Still very early obviously, but that’s the direction I’m interested in exploring.
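As a toy illustration of the first idea in that list (tracing how a business concept propagates through modules), here's a minimal sketch. The call graph, the concept tags, and the `trace_concept` helper are all hypothetical examples, not part of the actual project:

```python
from collections import deque

def trace_concept(call_graph, concept_tags, concept, entry):
    """Breadth-first walk of a call graph, collecting every node
    reachable from `entry` that is tagged with `concept`."""
    seen, hits = {entry}, []
    queue = deque([entry])
    while queue:
        node = queue.popleft()
        if concept in concept_tags.get(node, set()):
            hits.append(node)
        for callee in call_graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return hits

# Toy graph: an "invoice" concept flowing through three layers.
graph = {
    "api.create_invoice": ["billing.compute_total"],
    "billing.compute_total": ["db.save_invoice", "tax.lookup_rate"],
    "db.save_invoice": [],
    "tax.lookup_rate": [],
}
tags = {
    "api.create_invoice": {"invoice"},
    "billing.compute_total": {"invoice", "tax"},
    "db.save_invoice": {"invoice"},
    "tax.lookup_rate": {"tax"},
}
print(trace_concept(graph, tags, "invoice", "api.create_invoice"))
# ['api.create_invoice', 'billing.compute_total', 'db.save_invoice']
```

The point of the LLM layer, presumably, is to produce the `tags` mapping automatically, which is exactly the part plain static-analysis graph tools don't give you.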
- I was talking to a teacher and she was explaining how everyone is reaching for AI to have everything explained to them. "I'm too dumb to understand things," is the basic assumption people are now growing up with, reaching for AI summaries all the time without trying to understand anything themselves.
Instead of trying to understand things, people are reaching for better tools to have the thinking done for them. We are losing something huge.
- Every major leap forward triggers Luddism in those prone to histrionics.
You have to offload cognition in order to recognize the next abstraction. That's always been how we tackle harder problems.
A good explanation is foreplay, not a replacement for the act itself. If people stop there, that's a premature-pedagogy problem, not an AI problem.
Somewhere, an AI is summarizing this comment for someone right now, and that person understands the issue better than you do.
- This is not just another abstraction. It is something fundamentally different because it is a jump away from deterministic, transparent processes to a probabilistic black box. It's not like a jump from orality to books to digital media, or hand written arithmetic to calculators to programs. These abstractions were solid and dependable and could be relied upon to tackle harder problems. This abstraction is beyond leaky.
The assumption that "that person understands the issue better than you" is bold when the best AI summaries will often give back completely false summaries on any given issue.
- Are those 9.7k real users? I mean, maybe I am too old fashioned, but whenever I tried to use such tools long before AI, it actually didn't help much. It was much easier to read the codebase and find the needed connection on my own.
It reminds me of NX graphs, which are helpful for finding circular dependencies, but beyond that they don't provide a lot of value, as I can see the same kind of structure just by looking at the codebase.
Am I doing something wrong with these tools?
- Playing amateur detective here, but "A huge thank you to the community!" blurb was added to the repo on March 20th [0], just before the hockey stick inflection point on the 21st where they got exactly 1800 more stars... then exactly 1000 more the next day, 1000 more after that, then 900... etc. [1] The only point where we see the first sig digit be NOT zero is today. Maybe there's some truncation that github does on their end for the API, but it being exactly +1000 a couple days in a row is indicative.
Wow. I thought the github clout market would be a bit more subtle about it.
[0] https://github.com/Lum1104/Understand-Anything/commit/9866fc...
[1] https://www.star-history.com/lum1104/understand-anything#his...
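The round-delta heuristic described above can be sketched in a few lines. The daily totals below are illustrative made-up numbers, not the repo's actual star counts:

```python
def suspicious_days(daily_totals, round_to=100):
    """Flag days where the star delta is an exact multiple of `round_to`.
    Organic growth rarely lands on round numbers several days in a row."""
    flagged = []
    for day in range(1, len(daily_totals)):
        delta = daily_totals[day] - daily_totals[day - 1]
        if delta > 0 and delta % round_to == 0:
            flagged.append((day, delta))
    return flagged

# Illustrative pattern from the comment: +1800, then +1000 twice, then +900.
totals = [1200, 3000, 4000, 5000, 5900, 6634]
print(suspicious_days(totals))
# [(1, 1800), (2, 1000), (3, 1000), (4, 900)]
```

Note the caveat raised elsewhere in this thread: if the data source itself rounds counts to the nearest hundred, this heuristic flags everything and tells you nothing.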
- You can check the historical star data here:
https://trendshift.io/repositories/23482
Honestly, I’m the author of the project :)
- As fun of a theory as this is, star-history.com just seems to round off the numbers at multiples of a hundred - just look at any other repo on the site.
- Now it'll be, "Author of library with 10K stars on github, front page on HN, #1 ProductHunt"
- > Are those 9.7k real users?
I don't quite get this stars-to-users connection. The stuff I use, I use; I don't need to star it, since the reference is saved somewhere else. I bookmark, which is what the star is, stuff that seems interesting. So for me, a star mostly means I don't actually use the project.
And yes, I glance at stars for a popularity cutoff, but forks, PRs, issues are much more telling.
- Just look at the star graph on the bottom of the readme (itself a sign of a hype driven project with little substance). I highly doubt that hockey stick is organic.
- the hockey stick was probably from when they paid for fake stars: https://awesomeagents.ai/news/github-fake-stars-investigatio...
- [dead]
- Did anyone actually use this on a complex codebase and have any kind of intuition for it ?
Like, having looked at the demo, it feels less intuitive and more complex than going through the codebase with tmux + codex and reading it myself. For you to understand a codebase, interacting with it should be easier, not harder. This seems to introduce way too many steps to interact with the codebase.
- I know I'm bandwagoning but just to make sure the signal beats the noise:
  - This looks like vibe-code + fake GitHub stars
  - There's a difference between "Summarize this verbose report from the PM so I can get the gist" and "ELI5 this complex subject so I feel like I understand it".
- What evidence is there that this makes any difference at all? There are a gazillion (and one) codebase understanding solutions using knowledge graphs. How do I know if it's any good compared to just using Codex or Claude Code?
- It depends on an individual's personal taste for how they understand things: some people like to YOLO and tinker, some like to read docs before looking at any code, and some like to do both at once. To me, the fact that there's no basis for saying one solution works better than the others is exactly why it's so easy to make trending/popular repos these days.
- Fake GitHub stars. Move along.
- Vibe-coded projects and fake GitHub stars. Name a more iconic duo.
- is this like Obsidian's graph view? Looks pretty/makes cool screenshots but has no actual value and is just cumbersome to use? (btw, this isn't meant to be a mean comment, just a question after looking at the output.)
- They lost me at the first gif. Scrolling around a large graph that is mostly empty space... it seems you could present the same info as compact, screen-sized text with nested <ul> elements.
- For anyone interested in ELI5 on arXiv papers: https://eli.voxos.ai
Similar idea, but much simpler and focused!
- Interesting approach. I built something similar, https://github.com/nilbuild/diffity, to understand unknown codebases. The difference is that it gives you an interactive walk-through with mermaid diagrams, guiding you through the feature or part of the codebase that you're looking at.
- The phrase going around the interwebs is "You can outsource your thinking but not your understanding". It's a phrase that can at times seem like a weird human<>LLM endless loop: depending on what you think you understand and what the LLM "thinks" to help you understand, it can seem like the LLM also understands. But it does not.
It's clear one can't really think about anything without building a basic understanding of it. Worth stating that both of these are distinct from learning. But I would argue that it is important to know what you *have* to understand now and why that is important. An LLM can help you understand a great many things, but you need to know what you are looking for, and that is something no artificial intelligence can really *do* for you. Trial and error, building a sense of self-awareness, and talking to people are better ways to figure this out, especially for fairly open-ended problems.
- I agree with the sentiment on many of these comments. Understanding something is work and that can’t be offloaded to others or even LLMs.
- “Understand anything”? More like “compress anything”.
- I'd prefer it if post titles immediately showed whether it's an AI tool. These AI projects seem to be picking more and more random names.
- A big-ass graph with hundreds of spaghetti nodes is the kind of learning I try to avoid. It's better to just ask directly: "where do I start?", "teach me about...". This is over-engineered education.
- Provocative title; then seeing the 8+ dot folders in the repo really made this seem like some kind of obscure satire at first.
- I'm exhausted by these shiny vibe coded projects that overpromise and underdeliver.
Knowledge comes from doing the hard work, not from being spoon-fed information. All these fancy graphs represent a tentative mental model produced as a result of research and learning. Everyone's model is different, based on their own experience and focus, so trying to present one as a definitive map will more than likely not be conducive to understanding at all. Besides the fact that it will almost certainly miss important details or be hallucinated.
HN users: stop upvoting and promoting this garbage. HN mods: please give us tools to label and filter this content.
- What would the ideal moderation scheme look like?
- There's no such thing, but all content publishing web sites should at the very least provide tools for users to self-moderate, which this forum heavily relies on anyway.
Now that the internet is flooded by machine-generated content, which is often published and promoted autonomously as well, all content should be scanned and labeled with a value that indicates the likelihood of it being machine-generated and published.
I'm thinking of JSON fields like `machine_gen_probability` and `machine_pub_probability` returned by the API. Then the frontend should expose settings to show these labels next to each post and comment, and filtering rules to decide what should be done with content above a certain value (hide, adjust feed rank, etc.). Some people might even want to boost this content, for whatever reason, so making the system flexible would be smart.
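A minimal sketch of what such client-side filtering rules could look like, assuming the two hypothetical probability fields above are returned by the API (the posts, field names, and `apply_rule` helper are all illustrative):

```python
# Hypothetical API payload; field names taken from the proposal above.
posts = [
    {"id": 1, "title": "Show HN: ...",
     "machine_gen_probability": 0.92, "machine_pub_probability": 0.40},
    {"id": 2, "title": "Ask HN: ...",
     "machine_gen_probability": 0.08, "machine_pub_probability": 0.01},
]

def apply_rule(posts, field, threshold, action="hide"):
    """Apply one self-moderation rule: hide posts whose score on `field`
    exceeds `threshold`, or demote them to the bottom of the feed."""
    if action == "hide":
        return [p for p in posts if p[field] <= threshold]
    # "demote": keep everything, but rank likely machine content last.
    return sorted(posts, key=lambda p: p[field])

visible = apply_rule(posts, "machine_gen_probability", 0.5)
print([p["id"] for p in visible])  # [2]
```

The flexibility argument maps naturally onto the `action` parameter: the same scores can drive hiding, demotion, or (for whoever wants it) boosting.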
The scoring system of course won't be perfect, but I figure that a company like YC should know a few talented individuals that could do a solid job of implementing this. They've certainly profited from investing in companies that cause this problem.
But... considering HN is merely a promotional tool for YC that runs on limited resources as it is, I wouldn't hold my breath that such a system would ever be implemented. So all we're going to get are changes to "guidelines", and hope that the system won't be abused. Which is laughably naive in this day and age. So this forum will most likely be overrun by the noise, and end up with minimal participation from reasonable humans, as is happening and will continue to happen on most online platforms.
- [flagged]