43 points by gmays 22 hours ago | 13 comments
- Previous discussion here (with links to actual primary source):
https://news.ycombinator.com/item?id=48023079
No technical report published yet, and given the VC funding it's unlikely code or weights will be released either.
- It’s probably something like DeepSeek’s native sparse attention with content-based granularity. They might not be publishing anything because it’s not such a strong value proposition, and doing so would invite commentary that could tank their investment opportunities.
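To make the guess concrete: the core idea of content-based block-sparse attention is to score keys in coarse blocks, keep only the most relevant blocks, and attend densely within them. This is a toy single-query sketch of that idea (my own illustration, not anything the company has published; real NSA-style kernels are far more involved):

```python
import numpy as np

def block_sparse_attention(q, k, v, block_size=4, top_k=2):
    """Score K in blocks, keep only the top_k most relevant blocks,
    then run ordinary softmax attention over just those tokens."""
    n, d = k.shape
    n_blocks = n // block_size
    # Mean-pool keys per block to get a cheap block-level relevance score.
    block_keys = k[: n_blocks * block_size].reshape(n_blocks, block_size, d).mean(axis=1)
    block_scores = block_keys @ q
    keep = np.argsort(block_scores)[-top_k:]  # indices of the selected blocks
    idx = np.concatenate(
        [np.arange(b * block_size, (b + 1) * block_size) for b in keep]
    )
    scores = k[idx] @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ v[idx]

rng = np.random.default_rng(0)
q = rng.standard_normal(8)
k = rng.standard_normal((16, 8))
v = rng.standard_normal((16, 8))
out = block_sparse_attention(q, k, v)  # attends to 2 of 4 blocks, i.e. 8 of 16 keys
```

The win is that the expensive dense attention only touches `top_k * block_size` tokens instead of all `n`, while the block-scoring pass is cheap.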
- Or maybe because giving it away would tank their investment opportunities.
- There are ways and means. Pushing something out in the sub-30B range would gain them mindshare, and they could keep the bigger models to themselves. I can't see any indication of what size their model is, though.
- For Claude Code, I feel 1M is enough. I've had a compaction once, but that was because I was forcing Claude to do something it clearly had a hard time understanding.
For general chat bots where the user doesn't understand what a context window is, what do you do about context? Latest few messages and then a memory tool? Compaction?
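The "latest few messages plus a memory tool" strategy mentioned above can be sketched roughly like this (my own illustration; the `summarize` step is a stand-in for a real LLM call):

```python
from collections import deque

class ChatContext:
    """Keep the last few turns verbatim; fold older turns into a
    running summary "memory" as they fall out of the window."""

    def __init__(self, max_recent=4):
        self.recent = deque(maxlen=max_recent)
        self.summary = ""

    def add(self, role, text):
        if len(self.recent) == self.recent.maxlen:
            # The oldest turn is about to be evicted: compress it first.
            self.summary = self.summarize(self.summary, self.recent[0])
        self.recent.append((role, text))

    def summarize(self, summary, turn):
        # Placeholder: a real system would ask the model to compress this.
        return (summary + f" {turn[0]}: {turn[1][:40]}").strip()

    def prompt(self):
        parts = [f"[memory] {self.summary}"] if self.summary else []
        parts += [f"{r}: {t}" for r, t in self.recent]
        return "\n".join(parts)
```

The prompt sent to the model is then a fixed-size window plus one summary line, so the user never has to know the context window exists.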
- I feel the 1M context is way too large: the model gets "drunk" well before it gets anywhere near 1 million tokens. IMO the 1M context window is a huge downgrade.
- I use a tool called context-mode; it updates the agent to save a session summary every 100k tokens.
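The every-100k-tokens idea amounts to a simple threshold trigger: once the running token count crosses the line, collapse the history into a summary message and reset. A minimal sketch (function names are mine, and `summarize_session` stands in for a model call):

```python
def summarize_session(messages):
    # Placeholder: a real tool would prompt the model to compress the history.
    return f"Session summary covering {len(messages)} messages."

def maybe_compact(messages, token_count, threshold=100_000):
    """If the session has crossed the token threshold, replace the
    history with a single summary message and restart the count."""
    if token_count < threshold:
        return messages, token_count
    summary = summarize_session(messages)
    # Rough heuristic: ~4 characters per token for the new count.
    return [{"role": "system", "content": summary}], len(summary) // 4
```

Called after each turn, this keeps the live context bounded while preserving a compressed record of everything before the cut.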
- This is so interesting to me: I frequently experience compaction on long-running features and still find Claude is better than starting with fully fresh prompts.
Every dev seems to use these tools differently.
- Claude does compaction in the regular web chat interface now, too
- Have they published?
- I'll believe it when I see it.
- Waiting for the paper and model card. I'll believe it when I see it.
- Feels like this is to AI what JPEG is to images