- Gemma 4, in my view, is good enough to do things similar to Gemini 2.5 Flash, meaning if I point it at code and ask for help and there is a problem with the code, it'll answer correctly in terms of suggestions. But it's not great at using all tools or one-shotting things that require a lot of context or "expert knowledge".
If, a couple more iterations from now, say Gemma 6 is as good as current Opus and runs completely locally on a Mac, I won't really bother with the cloud models.
That’s a problem.
For the others anyway.
- I agree. At first I was really turned off by the Gemma 4 line of models because they didn't function with coding agents as well as the Qwen 3.5 line of models. However, I found that for other use cases Gemma 4 was very good.
EDIT: I just saw this: "Ollama 0.20.6 is here with improved Gemma 4 tool calling!" I will rerun my tests after breakfast.
- similar vibes as "640k ought to be enough for anybody"
- I think the difference is that with LLMs, in a lot of cases you do see some diminishing returns.
I won't deny that the latest Claude models are fantastic at just one shotting loads of problems. But we have an internal proxy to a load of models running on Vertex AI and I accidentally started using Opus/Sonnet 4 instead of 4.6. I genuinely didn't know until I checked my configuration.
AI models will get to this point where for 99% of problems, something like Gemma is gonna work great for people. Pair it up with an agentic harness on the device that lets it open apps and click buttons and we're done.
I still can't fathom that we're in 2026 in the AI boom and I still can't ask Gemini to turn shuffle mode on in Spotify. I don't think model intelligence is as much of an issue as people think it is.
- 100% agree here. The actual practical bottleneck is harness and agentic abilities for most tasks.
It's the biggest thing that stuck out to me using local AI with open source projects vs Claude's client. The model itself is good enough I think - Gemma 4 would be fine if it could be used with something as capable as Claude.
And that's gonna stay locked down, unfortunately, especially on mobile and in cars. It needs access to APIs to do that stuff, and not just regular APIs that were built for traditional invocation.
The same way that websites are getting llm.txts I think APIs will also evolve.
- Agree on the diminishing returns, and the Opus 4.6 anecdote is a good signal.
- I think security is the issue; AI is good at circumventing it. For example, AI can read paywalled articles you cannot. Do you really want AI to have "free rein"?
- I mean, to me even the difference between Opus and Sonnet is as clear as night and day, and likewise between Opus and the best GPT model. Opus 4.6 just seems much more reliable: when I ask it to do something, that thing actually happens.
- It depends what you're asking it though. Sure, in a software development environment the difference between those two models is noticeable.
But think about the general user. They're using the free Gemini or ChatGPT. They're not using the latest and greatest. And they're happy using it.
And I am willing to bet that a lot of paying users would be served perfectly fine by the free models.
If a capable model is able to live on device and solve 99% of people's problems, then why would the average person ever need to pay for ChatGPT or Gemini?
- But there are other tasks too, like research, where dates matter, little details and connections matter, and reasoning matters: background research activities and tool use outside of software development. This is where I'm finding LLMs most useful in my life.
Even Opus makes mistakes with dates, or fails to understand news correctly in chronological context, and it would be even worse with smaller, less performant models.
Scheduling, planning, researching products, shopping, trip plans, etc...
- You're quick to say "to me" in your comparison.
My experience is very different from yours. Codex and CC yield very different results, both because of the harness differences and the model differences, but neither is noticeably better than the other.
Personally, I like Codex better just because I don't have to mess with any sort of planning mode. If I imply that it shouldn't change code yet, it doesn't. CC is too impatient to get started.
- I guess yes, that's a harness difference, and you can also configure CC as a harness to behave very differently. But even with the same harness and guidance, "to me" there's still a difference between Opus 4.6 and e.g. GPT 5.4 (or whichever GPT model you use). I've been using Claude Code, Codex and OpenCode as harnesses at present, but for serious long-running implementation I feel like I can only really rely on CC + Opus 4.6.
- Yes 5.4
Perhaps Opus is superior and I'm just jaded.
I come from Cursor before having adopted the TUI tools. Opus was nothing short of pathetic in their environment compared to the -codex models. I would only use it for investigations and planning because it was faster.
Like you've said, though, that could just be a harness issue.
- I have the opposite experience. Codex gets to work much faster than Claude Code. Also I've never seen the need to use planning mode for Claude. If it thinks it needs a plan it will make one automatically.
- I'll drink to the idea that it's all in my head.
- Well, you can do a lot with 640k… if you try. We have 16 GB in base machines and very few people know how to try anymore.
The world has moved on, that code-golf time is now spent on ad algorithms or whatever.
Escaping the constraint delivered a different future than anticipated.
- > you can do a lot with 640k…if you try.
It is economically not viable to try anymore.
"XYZ Corp" won't allow their developers to write their desktop app in Rust because they want to consume only 16MB RAM, then another implementation for mobile with Swift and/or Kotlin, when they can release good enough solution with React + Electron consuming 4GB RAM and reuse components with React Native.
- Strangely enough, AI could turn this on its head. You can have your cake and eat it too, because you can tell Claude/Codex/whatever to build you a full-featured Swift version for iOS and Kotlin for Android and whatever you want on Windows and Mac. There's still QA for the different builds, but you already have to QA each platform separately anyway if you really care that they all work, so in theory that doesn't change.
Of course, it's never that simple in reality; you need developers who know each platform for that to work, because you must run the builds and tell the AI what it's doing wrong and iterate. Currently, you can probably get away with churning out Electron slop and waiting for users to complain about problems instead of QAing every platform. Sad!
- My Commodore 64 begs to differ.
- Especially if the 640k are "in your hand" and the rest is "in the cloud"
- The simple fact is that a 16 GB RAM stick costs much less than the development time to make the app run on less.
- > The simple fact is that a 16 GB RAM stick costs much less than the development time to make the app run on less.
The costs are borne by different people: development by the company, RAM sticks by the customer.
A company is potentially (silently?) adding to the cost of the product/service, which the customer has to bear by needing more RAM (or having the same amount, but not being able to do as much with it).
- One stick does. How about all the sticks needed for all the people who want to run the software?
- Still cheaper, since it amortizes over all the software.
- Some software has millions or even billions of users. The cost of 16 GB multiplied by millions or billions would pay for a lot of refactoring.
That said, I think it’s more of a collective action problem. The person who could pay for the refactor to operate in 640 K is not the same person who has to pay for the 16 GB. And yes, the 16 GB is cheap enough in comparison to other costs that the latter group doesn’t necessarily notice that they are subsidizing inefficient development.
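A rough sketch of that arithmetic, with every figure below an illustrative assumption rather than sourced data:

```python
# Back-of-envelope: hardware cost pushed onto users vs. the cost of optimizing.
users = 10_000_000        # assumed install base
ram_stick_usd = 40        # assumed price of a 16 GB stick
dev_year_usd = 200_000    # assumed fully-loaded cost of one developer-year

user_side_cost = users * ram_stick_usd
print(f"borne by users: ${user_side_cost:,}")                               # $400,000,000
print(f"developer-years that buys: {user_side_cost / dev_year_usd:,.0f}")   # 2,000
```

Even at a tenth of that install base, the user-side spend would still fund a couple hundred developer-years.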
- I think stavros means amortization on an individual level - if all software is bloated and requires 16GB to run then my expense for a 16GB stick is not caused by a single piece of software, but everything I use.
Not that I agree of course :) I’m talking more of the net negative of everyone needing to buy 16gb sticks so developers can YOLO vibe-coded unoptimized garbage. But at least I think the former explanation is what stavros meant :)
- People get hung up on bad optimization. If you are working at sufficiently large scale, yes, thinking about bytes might be a good use of your time.
But most likely, it's not. At a system level we don't want people to do that. It's a waste of resources. Making a virtue out of it is bad, unless you care more about bytes than humans.
- These bytes are human lives. The bytes and the CPU cycles translate to software that takes longer to run, that is more frustrating, that makes people accomplish less in longer time than they could, or should. Take too much, and you prevent them from using other software in parallel, compounding the problem. Or you're forcing them to upgrade hardware early, taking away money they could better spend in different areas of their lives. All this scales with the number of users, so for most software with any user base, not caring about bytes and cycles is wasting much more people-hours than is saving in dev time.
- Creating people able to do these optimizations costs human lifetime too, which is then not spent on other things, like building the unoptimized version of another product.
- We're not talking about writing assembly by hand here. If your software has a million daily users and wastes a minute of their day, that's about 9 work-years of labour wasted every single day.
In a 5-year lifecycle that's about 10,000 years of human labour wasted. Yes, I had to quadruple-check this myself.
Does it take 10,000 work-years of effort, per project, to train its developers to write reasonably performant code?
Of course not all of this would translate into actual productivity gains but it doesn't have to.
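For anyone else who wants to quadruple-check, a sketch of the arithmetic (inputs are the parent's assumed figures):

```python
# 1M daily users each losing 1 minute per day, converted to work-years.
daily_users = 1_000_000
wasted_minutes = 1
work_year_hours = 2_000                  # ~250 workdays x 8 h

hours_per_day = daily_users * wasted_minutes / 60      # ~16,667 hours/day
work_years_per_day = hours_per_day / work_year_hours   # ~8.3, i.e. "about 9"
workdays_in_5_years = 250 * 5
print(work_years_per_day * workdays_in_5_years)        # ~10,400 work-years
```

Counting only workdays lands on the parent's "about 10,000"; counting calendar days pushes it past 15,000.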
- What world are you living in where the median piece of software has a million users? Or even a hundredth of that?
- You are failing to consider the opportunity cost of how much more work-years can be saved by making a new feature.
- Look at the whole history of computing. How many times has the pendulum swung from thin to fat clients and back?
I don't think it's even mildly controversial to say that there will be an inflection point where local models get Good Enough and this iteration of the pendulum shall swing to fat clients again.
- Assuming improvements in LLMs follow a sigmoid curve, even if the cloud models are always slightly ahead in terms of raw performance it won't make much of a difference to most people, most of the time.
The local models have their own advantages (privacy, no -as-a-service model) that, for many people and orgs, will offset a small performance advantage. And, of course, you can always fall back on the cloud models should you hit something particularly chewy.
(All IMO - we're all just guessing. For example, good marketing or an as-yet-undiscovered network effect of cloud LLMs might distort this landscape).
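One way to make that hedge precise (notation mine, not the commenter's): if cloud and local capability follow the same logistic curve f, and local simply lags by a fixed delay d, the capability gap shrinks to nothing as the curve saturates:

```latex
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}, \qquad
\mathrm{gap}(t) = f(t) - f(t - d) \approx d \, f'(t) \to 0 \quad \text{as } t \to \infty
```

So under the sigmoid assumption, a constant lag in time translates into a vanishing lag in capability.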
- More than "a 3-year-old laptop is fine".
My thinkpad is nearly 10 years old, I upgraded it to 32GB of ram and have replaced the battery a couple of times, but it's absolutely fine apart from that.
If AI which was leading edge in 2023 can run on a 2026 laptop, then presumably AI which is leading edge in 2026 will run on a 2029 laptop. Given that 2023 was world-changing, that capacity is now on today's laptop.
Either AI grows exponentially, in which case it doesn't matter, as all work will be done by AI by 2035; or it plateaus in, say, 2032, in which case by 2035 those models will run on a typical laptop.
- The economy is, more or less, a competition.
If someone gets a really great axe and is happy with it, that's great for them.
But then, other people will be on bulldozers.
They can say they are happy with the axe, but then they are not in the competition at that point.
- I think the article was wondering how many billion-dollar bulldozers the world needs. My local hardware store sells a variety of axes. I myself am a happy axe user. I even replace them.
- > it’s not great at using all tools
Glad it wasn't just me. I was impressed with the quality of Gemma 4; it just couldn't write the changes to file 9/10 times when using it with opencode.
- https://huggingface.co/google/gemma-4-31B-it/commit/e51e7dcd...
There was an update to tool calling 3 days ago. I haven't tested it myself but hope it helps.
- Wow, that is so much better! I didn't test it extensively, but my issues are gone.
- Hmm.. is there an updated onnx?
- > it just couldn't write the changes to file 9/10 times when using it with opencode
You might want to give this a try, it dramatically improves Edit tool accuracy without changing the model: https://blog.can.ac/2026/02/12/the-harness-problem/
- Yep, and to be honest we don't really need local models for intensive tasks. At least not yet. You can use OpenRouter (and others) to consume a wide variety of open models which are capable of using tools in an agentic workflow, close to the SOTA models. These open models are essentially commodities: many providers, each serving the same model and competing with each other on uptime, throughput, and price. At some point we will be able to run them on commodity hardware, but for now the fact that we have competition between providers is enough to ensure that rug pulls aren't possible.
Plus having Gemma on my device for general chat ensures I will always have a privacy respecting offline oracle which fulfils all of the non-programming tasks I could ever want. We are already at the point where the moat for these hyper scalers has basically dissolved for the general public's use case.
If I was OpenAI or Anthropic I would be shitting my pants right now and trying every unethical dark pattern in the book to lock in my customers. And they are trying hard. It won't work. And I won't shed a single tear for them.
- Local models seem somewhere between 9 and 24 months behind. I'm not saying I won't be impressed by what online models can do in two years, but I'm pretty confident in the prediction that I won't really need them in a couple of years.
- We still aren't going to be putting 200 GB of RAM in a phone in a couple of years to run those local models.
- HBF (high-bandwidth flash) is coming fast, with the first examples expected to be sampling to customers this year.
The storage technology of Flash memory can be optimized to be as fast and more energy-efficient than DRAM at large linear reads, there was just little demand before because doing so costs you ~half of your density and doesn't improve your writes at all. All the flash memory manufacturers realized that this is a huge opportunity for model weights and are now chasing this.
Or in other words, after the initial price peak stabilizes in a few years, it will be reasonable to put ~500GB of weights into a device for ~$100 in memory costs.
- That amount of RAM won’t be necessary. Gemma 4 and comparably sized Qwen 3.5 models are already better than the very best, biggest frontier models were just 12-18 months ago. Now in an 18-36GB footprint, depending on quantization.
- > We still aren't going to be putting 200 GB of RAM in a phone in a couple of years to run those local models.
You can already buy an iPhone with 2 TB of storage. The CPU, GPU and Neural Engine all share the same pool of RAM and the SSD is directly connected to all of this. You won’t need 200 GB of RAM to run local models when you essentially have 500 GB of virtual memory.
- We don't need 200 GB of RAM on a phone to run big models. Just 200 GB of storage, thanks to Apple's "LLM in a flash" research.
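A minimal sketch of the general idea (not Apple's actual implementation; the file name and layer shape are made up): memory-map the weight file so the OS pages tensors in from flash on demand, instead of loading everything into RAM up front.

```python
import mmap
import numpy as np

# Map a hypothetical raw-fp16 weight file; nothing is read from flash yet.
with open("weights.bin", "rb") as f:
    buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

# Zero-copy view of one 4096x4096 layer; pages are faulted in on first touch.
w0 = np.frombuffer(buf, dtype=np.float16, count=4096 * 4096).reshape(4096, 4096)

x = np.random.randn(4096).astype(np.float16)
y = w0 @ x  # this first touch is what actually pulls the needed pages off flash
```

The OS keeps hot pages cached and evicts cold ones, which is what makes storage behave like (slow) extra RAM.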
- Yes, I agree that this is the right solution, because for a locally-hosted model I value the quality of the output more than the speed with which it is produced, so I prefer the models as they were originally trained, without further quantization.
While that paper praises the Apple advantage in SSD speed, which allows decent performance for inference with huge models, nowadays SSD speeds equal to or greater than that can be achieved in any desktop PC that has dual PCIe 5.0 SSDs, or even one PCIe 5.0 and one PCIe 4.0 SSD.
Because I had independently reached the same conclusion, like I presume many others, a week ago I started working on modifying llama.cpp to use weights stored on SSDs in an optimal manner, while also batching many tasks so that they share each pass over the weights. I assume that in the following months we will see more projects in this direction, so local hosting of very large models will become easier and more widespread, allowing users to avoid the high risks associated with external providers, like the recent enshittification of Claude Code.
- > While that paper praises the Apple advantage in SSD speed, which allows decent performance for inference with huge models, nowadays SSD speeds equal to or greater than that can be achieved in any desktop PC that has dual PCIe 5.0 SSDs, or even one PCIe 5.0 and one PCIe 4.0 SSD.
Apple's advantage is their unified memory architecture, where the CPU, GPU and Neural Engine share the same memory and the SSD is directly connected to the SoC, with less latency than PCIe. Memory bandwidth starts at 300+ GB/s.
- In an optimized implementation of model inference, the latency of SSD access has no importance, because no random accesses are done.
The purpose of optimizing model inference for weights stored on SSDs is to achieve a continuous reading from SSDs at the maximum throughput provided by hardware, taking care that any computations and any accesses to the main memory are overlapped over the SSDs reading.
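A toy sketch of that overlap, assuming a hypothetical one-contiguous-block-per-layer file layout, with a plain Python thread standing in for real async I/O: layer i computes while layer i+1 streams in (double buffering).

```python
import threading
import numpy as np

LAYER_SHAPE = (4096, 4096)   # hypothetical per-layer weight shape
DTYPE = np.float16
LAYER_BYTES = int(np.prod(LAYER_SHAPE)) * np.dtype(DTYPE).itemsize

def read_layer(path, idx):
    # Sequential read of one contiguous layer block; no random access needed.
    with open(path, "rb") as f:
        f.seek(idx * LAYER_BYTES)
        raw = f.read(LAYER_BYTES)
    return np.frombuffer(raw, dtype=DTYPE).reshape(LAYER_SHAPE)

def forward(path, x, n_layers):
    current = read_layer(path, 0)
    for i in range(n_layers):
        nxt = {}
        if i + 1 < n_layers:
            # Start the SSD read for the next layer; file I/O releases the
            # GIL, so it genuinely overlaps with the matmul below.
            t = threading.Thread(target=lambda d=nxt, j=i + 1: d.update(w=read_layer(path, j)))
            t.start()
        x = current @ x               # compute layer i while layer i+1 streams in
        if i + 1 < n_layers:
            t.join()
            current = nxt["w"]
    return x
```

If the per-layer compute time exceeds the per-layer read time, the SSD traffic is fully hidden; otherwise you run at the drive's sequential-read speed, which is exactly why batching many tasks per pass helps.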
- A lot of people are making the mistake of noticing that local models have been 12-24 months behind SotA ones for a good portion of the last couple years, and then drawing a dotted line assuming that continues to hold.
It simply... doesn't. The SotA models are enormous now, and there's no free lunch on compression/quantization here.
Opus 4.6 capabilities are not coming to your (even 64-128 GB) laptop or phone with the popular architecture that current LLMs use.
Now, that doesn't mean that a much narrower-scoped model with very impressive results can't be delivered. But that narrower model won't have the same breadth of knowledge, and TBD if it's possible to get the quality/outcomes seen with these models without that broad "world" knowledge.
It also doesn't preclude a new architecture or other breakthrough. I'm simply stating it doesn't happen with the current way of building these.
edit: forgot to mention the notion of ASIC-style models on a chip. I haven't been following this closely, but last I saw the power requirements are too steep for a mobile device.
- Don’t underestimate the march of technology. Just look at your phone, it has more FLOPS than there were in the entire world 40 years ago.
- And I think it's very likely that with improved methods you could get Opus 4.6-level performance on a wristwatch in a few years.
You needed a supercomputer to win at chess, until you didn't.
Current local models' natural-language performance is much better than any algorithm running on a supercomputer cluster just a few years ago.
- Yeah, but that's the current state of the art after decades of aggressive optimization; there's no foreseeable future where we'll ever be able to cram several orders of magnitude more RAM into a phone.
- We already cram several orders of magnitude more flash storage into phone than RAM (e.g. my phone has 16 GB RAM but 1 TB storage); even now, with some smart coding, if you don't need all that data at the same time for random access at sub millisecond speed, it's hard to tell the difference.
- Agreed. Apple sells an iPhone Pro Max with 2 TB of storage.
- But it doesn't have that many more FLOPS than it did a couple of years ago.
- Would the model even need that breadth of knowledge? Humans just look things up in books or on Wikipedia, which you can store on a plain old HDD, not VRAM. All books ever written fit into about 60 TB if you OCR them, and the useful information in them probably into a lot less; that's well within the range of consumer technology.
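Rough numbers behind that claim (the book count is the commonly cited Google Books estimate; the rest are assumptions):

```python
books = 130_000_000       # ~Google's 2010 estimate of unique books
words_per_book = 80_000   # assumed average length
bytes_per_word = 6        # ~5 characters plus a space, plain text

total_bytes = books * words_per_book * bytes_per_word
print(total_bytes / 1e12, "TB")   # ~62 TB, in line with the ~60 TB figure
```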
- Pretty sure there's at least a couple of orders of magnitude of headroom left in purely algorithmic areas of LLM inference; maybe training, too, though I'm less confident there. Rationale: meat computers run on 20W, though pretraining took a billion years or so.
- There's been plenty of free lunch shrinking models thus far with regards to capability vs parameter count.
Contradicting that trend takes more than "It simply... doesn't."
There's plenty of room for RAM sizes to double along with bus speed. It idled for a long time as a result of limited need for more.
- The gap between SOTA models and open / local models continues to diminish as SOTA is seeing diminishing returns on scaling (and that seems to be the main way they are "improving"), whereas local models are making real jumps. I'm actually more optimistic local models will catch up completely than I am SOTA will be taking any great leaps forward.
- > if I point it at code and ask for help and there is a problem with the code, it'll answer correctly in terms of suggestions
Could I ask how you do that? I installed openclaw and set it to use Gemma 4, but it didn't act in agent mode at all. It only responded in the chat window while doing nothing, and didn't read any files or do any of the things you describe (though I see you do mention that it's not great at using all tools). What are you using exactly?
- I had the same issues. I had to tell it to use subagents explicitly, and instead of saying "set a cron", say "set an openclaw cron".
I generally do like the model, it’s not a great agent though.
It’s good for summarization tasks, small tool use, and has pretty good world knowledge, though it does hallucinate.
- But that difference atm is the difference between it being OK on its own with a team of subagents given good enough feedback / review mechanisms or having to babysit it prompt by prompt.
By the time Gemma 6 allows you to do the above, the proprietary models will supposedly already be on the next step change. It just depends whether you need to ride the bleeding edge. But precisely because it's "intelligence", there's an obvious advantage in using the best version, and it's easy to hype it up and generate FOMO.
- > But that difference atm is the difference between it being OK on its own with a team of subagents given good enough feedback
Do people actually build meaningful things like that?
It's basically impossible to leave any AI agent unsupervised, even with an amazing harness (which is incredibly hard to build). The code slowly rots and drifts over time if not fully reviewed and refactored constantly.
Even if teams of agents working almost fully autonomously were reliable from a functional perspective (they would build a functional product), the end product would have ever increasing chaos structurally over time.
I'd be happy to be proven wrong.
- When that happens, you'll have FOMO from not using Opus 5.x. The numbers they showed for Mythos show that the frontier is still steadily moving (and maybe even at a faster pace than before).
- I would be surprised by that behavior even for 10% of people doing real AI-assisted work. Very few people buy a new motherboard or CPU or graphics card every 3 months.
Even now just because the latest Anthropic is super great doesn't mean people are not using other models. Not everyone is subscribed to only the best.
- There is a cognitive ceiling on what you can do with smaller models. Animals with simpler neural pathways often outperform what we think they are capable of, but there's no substitute for scale. I don't think you'll ever get a 4B or 8B model equivalent to Opus 4.6. Maybe for coding tasks, but certainly not Opus' breadth.
- The only thing that we are sure can't be highly compressed is knowledge, because you can only fit so much information in a given entropy budget without losing fidelity.
The minimal size limits of reasoning abilities are not clear at all. It could be that you don't need all that many parameters. In which case the door is open for small focused models to converge to parity with larger models in reasoning ability.
If that happens we may end up with people using small local models most of the time, and only calling out to large models when they actually need the extra knowledge.
- > and only calling out to large models when they actually need the extra knowledge
When would you want a lossy encoding of lots of data bundled together with your reasoning? If it is true that reasoning can be done efficiently with fewer parameters, it seems like you would always want it operating normal data-search and retrieval tools to access knowledge, rather than risk hallucination.
And re: this discussion of large data centers versus local models, do recall that we already know it's possible to make a pretty darn clever reasoning model that's small and portable and made out of meat.
- > we already know it's possible to make a pretty darn clever reasoning model
- There is a problem though: we know that it is possible, but we don't know how (at least not yet, as far as I am aware). So we know the answer to the "what?" question, but we don't know the answer to the "how?" question.
- I would call brains with the needed support infrastructure small.
- I think you underestimate the amount of knowledge needed to deal with the complexities of language in general as opposed to specific applications. We had algorithms to do complex mathematical reasoning before we had LLMs, the drawback being that they require input in restricted formal languages. Removing that restriction is what LLMs brought to the table.
Once the difficult problem of figuring out what the input is supposed to mean was somewhat solved, bolting on reasoning was easy in comparison. It basically fell out with just a bit of prompting, "let's think step by step."
If you want to remove that knowledge to shrink the model, we're back to contorting our input into a restricted language to get the output we want, i.e. programming.
- Except you don't want knowledge in the model, and most of that "size" comes from "encoded knowledge", i.e. overfitting. The goal should be to have only language handling in the model, and the knowledge in a database you can actually update, analyze, etc. It's just really hard to do.
"World models" (for cars) maybe make sense for self-driving, but they are also just a crude workaround: a physics simulation used to push understanding of physics. Though, in contrast to most topics, basic physics tends not to change randomly, and it's based on observation of reality, so it probably can work.
Law, health advice, programming, etc., on the other hand, change all the time and are all based on what humans wrote, which in some areas (e.g. law or health) is very commonly outdated, wrong, or at least incomplete in a dangerous way. And programming changes all the time.
Having this separation of language processing and knowledge sources is... hard; language is messy and often interleaves with information.
But it is most likely achievable with smaller models. Actually, it might even be easier with a small model. (Though whether the necessary knowledge bases could fit and run on a Mac is another topic...)
And this should be the goal of AI companies, as it's the only long-term sustainable approach as far as I can tell.
I say "should" because it may not be: if they solve it that way and someone manages to clone their success, they lose all their moat for specialized areas, since people can create knowledge bases for those areas with know-how OpenAI simply doesn't have access to. (Which would be a preferable outcome, as it means actual competition and a potentially fair, working market.)
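A minimal sketch of that separation; everything here is hypothetical (the store, the topic routing, and the model client's methods). The model does only language work; facts come from a store you can update without retraining:

```python
from datetime import date

# Updatable knowledge store: fixing a stale fact is an edit here, not a retrain.
KNOWLEDGE = {
    "tls13-key-exchange": {
        "facts": ["X25519MLKEM768 is recommended on servers that support it"],
        "updated": date(2026, 1, 15),
    },
}

def answer(llm, question: str) -> str:
    topic = llm.classify_topic(question)      # pure language task (hypothetical API)
    entry = KNOWLEDGE.get(topic)
    if entry is None:
        # No source: better to say so than let encoded "knowledge" guess.
        return llm.respond(question, facts=[], caveat="no source found")
    return llm.respond(question, facts=entry["facts"])
```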
- As a concrete outdated case:
The TLS key-exchange group X25519MLKEM768 (often loosely called a cipher) is recommended to be enabled on servers that support it.
Last time I checked, AI didn't even list it when you asked what to enable for TLS 1.3 (though it has been widely supported since even before it was fully standardized).
This isn't surprising, as most input sources AI can use for training are outdated and also don't list it.
Maybe someone at OpenAI will spot this and feed it explicitly into the next training cycle, or people will cover it more and through that it gets fed in implicitly.
But what about all the niche-but-important information with just a handful of outdated Stack Overflow posts or similar? (Which are unlikely to get updated now that everyone uses AI instead...)
The current "let's just train bigger models with more encoded data" approach just doesn't work. It can get you quite far, though, but then it hits a ceiling. And trying to fix it by also giving the model knowledge "it can ask for if it doesn't know" has so far not worked, because it reliably doesn't realize it doesn't know when it has enough outdated/incomplete/wrong information encoded in the model. Only by ensuring it doesn't have any specialized domain knowledge can you make sure that approach works, IMHO.
- I think you are underestimating the strength a small model can get from tool use. There may be no substitute for scale, but that scale can live outside of the model and be queried using tools.
In the worst case a smaller model could use a tool that involves a bigger model to do something.
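A minimal sketch of that pattern, with stubbed, made-up tools; the "escalate" tool is exactly the worst case described, a larger model hidden behind a tool interface:

```python
def web_search(query: str) -> str:
    return f"(stub) top results for {query!r}"       # stands in for a real search API

def calculator(expr: str) -> str:
    return str(eval(expr, {"__builtins__": {}}))     # toy only: never eval untrusted input

def ask_big_model(prompt: str) -> str:
    return f"(stub) big-model answer to {prompt!r}"  # the tool that wraps a larger model

TOOLS = {"search": web_search, "calc": calculator, "escalate": ask_big_model}

def run_turn(small_model, user_msg: str) -> str:
    # The small model only routes and phrases; the scale lives in the tools.
    action = small_model.decide(user_msg, tools=list(TOOLS))  # hypothetical API
    if action.tool is None:
        return action.text
    result = TOOLS[action.tool](action.args)
    return small_model.respond(user_msg, tool_result=result)
```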
- Small models are bad at tool use. I have a Liquid AI model doing it in the browser, but it's super fragile.
- I don’t really understand this, but I hear it a lot so I know it’s just confusion on my part.
I’m running little models on a laptop. I have a custom tool service made available to a simple little agent that uses the small models (I’ve used a few). It’s able to search for necessary tool functions and execute them, just fine.
My biggest problem has been the llm choosing not to use tools at all, favoring its ability to guess with training data. And once in a while those guesses are junk.
Is that the problem people refer to when they say that small models have problems with tool use? Or is it something bigger that I wouldn’t have run into yet?
- People can correct me if I'm wrong, but I think the core logic behind OpenAI's valuation was essentially that AI would work like search. Google had the best search engine, it became a centre of gravity that sucked everything in and suddenly network effects meant it was the centre of the universe. There seem to be 2 big problems with that though. The first is that for search, queries are both demand for the product and a way of making the product better. The second, is that Google was genuinely the best product for a very long time.
Maybe point (1) was unclear at some point, but I think it's mostly clear today that's not happening. Training the model is modestly distinct from inference.
Point (2) is really funny - because sure, at some point OpenAI was the best, and then Sam Altman blew the place up and spawned a whole host of competitors who could replicate and eventually surpass OpenAI's state of the art.
It now looks like AI is a death march. You must spend billions of dollars to have the best model or you won't be able to sell inference. But even if you do, a whole host of better funded competitors are going to beat you within months so your inference charges better pay off extremely quickly. When the gap between models starts to drop, distribution becomes king and OpenAI can't compete in that field either.
Google can do that. Meta can do that. MSFT probably can do that. Amazon can do that. OpenAI cannot. They do not have the cash to do it.
- I think a large part of its valuation was its ability to compete with search, but that's understating it a bit. Unlike search, it could/can be the platform users primarily interact with (à la a social media replacement) while having huge impacts on enterprise work and automation. I think it's the combination of one company effectively being able to compete on every front in the modern web ecosystem that's contributed to the valuation.
It's also important to note the valuation is based not just on its possible concrete economic implications in these areas but also on future "unknown" possibility (i.e. whatever "AGI" means to investors). That's not to say I believe it's possible to achieve this, but rather that a huge part of Sam Altman's job is increasing valuation through unfounded claims of AGI's possibility and possible impact.
- I've almost forgotten about AGI; that was supposed to be the reason for the valuations and all the hope/fear. Then it just sort of went away, and AI turned into the Software Developer doomsday machine. We're on month 4 since the models got really good at code and we were all going to be out of a job in 6 months. I guess we only have 2 more months of employment left /s
- "Google had the best search engine, it became a centre of gravity..."
Almost no one made serious attempts at competing with Google. And not because of network effects or any other hard blocker. In the early 2000s, the industry just wasn't mature enough to heavily fund serious competition.
By the 2020s the industry has funding and founders ready to jump on any huge opportunity that presents itself.
There are of course downsides, but this competitive landscape in AI seems like a huge net win for users in terms of lower costs and faster progress.
- Microsoft had a good go with Bing.
- Yahoo? MSN Search?
- That's been my feeling for a while now. Google just has to keep up while OpenAI and Anthropic go bankrupt. I can see MSFT and Amazon eventually consuming OpenAI and Anthropic respectively when the money runs out, but I still think Google is the eventual winner. I also have been pointing out that Apple making a deal with Google vs. trying to do it on their own is another vote in that direction.
- I'm just sad Google was intent on ruining their own product, whether by removing the + operator (seriously, Google+ is not an excuse; I don't care if it conflicts with search, don't do that) or through some of their political censorship.
- For actual searching, it seems like RAG would be the way. Instead of rebuilding models, focus on curating datasets and sources.
- This is the classic Apple approach: wait to understand what the thing is capable of doing (aka let others make the sunk investments), envision a solution that is way better than the competition, and then architect a path to building a leapfrog product that builds a large lead.
- Pretty much it. That said, they did try to appease the markets by announcing 'Apple Intelligence' so they didn't appear to be behind everyone.
They did do the smart thing of not throwing too much capital behind it. Once the hype crumbles, they will be able to do something amazing with this tech. That will be a few years off but probably worth the wait.
- For consumers, AI has anti-hype right now. It's off-putting to see consumer products slapped with a hundred AI labels. I see people talk about how you can turn off all of Apple Intelligence with one toggle, rather than hundreds on Samsung.
Firefox is also marketing how easy it is to disable AI.
- I think a lot of people are not hyped about AI in their toaster, but I don't think people are generally turned off from deeper integration in the OS itself. Especially when, for some people, this represents ideas similar to what gets programmer-types excited about Shortcuts.
Decently accessible automation and discovery, without having to go figure out a bunch of stuff
- > Decently accessible automation and discovery, without having to go figure out a bunch of stuff
Sure, but is this actually happening? Last time I tried, Atlassian's heavily-pushed AI couldn't even turn a Jira ticket number in Confluence into a clickable link. Similarly, Windows has been actively moving away from surfacing locally-installed applications in Start menu search towards offering random internet garbage.
I'm all for using an LLM to make something like Siri able to understand both "Siri, turn off the lights" and "Siri, make it dark!", but that's not what's being pushed onto consumers, because there is no way anyone is going to pay $100/month for any version of that.
- Unfortunately, companies seem to be in such panic mode about making ANY offering, so as not to become irrelevant, that they're giving AI overall a bad reputation. Everyone made a mad dash and didn't spend enough time making the product well thought out. Some got burned by it, such as Microsoft.
Everyone seems convinced AI can just replace 90% of the software out there, but I've yet to see any evidence of that. Sure, it can stand up a blog or get a simple app together pretty quick, but once you get into larger-scale software it's not capable of doing it by itself, and you still need teams of developers working together.
- People like features, benefits, and outcomes. AI isn't a feature, it's a technology that can enable features. But it's being marketed as the only thing that matters.
The user does not give two shits if the new laptop "has AI". This is how Apple has been killing it lately: they market the MacBooks as being powerful, cheap, with long battery life, and a premium feel. Things the user cares about. Most of the stuff marketers are just blanket-labeling "AI" will eventually be shuffled to the background and rebranded with a more specific term that highlights the feature being delivered rather than the fact that it's AI.
- Nailed it.
I reckon most humans never learn the valuable lessons of the past.
As you put it - nobody cares about the technology in and of itself. They care about “ok cool what’s in it for me?”. That’s what determines their decision to purchase/use a thing.
- You're right, there is plenty of space for features that require AI to work but are indistinguishable from "classical" features. Better autocompletion is a proven example.
- Sentiment among my teenage kids and their peers is that AI can fuck right off. It's way over the line into actual hate of anything AI.
- Apple's Neural Engine and the CoreML framework to leverage it are almost a decade old now.
Apple Intelligence was a rebrand, and Apple has made some unique decisions rolling it out.
For instance, the new chatbot version of Siri's hallucinations were seen as unacceptable, so its release was delayed.
Is a chatbot that provides false information regularly really an advancement?
Apple chose not to do photorealistic generative images, so they can't be used for deepfakes.
Apple chose not to add a feature to write text for you, just one to clean up what you write, because they don't want to help kids in school cheat.
- Yeah, exactly. The Apple Intelligence thing was pure BS to shut up the people who kept saying Apple was going to get disrupted by missing out.
Apple seems to follow the values that Steve laid out. Tim isn’t a visionary but he seems to follow the principles associated with being disciplined with cash quite well. They haven’t done any stupid acquisitions either. Quite the contrast with OAI.
- Quietly, they are doing things on-device. The OCR + copy/paste is genuine goodness; modestly functional.
- This feature has been around since iOS 16 (2022) though - no relationship to Apple Intelligence (2024) or the current LLM hype (2023 onwards).
- That's also literally years behind the competition. https://www.androidpolice.com/2018/05/09/android-ps-new-rece...
- The competition has also attached it to a toxic brand and heavily integrated it with actively user-hostile applications. It doesn't matter if your tech is years ahead when people expect using it will mean your image content info will be sold to anyone willing to pay a cent for it.
- It's more that nontechnical users prefer luxury brands over utility brands. A much smaller issue, which you alluded to, is that some technical users aren't technical enough to know real privacy vs. marketed privacy. This feature exists in base Android, which doesn't require any Google services.
- Does anyone else tire of hearing Apple referred to as a "luxury" brand? Sorry, it's more Honda than LVMH.
- LOL, at the risk of sounding like a shill, I think Apple was right on time with these features. They added them once the on-device CPU/Neural Engine was finally powerful and efficient enough. These features arrived at once on Macs, iPhones and iPads, and they arrived at the same time on your friends' devices.
IMO Android suffers from not controlling its hardware. I can never be sure if a hyped new feature will come to my phone, because I'm not using a Pixel or a Samsung.
- But everyone talks about it like it was Apple, and isn’t that what matters (to Apple)?
- The text from images feature launched as a Pixel 2 series-only feature.
There's a much clearer message to consumers on iPhone, since so many features are available on "every phone made in the last five years, once you update the software."
On Android, that feature might be bound to an OS version, or might be rolled out in a Play Store update, it might be specific to just Google or Samsung, or even just to one of their phones. There's much less word of mouth "have you tried this new thing?"
- I've never heard anybody (mis)attribute that to Apple.
- I would have, and I work in tech. I'd guess that most people who use iOS have zero idea of what Android can and can't do, because they never use it and probably never will so what's the point of trying to find out.
- Seconded. I never knew Android had this—but then again I couldn't care less about what Android can do. There is so much stuff fundamentally off-putting for me about the entire Google ecosystem that I'd never consider switching anyway.
- Remember when Google added Car Crash Detection to Pixel in early 2020? Nobody does.
But when Apple added it in iPhone 14 (2022)...
- In French we talk about "le savoir-faire" vs "le faire-savoir" ("know-how" vs "making it known") and the importance of good communication. Apple are the bestest at it. Remember the iPod Shuffle and the lack of a screen marketed as a feature to spice up your life.
- Yea, they nailed that with the Newton, Apple Pippin, and the Apple Vision Pro
- How amazing is that Apple car
- Depending on price I would or would not buy an Apple car; but I am quite interested in options for a car that (1) is electric; (2) doesn't spy on me and sell my data; (3) doesn't take video of me and my passengers and do weird things with it; and (4) doesn't support Republicans / white supremacists / Elon Musk.
And I imagine that like-minded consumers are a pretty large market.
- Knowing Apple's track history with materials, I guess the seats will look like used iPad Smart Keyboard Folios after two years.
- (5) Doesn't support a dictatorship with camps.
- Apple learned to hang back from plowing the unsold Lisas into a landfill.
- The Vision Pro was a development kit, just like the first-generation Apple Watch. It's not meant for consumers; it's meant for the developers among the consumers.
We will see if they ever release a new VisionOS device, but it's not the first time they did that; see also the Apple Watch.
- You can explain away every failed product launch with "it's a developer product", not meant for consumers.
This wasn't like HoloLens or Google Glass. They marketed these devices to consumers and then sold these devices to consumers.
- The Vision Pro is the best AR/VR product ever created.
- All the king's horses and all the king's men couldn't come up with a killer app.
- I think it was $1000 too expensive to take off. It's also too heavy; they should drop the front screen and ruthlessly save weight.
Chicken and egg problem, if no-one buys it, no-one will develop any killer apps.
Whether it’s pleasant to have any screens that close to your eyes - or ever will be - is maybe the bigger question for VR.
- > Chicken and egg problem, if no-one buys it, no-one will develop any killer apps.
Disagree on this. Going back as far as VisiCalc, it's about a device making space for a killer app, and that killer app selling devices. Apple has torched so much developer good-will that even a lower price wouldn't make the space for a killer app.
When was the last time a new, mobile-first killer app came out?
- When have they done that since the first iPhone in 2007? The watch maybe? Though not sure that's "leapfrog" better than anyone else's smartwatch, but I don't have one so maybe I'm wrong.
- Their own chips, vertically integrating.
- - AirPods
- Apple Watch
- AirTag
Those are a few that come to mind. All do multi-billions in revenue per year.
- None of those are the best product in their category, and all are only huge sellers because Apple anti-competitively privileges them in its ecosystem.
- What’s better than AirPods and AirTags? I want them
- The parent poster is saying (and I agree) that Airpods and Airtags are only superior because Apple anti-competitively privileges their integration with iPhones. It's not that they are better at the hardware level by itself.
And since iPhones form the largest single company's device network in the rich countries, that is a pretty big advantage.
- Surely it's less of an advantage in rich countries because naturally less theft occurs?
- > wait to understand what the thing is capable of doing
My parents use Android to ask “What are the 5 biggest towers in Chicago” or “Remove the people on my picture” while apparently iPhone is only capable of doing “Hey Siri start the Chronometer / There is no contact named Chronometer in your phone”.
My iPhone is lagging a ridiculous 10 years behind. It’s just that I don’t trust Google with my credit card.
- These are software/cloud features. You can install Gemini on an iPhone if you want to talk about towers in Chicago.
The only reason to care about it being OS-integrated is to interact with functions of the OS, which Siri does fine.
- Apple's AI stuff also uses cloud features, though you can't use them on other platforms. The problem with Apple's new cloud features is that they generally just suck. I'm surprised iCloud works so well with how hard they're fumbling basic stuff like this.
- At least all of the ones I have tried work locally. I've entered airplane mode and things like magic eraser in images work fine.
- Siri does not do it fine; it's literally the example the commenter above showed.
- Knowing the building heights around Chicago is not an OS feature. Even if Siri were perfect, they still aren't going to ship a Wikipedia object graph on every phone.
Likewise, the phone does not understand removing people from a photo. It is a feature specific to the Photos app, and Siri allows you to wire in commands for the features in your app just fine, and has for years. If Google decided for competitive reasons not to ship this feature to non-Pixel or non-Android users, that's not a Siri fault. That Apple did not integrate this as a voice command into their Photos app is also not a Siri fault. (Is it really common to remove all people from a photo, vs. specific people?)
- > Hey Siri start the Chronometer / There is no contact named Chronometer in your phone
That is what I was referring to: Siri often fails at even opening apps, which is an OS feature. Regardless, even for your examples, at a certain point an AI assistant not being able to do things that others can does become the fault of that AI.
- Siri is one step below that for me; it still doesn't understand my accent. I feel like its voice recognition hasn't improved since 2010...
- "10 years behind" would be an improvement for Siri. It's actively broken much of the time in a way that Google Assistant or Alexa never has been.
- I would argue that they are as bad as each other. I have to repeat most voice commands to Siri and Alexa rather than getting them right the first time. No experience with Google.
- Voice assistants were going to be this revolutionary new category. I think Amazon was going to populate a whole office tower in Boston with Alexa engineers at one point. There have been incremental improvements here and there but, to a first approximation, none of it has really worked out.
- I want the reverse version of this: if Apple can promise to 'lag behind' for another ten years, I'll buy my first Apple device in ten years.
- Apple waited on smartphones?
I thought the original iPhone was basically first.
Do you count BlackBerry and Palm Pilot as Apple waiting to see?
- > Apple waited on smartphones?
They were not waiting on smartphones, but they did wait for the technology to enable them. They had been working on prototypes for a couple of years before releasing the first iPhone, and smartphones were not really a new thing at that point. What made it possible was improvements in digitisers and batteries (they were not the first users of capacitive digitisers, but they were the first to use one at that scale, for a full screen), as well as progress on the software side, which took some effort.
It was the same for the first iPod. They jumped when they got a hard drive they thought was small enough to fit in a product they believed was good.
So yeah, they tend to wait and see, but they consider technologies, not only final products.
- I would absolutely count BlackBerry and Palm Pilot, along with Windows CE-based phones. Just because Apple leapfrogged them (and they all eventually folded those lines of business) doesn't mean they weren't existing products in the market.
The difference, if any, was focus. Smartphones before Apple hit the market were aimed at business/professional users who could afford the high premium. Apple instead targeted making a premium consumer product, which professionals then started to jump to over time, depending on how addicted they were to their BlackBerry keyboard.
- Apple was considered very late to the smartphone game at the time.
Windows CE was introduced on PDAs around 1996, and was on phones by 2003, so the iPhone was arguably between four and eleven years late depending on how you define the space.
Microsoft’s dominance was a safe bet because they had never really failed to dominate any market at that point in history. Also nobody imagined that the size of the mobile market would eclipse laptops, so “Windows CE already won” wasn’t an absurd statement at all.
- I guess it's just hard for me to consider those even "smartphones", with such small screens, no capacitive multi-touch, and web browsers that didn't work properly on so many websites, at least compared to Safari on the iPhone.
And even other factors, like music and video, were so poor; granted, they were built for business use, which didn't really need a good media-consumption experience.
- I suggest you watch the original iPhone video launch - Steve compares the iPhone with the existing smartphones of the time.
You’re taking for granted that we know how things panned out in hindsight. A complete touch screen phone with no fixed buttons at that time seemed nuts.
- Will this strategy work every time? Maybe for AI it will (the market is competitive and Apple just purchases the best model for its consumers).
But this approach may not work in other areas: e.g. building electric batteries, wireless modems, electric cars, solar-cell technology, quantum computing, etc.
Essentially Apple got lucky with AI, but it needs to keep investing in cutting-edge technology in the various broad areas it operates in and not let others get too far ahead!
- Their focus is investing in areas where they see something being a competitive differentiator, or where the market has failed to create a competitive environment.
They do not make their own screens because they can source them from multiple suppliers and work with those manufacturers to create screens with the properties they want. Same with relying on others for batteries; there are plenty of manufacturers to provide batteries to Apple's spec.
They created their own wireless modems because there's only one company they were able to purchase modems from, and those modems did not necessarily have the features Apple wanted.
Apple hasn't announced any interest in selling electric cars, solar cell technology, or quantum computing platforms. I wouldn't expect them to do so until they had a consumer product ready for sale. I doubt they are planning to come out with products in any of these categories soon.
- It works often enough for the company to be wildly successful. They can simply cut their losses and withdraw from industries where it hasn't, such as EVs.
- I think their M chips are a good example. They ran on Intel for so long, then did the seemingly impossible by changing architecture on the Mac, without much transition pain.
Obviously that was built upon years of iPhone experience, but it shows they can lag behind, buy from other vendors, and still win when it becomes worth it to them.
- How is changing the architecture of a platform that only you make hardware for doing the impossible?
They could change the architecture again tonight, and start releasing new machines with it. The users will adopt because there is literally no other choice.
Every machine they release will be the fastest and most capable on the platform, because there is no other option.
- The hard part is doing so without completely ruining the existing app ecosystem. Rosetta 2 is genuinely impressive.
- Exactly this! Rosetta + the whole app developer community who really quickly released builds for M chips (voluntary or forced, but it did happen).
I had the initial M1 Air, and it was remarkable how usable it was. You'd expect all sorts of friction and issues, but mostly things just worked (very fast). Even with some Rosetta overhead it was still fast compared to Intel Macs.
- Rosetta 1 delivered 50-80% of the performance of native, during the PPC->Intel transition. It turns out, you can deliver not particularly impressive performance and still not ruin your app ecosystem, because developers have to either update to target your new platform, or leave your platform entirely.
You can also voluntarily cut off huge chunks of your own app ecosystem intentionally, by giving up 32bit support and requiring everything to be 64bit capable.
...because users have no other choice when only one vendor controls the both the hardware+software. They can either use the apps still available to them, or they can leave. And the cost of leaving for users is a lot higher.
- Vs. FEX and Prism?
- Yes. Apple put custom hardware support in the M series chips based on the needs of Rosetta 2. The x86_64 performance on Rosetta 2 was often higher at launch than the prior generation of Intel chips running those same binaries natively.
Microsoft and Qualcomm already knew the performance of x86 app emulation on windows was killing the ARM machine lineup, so Qualcomm was working on extensions to their chips and Microsoft on having Windows support them already, but ARM64EC and Prism didn't launch for two years after the M1 shipped.
- FEX uses TSO on M series chips.
- > wireless modems
They (Apple) bought Intel's wireless-modem business and are using those instead of Qualcomm's chips. IIRC they aren't best in class when it comes to raw throughput, but they're quite good in terms of throughput vs. power consumption.
- But Apple doesn't just try to do everything.
They do the things they think they can do very well.
Why would they try to build electric batteries, wireless modems, electric cars, solar cells, or quantum computers, if their R&D hadn't already determined that they would likely be able to do so Very Well?
It's not like any of those are really in their primary lines of business anyway.
- Didn't they rush to integrate ChatGPT into their OS back in 2024? Reality doesn't seem to align with your description.
- I wouldn't describe it as 'rushed'. It's integrated pretty much exactly the way they said it would be: as a fallback from Siri when you ask world-knowledge questions.
The part that doesn't work is having Siri locally smart enough to use it as a tool.
- They certainly announced they were going to. I've yet to meet someone who actually used that integration. Like many of these things, it seems to have been a sop to the investors who were accusing Apple of ignoring the AI wave.
- It’s even more superpowered than previous implementations of this strategy.
When they made the iPhone, iPod, and Apple Watch they had no specific hardware advantage over competitors. Especially with early iPhone and iPod: no moat at all, make a better product with better marketing and you’ll beat Apple.
Now? Good luck getting any kind of reasonably priced laptop or phone that can run local AI as well as the iPhone/MacBook. It doesn’t matter that Apple Intelligence sucks right now, what matters is that every request made to Gemini is losing money and possibly always will.
This is especially true in 2026 where Windows laptops are climbing in price while MacBooks stay the same.
- All three of those products launched with custom hardware made by partnered manufacturers.
At iPhone launch, I seem to remember Apple still having quite a bit of the flash ram market tied up from their exclusive iPod contracts - Apple basically helped finance new factories to be spun up in return for exclusive access to their production.
The Apple Watch had the S1 system on package, which included an Apple custom CPU. There were a number of miniaturization techniques and custom parts Apple used which I remember competitors lagging on being able to replicate due to the broader market tendency to integrate off the shelf products (but I don't have more part examples or timelines).
Since they try to stay secretive about upcoming products, competitors may only get hints about what Apple is doing through the usual industrial-espionage channels until the product comes out. That creates quite a bit of lag when you are starting a new product design cycle based on a product your competitor just brought to market.
- How do you know Gemini is losing money on inference?
- > How do you know Gemini is losing money on inference?
It's not. People make this claim with zero evidence.
But Google made around $20B profit on Google search in 2025 Q4, and that includes AI search.
- Until the day comes that they properly break out the financials, neither you nor the other poster has any idea what the numbers are.
- And if AI was making lots of money they’d break it out and proudly display it in their financials on its own.
- They can't break it out because it is embedded in other services.
But to quote:
> Overall, we’re seeing our AI investments and infrastructure drive revenue and growth across the board.
and
> Revenue from AI solutions built by our partners increased nearly 300% year-over-year, and commitments from our top 15 software partners grew more than 16X year-over-year.
https://blog.google/company-news/inside-google/message-ceo/a...
- No, they choose not to break it out because accounting principles let them get away with it.
Lmao don’t talk about subjects you clearly are not an expert in.
The only real metric one can use to gauge new investment is the marginal ROIC. Which is very noisy to say the least.
- Ok..
So as I said: Google made around $20B in profit on 2025 Q4 which includes AI search.
Both revenue and profit grew with the introduction of AI search.
So where exactly is this big loss you speak so confidently of?
- I did not comment on whether it was a loss or profit position.
“Until the day comes that they properly break out the financials you, nor the other poster have any idea as to what the numbers are.”
Until one of the private firms goes public nobody has a clean view of what the financials of a model business look like.
- Yep lol. Every investor and portfolio manager in the world is begging for a signal to say the returns really are coming on all this capex spend.
- They're talking about free inference, like on Android and Google Home devices. No one is paying subscription fees for those, and they're running their inference in the cloud. Apple Intelligence, for the most part, runs on the device.
- Isn't some of Gemini's functionality on Android on-device?
- Yes it is
- Is there any evidence of any company making money in inference?
- Apple's advantage was that they did everything in house and had the marketing and distribution capabilities. And now you've got the ecosystem lock-in.
In hindsight it’s obvious why they pulled it off - nobody else could do it. They all had pieces missing.
- Apple aren’t in the business of building chatbots to impress investors (other than some WWDC2024 vaporware they’d rather not talk about any more). They’re in the business of consumer hardware.
Consumers want iPhones and (if Apple are right) some form of AR glasses in the next decade. That’s their focus. There’s a huge amount of machine learning and inference that’s required to get those to work. But it’s under the hood and computed locally. Hence their chips. I don’t see what Apple have to gain by building a competitor to what OpenAI has to offer.
- ~25% of Apple's revenue came from services in FY25 (and 50% from iPhone, ~25% from other hardware). They made $415B in that year, so ~$100B from services alone!
- Services revenue is mostly the 30% cut of App Store sales. This means every time a user buys a pro subscription to ChatGPT or Claude on their phone, Apple makes more money than it could with a self-deployed model.
- You're not wrong that they collect a ton of rent off AI apps, rumours a few weeks back claimed $900m in fees last year with 75% of that just from OpenAI.
But services revenue is:
- their 36% share of Google Ads for being default search engine, about $21 billion/year of pure profit
- their IAP fees, court testimony reveals 75% profit margin
- their first-party subscriptions, there's an antitrust about iCloud that alleges 52% of iPhone users are on paid plans and that the profit margins are 80-ish percent!
https://9to5mac.com/2026/03/19/report-apple-made-roughly-900...
- > (if Apple are right) some form of AR glasses in the next decade.
Pretty sure this is just a hedge or simple research project and not a main bet.
- Consumers don't necessarily want iPhone. They don't want to be excluded from iMessage, which is a completely different motivation.
- Yeah, that just doesn't pass the simplest sniff tests. I barely use iMessage, and yet I'm an iPhone user. Basically everyone around me is the same.
- Agreed, I’ve been a loyal iPhone user for a long time, and very few people I know use iMessage. I use it with my parents because they don’t have any other messenger, and they don’t even really know it’s iMessage, they just think of it as texting. Everyone I know is using something else for messages, whether it’s Discord, Instagram DMs, WhatsApp, or occasionally Telegram or Snapchat.
- In the US it's mostly iMessage, and that includes people who say it's not mostly iMessage.
iPhones are more expensive, on average, for a similar or worse experience. The thing that drives iphone sales is social. People want iPhones because their friends do, and that's a very good reason.
- > Yeah, that just doesn't pass the simplest sniff tests. I barely use iMessage, and yet I'm an iPhone user.
A single anecdote isn't data. You're not a typical consumer.
The only major market where iPhone outsells Android (number of handsets) is the US, and it's because of iMessage. Android is 70% of the world market and dominates LatAm, Africa, and Asia.
- Why are you comparing a single phone manufacturers market position to the market position of an entire OS?
iOS vs Android isn’t relevant when discussing hardware. It’s Apple vs Samsung etc. iOS doesn’t need majority market ownership for Apple to completely dominate their hardware competitors in a market.
- But now you've argued that people are buying Android instead of iPhone just because of social pressure from their peers.
- That must be an American thing, because I guarantee you it doesn't mean anything for the rest of the world.
- It is, and the iPhone doesn't have overwhelming market share in any other large market, which is my point.
- But they’re still the largest individual player in just about every market they’re in. So there’s clearly a strong demand for iPhones.
- That is a very US-centric opinion.
In other parts of the globe iPhone users are mostly using WhatsApp or Line and couldn't care less about iMessage.
- And in those countries, iOS has a much smaller market share. You're proving my point.
- The sum of all these smaller markets is still bigger than the US one.
- When measured in terms of mouths, yes, but when measured in terms of surplus spending power that can go to Apple, certainly not.
India has 1.3 Billion people in terms of counting mouths, but not wallets with $1000 to send to Apple for a new iphone.
- US centric view, which I believe to be wrong. UK is predominantly WhatsApp, and the bulk of handsets sold are still iPhones.
Income is a much tighter correlate than messaging platform. Weight those market shares by phone value and the scales tip even harder.
- > the bulk of handsets sold are still iPhones
According to https://gs.statcounter.com/os-market-share/mobile/united-kin... it's closer to 50/50.
- I doubt 80% of iPhone users would be able to tell you whether iMessage was on or not.
They might say that some people's messages are green, but not much more.
- iMessage is AFAIK only really a big thing in the US.
- Yes, and the US is by far Apple's most important handset market. The other iOS-majority countries are small markets for Apple.
- I totally buy this as someone located in the US, but what is everybody else using? It can't be WhatsApp, can it? Is everyone sending all their connection-graph data to Meta?
- A lot of SMBs use Instagram to connect to their clients, so Instagram build-in messenger is a default option for a lot of people (especially women) in many parts of the world.
Some places have regional messengers that are very entrenched, like Line in Japan or KakaoTalk in Korea.
WhatsApp is the default option in a large number of countries, including most of the Middle East, parts of Europe, Brazil, most of Africa, and Southern Asia. To me it is surprising, too, because out of all the messaging options WhatsApp seems like the least developed and least ergonomic.
And yes, this does mean that most people share whatever data Big Tech wants. They use Meta to talk to each other, auto-upload their photos to Google, click "accept" to every cookie banner so that thousands of no-name companies around the world know where they are and what they are doing at all times.
- People who care about privacy (very, very few) use Signal; everyone else uses WhatsApp.
- You understand that Facebook and Instagram are also very popular yes?
- It’s WhatsApp. No one thinks about sending data to Meta. The world is much bigger than the HN bubble, where almost no one thinks about privacy implications.
- Absolutely this. No one cares about privacy. 99.9% of the population has no clue how tech works. "Oh, it's an app on my phone." That's what the typical consumer understands. How text travels from one phone to another is something magical.
I got WhatsApp because there is no other channel to communicate with customers. It's literally used by everyone without exception. Really scary.
- in my country it's Whatsapp, and has been since before it was acquired by Meta
- Everyone in the UK and Western Europe uses WhatsApp as their primary messenger.
The only time I ever open iMessage is when I get an SMS 2FA verification code or something similar.
Also, in the Middle East everyone also just uses WhatsApp or Telegram.
- No one uses iMessage in my country. Yet iPhones are sought after. Some of us just really like iPhones for the experience - not everything is a conspiracy. People can have different tastes and are more free to choose than people on HN like to believe.
- What country?
- The best part is that it’ll all run on your device, instead of siphoning off your data to the provider. Local first AI.
I think the creatives will also turn their seething hatred of AI around for Apple AI, because Apple uses more ethical training data and it feels more like they own their AI: no one's charging them a subscription fee to use it and then using their private data for training.
- Why do you think "creatives" have a "seething hatred of AI"?
- Have you talked to an artist like a musician, an illustrator or a web designer about AI? It's ripping off their work without credit and making them unemployable.
- A lot of them already use AI, for example Photoshop's AI features. It also seems to be a bimodal distribution: there are those who use it without caring what anyone else says about it, especially the loud minority, and there are those who don't use it at all.
- So why would Apple's AI features change their minds on either of those points?
- What I don't get about Apple: just as everyone else was giving up on yet another VR attempt and moving into AI, they decided AI wasn't worth it and that it was the right time for a me-too VR headset.
So no VR, given the price and lack of developer support, and a late arrival into AI.
- I think of it like a technology checkpoint. Make sure you got as far as everyone else when they gave up, so when the next innovation in that space comes along you can start back up on even footing.
You want to have your own pathway to production that dodges competitors’ patents, is somewhat defensible itself, maybe a brand, etc.
- It is the same pattern: late on VR, late on AI. Both technologies have a pricing problem. I would guess that Apple is working to create the conditions that make them cheap enough to sell to everyone.
- For everyone that can afford Apple, that is.
They do have the mobile phone market duopoly advantage though, far from the 90's mistakes that almost closed shop.
- > it was the right time for a me too VR headset.
I think it was more that the experience was pretty much there. Hardware takes a loooong time to mature, even more so if it's a new style or package. I'm assuming they were prototyping this in 2015-18.
Also, Apple knows that AR glasses, if done right and not turned into a cesspool of perverts (i.e. Google Glass), will be a massive platform. However, it's going to take at least another five years to get something usable. So if it's possible, I expect Apple to come out with something just after Meta either gives up or has a string of failures.
- I've had it turned off since Sequoia, and this I truly appreciate. It hasn't nagged me once to turn it or Siri on, and it isn't mandatory.
When I open up JIRA or Slack I am always greeted with multiple new dialogues pointing at some new AI bullshit, in comparison. We hates it precious
- I don't like companies noisily forcing their newest features on me, constantly shipping new things to see what sticks, such that you can't trust whether a feature advertised one week will even be there the next.
However, I have even less patience for companies forcing paid-for third-party ads down my throat on a paid product. Slack at least doesn't sell my eyeballs. Facebook, Twitter, Google's ads are worse to me than new feature dialogues.
Which brings me to Apple. I pay for a $1k+ device, and yet the app store's first result is always a sponsored bit of spam, adware, or sometimes even malware (like the fake ledger wallet on iOS, that was a sponsored result for a crypto stealer). On my other devices, I can at least choose to not use ad-ridden BS (like on android you can use F-Droid and AuroraStore, on Linux my package manager has no ads), but on iOS it's harder to avoid.
Apple hasn't sunk to Google levels in terms of ads, but they've crossed a line.
- I agree. The App Store is really horrible. Why is it that when I'm searching for a first-party or very popular app, the first result and many of the other results are weird, scammy, malware-like things? I don't particularly care about the stupid homepage ads though; I think that's just because I have "personalize app store recommendations" turned off.
Search inside Settings (both Mac and iOS) was also really, really stupid for a long while. Why are you taking me to some random accessibility toggle when I'm looking for "displays"? But I checked just now and it's good.
- I get it but... well I think of App Store as... a store. I don't have to go there.
I'm actually pretty disappointed in the lack of discovery available in the App Store, but I rarely go there. I'm fine with advertising being there. I wish it was better but I'm not offended that there is paid promotion in a store.
- >get letter from bank
>"to fix this, please install our app"
>search BankName
>comes up with other banks, BankName's US app (not the country you are in)
>revolut etc (cant use in the country you are in)
>ten minutes later
even worse when it's your telecom telling you to install their Official App so you can pay your bills or they will cut your cellular service, and you can't find it
- I don’t see what that has to do with (increased) advertising on the App Store (IMO search there never has been good) or the comment you replied to in which colechristensen said: “I'm actually pretty disappointed in the lack of discovery available”.
I think paid advertising may even help improve discoverability on the App Store because, instead of making 10 or 20 to do list apps and hoping to get them to rank high by a combination of sheer luck and SEO tricks, scammers may only make one, and pay to get that to the top of the list.
In supermarkets, product placement is affected by two factors: how much producers are willing to pay for a good spot (e.g. by offering lower wholesale prices if the product gets a more visible place) and vetting by the store owner.
I don’t think different solutions exist in the App Store. Apple doesn’t want to do much vetting, making advertising the only thing that may help (and yes, it would be awesome if there were a store that did do much vetting, but that requires a world where many different stores exist, and we aren’t there (yet))
- > I think paid advertising may even help improve discoverability on the App Store
So my grandmother searching "Powerpoint" and getting malware instead of the microsoft app is good actually?
Let me compare some search terms and see if ads are giving me "better" results:
* ublock - surfshark vpn
* wordle - spammy adware word game
* slack - spammy adware game
* microsoft word - spammy spyware office app (not the one made by MS)
* every bank I could think of - different financial app
Like, this isn't a good user experience. The ads aren't relevant, even when you type in a hyper-popular app's name exactly, something like 80% of the time a competitor has sniped the top spot.
For the "microsoft word" search, the spam app had an identical logo to word, and I have no doubt many people have been fooled. If you look at the reviews, some of the 1 star reviews are detailed complaints, and all the 5 star reviews are inhuman sounding "This helped me do my job" and "great app" reviews.
> I don’t think different solutions exist in the App Store
Sorting roughly by popularity and reviews, and doing a little more to combat fake reviews, seems like it would be better (something like the toy scorer sketched below). It would at least mean that if I searched "bank name" my bank's app would come up, since for every bank I tried the first non-ad result was in fact the bank in question.
It would save grandmothers around the world who just click on the first result.
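To illustrate, a relevance scorer along those lines might look like the toy below. The names and weights are invented, and real ranking would obviously need far more signals:

    import Foundation

    // Toy search scorer: prefer exact name matches, then weight by installs
    // and by review counts discounted for suspected fakes. Illustrative only.
    struct AppListing {
        let name: String
        let installs: Int
        let reviews: Int
        let suspectedFakeReviews: Int
    }

    func searchScore(query: String, app: AppListing) -> Double {
        let exactMatch = app.name.lowercased() == query.lowercased() ? 1000.0 : 0.0
        let trustedReviews = Double(max(app.reviews - app.suspectedFakeReviews, 1))
        return exactMatch + log(Double(app.installs) + 1) + log(trustedReviews)
    }

The point is just that an exact-match bonus alone would already put the real bank above a paid impostor.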
- > So my grandmother searching "Powerpoint" and getting malware instead of the microsoft app is good actually?
Where do I claim that? My argument is that, with paid advertising, the store may show fewer items, making it easier to find the right thing.
And no, I’m not claiming that’s ideal; only that it c/would be an improvement.
- So you're saying a hypothetically well implemented advertisement service could be better than a hypothetical poorly implemented ranking service.
The reality is right now we have a poorly implemented advertisement service that shows malware, and if you ignore the ads and look at the search results based on relevance, they're clearly better.
The claim "A good ad service would be good" is a truism, but that's not the reality we live in.
- As someone who recently moved to NL from the US I encounter this issue about once a week and it’s blocking me from doing serious things like paying for parking, taxes, utilities or government services, all of which have apps that are only available on the Dutch app store.
I have a separate Dutch Apple ID I can switch to, but each time I log out I risk accidentally deleting all my data.
- > all of which have apps that are only available on the Dutch app store.
This isn’t really on Apple though. Blame the companies/developers for geo gating their apps. It’s a simple checkbox in the store to make it available for other countries.
- That letter from the bank would probably include a QR code linking directly to their app, oui?
- Where do you install apps from then?
I get an app recommendation from a friend, I go to the App Store and search for it. I have to be super careful about which link I'm actually clicking on and which app I'm installing, because the App Store is riddled with spam and malware.
I wouldn't mind, except that Apple charge 30% of everything with the justification that they are keeping the ecosystem free of spam and malware...
- I’ve been installing apps from the App Store for more than a decade and have never ever accidentally downloaded spam or malware. I’m sure it’s there but it’s really not “riddled” with it in my experience searching for apps. What it’s riddled with is subscription-based apps whose free tier is worthless
- I thought the justification was that they curate an ecosystem of apps with loyal/paying customers
- I install a new app maybe once every 6 months. I agree that the app store is trash, littered with ads and casino games for kids.
I just don't find it hard to find the app I want, when I want something specific, and install, and then _get the hell out of that shithole_.
- It's best to avoid App Store and look for apps on Google (with ad blocker).
- I haven't noticed this at all and I wonder if you're mistaking curation for advertising? When I open up the App Store I get a panel written "games we love" and a listing of indie games that are clearly not paid for ads. The ads in search are visibly marked as ads, and while I don't particularly like ads in general, they are pretty easy to avoid.
- On iOS, if you open the App Store and click on the Today tab (it's the default tab if you kill and reopen), there's ads interspersed with curations.
For me, the second tile is an ad for Upside, some cashback app
- Mine is Moneris Go, and the top review is titled "Garbage App!!!!" lol
Honestly the last time I remember using the App Store was years ago and I can't recall if they had ads or not. Imo it's distasteful and I wish they didn't have them. Still leagues better than the fucking ads in the start menu which caused me to give up on gaming and Windows forever.
- If I open the app store and search "Gemini", the first result is "ChatGPT (advertisement)"
If I search for my bank, I get another bank. If I search for "Wordle", I get a bunch of ad-supported spamware (both the ad and non-ad results) before the real NYT Games app.
The app store has ads in search results. This is the primary way that my technologically inept relatives end up with the wrong app installed btw, is by searching and clicking the first result, and getting complete trash adware.
Apple should be ashamed of selling out their users.
- Apple keeps nagging me to upgrade to godawful Tahoe. Every time there’s a system update (which includes Safari, Safari TP, CLT etc. updates) Tahoe is always default checked. Even when I specifically click on a Sequoia point update, the Tahoe update is always checked instead of that point release. This has way more destructive potential than “try our new AI feature” in apps.
To add insult to injury, the one AI feature that I may want to evaluate—Claude Code integration in Xcode—is gated behind Tahoe upgrade, even though it has absolutely no reason to do so, given that every other IDE integrates AI features just fine on any recent OS.
Edit: Oh and I’m not getting bombarded in Slack at all, maybe because my company doesn’t pay for any of the AI stuff there. Last time I got a banner or something like that was months ago.
- Nvidia restricts gamer cards in data centers through licensing; if they feel too much of a threat from Apple, they will probably release a cheaper consumer AI card, one that can't be used in data centers, to corner the local AI market.
Imagine a future where Nvidia sells the exact same product at completely different prices, cheap for those using local models, and expensive for those deploying proprietary models in data centers.
- Nvidia-Mediatek Arm laptops will compete with Qualcomm and Apple, https://www.forbes.com/sites/jonmarkman/2026/03/16/the-arm-i...
> [WSJ] sources expect.. first units in H1 2026, with GTC as the most likely unveiling stage.. NPU reportedly exceeds both Intel and AMD’s current neural processing units.. If the integrated GPU delivers RTX 5070-class performance in a thin laptop form factor, it would eliminate the need for a separate GPU die, fundamentally changing how gaming laptops are designed.
- If they can get Valve/Steam on board for an OS that handles most games well, that could in fact be huge, especially if the price point starts a bit lower but with plenty of unified RAM (both for AI and for games).
That said, gaming laptops' cooling issues so often center on the GPU that it'd also require a seasoned manufacturer to get it right.
- There’s long been professional segmentation for GPUs, long before people started running AI models on them
- > Nvidia restricts gamer cards in data centers through licensing
So does intel, so do a lot of companies.
but
The processor is only half of the equation; memory volume, type, and bandwidth are also a big factor in cost. Sure, consumer GPUs are cheaper, but they have less memory and (often) less bandwidth. The processor might be the same, or binned, but that's only part of the price.
- Having your cake and eating it too. Consumer goodwill and printing money.
- I’m confused why he keeps calling out “the Mac Mini craze after claw went viral”. I thought the various versions of claw used remote models, not local models, and I thought the point of using a Mac mini was that it can send and receive iMessages, not anything about the hardware.
- You have a lot of private data, so running it locally means you use fewer credits, and you also don't have your emails becoming training data for cloud models.
- Using the author’s logic, it is Google then that will lead.
Unlike Apple, they have even more devices in the field PLUS they have strong models PLUS Apple uses Google models.
- Google is an advertisement company at the end of the day and that's a conflict of interest with user privacy.
- So is Apple. Worse, Apple is a company that is comfortable with the idea of restricting user control, so you can't get privacy even if you want it.
- > Apple uses Google models
Source?
- The article itself? lol
- When using Siri recently it really struck me how much worse it feels after using ChatGPT. It struggles to understand what I say correctly and you have to give commands in more of a 'computer-friendly' form.
I hope they can at least fix this, as I really only use it as a hands-free system while driving.
- My capex is even lower than Apple's: I can ship to users' Apple hardware, and I can't access iPhone users' photos either... so really I'm the winner.
- Thing is, Apple never considered racing against LLM runners. Apple's success comes from human-centered design, it is not trying to launch a me-too product just because it increases their stock price. iPod was not the first MP3 player. iPhone was not even 3G at launch -- in the middle of 3G marketing craze.
They sure got lucky that unified memory is well suited to running AI, but they simply focused on cost- and energy-efficient computing power. They've had glasses in their sights for the last 10 years (when was Magic Leap's first product?) and these chips have been developed with that in mind. And not only the chips: nothing was forcing Apple to spend the extra money on blazing-fast SSDs -- but they did.
So yes, Apple is a hardware company. All the services it sells run on their hardware. They've just designed their hardware to support their users' workflows, ignoring distractions.
With that said, LLMs make GPU + memory bandwidth fun again. NVidia can't do it alone, Intel can't do it alone, but Apple positioned itself for it. It reminds me of how everyone was surprised when they introduced 64-bit ARM for everyone: very few people understood what they were doing.
Tbh there are NVidia GPUs that beat Apple perf 2x or 3x, but those are desktop or server chips consuming 10x the power. Now all Apple needs to do is keep delivering performance out of Apple Silicon at good prices and best-in-class energy efficiency. Local LLMs make sense when you need them immediately, anywhere, privately -- hence you need energy efficiency.
- Any field with abstraction becomes susceptible to AI disruption; in fact, AI susceptibility is proportional to the amount of abstraction (my observation). In this sense, the more abstraction, the more AI will displace people. This turns the millennia-old model upside down: traditionally, more abstraction required more schooling and experience and was rewarded with more money. Until robots and world models become safe, affordable, and ubiquitous, the financial apex of careers will be those that are abstraction-resistant (technicians, EMTs, trades, etc.) and those protected by regulation and the regulators (politicians, CEOs).
- Why is Nvidia so central to LLMs? Because they embraced ML a decade ago. Apple did as well; machine learning is central to so many things in the iPhone. It's not so surprising, then, that a strong showing in ML sets you up well for LLMs.
- Apple's accidental moat now is letting the AI-driven rise in hardware prices eat into their margins and just expanding the Mac user base.
- Maybe they thought an investment in a product with lots of substitutes & high capital requirements wasn't very attractive.
- Honestly, I think Apple hasn't jumped deep into AI for two big reasons:
1) Apple is not a data company.
2) Apple hasn't found a compelling, intuitive, and most of all, consistent, user experience for AI yet.
Regarding point 2: I haven't seen anyone share a hands-down improved UX for a user-driven product beyond some variation of a chatbot. Even the main AI players can't advertise anything more than "have AI plan your vacation".
- Put a proper LLM into Siri. Encourage developers to expose the functionality of their apps as functions, and let the Siri LLM call those (and sprinkle some magic security dust over it); a rough sketch of what that could look like follows below.
Boom, you have an agent in the phone capable of doing all the stuff you can do with your apps. Which means pretty much everything in our lives.
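The plumbing for the function-exposing part already exists in Apple's App Intents framework. Here is a minimal sketch of a to-do app exposing one action as a Siri-callable tool; TaskStore and AddTaskIntent are made-up names for illustration:

    import AppIntents

    // Hypothetical stand-in for the app's own storage layer.
    final class TaskStore {
        static let shared = TaskStore()
        private(set) var tasks: [String] = []
        func add(_ name: String) { tasks.append(name) }
    }

    // Exposes "add a task" so Siri (or an LLM behind it) can invoke it
    // like a tool call, with typed parameters.
    struct AddTaskIntent: AppIntent {
        static var title: LocalizedStringResource = "Add Task"
        static var description = IntentDescription("Adds a task to your to-do list.")

        @Parameter(title: "Task Name")
        var name: String

        func perform() async throws -> some IntentResult & ProvidesDialog {
            TaskStore.shared.add(name) // hand off to the app's own logic
            return .result(dialog: "Added \(name) to your list.")
        }
    }

An LLM-backed Siri would mostly need these intents surfaced as tool schemas, much the way Shortcuts already consumes them.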
- As for consistency, Apple's latest UI shows they don't give a damn any more.
- I'm pretty sure most people didn't notice any kind of inconsistency. I myself have a hard time figuring out what's going on. I'm so focused on doing the work with the computer that I don't have the time to notice what's "wrong" with the OS. Which makes me wonder if the whole thing is blown out of proportion.
- The moat is that they saved their money and can remain in business indefinitely!
- Apple is almost 2 years out from their announcement of Apple Intelligence. It has barely delivered on any of the hype. New Siri was delayed and barely mentioned in the last WWDC; none of the features are released in China.
In other news, people keep buying iPhones, and Apple just had its best quarter ever in China. AAPL is up 24% from last year.
- I don't even care about Apple Intelligence. It stays off, and I'm not sure anyone who's interested in what these AI shenanigans can do on a local device cares about it either. I think people keep conflating Apple Intelligence with all these conversations about how Macs are kinda dope for Joe Consumer wanting to tinker with LLMs.
That's the other part of the story that matters, not Apple Intelligence. This writeup tries to touch on that: Apple is uniquely positioned to do really well in this arena if/when local LLMs become commodities that can do really impressive stuff. We're getting there a lot faster than we thought; someone had a trillion-parameter qwen3.5 model going on his 128GB MacBook, and now people are thinking of more creative ways to swap out what's in memory as needed (a toy sketch of that idea follows below).
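A toy sketch of that memory-swapping idea, assuming a mixture-of-experts model whose per-expert weights can be paged in from SSD on demand (all names invented; this is not any real runtime's API):

    // Toy LRU cache of MoE "experts": keep only the hottest expert weights
    // resident and load the rest from disk on demand. Purely illustrative.
    final class ExpertCache {
        private var resident: [Int: [Float]] = [:] // expert ID -> weights
        private var order: [Int] = []              // LRU order, coldest first
        private let capacity: Int
        init(capacity: Int) { self.capacity = capacity }

        func weights(for id: Int, load: (Int) -> [Float]) -> [Float] {
            if let cached = resident[id] {
                order.removeAll { $0 == id } // refresh LRU position
                order.append(id)
                return cached
            }
            if resident.count >= capacity, let coldest = order.first {
                order.removeFirst()          // evict the coldest expert
                resident[coldest] = nil
            }
            let loaded = load(id)            // e.g. read weights from SSD
            resident[id] = loaded
            order.append(id)
            return loaded
        }
    }

Since only a few experts fire per token, bookkeeping like this keeps the resident working set far below the full parameter count.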
- A lot of the people that bought iPhones are now buying Macs as well.
- Indeed, a lot of the people that bought iPhones are now buying Macs with a binned version of the chip they already bought. So much so that Apple is in danger of running out of them.
- It's almost like people don't actually want LLMs all over their core tools...
- > I am actually of the opinion that without some kind of bailout, OpenAI could be bankrupt in the next 18-24 months, but I am horrible at predictions
I find this intriguing.. Does anyone here have enough insight to speculate more?
- 1) Put data on an X/Y chart. 2) Find a ruler and pencil. 3) Draw a line.
Doing this you can make all kinds of fun predictions.
- I don't think I have unique insight on this, but the common belief is that they are desperately trying to reach AGI, or at least to ship some halo model that lets them rise above the other companies. The problem is they have a hilariously large monthly burn paying for compute. If they don't produce something, they are in trouble once investors stop offering capital.
- > Think about the App Store. Apple didn’t build the apps, they built the platform where apps ran best, and the ecosystem followed.
As far as I remember, Apple basically got forced into opening the platform to 3rd-party developers -- not by regulation but by public pressure. It wasn't their initial intention to allow it.
- There are always three elements in the equations of a business model: 1. marginal cost, 2. marginal revenue, 3. value created.
For LLM providers, I have always believed the key is to focus on high-value problems such as coding or knowledge work, because of the high marginal cost of serving new customers (the tokens burnt) and the low marginal revenue if the problem is not valuable enough. In this sense no LLM provider can scale like the previous social media platforms without taking huge losses. No meaningful user stickiness can be built unless you have users' data, and there is no meaningful business model unless people are willing to pay a high price for the problem you solve, the same way they pay for a SaaS.
I am really not optimistic about the LLM providers other than Anthropic. It seems the rest are just burning money, and for what? There is no clear path to monetization.
And when local LLMs are powerful enough, the providers will soon be obsolete because of the cost and the unsustainable business model. At the end of the day, I do agree that it is the consumer hardware makers that can win this game.
- I am super bullish on Google, they are my best bet to earn from models. Mostly because they are vertically integrated (other revenue streams) + open to provide services to other companies (Apple deal).
- I just realized that next year Apple's Neural Engine will be 10 years old, just like the "NPUs will change AI forever!" puff pieces.
Here's to another 10 years of scuffed Metal Compute Shaders, I guess.
- What I think was a wasted opportunity was not bringing the Xserve back: it would have been one of the few end-to-end solutions out there at scale.
- The whole premise is that if you don't get to AGI first, you lose. The idea is that Anthropic, with AGI, could build a better version of Apple, or whatever it wants.
This was the conversation like 1 year ago. What has changed?
- Nothing changed; it's new ground, and we are searching it with a searchlight. From some vantage points our view of things may feel quite complete, even insightful. Then we look at it differently and feel lost. It's a process we are in together.
- If you actually got to AGI, why would you rent it out?
- So Apple’s AI acceleration and memory architecture is accidental, but nvidia’s is not?
- Nvidia has research papers on accelerating Machine Learning as far back as 2014: https://research.nvidia.com/publications?f%5B0%5D=research_a...
- Apple's website from 2017 https://machinelearning.apple.com/research?page=1&sort=oldes...
That's also the year they released on-chip acceleration for certain things, so they probably started working on that tech a year or two earlier. Not as accidental as assumed.
- Apple's Neural Engine from 2017 is an NPU that's basically obsolete today in light of Metal Compute Shaders. It was accidental, and Apple is redesigning their GPU architecture to subsume it.
CUDA, on the other hand, continues to be relevant, and the compute capabilities from 2014 are still instrumental for accelerating training and inference workloads.
- Looks like Apple fell into a winning/winnable position in the AI wars. Their privacy/safety-first culture is why they didn't embrace AI as aggressively as their more maverick rivals. Their AI was always hindered by privacy, and local-first AI is their saviour.
- Apple is just waiting for all the slop to inevitably crash to see what actually works
- In the larger scheme of things, the great winner will be open source, as we'll simply use AI to recreate the entire MacOS ecosystem :)
- If AI coding does go anywhere and stays affordable, this would be a great outcome.
- I think AI needs to greatly accelerate open hardware design and make advanced manufacturing more accessible to really make a dent.
User facing software is not the limiting factor in AI assisted replacement of Apple products.
- maybe “The Only Way to Win is Not to Play”
- > Pure strategy, luck, or a bit of both? I keep going back and forth on this, honestly, and I still don’t know if this was Apple’s strategy all along, or they didn’t feel in the position to make a bet and are just flowing as the events unfold maximising their optionality.
Maximizing the available options is in fact a "strategy", and often a winning one when it comes to technology. I would love to be reminded of a list of tech innovators who were first and still the best.
Anyway, hasn't this always been Apple's strategy?
- That’s actually by design. Apple never jumps on the tech hype bandwagon.
They wait until the dust settles before making their well-thought-out moves.
Every time they’ve jumped the hype train too quickly it hasn’t worked out, like Siri for example.
- How do you rate Vision Pro? It was not the first one, but it was certainly the best one. Total dud though, while Meta Ray Bans are selling like hot cakes (irrespective of what you think of the company)
- I think the article is missing a whole aspect of how Apple is making sure it won't face real competition while it's "playing it safe":
Even if the investment is overblown, there is market demand for the services offered in the AI industry. In a competitive playing field with equal opportunities, Apple would be hurt by not participating. But they are once again establishing their digital-market concept, where they hinder a level playing field for Apple users.
Like they did with the App Store (where Apple owns the marketplace but also competes in it), they are setting themselves up as the "the bank always wins" gatekeeper for AI services in the Apple ecosystem, by making "Apple Intelligence" an ecosystem orchestration layer (and thus themselves the gatekeeper).
1. They made a deal with OpenAI to close Apple's competitive gap on consumer AI, allowing users to upgrade to paid ChatGPT subscriptions from within the iOS menu. OpenAI has to pay at least (!) the usual revenue share for this, but considering that Apple integrated them directly into iOS I'm sure OpenAI has to pay MORE than that. (also supported by the fact that OpenAI doesn't allow users to upgrade to the 200USD PRO tier using this path, but only the 20USD Plus tier) [1]
2. Apple's integration is set up to collect data from this AI digital market they created: their legal text for the initial release with OpenAI already states that all requests sent to ChatGPT are first evaluated by "Apple Intelligence & Siri" and that "your request is analyzed to determine whether ChatGPT might have useful results" [2]. This architecture requires (!) them to collect and analyze data about the type of requests, and it also gives them first right of refusal on all tasks.
3. Developers are "encouraged" to integrate Apple Intelligence right into their apps [3]. This will have AI tasks evaluated first by Apple.
4. Apple has confirmed that they are interested in enabling other AI providers via the same path [4]
--> Apple will be the gatekeeper that decides whether they can fulfill a task themselves or offer to hand it off to a 3rd-party service provider.
--> Apple will be in control of the "Neural Engine" on the device, and I expect them to use it to run inference models they created based on statistics of step#2 above
--> I expect that AI orchestration, including training those models and distributing/maintaining them on the devices, will be a significant part of Apple's AI strategy. This could cover a lot of text and image processing and already significantly reduce their datacenter cost for cloud-based AI services. For the remaining, more compute-intensive AI services they will be able to closely monitor (via step #2 above) when it will be most economical to in-source a service instead of "just" getting revenue share for it (via step #1 above).
So the juggernaut Apple is making sure to get the reward from those taking the risk (a hypothetical sketch of the step #2 routing follows the references below). I don't see the US doing much about this anti-competitive practice so far, but at least in the EU this strategy has been identified and is being scrutinized.
[1] https://help.openai.com/en/articles/7905739-chatgpt-ios-app-...
[2] https://www.apple.com/legal/privacy/data/en/chatgpt-extensio...
[3] https://developer.apple.com/apple-intelligence/
[4] https://9to5mac.com/2024/06/10/craig-federighi-says-apple-ho...
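To make step #2 concrete: the gatekeeping amounts to a routing layer along these lines. This is an entirely hypothetical sketch; Apple has not published such an API, and every name and rule here is invented:

    // Hypothetical illustration of the "first right of refusal" routing:
    // the platform inspects each request and only hands it to a third
    // party when it declines to handle it itself.
    enum Route {
        case onDevice                     // Neural Engine / local model
        case privateCloud                 // Apple-operated servers
        case thirdParty(provider: String) // e.g. ChatGPT, with revenue share
    }

    struct RequestRouter {
        // Stand-in for whatever classifier "determines whether ChatGPT
        // might have useful results"; the rules below are invented.
        func route(_ request: String) -> Route {
            if request.count < 140 { return .onDevice }
            if request.lowercased().contains("summarize") { return .privateCloud }
            return .thirdParty(provider: "ChatGPT")
        }
    }

Whoever owns that route function both sees every request and decides which ones competitors ever get, which is exactly the point above.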
- It's the same everywhere: great fundamentals pay off. It's true of martial arts, dance, and absolutely about software platforms. You just have to trust that process and invest in it, which Apple does (although frustratingly not enough!).
- > Then Stargate Texas was cancelled, OpenAI and Oracle couldn’t agree terms, and the demand that had justified Micron’s entire strategic pivot simply vanished. Micron’s stock crashed.
Well.. no. The Stargate expansion was cancelled, but the originally planned 1.2 GW (!) datacenter is going ahead:
> The main site is located in Abilene, Texas, where an initial expansion phase with a capacity of 1.2 GW is being built on a campus spanning over 1,000 acres (approximately 400 hectares). Construction costs for this phase amount to around $15 billion. While two buildings have already been completed and put into operation, work is underway on further construction phases, the so-called Longhorn and Hamby sections. Satellite data confirms active construction activity, and completion of the last planned building is projected to take until 2029.
> The Stargate story, however, is also a story of fading ambitions. In March 2026, Bloomberg reported that Oracle and OpenAI had abandoned their original expansion plans for the Abilene campus. Instead of expanding to 2 GW, they would stick with the planned 1.2 GW for this location. OpenAI stated that it preferred to build the additional capacity at other locations. Microsoft then took over the planning of two additional AI factory buildings in the immediate vicinity of the OpenAI campus, which the data center provider Crusoe will build for Microsoft. This effectively creates two adjacent AI megacampus locations in Abilene, sharing an industrial infrastructure. The original partnership dynamics between OpenAI and SoftBank proved problematic: media reports described disagreements over site selection and energy sources as points of contention.
https://xpert.digital/en/digitale-ruestungsspirale/
> Micron’s stock crashed. [the link included an image of it dropping to $320]
Micron’s stock is back to $420 today
> One analysis found a max-plan subscriber consuming $27,000 worth of compute on their $200 Max subscription.
Actually, no. They'd miscalculated and consumed $2700 worth of tokens.
The same place that checked that claim also points out:
> In fact, Anthropic’s own data suggests the average Claude Code developer uses about $6 per day in API-equivalent compute.
https://www.financialexpress.com/life/technology-why-is-clau...
I like Apple's chips, but why do we put up with crappy analysis like this?
- Apple's reality distortion field is really really strong. People love to claim Apple is doing 4D chess, when in reality Apple has certain strengths but AI is anything but.
Which is why they were completely caught off guard with the botched rollout of Apple Intelligence. Even when they were playing to their strengths, things have not gone well for them (Apple Vision Pro). Liquid Glass has had a mixed reception, and that's often explained away as "Apple is setting up a world for Spatial Computing by unifying the design language"; when the lead designer was fired it was "Thank God Alan Dye is gone, he was bad for Apple anyway".
So essentially, Apple can do no wrong.
- This seems mistaken to me. The core idea is that LLMs are commoditizing and that the UI (Siri in this case) is what users will stick with.
But... what's the argument that the bulk of "AI value" in the coming decade is going to be... Siri Queries?! That seems ridiculous on its face.
You don't code with Siri, you don't coordinate automated workforces with Siri, you don't use Siri to replace your customer service department, you don't use Siri to build your documentation collation system. You don't implement your auto-kill weaponry system in Siri. And Siri isn't going to be the face of SkyNet and the death of human society.
Siri is what you use to get your iPhone to do random stuff. And it's great. But ... the world is a whole lot bigger than that.
- Apple never competed in the "AI race" in the first place, because they knew they were already at the finish line.
This was really unsurprising [0].
- Your linked comment argues the opposite.
> Won't be surprised for the re-introduction of Xserve again but for AI.
This means, Apple is gonna spend a lot of money standing up data centers (CapEx). And the article in question is essentially saying that Apple is smart not to spend any money.
It sounds like there's a bit of wishful thinking going on: whatever Apple is doing is 4D chess. Apple not spending any money? That's genius. Apple re-introducing Xserve racks? Genius.
- > This is an obvious moat for Apple who can offer a cheaper alternative for training, inference AI server farms.
According to Bloomberg, Apple's inference server farms are a flop: https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...
> the chips [...] are not powerful enough to run the latest frontier models like Gemini, which the new Siri will be based on
- For the love of all that's holy - folks please stop using AI to publish smart sounding texts. While you may think you are "polishing" your text, you are just disrespecting your readers. Write in your own words.
- But why do I feel like the quality of Apple's software has declined sharply in recent years? The Liquid Glass design feels very unpolished and not well thought out almost everywhere... it seems like even Apple can't resist falling victim to AI slop.
- I don’t think it’s AI slop. Even before modern generative AI, I’ve noticed a decline in Apple’s software quality.
Rather, I feel that Apple has forgotten its roots. The Mac was “the computer for the rest of us,” and there were usability guidelines backed by research. What made the Mac stand out against Windows during a time when Windows had 95%+ marketshare was the Mac’s ease of use. The Mac really stood out in the 2000s, with Panther and Tiger being compelling alternatives to Windows XP.
I think Apple is less perfectionistic about its software than it was 15-20 years ago. I don’t know what caused this change, but I have a few hunches:
0. There’s no Steve Jobs.
1. When the competition is Windows and Android, and where there’s no other commercial competitors, there’s a temptation to just be marginally better than Windows/Android than to be the absolute best. Windows’ shooting itself in the foot doesn’t help matters.
2. The amazing performance and energy efficiency of Apple Silicon is carrying the Mac.
3. Many of the people who shaped the culture of Apple’s software from the 1980s to the 2000s are retired or have even passed away. Additionally, there are not a lot of young software developers who have heard of people like Larry Tesler, Bill Atkinson, Bruce Tognazzini, Don Norman, and other people who shaped Apple’s UI/UX principles.
4. Speaking of Bruce Tognazzini and Don Norman, I am reminded of this 2015 article (https://www.fastcompany.com/3053406/how-apple-is-giving-desi...) where they criticized Apple’s design as being focused on form over function. It’s only gotten worse since 2015. The saving grace for Apple is that the rest of the industry has gone even further in reducing usability.
I think what it will take for Apple to readopt its perfectionism is if competition forced it to.
- I agree that there is a decline in usability. If you took a Mac from those early days, it is still very usable and everything is where you'd expect it to be. In recent years this has changed and the general iOS-ification of the OS has occurred. I have avoided upgrading to Tahoe due to seeing how awful my wife's iPhone looks now. It looks like a children's toy.
- Software quality decline has been a recognised trend long before LLMs took the limelight. Apple included.
- Don't worry, when Apple introduces it, it'll be revolutionary and 10% thinner.
- Apple will just drip feed locally running models that enable minor conveniences. They will probably drop the Apple Intelligence label later and just have things with their own names like "magic eraser".
- Apple has had Siri for well over a decade without any meaningful movement. If you think Apple is suddenly going to get better, that's just wishful thinking. Apple has neither the expertise nor the capability to do any of that; they'd have demonstrated it with Siri long ago.
What Apple does is build beautiful hardware. The software has been a shambles for a really long time.
- I like how we are acting like this market is so novel and emergent revering the luck of some while lamenting the failures of others when it was all "roadmapped" a decade ago. It's like watching a Shaanxi shadow puppet show with artificial folk lore about the origins of the industry. I hate reality television!
- One day people will realize that Tim Cook is one of the best killer CEOs.
By now he has more hits than Steve Jobs. His precision, and his ability to manage risk, maybe thanks to his supply-chain background, have made Apple into the killer it is today.
If we were in the age of the robber barons, he would've been up there with them.