- I have been using Lemonade for nearly a year already. On Strix Halo I am using nothing else, although kyuz0's toolboxes are also nice (https://kyuz0.github.io/amd-strix-halo-toolboxes/)
Nowadays you get TTS, STT, and text and image generation, and image editing should also be possible. It can run via ROCm or Vulkan, and on CPU, GPU, or NPU; quite a lot of options. They have a good, pragmatic pace of development. I really recommend this for AMD hardware!
Edit: The OpenAI-compatible (and, I think, nowadays also Ollama-compatible) endpoints let me use it in VSCode Copilot as well as in e.g. Open WebUI. More options are shown in their docs.
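A minimal sketch of what that looks like from the OpenAI Python client (the port/path and model name here are assumptions; check what your Lemonade install reports):

    from openai import OpenAI

    # base_url is an assumption; use whatever your Lemonade server prints on startup
    client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="unused")

    resp = client.chat.completions.create(
        model="Qwen3.5-30B-A3B",  # hypothetical name; list real ones via client.models.list()
        messages=[{"role": "user", "content": "Hello from my editor"}],
    )
    print(resp.choices[0].message.content)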
- How much of a speedup might I get for, say, Qwen3.5-122B if I were to run it with Lemonade on my Strix Halo vs running it with Vulkan llama.cpp?
- You would get similar performance. Lemonade is designed as a turnkey solution (optimized for AMD hardware) for local AI models. The software helps you manage backends (llama.cpp, FLM, whisper.cpp, stable-diffusion.cpp, etc.) for different GenAI modalities from a single utility.
On the performance side, lemonade comes bundled with ROCm and Vulkan. These are sourced from https://github.com/lemonade-sdk/llamacpp-rocm and https://github.com/ggml-org/llama.cpp/releases respectively.
- Have you used it with any agents or claw? If so, which model do you run?
- I have two Strix Halo devices at hand: privately a Framework Desktop with 128GB, and at work a 64GB HP notebook. The 64GB machine can load Qwen3.5 30B-A3B; with VSCode it needs a bit of initial prompt processing, to initialize all those tools I guess. But the model fights for resources with everything else I need, so I am not really using it anymore these days. I want to experiment with it on my home machine, I just don't work on it much right now.
Lemonade has a Web UI for setting the context size and llama.cpp args. You need to set the context to a proper number, or just to 0 so that it uses the model default. If it's too low, it won't work for agentic coding.
I will try some Claw app, but first I need to research the field a bit. I am using different models with Open WebUI: GPT-OSS 120B is fast, but Qwen3.5 27B is fine too.
- Qwen3-Coder-Next works well on my 128GB Framework Desktop. It seems better at coding Python than Qwen3.5 35B-A3B, and it's not too much slower (43 tg/s compared to 55 tg/s at Q4).
27B is supposed to be really good but it's so slow I gave up on it (11-12 tg/s at Q4).
- Agreed. Qwen3-Coder-Next seems like the sweet-spot model on my 128GB Framework Desktop. I seem to get better coding results from it than from the 27B, in addition to it running faster.
- As another data point:
Running Qwen3.5 122B at 35 t/s as a daily driver using Vulkan llama.cpp on kernel 7.0.0rc5 on a Framework Desktop board (Strix Halo, 128GB).
Also a pair of AMD AI Pro R9700 cards as my workhorses for Z-Image Turbo, Qwen TTS/ASR, and other accessory functions and experiments.
Finally, I have a Radeon 6900 XT running Qwen3.5 32B at 60+ t/s as a fast all-rounder.
If I buy anything nvidia it will be only for compatibility testing. AMD hardware is 100% the best option now for cost, freedom, and security for home users.
- How is the performance for Z-Image on the R9700s?
- Are the dedicated GPU cards on another machine or you’re using eGPU with the framework?
- Been running local LLMs on my 7900 XTX for months and the ROCm experience has been... rough. The fact that AMD is backing an official inference server that handles the driver/dependency maze is huge. My biggest question is NPU support - has anyone actually gotten meaningful throughput from the Ryzen AI NPU vs just using the dGPU? In my testing the NPU was mostly a bottleneck for anything beyond tiny models.
- > Been running local LLMs on my 7900 XTX for months and the ROCm experience has been... rough.
Just out of curiosity... how so?
I only ask because I've been running local models (using Ollama) on my RX 7900 XTX for the last year and a half or so and haven't had a single problem that was ROCm specific that I can think of. Actually, I've barely had any problems at all, other than the card being limited to 24GB of VRAM. :-(
I'm halfway tempted to splurge on a Radeon Pro board to get more VRAM, but ... haven't bitten the bullet yet.
- I have had way better perf with Vulkan than ROCm on kernel 7.0.0. They made some major improvements; 20%+ speedups for me.
- The NPU is more for power efficiency when on battery. I don't think it's a replacement for the GPU.
- What kind of tps slowdown would you realistically see on an NPU vs a GPU?
- Is... is this named because they have a lemon they're trying to make the most of?
- I think saying "L-L-M" sounds kind of like "lemon," so this is an LLM-aid (sounds like lemonade).
- so obvious and yet I didn't connect the dots. thank you
- wait until you discover the LuLuleMonade connection /s
- If life keeps giving them lemons, they should instead invent a combustible lemon.
- Do they know who you are? They're the guys who are going to blow your house up ... with the lemons.
- On an unrelated note, do you think this software supports running models from a CD?...
- Lemonsqueeze was considered too violent
- If you run it in a cluster, does it become a Lemon Party?
- Feels like this is sitting somewhere between Ollama and something like LM Studio, but with a stronger focus on being a unified “runtime” rather than just model serving.
The interesting part to me isn’t just local inference, but how much orchestration it’s trying to handle (text, image, audio, etc). That’s usually where things get messy when running models locally.
Curious how much of this is actually abstraction vs just bundling multiple tools together. Also wondering if the AMD/NPU optimizations end up making it less portable compared to something like Ollama in practice.
- It bundles tools, model selection, and overall management.
It's portable in the sense that it will install on any supported OS using the CPU or Vulkan backends, but it only ships out-of-the-box ROCm builds and AMD NPU support. There is a way to override which llama.cpp build it uses if you want to run it on CUDA, but that adds more overhead to manage.
If you have an AMD machine and want to run local models with minimal headache…it’s really the easiest method.
This runs on my NAS and handles my Home Assistant setup.
I have a Strix Halo, plus another server running various CUDA cards that I manage manually by updating to bleeding-edge versions of llama.cpp or vLLM.
- Note that the NPU models/kernels this uses are proprietary and not available as open source. It would be nice to develop more open support for this hardware.
- Are they? The docs say "You can also register any Hugging Face model into your Lemonade Server with the advanced pull command options"
- That won't give you NPU support, which relies on https://github.com/FastFlowLM/FastFlowLM . And that says "NPU-accelerated kernels are proprietary binaries", not open source.
- I bought one of their machines to play around with under the expectation that I may never be able to use the NPU for models. But I am still angry to read this anyway.
- AMD/Xilinx's software support for the NPU is fully open; it's only FastFlowLM's models that are proprietary. See https://github.com/amd/iron https://github.com/Xilinx/mlir-aie https://github.com/amd/RyzenAI-SW/ . It would be nice to explore whether one can simply develop kernels for these NPUs using Vulkan Compute and drive them that way; that would provide the closest unification with the existing cross-platform support for GPUs.
- Been running Lemonade for some time on my Strix Halo box. It dispatches out to other backends that they include, like diffusion and llama. I actually don't like their combined server; what I use instead is their llama.cpp build for ROCm:
https://github.com/lemonade-sdk/llamacpp-rocm
But I'm not doing anything with images or audio. I get about 50 tokens a second with GPT OSS 120B. As others have pointed out, the NPU is used for low-powered, small models that are "always on", so it's not a huge win for the standard chatbot use case.
- Even small NPUs can offload some compute from prefill which can be quite expensive with longer contexts. It's less clear whether they can help directly during decode; that depends on whether they can access memory with good throughput and do dequant+compute internally, like GPUs can. Apple Neural Engine only does INT8 or FP16 MADD ops, so that mostly doesn't help.
- I’ve read the website and the news announcement, and I still don’t understand what it is. An alternative to LM Studio? Does it support MLX or metal on Macs? I’m assuming it will optimize things for AMD, but are you at a disadvantage using other GPUs?
- >Does it support MLX or metal on Macs?
This is answered by their Project Roadmap over on GitHub[0]:
Recently Completed: macOS (beta)
Under Development: MLX support
[0] https://github.com/lemonade-sdk/lemonade?tab=readme-ov-file#...
- It's an alternative to LM Studio in the sense that it's an abstraction over multiple runtimes. The AMD part is that it supports the FastFlowLM runtime, which is the only way to utilize the NPU on Ryzen AI CPUs on Linux.
- It's an easy way to get started with, and maintain, a local AI stack that concentrates on AMD optimization. It is a one-stop install for endpoints for STT, TTS, image generation, and normal LLMs. It has its own web UI for managing and interacting with the endpoints.
It also has endpoints that are compatible with OpenAI, Ollama, and Anthropic, so you can throw any tool that is compatible with those at it and it will just run.
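For example, a tool that speaks the Ollama API could be pointed at it with something like this (an untested sketch; it assumes Lemonade mirrors Ollama's /api/chat route on its own port):

    import requests

    # port and path are assumptions; check Lemonade's docs for the Ollama-compatible route
    r = requests.post(
        "http://localhost:8000/api/chat",
        json={
            "model": "Qwen3.5-30B-A3B",  # hypothetical model name
            "messages": [{"role": "user", "content": "Hi there"}],
            "stream": False,
        },
    )
    print(r.json()["message"]["content"])  # Ollama-style response shape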
- I think LM Studio itself uses other software to actually make use of LLMs. If that other software does not support your NPUs, then you are not going to get much performance out of those. This Lemonade thing, I am guessing, is one such piece of software that LM Studio could be using.
- Surprising that the Linux setup instructions for the server component don't include Docker/Podman as an option; it's Snap/PPA for Ubuntu and RPM for Fedora.
Maybe the assumption is that container-oriented users can build their own if given native packages?
- They do have some container options, though I definitely think they should be added to the release page: https://lemonade-server.ai/install_options.html#docker
- Why should this be on the "Releases"? Shouldn't that just be for build artifacts? Pre-built containers belong on a registry, no?
I suppose a Dockerfile could be included but that also seems unconventional.
- I just meant on the instructions part of the releases page (since they already have some installation instructions), not the artifacts themselves.
- A fun observation: pulling models sends ~200 Mbit of progress updates to your browser
- I’m currently optimizing FLUX to run on a cluster of consumer 8GB VRAM cards (RTX 4060s). I noticed Lemonade emphasizes NPU and GPU orchestration. Have you found that offloading the 'aesthetic scoring' or 'text encoding' to the NPU significantly frees up VRAM for the main diffusion process, or is the overhead of moving tensors back and forth too high on consumer hardware?
- Anyone compared this to Ollama? I had good success with the latest Ollama with ROCm 7.4 on a 9070 XT a few days ago
- I just compared this on my MacBook M1 Max (64GB RAM) with the following:
Model: qwen3.59b
Prompt: "Hey, tell me a story about going to space"
Ollama completed in about 1:44; Lemonade completed in about 1:14.
So Lemonade seems faster in this very limited test.
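For anyone wanting to reproduce it, roughly this (a sketch; both servers expose OpenAI-compatible endpoints, Ollama's is at :11434/v1 by default, the Lemonade port is an assumption):

    import time
    from openai import OpenAI

    PROMPT = "Hey, tell me a story about going to space"

    def time_completion(base_url: str, model: str) -> float:
        """Time one full (non-streaming) completion against an endpoint."""
        client = OpenAI(base_url=base_url, api_key="unused")
        start = time.perf_counter()
        client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": PROMPT}]
        )
        return time.perf_counter() - start

    # model name copied from above; the Lemonade port is an assumption
    print("ollama:  ", time_completion("http://localhost:11434/v1", "qwen3.59b"))
    print("lemonade:", time_completion("http://localhost:8000/api/v1", "qwen3.59b"))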
- It is optimized for compatibility across different APIs, and it has specific hardware builds for AMD GPUs and NPUs. It's run by AMD.
Under the hood they are both running llama.cpp, but this has specific builds for different GPUs. Not sure if the 9070 is one; I am running it on 370 and 395 APUs.
- I'm also curious about this one; I want to compare it to vLLM as well.
- Seconded. Currently on ollama for local inference, but I am curious how it compares.
- Lemonade is using llama.cpp for text and vision with a nightly ROCm build. It can also load and serve multiple LLMs at the same time, create images, use whisper.cpp, use TTS models, use the NPU (e.g. Strix Halo's amdxdna2), and more!
- better than Vulkan?
- In my experience using llama.cpp (which ollama uses internally) on a Strix Halo, whether ROCm or Vulkan performs better really depends on the model and it's usually within 10%. I have access to an RX 7900 XT I should compare to though.
- Perhaps I should just google it, but I'm under the impression that ollama uses llama.cpp internally, not the other way around.
Thanks for that data point. I should experiment with ROCm.
- I meant ollama uses llama.cpp internally. Sorry for the confusion.
- From what I understand, ROCm is a lot buggier and has some performance regressions on a lot of GPUs in the 7.x series. Vulkan performance for LLMs is apparently not far behind ROCm and is far more stable and predictable at this time.
- For me Vulkan performs better on integrated cards, but ROCm (MIGraphX) on 7900 XTX.
- Wrong layer. Vulkan is a graphics and compute API, while Lemonade is an LLM server, so comparing them makes about as much sense as comparing sockets to nginx. If your goal is to run local models without writing half the stack yourself, compare Lemonade to Ollama or vLLM.
- I was talking about ROCm vs Vulkan. On AMD GPUs, Vulkan has been commonly recognized as the faster API for some time. Both have been slower than CUDA due to most of the hosting projects focusing entirely on Nvidia. Parent post seemed to indicate that newer ROCm releases are better.
- Yes, Vulkan is currently faster due to some ROCm regressions: https://github.com/ROCm/ROCm/issues/5805#issuecomment-414161...
ROCm should be faster in the end, if they ever fix those issues.
- Just in case anyone isn't aware. NPUs are low power, slow, and meant for small models.
- I wonder what the imagined use case was? TBH I was seriously thinking about buying a Framework Desktop, but the NPU put me off. I don't get why I should have to pay money for a bunch of silicon that doesn't do anything. And now that there's some software support... it still doesn't do anything? Why does it even exist at all then?
- The NPU is entirely useless for the Framework Desktop, and really all Strix Halo devices. Where it could be useful is cell phones with the examples mentioned by @naasking (audio-text and text-audio processing), and maybe IoT.
- Small models aren't entirely useless, and the NPU can run LLMs up to around 8B parameters from what I've seen. So one way they could be useful: the Qwen3 text-to-speech models are all under 2B parameters, and OpenAI's whisper-small speech-to-text model is under 1B parameters, so you could have an AI agent that you could talk to and that could talk back, where, in theory, you offload all audio-to-text and text-to-audio processing to the low-power NPU and leave the GPU to do all of the LLM processing.
- You could always offload some layers to the NPU for lower power use and leave the rest to the GPU. If the latter is power throttled (common for prefill, not for decode) that will be a performance improvement.
- Neat, they have rpm, deb, and a companion AppImage desktop app[1]! Surprised I wasn't aware of this project before. Definitely going to give it a try.
[1]: https://github.com/lemonade-sdk/lemonade/releases/tag/v10.0....
- Maybe it's a language barrier problem, but "by AMD" makes me think it's a project distributed by AMD. Is that actually the case? I'm not seeing any reason to believe it is.
- It’s a community project supported and sponsored by AMD according to their GitHub: https://github.com/lemonade-sdk/lemonade
AMD employees work on it/have been making blog posts about it for a bit.
- > You can reach us by filing an issue, emailing lemonade@amd.com
Found this on the github readme.
- It is mostly developed by AMD and used to be hosted on the AMD github iirc
- I'm looking forward to trying this. Currently Strix Halo's NPU isn't accessible if you're running Linux, and previously I don't think Lemonade was either. If this opens up the NPU that would be great! Resolute Raccoon is adding NPU support as well.
- Maybe you have seen NPU support via FLM already: https://lemonade-server.ai/flm_npu_linux.html
"FastFlowLM (FLM) support in Lemonade is in Early Access. FLM is free for non-commercial use, however note that commercial licensing terms apply. "
- The NPU works on Linux (Arch at least) on Strix Halo using FastFlowLM [1]. Their NPU kernels are proprietary though (free up to a reasonable amount of commercial revenue). It's neat you can run some models basically for free (using NPU instead of CPU/GPU), but the performance is underwhelming. The target for NPUs is really low power devices, and not useful if you have an APU/GPU like Strix Halo.
- I thought the NPU has been available since something like 6.12?
- Cool, but is there a reason they can't just make PRs against vLLM and llama.cpp? Or have their own forks if those take too long to merge?
- They use the latest llama.cpp under the hood, but built for specific AMD GPU hardware.
Lemonade is really just a management plane/proxy. It translates the Ollama/Anthropic APIs to OpenAI format for llama.cpp, runs different backends for STT/TTS and image generation, and lets you manage it all in one place.
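As an illustration, an Anthropic-SDK client could be pointed at it like this (a sketch; the base_url path is an assumption, and behind the scenes Lemonade would translate this to an OpenAI-format request for llama.cpp):

    from anthropic import Anthropic

    # base_url is an assumption; check Lemonade's docs for the actual Anthropic-compatible path
    client = Anthropic(base_url="http://localhost:8000/api/v1", api_key="unused")

    msg = client.messages.create(
        model="Qwen3.5-30B-A3B",  # hypothetical registered model name
        max_tokens=256,
        messages=[{"role": "user", "content": "One-line summary of Strix Halo?"}],
    )
    print(msg.content[0].text)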
- Wow, this is super interesting. This creates a local "Gemini" front end and all. It's more or less a generative AI aggregator, in that it installs multiple services for different generation modes. I'm excited to try this out on my Strix Halo. The biggest issue I had was image and audio gen, so this seems like a great option.
- It's pretty annoying that you need vendor-specific APIs and a large vendor-specific stack to do anything with those NPUs.
This way software adoption will be very limited.
- Which specific NPUs?
- Forget all the vibe-coded slop, or Ollama. Lemonade is the real deal and very good; I've been using it for about a year now.
AMD are doing God's work here
- My most powerful system is Ryzen+Radeon, so if there are tools that do all the hard work of making AI tools work well on my hardware, I'm all for it. I find it very frustrating to get LLMs, diffusion, etc. working fast on AMD; it's way too much work.
- The unified API is interesting, but I've found that 'OpenAI compatible' can be leaky. When I switched a RAG agent from OpenAI to a local server, my function calling broke even though the endpoint claimed to be OpenAI compatible.
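A quick way to see where it leaks is to inspect what the local server actually returns for a tool call (a sketch using the openai package; base_url and model are placeholders):

    from openai import OpenAI

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")  # placeholder
    resp = client.chat.completions.create(
        model="local-model",  # placeholder
        messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
        tools=tools,
    )
    msg = resp.choices[0].message
    # Some local servers answer in plain text instead of emitting structured
    # tool_calls, or set a different finish_reason; check both before dispatching.
    print(resp.choices[0].finish_reason, msg.tool_calls)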
- So... what does it do? I don't get it, lol
- Initial read suggests it is a mini Swiss Army knife, because it seems to be able to do a lot (based on website claims anyway). The app integration seems to suggest they want to be more of a control dashboard.
- This is funny; I'm working on building an AI project called Lemonade right now.