• Tiny models that you can just run locally sound pretty sweet. I can see a lot of privacy‑minded folks liking this, since you don’t have to phone home to an API for every request. Curious how big the trade‑off is between size and accuracy once you get beyond simple classification tasks. I see you can "bring your own data" too instead of just throwing a bunch of synthetic stuff at it... I wonder how well that works.
    • Retraining & Data Generation: You can retrain your own models any time (even after deployment), generate more data, or swap in a different LLM for data synthesis. This lets you tune performance for your use case, whether you want more accuracy or just a smaller model.