- One thing I am still unsure about is whether people prefer fully model-agnostic tools or more opinionated assistants that trade flexibility for better behavior in specific scenarios. Curious what others think.
- Which models does it support?
- At the moment it supports Gemini and Groq.
I started with those because they gave predictable latency and behavior while I was validating the UX and reasoning flow.
I’m planning an experimental mode that allows curl-style requests, so the assistant can talk to arbitrary providers or custom endpoints without being tied to a specific model. That should make it easier to plug in other hosted or local setups (rough sketch below).
Longer term, I’m trying to balance flexibility with keeping the default behavior sane and reliable.
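To make that concrete, a curl-style call could look roughly like the sketch below. The endpoint URL, env var names, and response shape here are placeholders for illustration, not the actual implementation; the point is just that any JSON-over-HTTP provider could be dropped in without a model-specific SDK.

```python
import os
import requests

def call_custom_endpoint(prompt: str) -> str:
    # Hypothetical: any OpenAI-compatible (or similar JSON) endpoint could go here.
    url = os.environ.get("CUSTOM_LLM_URL", "http://localhost:8080/v1/chat/completions")
    headers = {
        "Authorization": f"Bearer {os.environ.get('CUSTOM_LLM_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = {
        "model": os.environ.get("CUSTOM_LLM_MODEL", "local-model"),
        "messages": [{"role": "user", "content": prompt}],
    }
    resp = requests.post(url, headers=headers, json=body, timeout=60)
    resp.raise_for_status()
    # Assumes an OpenAI-style response; a real version would let the user map
    # the response field themselves, much like curl leaves parsing to the caller.
    return resp.json()["choices"][0]["message"]["content"]
```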
- Cool stuff btw. I'll try it and let you know my feedback.