• What I love about OpenClaw is that I was able to send it a message on Discord with just this github URL and it started sending me voice messages using it within a few minutes. It also gave me a bunch of different benchmarks and sample audio.

    I'm impressed with the quality given the size. I don't love the voices, but it's not bad. Running on an Intel 9700 CPU, it's about 1.5x realtime using the 80M model. It wasn't any faster running on a 3080 GPU though.

    • yeah we'll add some more professional-sounding voices and also support for diy custom voices. we tried to add more anime/cartoon-ish voices to showcase the expressivity.

      Regarding running on the 3080 gpu, can you share more details on github issues, discord or email? it should be blazing fast on that. i'll add an example to run the model on gpu too.

  • Would an Android app of this be able to replace the built in tts?
    • yes, our mobile sdk is coming soon (eta 2 weeks) so we should be able to replace the built-in version of it. can you share what tts use-case you're thinking of?
      • I use an epub reader like Moon+ with the built-in TTS to turn epubs into audiobooks. I tried Kokoro TTS, but there was too much lag between sentences, and it doesn't preprocess the next sentence while reading out the current one.
        • Working on a reader and server that use pockettts to turn epubs into audiobooks: https://github.com/gabrielcsapo/compendus shows a virtual scroller for the text and audio.
        • okay this seems pretty doable, i think i know someone who is working on an epub reader using kittentts. if they don't post about it, i'll do it once it's done.
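          the lag issue sounds mostly like a pipelining problem: synthesize sentence n+1 while sentence n is playing. rough sketch of the idea (synthesize() here is a stand-in for whatever tts call you use, not our api):

            import queue, threading

            def synthesize(sentence: str) -> bytes:
                return sentence.encode()  # stand-in: replace with the real TTS call

            def read_aloud(sentences, play):
                buf = queue.Queue(maxsize=2)  # sentence n+1 is ready while n plays

                def producer():
                    for s in sentences:
                        buf.put(synthesize(s))  # blocks while the buffer is full
                    buf.put(None)  # sentinel: no more sentences

                threading.Thread(target=producer, daemon=True).start()
                while (audio := buf.get()) is not None:
                    play(audio)  # playback of n overlaps synthesis of n+1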
  • This is awesome, well done. Been doing lot of work with voice assistants, if you can replicate voice cloning Qwen3-TTS into this small factor, you will be absolute legends!
    • thanks a lot, our voice cloning model will be out by May. we're experimenting w some very cool ways of doing voice cloning at 15M but will have a range of models going up to 500M
  • The example.py file says "it will run blazing fast on any GPU. But this example will run on CPU."

    I couldn't locate how to run it on a GPU anywhere in the repo.

    • thanks for the feedback. i'll add an example of running it on gpu.
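      until then, if you're loading the onnx model directly, requesting the cuda provider is usually all it takes (requires the onnxruntime-gpu package; the model path below is a placeholder):

        import onnxruntime as ort

        # ask for CUDA first, fall back to CPU if it isn't available
        sess = ort.InferenceSession(
            "kitten_tts.onnx",  # placeholder path
            providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
        )
        print(sess.get_providers())  # shows whether CUDA was actually picked up

      if get_providers() only lists CPUExecutionProvider, the plain onnxruntime package is installed instead of onnxruntime-gpu, which would also explain a 3080 being no faster than CPU.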
  • You should put examples comparing the 4 models you released - same text spoken by each.
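    Even a quick script would do, e.g. looping one sentence through each checkpoint (the model ids and synthesize() call below are placeholders, not the real API):

      import numpy as np
      import soundfile as sf

      TEXT = "The quick brown fox jumps over the lazy dog."

      def synthesize(model_id: str, text: str):
          # placeholder: swap in the real per-model TTS call
          return np.zeros(24000, dtype="float32"), 24000

      for model_id in ["tts-15m", "tts-25m", "tts-40m", "tts-80m"]:
          audio, sr = synthesize(model_id, TEXT)
          sf.write(f"{model_id}.wav", audio, sr)  # one wav per model for A/B listening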
  • A lot of good small TTS models in recent times. Most seem to struggle hard on prosody though.

    Kokoro TTS for example has a very good Norwegian voice, but the rhythm and emphasis are often so out of whack the generated speech is almost incomprehensible.

    Haven't had time to check this model out yet, how does it fare here? What's needed to improve the models in this area now that the voice part is more or less solved?

    • small models struggle with prosody due to limited capacity. this version does much better than the previous one and is the best among other <25MB models. Kokoro is a really good model for its size, it's competitive on Artificial Analysis too. i think by the next release we should have something kokoro-quality at a fifth of the size. Adding control for rhythm seems to be quite important too, and we should start looking at that for other languages.
      • That, and also English words in the middle of a phrase in another language confuse them a lot.
      • yes. the current release of our model is english-only. so other languages are not expected to perform well. we'll try to look out for this in our multilingual release.
  • are there plans to output text alignment?
    • yes, we just started working on this yesterday haha, great that you mentioned it. once we have it working it'll be out soon.
      • that would be awesome, I was using pockettts and had to run it through whisper to get accurate alignment. Not super practical for realtime work.
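        For anyone needing the same workaround until alignment ships, openai-whisper can emit word timestamps directly (assuming the TTS output is saved to a wav first):

          import whisper

          model = whisper.load_model("base")
          result = model.transcribe("tts_output.wav", word_timestamps=True)
          for seg in result["segments"]:
              for w in seg["words"]:
                  # start/end are seconds from the start of the file
                  print(f"{w['start']:6.2f} {w['end']:6.2f}  {w['word']}")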
  • Really cool to see innovation in terms of quality of tiny models. Great work!
    • thanks a lot. small model quality is improving exponentially. This 15M is way better than the 80M model from our previous launch (V0.1).
  • One of the core features I look for is expressive control.

    Either in the form of API pitch/speed/volume parameters, for more deterministic control.

    Or in expressive tags such as [coughs], [urgently], or [laughs in melodic ascending and descending arpeggiated gibberish babbles].

    The 25MB model is amazingly good for being 25MB. How does it handle expressive tags?

    • thank you so much. Right now, it cannot handle expressive tags. what kind of tags would be most helpful according to you?
      • Narrowing it down, emotion-based tag control would be the most helpful. Tags like [sarcastically] [happily] [joyfully] [fearfully]: so a subset of adverbs.

        A stretch goal is 'arbitrary tags' from [singing] [sung to the tune of {x}] [pausing for emphasis] [slowly decreasing speed for emphasis] [emphasizing the object of this sentence] [clapping] [car crash in the distance] [laser's pew pew].

        But yeah: instruction/control via [tags] is the deciding feature for me, provided prompt adherence is strong enough.

        Also: a thought...

        Everyone is using [] for different kinds of tags in this space, which is very simple. Maybe it makes sense to differentiate kinds of tags? i.e. [tags for modifying how text is spoken] vs {tags for creating sounds that aren't specifically speech: not modifying anything, but instead their own 'sound/word'}

        • yeah i think to start with, narrowing it down to a few tags would be most helpful and we'll probably start w that first. Thanks a lot!
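          fwiw the [modifier] vs {sound} split proposed above is cheap to prototype on the text side. a sketch of the tokenization (the semantics are just the convention suggested here, nothing we've committed to):

            import re

            # [..] modifies how the next span is spoken; {..} is a standalone sound
            TAG_RE = re.compile(r"(\[[^\]]+\]|\{[^}]+\})")

            def tokenize(text):
                for chunk in TAG_RE.split(text):
                    if not chunk:
                        continue
                    if chunk.startswith("["):
                        yield ("modifier", chunk[1:-1])
                    elif chunk.startswith("{"):
                        yield ("sound", chunk[1:-1])
                    else:
                        yield ("speech", chunk)

            print(list(tokenize("[sarcastically] great. {car crash in the distance} what now?")))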
  • 25MB is impressive. What's the tradeoff vs the 80M model — is it mainly voice quality or does it also affect pronunciation accuracy on less common words?
    • the 80M model is the highest quality while also being quite efficient. it is superior in terms of pronunciation accuracy for less common words, and also more stable in terms of speed. it's my fav model. i think the 40M is quite similar to the 80M for most use-cases. the 15M is for resource-constrained CPUs, loading into a browser, etc.

      The new 15M is way better than the previous 80M model (v0.1). So we're able to predictably improve the quality, which is very encouraging.

  • What's the actual install size for a working example? Like similar "tiny" projects, do these models actually require installing 1GB+ of dependencies?
    • Running the example is 3 MiB for the repo, +667 MiB of Python dependencies, +86 MiB of models that will get downloaded from HuggingFace. =756 MiB.

      (That's using the example as-is. If you switch it to the smaller model, modify the above with +57 MiB of models from HuggingFace, or =727 MiB.)

    • My quick test showed ~670 MB of Python libraries required on top of the model.
  • This is great. Demo looks awesome.
  • This would be great as a JS package - 25MB is small enough that I think it'd be worth it (in-browser TTS is still pretty bad and varies by browser)
    • great idea, we're on it. we're also working on a mobile sdk. a browser sdk would be really cool too.
  • There's a number of recent, good quality, small TTS models.

    If the author doesn't describe some detail about the data, training, a novel architecture, etc., I just assume they took another model, did a little finetuning, and repackaged it as a new product.

  • I'm still looking for the "perfect" setup to clone my voice and use it locally to send voice replies in Telegram via openclaw. Does anyone have such a setup?

    I want to be my own personal assistant...

    EDIT: I can provide it an RTX 3080 Ti.

    • You need to provide info on your hardware. Pocket-TTS does cloning on CPU, but for me it randomly outputs something pretty weird-sounding mixed in with ~90% good outputs. So it hasn't been quite stable enough to run without checking the output. But maybe it depends on your voice sample.

      Qwen 3 TTS is good for voice cloning but requires a GPU of some sort.

    • Isn't it just a matter of training a model on your voice recordings and using that to generate audio clips from text?
  • A lot of these models struggle with small text strings, like "next button" that screen readers are going to speak a lot.
    • I think I tried everything I could on my Android, and: 1. outside of webpage reading, there are not many options; 2. as browser extensions, also not many (and I don't like copying URLs into your app); 3. they all insist on reading every little shit, not only buttons but also "wave arrow pointing directly right", which some people use in their texts. So basically reading text aloud is a bunch of shitty options. Anyone jumping on this market opening?
      • we'd love to serve this use-case. i'll make a demo for this next week and comment here with it.
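        one easy win for this use-case: screen-reader strings like "next button" repeat constantly, so caching synthesized audio by exact text removes the latency after the first hit. sketch (synthesize() is a stand-in for the real call):

          from functools import lru_cache

          def synthesize(text: str) -> bytes:
              return text.encode()  # stand-in: replace with the real TTS call

          @lru_cache(maxsize=1024)
          def speak_cached(text: str) -> bytes:
              # first call pays the synthesis cost; repeated UI strings return instantly
              return synthesize(text)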
  • How much work would it be to use the C++ ONNX runtime with this instead of Python? Is it a Claudeable amount of work?

    The iOS version is Swift-based.

    • shouldn't be hard. what backend/hardware are you interested in running this with? i'll add an example for using the C++ onnx runtime. btw check out the roadmap, our inference engine will be out in 1-2 weeks and is expected to be faster than onnx.
      • desktop CPUs running inference on a single background thread would be the ideal case for what I'm considering.
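        For reference, ONNX Runtime can be pinned to a single thread through session options; Python shown below, but the C++ API exposes the same knobs (Ort::SessionOptions::SetIntraOpNumThreads / SetInterOpNumThreads). The model path is a placeholder:

          import onnxruntime as ort

          opts = ort.SessionOptions()
          opts.intra_op_num_threads = 1  # no intra-op parallelism
          opts.inter_op_num_threads = 1  # no parallel graph execution
          sess = ort.InferenceSession(
              "model.onnx",  # placeholder path
              sess_options=opts,
              providers=["CPUExecutionProvider"],
          )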
  • Thanks for working on this!

    Is there any way to get these running on iPhone? I would love to have it read articles to me like a podcast.

    • yes, we're releasing an official mobile sdk and inference engine very soon. if you want something until then, some folks from the oss community have built ways to run kitten on ios. if you search kittentts ios on github you should find a few. if you can't find it, feel free to ping me and i can help you set it up. thanks a lot for your support and feedback!
  • Thanks for open sourcing this.

    Is there any way to do a custom voice as a DIY, or do we need to go through you? If so, would you consider making a pricing page for purchasing a license/alternative voice? All but one of the voices are unusable in a business context.

    • thanks a lot for the feedback. yes, we're working on a diy way to add custom voices and will also be releasing a model with more professional voices in the next 2-3 weeks. as of now, we're providing commercial support for custom voices, languages and deployment through the support form on our github. can you share more about your business use-case? if possible, i'd like to ensure the next release can serve that.
      • Right now it's outgoing calls for a small business client that checks information. Although if they call back they don't mind an automated system, on outgoing calls the person answering will often hang up if they detect AI right away, so we use a realistic custom voice with an accent.

        This is a mind numbing task that requires workers to make hundreds of calls each day with only minor variations, sometimes navigating phone trees, half the time leaving almost the exact same message.

        Anyway, I believe almost all such businesses will be automated within months. Human labour just cannot compete on cost.

  • Is it English only?
    • as of now it's english-only. the training for the multilingual model is underway and should be out in April! what languages are you most interested in? Right now, we are providing deployments for custom languages + voices through the support form on the github.
      • French, Spanish, German would go a long way.
  • I'm thinking of giving "voice" to my virtual pets (think Pokemon but less than a dozen). The pets are made-up animals based on real animals, like Mouseier from Mouse (something like that). Is this possible?

    TLDR: generate a human-like voice based on animal sounds. Anyway, maybe it doesn't make sense.

    • it'd be an interesting experiment to see what kind of information gets extracted from samples of the pet sounds. it'd be so cool if it could just get the features of the audio and then still be able to reproduce the audio in english lol. we would need a really good "speaker" encoder i think.