- Amazing!
I just tried the OCR capabilities with a photo of a DIN A4 page which was written with a typewriter. The image isn't the easiest to interpret. The text perspective is distorted because the page is part of a book and the page margin toward the spine of the book is very small. There are also many inline corrections due to typing errors while the page was written (backspace couldn't erase characters back then, and arrow keys couldn't be used to add text in between existing words). Over the past months I've tried to use several LLMs on this very same image already (1 out of 200 pages that seek digitization). The result is by far the most accurate so far. Only some very minor errors (which are also non-trivial for human translators) were made.
This page induced costs of about 25 cents. I assume I could tweak the input image a little more to consume fewer input tokens. OCR-ing all 200 pages would otherwise cost a juicy $50 - although there is a generous $20 of free credits.
Induced cost: 108.8k input tokens => 16.32 cents; 24.5k output tokens => 8.58 cents.
// Edit: I just re-tried the same task utilizing a capability of the API to only run a specific part of the model (e.g. _only_ OCR). This cuts cost by 3x (to ~8c/page) but significantly worsens the result: entire lines of the original document are missing, and there are also many errors in the text that was recognized.
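As a back-of-the-envelope check of the numbers above, here's a minimal sketch; the per-token rates are back-derived from the quoted figures, not published pricing:

```python
# Rough cost check for the figures quoted above.
# Rates are back-derived assumptions: ~$1.50/M input and ~$3.50/M output tokens.
INPUT_RATE = 1.50 / 1_000_000   # dollars per input token
OUTPUT_RATE = 3.50 / 1_000_000  # dollars per output token

input_tokens, output_tokens = 108_800, 24_500
page_cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"per page: ${page_cost:.4f}")          # ~$0.249, i.e. ~25 cents
print(f"200 pages: ${200 * page_cost:.2f}")   # ~$49.79, the "juicy $50"
```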
- New account created ~5 hours after this post, with a single comment specifically praising the model / product. I want to believe, but this sort of astroturfing isn't very encouraging.
- Yup, run task mode runs a much smaller part of the model, which can drop the quality of scans. The issue with run task mode that we have to figure out is how much of the model is needed just for OCR and how to activate the right parts. A lot more improvements are coming here with the same cost reduction.
I'd be happy to test it against your sample and see how we can get good results at a lower per page cost. Feel free to email me yoeven@interfaze.ai
- Have you tried this task using an actual OCR model like Google Cloud Vision AI? I'm not sure if that's what Gemini uses under the hood, but multi-modal LLMs are not designed to extract text like this, so it should be no surprise that they're not good at it.
- Google Cloud Vision AI is a specialized model built on CNN frameworks. That kind of specialized model is part of the Interfaze architecture, which is a hybrid, so you get the best of both worlds. Google Cloud Vision was pretty far behind other specialized models like PaddleOCR anyway, so if you're looking for a pure CNN, check those out.
You can find the explanation and the comparison in the article, where we benchmarked pure CNN models, pure LLM models, and a hybrid architecture like ours.
- Ok, that's... just cheating. You can't take a benchmark like MMLU, designed to test the performance of a single general language model, and compare it to the performance of a small specialized model designed to do well on MMLU.
- It wasn't designed to do well on MMLU; it's a general model designed for deterministic tasks like OCR, object detection, STT and more, and a byproduct of that is strong language ability. It still has a transformer backbone, so it keeps solid language skills while being good at other stuff.
See the full benchmark: https://interfaze.ai/leaderboards
- Potentially stupid question: does that mean we can chain them together like UNIX command line programs? That would be so, so intuitive.
- > These are deep neural network architectures that are task-specific for things like OCR, translation, or GUI detection. The way they consume and see data is trained to be task specific, which makes them up to 100x more accurate at their specific task. They also produce useful metadata like bounding boxes and confidence scores, letting developers build predictable workflows they can rely on.
Does code extraction and manipulation fit in that? Would interfaze be the agent that a coding agent uses?
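A minimal sketch of the kind of "predictable workflow" the quoted metadata enables; the `words` structure and field names below are hypothetical placeholders, not the actual Interfaze response format:

```python
# Hypothetical OCR result: each word carries text, a confidence score and a bounding box.
words = [
    {"text": "Invoice", "confidence": 0.99, "bbox": [34, 20, 88, 14]},
    {"text": "#4821",   "confidence": 0.97, "bbox": [128, 20, 52, 14]},
    {"text": "$13O.50", "confidence": 0.41, "bbox": [34, 60, 70, 14]},  # likely misread
]

REVIEW_THRESHOLD = 0.85

# Deterministic routing: accept high-confidence words, flag the rest for human review.
accepted = [w for w in words if w["confidence"] >= REVIEW_THRESHOLD]
needs_review = [w for w in words if w["confidence"] < REVIEW_THRESHOLD]

print("accepted:", [w["text"] for w in accepted])
print("flag for review:", [(w["text"], w["bbox"]) for w in needs_review])
```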
- Code extraction maybe, not something we have tested or built for but you could give it a try.
Code manipulation probably not, since it's a much smaller model compared to something like Claude Opus, which is SOTA for code generation/manipulation.
Generally, code generation is a non-deterministic task by nature, and general LLMs tend to be better at it.
- The idea of what to change is perhaps an LLM task, but the job of doing the find-and-replace and that kind of tooling is something LLMs actually struggle with; coding agents have all kinds of crutches and try/retry loops to paper over it.
- Interesting approach! One question though: can the model do column detection?
The first OCR example returns output that does not detect the article columns - the bounding box is the entire first line.
- It can. You could try prompting the model to use object detection vision and text extraction. We realized that when we purely extract text it does amazingly well at word/sentence-level bounds, since the text acts as the anchor. However, when you treat it as an object detection problem, it sees that chunk of text as a segment, allowing you to extract it as one column bound. Give that a try.
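A rough illustration of that suggestion; the actual API call is omitted (see https://interfaze.ai/docs for the real request shape), this only contrasts the two prompt framings:

```python
# Text-extraction framing: returns tight word/sentence-level bounding boxes,
# because the recognized text itself anchors each box.
ocr_prompt = "Extract all text from this newspaper page with bounding boxes."

# Object-detection framing: treats each column as a detectable region, so the
# whole column comes back as one segment with a single bounding box.
column_prompt = (
    "Using object detection, find each text column on this newspaper page, "
    "return one bounding box per column, then extract the text inside each box."
)
```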
- This is very cool, though I don't understand exactly what they've done here. Is it some kind of LLM with convolutional layers added?
The graph doesn't exactly make it clear but it describes a pipeline that goes beyond the LLM, so the CNN could be a separate model there.
- Here’s the academic paper behind it: https://arxiv.org/abs/2602.04101
- Interfaze.ai at YC Launch Live - May 8th, 2026 https://youtu.be/S9Lgp2hWBsE?t=4185
- So is this basically a task-specific MoA transformer arch with a DNN that helps make routing decisions? Trying to understand this.
- The other way round: task-specific DNNs adapted to share the same vector space as omni-transformers with generalized vision and audio encoders.
E.g. for an OCR task, the first pass is handled by the CNN and converted to shared tokens which the transformer can consume and correct if needed, and a decoder handles both the DNN and transformer output.
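To make that flow concrete, here is a purely conceptual PyTorch sketch of the described pipeline; the layer choices and sizes are illustrative assumptions, not the actual Interfaze architecture:

```python
# Conceptual sketch only: a task-specific CNN produces tokens in the same vector
# space the transformer consumes, the transformer can correct them, and a single
# decoder handles the output of either path.
import torch.nn as nn

class HybridOCRPipeline(nn.Module):
    def __init__(self, d_model=768, vocab=32000):
        super().__init__()
        # Task-specific CNN encoder: image -> "shared" token embeddings
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1),
        )
        # Generalist transformer operating in the same embedding space
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        # One decoder head over both the CNN tokens and the transformer output
        self.decoder = nn.Linear(d_model, vocab)

    def forward(self, image):
        feats = self.cnn(image)                    # (B, d_model, H', W')
        tokens = feats.flatten(2).transpose(1, 2)  # (B, seq, d_model) "shared tokens"
        corrected = self.transformer(tokens)       # transformer pass can fix CNN errors
        return self.decoder(corrected)             # token logits for the OCR text
```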
- Smaller models really aren't great at structured output. If this works, it would be great for a local model: it might not be as good overall, but as long as it respects structured output it will be vastly more useful.
- We have a full benchmark breakdown specifically on structured output that you can take a look at https://interfaze.ai/leaderboards/structured-output-benchmar...
- > Smaller models really arent great at structured output.
That doesn't seem to hold true. Consider gpt-5.4-nano, which supports structured output just fine.
https://developers.openai.com/api/docs/models/gpt-5.4-nano
It seems like a concern that's orthogonal to the model size.
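For reference, a minimal sketch of requesting schema-constrained output from a small model via the OpenAI SDK's Chat Completions structured-output interface; the model name comes from the link above, and the invoice schema is just an illustrative example:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask a small model for output constrained to a JSON schema.
resp = client.chat.completions.create(
    model="gpt-5.4-nano",
    messages=[{
        "role": "user",
        "content": "Extract the invoice number and total from: Invoice #4821, total $130.50",
    }],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "invoice",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "invoice_number": {"type": "string"},
                    "total": {"type": "number"},
                },
                "required": ["invoice_number", "total"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # valid JSON matching the schema
```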
- I genuinely doubt that they are just lying though lol
- This is cool, I'd love to be able to fine-tune on this architecture. Is this something on the roadmap ever?
- It isn't on our roadmap right now, since in most cases it should work out of the box, and if it doesn't, we'll generally work with you to train that into the model.
However, if we see enough people who have something super niche that our model can't handle, we might start considering a fine-tuning service.
- What I want are precise and tight bounding boxes. Why is this so difficult?
- The PP-DocLayoutV3 [1] bounding boxes are pretty good in my experience, if you want boxes around individual document headings or paragraphs. If you want boxes around individual words, similar to what's shown in the Interfaze screenshot [2], Apple has a LiveText "token" model that's proprietary but free/bundled with macOS and iOS. There are easy-to-use Python bindings here: https://github.com/straussmaximilian/ocrmac
I presume that some otherwise-great OCR models (like Chandra) have terrible bounding boxes because generating good bounding boxes just wasn't a training priority. A lot of people are using OCR models to bulk-process documents without a lot of care for how the layout is preserved. It matters a lot if (e.g.) you want to be able to update and re-print old documents, but it doesn't matter if you are just transcribing whole documents for indexing/chunking/translation.
[1] https://huggingface.co/PaddlePaddle/PP-DocLayoutV3
[2] https://r2public.jigsawstack.com/interfaze/examples/dense_te...
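For anyone who wants to try the ocrmac route mentioned above, usage is roughly this, per the project README; it's macOS only, and the exact tuple layout may differ between versions:

```python
# Requires macOS (uses Apple's Vision/LiveText framework under the hood).
# pip install ocrmac
from ocrmac import ocrmac

# recognize() returns a list of (text, confidence, bounding_box) tuples,
# with bounding boxes in normalized [x, y, width, height] coordinates.
annotations = ocrmac.OCR("scanned_page.png").recognize()

for text, confidence, bbox in annotations:
    if confidence > 0.8:  # keep only high-confidence words
        print(f"{confidence:.2f} {bbox} {text!r}")
```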
- For sure, there are tons of OCR bounding models and tons of other models like SAM 3 for segmentation.
Interfaze is a more powerful version of them combined into a single model: you can run multi-turn tasks like extracting all the text and objects from a document, then translating it or generating a report.
It's like getting the best of both worlds: pure DNN/CNN models like Paddle on one side and the flexibility and nuance of an LLM on the other, while outperforming both in accuracy.
- does it handle source code extraction from images?
how do I run it locally?
- Yeah, it would treat it like an OCR task and extract it; you could prompt it to format it better, keeping the code alignment.
We serve it through an API. Check out the docs: https://interfaze.ai/docs
It's free to get started.
- Great in the benchmarks but not as good in the real world, sorry to say. Just gave it a try in my STT bot and it's worse than Whisper.
- Use run task mode if you're doing a one-to-one comparison to Whisper; it's going to be a lot faster too.
Here's a good example: https://interfaze.ai/docs/audio/speech-to-text#long-audio-tr...
- Similar to a large action model?
- Not directly. LAMs tend to be focused a lot on tool calling, or trained for a set of specific actions, for example in the robotics field. Good tool calling might be a nice byproduct of Interfaze, but it wasn't specifically trained for that use case.
The focus has been on deterministic outputs that require high accuracy - situations where there is "one right answer".