Was just looking at the benchmarks for it on artificialanalysis.ai, and it looks great for the size. I'd say it's probably the best available for general use if you're looking for something under 40B parameters. Even more impressive are the agentic capabilities and the fact that it's actually decent on hallucinations (not amazing, given it's a small-to-medium-size model, but decent).
I've been using it for the past few days, and the output quality seems on par with or slightly better than Qwen3.5 27B. The biggest issue is the token usage, which has exploded with this revision: it can easily reason for 20k-25k tokens on a question where the Qwen3.5 models used 10k. Since it runs more than three times faster, it still finishes earlier than the 27B, but I won't have any context/VRAM left to ask multiple questions.
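Back-of-the-envelope on why it still wins on wall-clock time despite the longer traces (a rough sketch; the throughput and token counts below are illustrative assumptions based on my runs, not measured constants):

```python
# Rough latency comparison: more reasoning tokens vs. higher decode throughput.
# All numbers are illustrative assumptions, not benchmarks.

qwen_tokens = 10_000         # typical reasoning trace for the Qwen3.5 27B
new_tokens = 22_500          # midpoint of the 20k-25k traces I'm seeing

qwen_tps = 20.0              # assumed decode speed for the 27B (tokens/s)
new_tps = qwen_tps * 3       # the new model decodes roughly 3x faster

qwen_latency = qwen_tokens / qwen_tps   # 500 s
new_latency = new_tokens / new_tps      # 375 s

print(f"27B: {qwen_latency:.0f}s, new model: {new_latency:.0f}s")
# ~2.25x the tokens at 3x the speed -> ~25% less wall-clock time,
# but ~2.25x the context (and KV cache) consumed per question.
```

So the answer arrives sooner, but each answer eats more than twice the context budget.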
Artificial Analysis has similar findings.
Yes, I saw that as well; that does seem to be the real Achilles' heel here. I'll have to try it myself to see how much it exacerbates context-size limitations, given I'd be running it on a single GPU with 24 GB of VRAM. I wonder if adjusting the reasoning-effort parameters could make a difference without hurting quality too much?
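For anyone wanting to estimate how much context actually fits in 24 GB, here's a minimal sketch of the usual KV-cache arithmetic. The architecture numbers (layers, KV heads, head dim) and the weight footprint are hypothetical placeholders, not this model's real config; plug in the values for whatever quant you run, and note that a quantized KV cache shrinks the bytes-per-element term.

```python
# KV-cache sizing sketch:
# bytes per token = 2 (K and V) * layers * kv_heads * head_dim * bytes_per_elem
# Architecture values below are placeholder assumptions, not the real config.

def kv_cache_gib(n_tokens: int,
                 n_layers: int = 48,
                 n_kv_heads: int = 8,       # GQA models keep far fewer KV heads
                 head_dim: int = 128,
                 bytes_per_elem: int = 2):  # fp16/bf16; a q8_0 cache roughly halves this
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return n_tokens * per_token / 1024**3

weights_gib = 18.0                        # assumed footprint of a ~30B-class quant
budget_gib = 24.0 - weights_gib - 1.5     # leave headroom for activations/overhead

for ctx in (8_192, 16_384, 32_768, 65_536):
    need = kv_cache_gib(ctx)
    status = "fits" if need <= budget_gib else "over"
    print(f"{ctx:>6} tokens -> {need:.2f} GiB KV cache ({status}, budget ~{budget_gib:.1f} GiB)")
```

Under these assumptions a ~4.5 GiB KV budget covers roughly 16k-24k tokens, which is why a single 20k-25k reasoning trace can leave no room for a second question unless the reasoning budget comes down or the cache is quantized.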