The Qwen3.5 models are still the best local models I’ve used, so I’m excited to see how this updated version performs.

  • FrankLaskey@lemmy.ml · 13 days ago

    Was just looking at the benchmarks for it on artificialanalysis.ai and it looks great for the size. Probably the best available for general use under 40B parameters, I’d say. Even more impressive are the agentic capabilities and the fact that it’s actually decent in terms of hallucinations (not amazing, given it’s a small-to-medium-size model, but decent).

    • TheCornCollector@piefed.zip (OP) · 12 days ago

      I’ve been using it for the past few days and the output quality seems to be on par with or slightly better than 3.5 27b. The biggest issue is token usage, which has exploded with this revision. It can easily reason for 20k–25k tokens on a question where the Qwen3.5 models used 10k. Since it runs more than 3 times faster, it still finishes earlier than the 27b, but I won’t have any context/VRAM left to ask multiple questions.

      Artificial Analysis has similar findings. [Bar graph of output tokens across models: Qwen3.6 35b at 140 million vs. Qwen3.5 35b at 100 million.]

      • FrankLaskey@lemmy.ml · 12 days ago

        Yes, I did see that as well. That does seem to be the real Achilles’ heel here. I’ll have to try it myself to see how much it exacerbates context-size limitations, given I’d be running it on a single 24 GB VRAM GPU. I wonder if adjusting reasoning-effort parameters could make a difference without affecting quality too much?
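
        For anyone wanting to try this, here’s a minimal sketch of what that experiment could look like, assuming the model is served through an OpenAI-compatible local endpoint and that the serving stack honors a `reasoning_effort` field like OpenAI’s API does — the exact knob varies by backend (some expose a reasoning-budget or thinking toggle instead), so the model name and parameter here are illustrative assumptions, not confirmed settings:

        ```python
        import json

        def build_request(prompt, effort="low", max_tokens=4096):
            """Build an OpenAI-style chat-completions payload with a
            reasoning-effort hint. The `reasoning_effort` field mirrors
            OpenAI's API; whether a local backend respects it is an
            assumption -- check your server's docs for its equivalent."""
            return {
                "model": "qwen3.6-35b",  # hypothetical local model name
                "messages": [{"role": "user", "content": prompt}],
                "reasoning_effort": effort,  # e.g. "low" | "medium" | "high"
                "max_tokens": max_tokens,    # hard cap to protect remaining context
            }

        # Compare token usage by sending the same question at different efforts
        payload = build_request("Explain KV-cache memory growth.", effort="low")
        print(json.dumps(payload, indent=2))
        ```

        The idea would be to run the same prompt at each effort level and compare reasoning-token counts against answer quality, keeping `max_tokens` low enough that a single question can’t eat the whole context window.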