I’m just here for the moral superiority.🌱
Mainly interested in FOSS
Currently in uni and working part-time as a developer and system administrator.

PC Specs
CPU: 7800X3D
GPU: 7900XTX
Memory: 64GB
System: Arch

  • 3 Posts
  • 4 Comments
Joined 4 months ago
Cake day: January 15th, 2026

  • I’ve been using it for the past few days, and the output quality seems to be on par with or slightly better than 3.5 27b. The biggest issue is token usage, which has exploded with this revision. It can easily reason for 20k–25k tokens on a question where the Qwen3.5 models used 10k. Since it runs more than 3 times faster, it still finished earlier than the 27b, but I won’t have any context/VRAM left to ask multiple questions.
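    The trade-off above (more than 3× faster generation vs. roughly 2–2.5× more reasoning tokens) can be sanity-checked with a quick sketch. The generation speeds and the 32k context window below are illustrative assumptions, not measurements; only the token counts come from the observation itself:

    ```python
    # Back-of-the-envelope check: faster generation vs. longer reasoning traces.
    # Speeds and context window are assumed values for illustration.

    def wall_time(tokens: int, tokens_per_sec: float) -> float:
        """Seconds to generate a reasoning trace at a given speed."""
        return tokens / tokens_per_sec

    old_speed = 20.0      # tok/s, assumed for the 3.5 27b model
    new_speed = 60.0      # tok/s, assumed 3x faster for the new revision

    old_tokens = 10_000   # typical reasoning length observed for Qwen3.5
    new_tokens = 25_000   # upper end observed for the new revision

    print(f"old: {wall_time(old_tokens, old_speed):.0f}s")  # 500s
    print(f"new: {wall_time(new_tokens, new_speed):.0f}s")  # ~417s, still finishes first

    # Context budget: a single 25k-token reasoning trace eats most of an
    # assumed 32k window, leaving little room for follow-up questions.
    context_window = 32_768
    print(f"context left after one answer: {context_window - new_tokens} tokens")
    ```

    Under these assumptions the newer model still wins on wall-clock time per question, which matches the comment; the problem is purely the context/VRAM budget consumed per answer.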

    Artificial Analysis has similar findings. [Bar graph of output tokens across an evaluation suite for different models: Qwen3.6 35b at 140 million, Qwen3.5 35b at 100 million.]