

12 days ago
I have a 24GB AMD 7900XTX, and it’s a 35b parameter model.


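A quick back-of-envelope sketch of why a model that size only fits on such a card once quantized. The 35b parameter count and 24 GB card are from the comment above; the bit-widths are just the common quantization levels, and this counts weights only (KV cache and activations need extra headroom on top):

```python
# Rough VRAM footprint of a 35B-parameter model at common quantization levels.
# Weights only -- real usage adds KV cache, activations, and runtime overhead.
PARAMS = 35e9
GIB = 1024**3

def weight_gib(bits_per_param: float) -> float:
    """Approximate size of the weights alone, in GiB."""
    return PARAMS * bits_per_param / 8 / GIB

for name, bits in [("fp16", 16), ("int8", 8), ("4-bit", 4)]:
    print(f"{name:>6}: ~{weight_gib(bits):.1f} GiB of weights")
```

At fp16 the weights alone are around 65 GiB, far beyond a 24 GB card; at 4-bit they drop to roughly 16 GiB, which is why quantized 30b-class models are runnable on a single 24 GB GPU.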


This weekend I had an LLM walk me through setting up some home server and networking stuff. I tried Proton’s Lumo and Qwen 3.6 locally, and I have to say Qwen was the more impressive of the two models. When I first tried running models locally, like Llama 4, I remember thinking to myself that this was a dead end and big servers would always have the advantage, but it seems like we’re hitting a turning point where many things can be done locally.
It used to be named “Clawd”, intentionally mimicking Claude Code. Claude Code was, until they accidentally vibe-released its source code, a proudly closed-source AI client. So OpenClaw is very intentionally marketing itself as the anti-Claude.