

Thanks! I was running some models on my RX 9070 XT, but only Ollama works flawlessly. I couldn't get llama.cpp to run Gemma 3 or the newer Qwen models; maybe I'm hitting that incompatibility, but it's probably a skill issue on my part.
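In case it helps anyone debugging the same thing, here's roughly how I've been sanity-checking whether a model loads at all, via llama-cpp-python instead of the CLI. A minimal sketch, assuming a wheel built with GPU support and a hypothetical local GGUF path:

```python
# Smoke test with llama-cpp-python (paths are hypothetical).
# Assumes the package was built with a GPU backend, e.g.:
#   CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.gguf",  # hypothetical path to a local GGUF
    n_gpu_layers=-1,                   # try to offload every layer to the GPU
    verbose=True,                      # print backend/offload info during load
)

out = llm("Say hello in one sentence.", max_tokens=32)
print(out["choices"][0]["text"])
```

If the load log shows layers landing on the GPU but generation still fails, that points at the model/backend combination rather than my setup.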
The P40 doesn't look very appealing. A 32 GB V100 costs about the same as 2x P40: less VRAM in total, but it's faster and will use less power.
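Back-of-the-envelope, the spec-sheet bandwidth numbers already tell the story, since single-stream decoding is roughly memory-bandwidth bound. A rough sketch; the ~30 GB model size is my assumption:

```python
# Decode-speed ceiling from memory bandwidth (spec-sheet numbers).
# Every generated token streams the model's weights from VRAM once,
# so tokens/s <= memory_bandwidth / model_size.
P40_BW_GBS = 347    # GDDR5, per card
V100_BW_GBS = 900   # HBM2

model_gb = 30  # assumed: a ~30 GB quantized GGUF

# With a layer split across two P40s each card streams only its half,
# but the halves run one after the other, so the ceiling matches a
# single P40 holding the whole model.
print(f"2x P40 ceiling: ~{P40_BW_GBS / model_gb:.0f} tok/s")
print(f"V100 ceiling:   ~{V100_BW_GBS / model_gb:.0f} tok/s")
```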
But I'm not sure I follow you on the PCIe point… if I run a model that spans multiple GPUs, doesn't PCIe bandwidth matter?
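If the argument is that a layer split only ships one token's activations across the bus per step, my quick sanity check (assumed numbers: hidden size 8192, fp16 activations, PCIe 3.0 x4) does make it look negligible:

```python
# How much data crosses the bus per token with a layer split (assumed sizes):
# at the split point only one token's hidden state moves between GPUs.
hidden_size = 8192       # assumed: a 70B-class model
bytes_per_value = 2      # fp16 activations
per_token_bytes = hidden_size * bytes_per_value  # ~16 KiB per hop

pcie_x4_gbs = 4 * 0.985  # PCIe 3.0 x4 ~= 3.9 GB/s usable
tokens_per_s = 20        # generous decode speed

utilization = per_token_bytes * tokens_per_s / (pcie_x4_gbs * 1e9)
print(f"{per_token_bytes / 1024:.0f} KiB per hop; "
      f"link utilization at {tokens_per_s} tok/s: {utilization:.4%}")
```

Though I'd guess a tensor/row split is a different story, since that has to exchange data at every layer.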
Wow, I didn't think you were running 176 GB worth of GPUs on a consumer board! I don't have a spare board, and my gaming PC with the 9070 XT isn't a good basis for a multi-GPU build: it has a cheap mATX motherboard with too few slots and lanes. So it would have to be a new build. Used EPYC boards look interesting for that.