Ask ChatGPT to estimate the carbs in your lunch. Now ask it again. And again. Five hundred times. You’d expect the same answer each time. It’s the same photo, the same model, the same question. But you won’t get the same answer. Not even close — and the differences are large enough to matter.
The models themselves are essentially deterministic. The non-determinism you see is deliberately introduced at the sampling stage to make the output more varied. It’s controlled by a setting usually called “temperature”; setting it to 0 gives greedy decoding, which is largely reproducible (though floating-point and batching effects on GPUs can still cause occasional variation).
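A minimal sketch of what temperature does during sampling (an illustration of the general technique, not any particular provider’s implementation): the model’s raw scores (logits) are divided by the temperature before the softmax, and at temperature 0 sampling degenerates into a deterministic argmax.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Pick a token index from raw logits.

    temperature == 0 reduces to argmax (greedy decoding), which is
    deterministic; higher temperatures flatten the distribution and
    introduce randomness into which token gets picked.
    """
    if temperature == 0:
        # Greedy: always return the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    # Softmax with temperature scaling (shift by the max for numerical stability).
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample an index according to the resulting probabilities.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(logits) - 1
```

Calling `sample_token(logits, temperature=0)` repeatedly always returns the same index, while any positive temperature can return a different token on each call.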
This is correct; I suppose you’re talking about sampling from the final softmax layer?
When I said they are bad at determinism, I meant that reasoning over deterministic rules doesn’t produce deterministic output. For example, LLMs make logical deduction errors, calculation errors, and so on.