Which is even worse, right? Acknowledging that there's a problem, but missing the actual problem and doubling down on it instead?
LLMs are not a sampling technique; they're non-deterministic autocomplete. So the answer that comes back isn't within some understood range: ask the same question repeatedly and the response can be 5%, 50%, or 100% wrong each time.
(And yes, you can do lots of things to mitigate all these issues, but again, wtf are we even doing at that point? These are solved problems! We have well-defined, efficient methods and supercomputer-class laptops, and instead we're blowing all that power on petascale ELIZA.)
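For contrast, here's a minimal sketch (plain Python, nothing to do with any particular LLM API) of what an "understood range" means for an actual sampling technique: a Monte Carlo estimate comes with an error bound you can compute and shrink by sampling more, which is exactly what re-asking an LLM doesn't give you.

```python
import math
import random

def monte_carlo_pi(n: int, seed: int = 0) -> tuple[float, float]:
    """Estimate pi by sampling points in the unit square.

    Returns (estimate, 95% margin of error). The error bound is a
    property of the method, known before you ever look at the answer.
    """
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    p = hits / n                      # fraction landing inside the quarter circle
    estimate = 4.0 * p
    # Binomial standard error, scaled by 4 and by 1.96 for ~95% coverage.
    margin = 4.0 * 1.96 * math.sqrt(p * (1.0 - p) / n)
    return estimate, margin

if __name__ == "__main__":
    for n in (1_000, 100_000, 10_000_000):
        est, moe = monte_carlo_pi(n)
        print(f"n={n:>10}: pi ~= {est:.4f} +/- {moe:.4f}")
    # Re-running an LLM prompt has no analogue of `moe`: the next answer
    # can be off by 5%, 50%, or 100%, and nothing tells you which.
```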