Good point being made here about the idea that large language models like ChatGPT "hallucinate." The program doesn't hallucinate; it gets things wrong, and therefore we need to be skeptical about its answers. The problem is that skepticism is work that many do not want to do.