Large Language Models: Only Hallucinations

      LLM hallucinations are mentioned from time to time, and I run into them in my own usage of LLMs. But in a sense, all LLMs do is hallucinate. They can’t help it; it’s how they are built.

      To hallucinate is, by definition, to ‘see or hear things that do not exist outside the mind’. LLMs have no senses as we would understand them: no eyes to see, no ears to hear. All they have is a statistical analysis of words. Thus anything they ‘think’ is a hallucination.

      Here is an example/thought experiment:
I ask you to help me figure out if there is enough room on my bookshelf for a new book.
I tell you how many books are currently on the shelf, its dimensions, the size of the new book, etc.
You would most likely be picturing the bookshelf as you try to answer the question.

As the bookshelf is not present, does this mean you are hallucinating?
In answering that question, does it make a difference if I made the bookshelf up? (i.e., I was lying)



      Now this doesn’t mean hallucinations can’t be useful. They can be. Using the bookshelf example above, if you’ve got enough good information, your imaginary bookshelf can be used to answer useful questions. Or not, depending on the details.
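The bookshelf reasoning is really just arithmetic on the stated facts; whether the answer is useful depends entirely on whether those facts were good. A minimal sketch (all dimensions here are made up for illustration, not from the example above) might look like:

```python
# A sketch of the bookshelf thought experiment: given stated facts about
# an unseen (possibly imaginary) shelf, decide whether a new book fits.
# The shelf never needs to exist; only the numbers do.

def book_fits(shelf_width_cm: float, used_width_cm: float,
              new_book_width_cm: float) -> bool:
    """Return True if the remaining shelf width can hold the new book."""
    remaining = shelf_width_cm - used_width_cm
    return new_book_width_cm <= remaining

# Hypothetical numbers: a 90 cm shelf with 80 cm already occupied.
print(book_fits(90.0, 80.0, 3.0))   # a 3 cm book fits in the 10 cm gap
print(book_fits(90.0, 80.0, 12.0))  # a 12 cm book does not
```

If the inputs describe a real shelf accurately, the imagined answer matches reality; if they were made up, the same computation runs just as happily and produces an equally confident answer.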

      Which brings us back to the LLMs. All they know is what we tell them, both in their original training and in the prompts we give them. They have no way to independently verify their input. Thus all responses from them are hallucinations. It’s just that, sometimes, the hallucinations roughly match reality, or are at least close enough to be useful.
