
The problem isn't whether they have more or fewer hallucinations. The problem is that they have them at all, and as long as they hallucinate, you have to deal with that. It doesn't really matter how you prompt; you can't prevent hallucinations from happening, and without manual checking, hallucinations will eventually slip under the radar, because the only difference between a real pattern and a hallucinated one is that one exists in the world and the other doesn't. This isn't something you can counter with more LLMs either, since the problem is intrinsic to LLMs.


Humans also hallucinate. We have an error rate. Your argument makes little sense in absolutist terms.


> Humans also hallucinate

"LLM hallucinations" and hallucinations are essentially different. Human hallucinations are related to perceptual experiences not memory errors like in the case of LLMs. Humans with certain neurological conditions hallucinate. Humans with healthy brains don't.

This habit of misapplying terms needs to stop. Humans are not backpropagation algorithms, nor are they whatever random concept you read about in a comp sci book.


The more appropriate term is "confabulation", and healthy humans do it all the time. I merely used the common, but technically incorrect, term for the phenomenon in LLMs. FYI, my PhD focused on human memory.



