Hallucination
Hallucination refers to cases in which a model outputs information that is not grounded in real or factual data. This can occur in generative models, such as large language models, where the system produces responses that appear plausible but are incorrect or entirely fabricated. For example, when asked a factual question, a model may invent details or references that do not exist, which poses a challenge for applications that demand high accuracy.
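As a rough illustration of why invented references are a problem, the toy sketch below flags cited titles in a model's answer that cannot be matched against a trusted reference list. The answer string, the trusted_titles set, and the citation pattern are assumptions made for this example only, not part of any real model or library API.

```python
import re

# Toy sketch only: flag possibly hallucinated references by checking cited
# titles against a trusted list. Assumes citations appear as "Title" (YYYY).

def extract_citations(answer: str) -> list[str]:
    # Pull out quoted titles followed by a four-digit year, e.g. "Title" (2021).
    return re.findall(r'"([^"]+)"\s*\(\d{4}\)', answer)

def flag_unverified(answer: str, trusted_titles: set[str]) -> list[str]:
    # Return any cited title that does not appear in the trusted source list.
    return [t for t in extract_citations(answer) if t not in trusted_titles]

# Hypothetical model answer containing an invented reference.
answer = 'See "A Survey of Imaginary Methods" (2021) for details.'
trusted = {"Attention Is All You Need"}
print(flag_unverified(answer, trusted))  # ['A Survey of Imaginary Methods']
```

A check like this only catches references absent from a known list; it cannot confirm that a matched reference actually supports the claim, which is why hallucination remains hard to detect automatically.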