OpenAI explains why "hallucinations" persist in AI: models produce plausible but false answers because training and accuracy-focused evaluations reward confident guessing over admitting uncertainty. GPT‑5 makes fewer such errors, and reforming benchmarks to credit expressions of uncertainty could lower hallucination rates further.
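The incentive problem can be illustrated with a small sketch (this is a hypothetical scoring model for illustration, not OpenAI's actual benchmark math): under accuracy-only grading, a wrong answer costs the same as an abstention, so guessing is always the rational strategy; once wrong answers carry a penalty, abstaining becomes better whenever the model's confidence is low.

```python
# Hypothetical expected-score comparison: guessing vs. abstaining.
# Assumption (not from the source): +1 for a correct answer,
# -wrong_penalty for an incorrect one, 0 for "I don't know".

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score for a model that guesses with probability
    p_correct of being right."""
    return p_correct * 1.0 + (1.0 - p_correct) * (-wrong_penalty)

ABSTAIN_SCORE = 0.0  # abstaining earns nothing under either scheme

p = 0.3  # the model is only 30% confident in its best guess

# Accuracy-only grading (wrong answers cost nothing): guessing beats abstaining
# even at low confidence, so benchmarks reward hallucination.
print(expected_score(p, wrong_penalty=0.0))   # 0.3 > 0.0

# Grading that penalizes confident errors: abstaining now wins at p = 0.3.
print(expected_score(p, wrong_penalty=1.0))   # -0.4 < 0.0
```

Under the penalized scheme, guessing only pays off when confidence exceeds the break-even point (here, 50%), which is the kind of benchmark reform the article describes.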