OpenAI admits GPT-5 hallucinates: ‘Even advanced AI models can produce confidently wrong answers.’ Here's why

OpenAI has explained why “hallucinations” persist in AI: models produce plausible but false answers because training and accuracy-focused evaluations reward guessing over admitting uncertainty. GPT‑5 reduces these errors, and reforming benchmarks to credit expressed uncertainty could lower hallucination rates further.
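The guessing incentive described above can be sketched as a toy expected-score calculation. This is an illustrative example, not OpenAI's actual benchmark code; the probability and scoring values below are assumptions chosen to show the effect.

```python
# Illustrative sketch (assumed values): why accuracy-only scoring
# rewards guessing over admitting uncertainty.

def expected_score(p_correct, guess, reward_right, penalty_wrong, reward_abstain):
    """Expected score on one question the model is unsure about.

    p_correct: chance the model's guess is right (assumed known here)
    guess: True = answer anyway, False = say "I don't know"
    """
    if guess:
        return p_correct * reward_right + (1 - p_correct) * penalty_wrong
    return reward_abstain

p = 0.3  # hypothetical: model is only 30% confident in its answer

# Accuracy-only benchmark: right = 1, wrong = 0, abstaining = 0.
guess_acc = expected_score(p, True, reward_right=1, penalty_wrong=0, reward_abstain=0)
abstain_acc = expected_score(p, False, reward_right=1, penalty_wrong=0, reward_abstain=0)
# Guessing scores 0.3 in expectation; abstaining scores 0,
# so the benchmark pushes the model to guess confidently.

# Uncertainty-aware scoring: wrong answers are penalized (-1).
guess_pen = expected_score(p, True, reward_right=1, penalty_wrong=-1, reward_abstain=0)
abstain_pen = expected_score(p, False, reward_right=1, penalty_wrong=-1, reward_abstain=0)
# Now guessing scores about -0.4 in expectation, while abstaining
# scores 0, so admitting uncertainty becomes the better strategy.
```

Under accuracy-only scoring, guessing weakly dominates abstaining for any confidence above zero, which is the incentive the proposed benchmark reforms aim to remove.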