Digital Bites
Whole School AI


Why AI Sometimes Makes Things Up

By Mrs Hudson-Findley (Director of Digital Learning, Enterprise and Sustainability)

One of the strangest things about modern AI is how confidently it can be wrong. Ask a chatbot for a fact and it may give you a fluent, well-written answer that sounds convincing but is quietly inaccurate. This behaviour is often called a “hallucination.”

The reason is simple. Tools like ChatGPT, Copilot and Gemini do not look things up in the way people do. They predict what a sensible answer should look like based on patterns in language. When reliable information is missing or unclear, they may fill in the gaps with something that sounds right rather than something that is true.
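To see the prediction idea in miniature, here is a purely illustrative Python sketch: a tiny bigram counter that always continues a sentence with the most common next word. It is nothing like a real model's scale or sophistication (the names `corpus`, `following` and `predict` are all invented for this example), but it shows how a system built only on patterns can produce fluent text with no notion of truth.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus. Real chatbots learn from vastly more text,
# but the principle of "predict the likely next word" is similar.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is paris ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(prompt, steps=3):
    """Always continue with the most common next word: fluent,
    but with no check on whether the result is actually true."""
    words = prompt.split()
    for _ in range(steps):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(predict("the capital of spain"))
# -> "the capital of spain is paris ."
```

Because "is paris" appears more often than "is madrid" in the corpus, the predictor confidently completes the sentence about Spain with the wrong capital: a hallucination in miniature.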

This is why the way we prompt AI matters. Instead of asking,
“Explain climate change,”
try something more robust:
“Explain climate change, list the key claims you are making, and show me the sources or assumptions behind each one.”

This approach encourages the AI to break its answer into explicit claims and to surface where its information comes from, making it easier to spot weaknesses or errors.
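For readers who script their own chatbot queries, the same idea can be applied automatically. Here is a small hypothetical helper (the function name and exact wording are just one possible choice, not an official recipe) that wraps any question in the more robust phrasing above:

```python
def robust_prompt(question):
    """Wrap a question so the AI is asked to list its claims
    and the sources or assumptions behind each one."""
    return (f"{question.rstrip('.?')}. "
            "List the key claims you are making, and show me the "
            "sources or assumptions behind each one.")

print(robust_prompt("Explain climate change"))
```

The output is the fuller prompt from the example above, ready to paste into any chatbot.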

AI can be an extraordinary assistant.

But digital fluency means knowing when to trust it, when to challenge it, and how to verify it for yourself. That reflects our approach to AI in education: understanding how these tools work and using them with judgement and care.






