Prompt Engineering Course
Avoiding Hallucinations
Hallucination occurs when a language model produces information that appears confident but is factually incorrect, fabricated, or unsupported by the input context.
This is one of the most critical challenges in prompt engineering because hallucinations can silently break applications, mislead users, and cause real-world damage.
Avoiding hallucinations is not about making the model smarter. It is about designing prompts that restrict what the model is allowed to say.
Why Hallucinations Happen
Language models are trained to generate the most likely continuation of text.
When information is missing, unclear, or underspecified, the model fills gaps using learned patterns rather than verified facts.
Hallucinations commonly occur when:
- The prompt asks for unknown or unavailable facts
- The task scope is too broad
- No constraints are provided
- The model is encouraged to be creative
A Simple Hallucination Example
Consider this prompt:
Who was the CEO of Company X in 1995?
If the model has no reliable knowledge of Company X, it may still generate a confident-sounding answer.
This is not because it knows the answer, but because it is optimizing for plausibility rather than truth.
The Core Principle to Prevent Hallucinations
Models should be instructed to:
- Use only provided information
- Admit uncertainty
- Avoid guessing
Prompt engineering enforces these rules explicitly.
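As a rough illustration, the Python sketch below shows one way to encode these three rules as a reusable instruction block that is prepended to any task. The names GUARDRAIL_RULES and build_guarded_prompt are illustrative, not part of any library.

# Sketch: encode the three anti-hallucination rules as a reusable
# instruction block prepended to any task description.

GUARDRAIL_RULES = (
    "Follow these rules strictly:\n"
    "1. Use only the information provided in the prompt.\n"
    "2. If you are unsure, say so explicitly.\n"
    "3. Never guess or invent facts.\n"
)

def build_guarded_prompt(task: str) -> str:
    """Prepend the guardrail rules to any task description."""
    return f"{GUARDRAIL_RULES}\nTask:\n{task}"

print(build_guarded_prompt("Summarize the attached meeting notes."))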
Technique 1: Constrain the Knowledge Source
Always tell the model which sources it is allowed to draw information from.
Answer the question using only the information provided below.
If the answer is not present, say "Information not available."
Context:
...
This removes the model’s incentive to invent facts.
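As a sketch, here is how such a context-constrained prompt might be assembled in Python. The helper name build_contextual_prompt and the one-line sample context are illustrative assumptions, not part of any library.

# Sketch: build a prompt that restricts the model to the supplied context.

def build_contextual_prompt(context: str, question: str) -> str:
    return (
        "Answer the question using only the information provided below.\n"
        'If the answer is not present, say "Information not available."\n\n'
        f"Context:\n{context}\n\n"
        f"Question:\n{question}"
    )

context = "Company X was founded in 2001 and sells industrial sensors."
question = "Who was the CEO of Company X in 1995?"
print(build_contextual_prompt(context, question))

With this framing, the only grounded answer is "Information not available", because the context says nothing about the company's leadership in 1995.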
Technique 2: Allow Explicit Uncertainty
Models often hallucinate because they are not given permission to say “I don’t know.”
You must explicitly allow it.
If you are unsure or the answer cannot be determined, respond with:
"I don’t know based on the provided information."
This single instruction significantly reduces fabricated responses.
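A minimal Python sketch of appending this escape hatch to an existing prompt; allow_uncertainty is an illustrative helper name.

# Sketch: append an explicit "I don't know" escape hatch to any prompt.

UNCERTAINTY_CLAUSE = (
    "If you are unsure or the answer cannot be determined, respond with:\n"
    '"I don\'t know based on the provided information."'
)

def allow_uncertainty(prompt: str) -> str:
    """Append the escape-hatch instruction to an existing prompt."""
    return f"{prompt}\n\n{UNCERTAINTY_CLAUSE}"

print(allow_uncertainty("Who was the CEO of Company X in 1995?"))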
Technique 3: Narrow the Task Scope
Broad prompts increase hallucination risk.
Compare these two instructions:
Explain everything about Kubernetes.
Versus:
Explain Kubernetes pod scheduling at a high level.
Limit the explanation to 200 words.
Narrow scope reduces the need for speculation.
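As an illustration, a small Python helper can turn a broad topic into a scoped, length-limited request. The function scope_prompt and its parameters are assumptions made for this example.

# Sketch: tighten a broad request into a scoped, length-limited one.

def scope_prompt(topic: str, aspect: str, max_words: int = 200) -> str:
    """Turn a broad topic into a scoped, length-limited request."""
    return (
        f"Explain {aspect} in {topic} at a high level.\n"
        f"Limit the explanation to {max_words} words.\n"
        "Do not cover unrelated features."
    )

print(scope_prompt("Kubernetes", "pod scheduling"))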
Technique 4: Ask for Evidence or Citations
Requiring justification encourages the model to ground its answers in the material it was given.
For each claim, include a brief explanation of where the information comes from.
If no source is available, state that explicitly.
This discourages unsupported assertions.
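One possible Python sketch of this pattern; build_cited_prompt and the one-line sample context are illustrative, not drawn from any specific library or dataset.

# Sketch: ask for a brief source note alongside each claim.

def build_cited_prompt(context: str, question: str) -> str:
    """Request a source note for every claim in the answer."""
    return (
        "Answer the question using the context below.\n"
        "For each claim, add a brief note on which part of the context supports it.\n"
        "If no supporting passage exists, state that explicitly.\n\n"
        f"Context:\n{context}\n\n"
        f"Question:\n{question}"
    )

print(build_cited_prompt("Pods are scheduled onto nodes by the kube-scheduler.",
                         "How are pods assigned to nodes?"))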
Technique 5: Use Step-by-Step Reasoning Carefully
Step-by-step reasoning improves accuracy but can also introduce hallucinations if the initial assumptions are wrong.
Always combine reasoning with constraints.
Reason step by step using only the provided data.
Do not introduce external facts.
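A minimal Python sketch that combines step-by-step reasoning with a grounding constraint; build_reasoning_prompt and the sample data are illustrative.

# Sketch: pair step-by-step reasoning with a grounding constraint.

def build_reasoning_prompt(data: str, question: str) -> str:
    """Ask for stepwise reasoning restricted to the provided data."""
    return (
        "Reason step by step using only the provided data.\n"
        "Do not introduce external facts.\n"
        "If a step needs information that is not in the data, stop and say so.\n\n"
        f"Data:\n{data}\n\n"
        f"Question:\n{question}"
    )

print(build_reasoning_prompt("Q1 revenue: 40k. Q2 revenue: 55k.",
                             "By how much did revenue grow from Q1 to Q2?"))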
Hallucinations in Production Systems
In real applications, hallucinations can:
- Break customer trust
- Cause legal or compliance issues
- Generate incorrect automated actions
This is why production prompts always include guardrails.
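As one possible illustration of combined guardrails, the sketch below assembles a production-style template that applies the techniques above: a constrained source, an explicit uncertainty response, a scope limit, and a source note. The template wording and build_production_prompt are assumptions, not a standard.

# Sketch: a production-style prompt template combining several guardrails.

PRODUCTION_TEMPLATE = """You are a support assistant.

Rules:
- Answer using only the documentation excerpt below.
- If the excerpt does not contain the answer, reply exactly:
  "I don't know based on the provided information."
- Keep the answer under {max_words} words.
- After the answer, name the section of the excerpt you relied on.

Documentation excerpt:
{context}

Customer question:
{question}
"""

def build_production_prompt(context: str, question: str, max_words: int = 150) -> str:
    """Fill the guardrailed template with context and a customer question."""
    return PRODUCTION_TEMPLATE.format(
        context=context, question=question, max_words=max_words
    )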
How Learners Should Practice Avoiding Hallucinations
When testing prompts, work through this loop (sketched in code after the list):
- Intentionally ask unanswerable questions
- Observe whether the model guesses
- Add constraints and retry
- Compare behavior changes
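The Python sketch below shows one way to run this loop. call_model is a placeholder you would replace with a call to whichever model client you use; the guarded prompt reuses the constraints from the techniques above.

# Sketch of the practice loop: send the same unanswerable question with and
# without guardrails and compare the behavior.

def call_model(prompt: str) -> str:
    # Placeholder: replace with a real call to your model client.
    raise NotImplementedError("Replace call_model with a real model call.")

UNANSWERABLE = "Who was the CEO of Company X in 1995?"

BARE_PROMPT = UNANSWERABLE

GUARDED_PROMPT = (
    "Answer using only the information provided below.\n"
    'If the answer is not present, say "Information not available."\n\n'
    "Context:\n(no information about Company X leadership is provided)\n\n"
    f"Question:\n{UNANSWERABLE}"
)

for label, prompt in [("bare", BARE_PROMPT), ("guarded", GUARDED_PROMPT)]:
    try:
        print(label, "->", call_model(prompt))
    except NotImplementedError as exc:
        print(label, "->", exc)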
Learning to prevent hallucinations is a core professional skill.
Practice
What is the most effective way to reduce hallucinations?
Why should models be allowed to say “I don’t know”?
Broad prompts increase the risk of what?
Quick Quiz
Hallucination refers to:
What reduces hallucinations the most?
What should a model do when information is missing?
Recap: Hallucinations are reduced by constraining the task scope and knowledge sources and by allowing the model to express uncertainty.
Next up: Ethical prompting and responsible use of large language models.