What are AI Hallucinations?

Ayush Kudesia
If you've ever used a large language model such as Gemini, ChatGPT, or Claude, you may have noticed a small disclaimer—the output may be factually incorrect.

In simple terms, that inaccurate information is an AI hallucination.


AI hallucinations occur when an LLM perceives patterns or objects that don't exist and, as a result, produces nonsensical or inaccurate information.

As we rely more and more on these generative AI models to assist with our tasks, it is crucial to understand: 

  • What are AI hallucinations
  • What causes AI hallucinations
  • How grounding and other methods can minimize AI hallucinations

Read this article to find the answers to these questions. Let’s start!

What are AI Hallucinations?


AI hallucinations refer to the phenomenon where AI algorithms generate outputs that are not based on their training data or do not follow any identifiable pattern. 

The term "hallucination" is used metaphorically to describe these outputs, as they are not based on reality. It is similar to how humans sometimes see figures in the clouds or faces on the moon, even though they are not actually there. 

In the same way, AI hallucinations are misinterpretations of data that can lead to incorrect or nonsensical outputs.

AI hallucinations can take many different forms, including:

Incorrect predictions: The AI makes an inaccurate or unlikely prediction based on the data, e.g., forecasting rain when the weather is clear.

False positives: The AI falsely identifies something as a threat or issue when it is not, e.g., flagging a legitimate transaction as fraudulent.

False negatives: The AI fails to detect a real threat or issue, e.g., not recognizing a cancerous tumor in a medical scan.

Nonsensical outputs: The AI generates outputs that are completely nonsensical or unrelated to the input data.

Factual errors: The AI states something as fact that is objectively incorrect or contradicted by its training data.

Visual artifacts: For image generation, the AI creates visually unrealistic or inconsistent elements, e.g., objects with incorrect proportions.

Biased or offensive outputs: The AI output exhibits social biases or generates offensive/unethical content not aligned with its intended use.

The causes of AI hallucinations

When you give AI any prompt, it tries to generate an accurate response based on its pre-trained data.

But sometimes, especially if the question is complex or the data is limited, the AI might fill in the gaps in its training data with its best guess. While seemingly plausible, this guess could be completely wrong.

So why do these hallucinations happen? Here are some common causes:

Insufficient or biased training data

AI models are trained on massive text, code, or image datasets. However, no dataset can encompass everything. When you make a request outside its training data, the model might make an educated guess or generate fabricated information that sounds true but isn't.

If that data is limited, the AI's understanding of the topic will always be limited. Similarly, AI models trained on biased data may produce outputs reflecting those biases. 

Lack of context

AI models rely heavily on context to generate accurate responses. However, when the context is unclear or missing, AI models may misinterpret prompts or questions, leading to hallucinations.

Overfitting

Overfitting in machine learning happens when a model is excessively trained on a specific dataset, causing it to memorize the data instead of learning patterns. 

This results in poor performance on new, unseen data, defeating the purpose of machine learning, which is to generalize well to new data for accurate predictions and classifications.

Overfitting, therefore, leads an LLM to hallucinate: it generates responses based on memorized training data even when that data is irrelevant to the request.
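To see what memorization looks like in practice, here is a minimal sketch using only NumPy and made-up toy data (an illustration, not a real training run). A degree-7 polynomial fitted to eight noisy points reproduces the training set almost exactly but misses unseen points from the same underlying curve:

```python
# Minimal overfitting sketch with toy data (illustrative only, assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)

# Eight noisy training points sampled from y = sin(x)
x_train = np.linspace(0, 3, 8)
y_train = np.sin(x_train) + rng.normal(0, 0.1, size=x_train.size)

# Unseen test points from the same underlying curve
x_test = np.linspace(0.2, 2.8, 50)
y_test = np.sin(x_test)

# A degree-7 polynomial has enough capacity to pass through all 8 points
coeffs = np.polyfit(x_train, y_train, deg=7)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_mse:.6f}")  # near zero: the noisy data was memorized
print(f"test MSE:  {test_mse:.6f}")   # larger: the wiggly fit generalizes poorly
```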

Read more about overfitting.

Poorly written AI prompts

The quality of your AI prompts can also lead an AI to hallucinate. The way you ask a question or give AI instructions matters. 

The AI may struggle to understand the intended meaning or context if your prompts are vague or complex. 


Write clear and well-structured prompts. Break down complex requests into simple step-by-step instructions, and provide all the necessary context upfront. This will help the AI hallucinate less and generate more relevant, accurate responses.
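As a quick illustration (both prompts below are invented for this article), compare a vague request with a structured one that supplies context, numbered steps, and an instruction not to guess:

```python
# Illustrative prompts only; the transcript placeholder is hypothetical.
vague_prompt = "Tell me about the meeting."

structured_prompt = """You are summarizing a 30-minute product-team call about the Q3 roadmap.

1. List the three main decisions that were made.
2. List any action items and their owners.
3. If something is not mentioned in the transcript, reply "not discussed" instead of guessing.

Transcript:
{transcript}
"""

print(structured_prompt.format(transcript="..."))
```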

Learn more about writing AI prompts.

Now that we know why AI hallucinates, let’s look at how often some popular models actually do it.

AI hallucination rates

Hallucination rates indicate the frequency at which an AI model generates incorrect or misleading information.

Let’s look at the hallucination rate for some popular LLMs:

Model                     | Hallucination rate | Factual consistency rate
GPT-4                     | 3.0 %              | 97.0 %
GPT-4 Turbo               | 3.0 %              | 97.0 %
GPT-3.5 Turbo             | 3.5 %              | 96.5 %
Google Gemini Pro         | 4.8 %              | 95.2 %
Anthropic Claude 3 Sonnet | 6.0 %              | 94.0 %
Anthropic Claude 3 Opus   | 7.4 %              | 92.6 %
Anthropic Claude 2        | 8.5 %              | 91.5 %

Source: Hughes Hallucination Evaluation Model (HHEM) leaderboard

GPT-4 and GPT-4 Turbo have the lowest hallucination rate, at just 3%. However, even 3% can be a lot in high-stakes situations such as healthcare, finance, or legal contexts.
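To put that in perspective, here is a quick back-of-the-envelope calculation (the daily query volume is an assumption chosen purely for illustration):

```python
# Why a "small" hallucination rate still matters at scale.
hallucination_rate = 0.03   # GPT-4's rate from the HHEM leaderboard above
queries_per_day = 10_000    # assumed volume, purely illustrative

expected_incorrect = hallucination_rate * queries_per_day
print(f"Expected incorrect responses per day: {expected_incorrect:.0f}")  # ~300
```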

How grounding minimizes AI hallucinations

Just as you wouldn't expect a human analyst to produce a crucial report solely from memory or answer complex geopolitical questions without access to current data, generative AI systems also require a solid foundation of accurate information. 

The solution to AI hallucinations is "grounding," which involves providing the AI with a reliable source of information to generate accurate responses. 


Grounding equips the AI with the necessary context and background knowledge to generate factually correct responses.

Think of it as fact-checking for AI: a strategy for minimizing hallucinations and ensuring that LLMs provide accurate, reliable information.

Here's how it works:

  • You give the AI a prompt to generate a response
  • The AI searches trusted databases and information sources for relevant data
  • Based on the retrieved information, the AI crafts a response to your question
  • If the AI can't find enough information, it declines to answer

Declining to answer prevents the AI from making up facts and keeps you from receiving misleading or inaccurate information.
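Here is a minimal sketch of that loop in Python. The two "trusted" documents and the keyword retriever are toy stand-ins, and generate_answer is a stub where a real LLM call would go; it shows the shape of grounding rather than a production retrieval system.

```python
# Toy grounding sketch: retrieve from trusted sources, answer only with context.
TRUSTED_DOCS = [
    "The Eiffel Tower is 330 metres tall and located in Paris, France.",
    "Water boils at 100 degrees Celsius at sea-level atmospheric pressure.",
]

def retrieve(question: str, docs: list[str], min_overlap: int = 2) -> list[str]:
    """Return documents sharing at least `min_overlap` words with the question."""
    q_words = set(question.lower().split())
    return [d for d in docs if len(q_words & set(d.lower().split())) >= min_overlap]

def generate_answer(prompt: str) -> str:
    # Stand-in for a real LLM call; included only to keep the sketch runnable.
    return f"[model answer based on prompt]\n{prompt}"

def grounded_answer(question: str) -> str:
    context = retrieve(question, TRUSTED_DOCS)
    if not context:
        # Declining is what keeps the model from inventing facts.
        return "I don't have enough trusted information to answer that."
    prompt = (
        "Answer using ONLY the context below. If the context is insufficient, say so.\n\n"
        "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    )
    return generate_answer(prompt)

print(grounded_answer("How tall is the Eiffel Tower?"))
print(grounded_answer("Who won the 1962 World Cup?"))  # declines: no relevant source
```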

While grounding helps reduce hallucinations, it can't eliminate them entirely. The accuracy of the response now depends on the quality of the trusted information sources.

That’s why researchers are constantly developing new techniques to reduce AI hallucinations further, alongside grounding.


Additional techniques for reducing AI hallucinations

1. Reinforcement Learning with Human Feedback (RLHF)

As the name suggests, RLHF is a technique that uses feedback from humans to make an AI’s responses more accurate and more useful to people.


Human experts grade the AI's outputs, giving high grades to accurate and helpful responses and penalizing mistakes. Over time, the AI learns to align its responses with what humans find useful. It's like teaching a child: you encourage good behavior and correct bad behavior until the child learns.
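As a highly simplified sketch of just the human-feedback half of RLHF (the reinforcement-learning fine-tuning step is omitted, and the example prompt and answers are invented), each human judgement can be recorded as a preferred/rejected pair that later trains a reward model:

```python
# Toy sketch of collecting human preference data for RLHF (illustrative only).
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # the response the human grader preferred
    rejected: str  # the response the grader marked as worse

def record_feedback(prompt: str, candidate_a: str, candidate_b: str,
                    human_prefers_a: bool) -> PreferencePair:
    """Turn one human judgement into a reward-model training example."""
    if human_prefers_a:
        return PreferencePair(prompt, chosen=candidate_a, rejected=candidate_b)
    return PreferencePair(prompt, chosen=candidate_b, rejected=candidate_a)

# The grader prefers the answer that admits uncertainty over a fabricated date.
pair = record_feedback(
    prompt="When was the company founded?",
    candidate_a="It was founded on 3 March 1987.",  # made-up specific claim
    candidate_b="I'm not certain; I'd need a source to confirm the founding date.",
    human_prefers_a=False,
)
print(pair)
```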

2. Instruction tuning

Instruction tuning is the technique of training an AI to follow specific instructions. Think of it as giving your AI a clear set of rules to follow before generating a response. These instructions could include details about the topic, desired tone, or even ethical considerations. 

Providing clear directives helps the AI stay focused and on track, minimizing the chances of it going off on tangents or providing irrelevant information.
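For a concrete (and assumed) picture of what instruction-tuning data can look like, here is a small sketch modeled on common open instruction datasets rather than any specific vendor's pipeline. Each record pairs an explicit instruction with the desired response, and the model is fine-tuned on many such records:

```python
# Illustrative instruction-tuning records written to a JSONL file.
import json

instruction_examples = [
    {
        "instruction": "Summarize the text in one sentence, in a neutral tone.",
        "input": "The quarterly review ran long because three teams presented overlapping roadmaps.",
        "output": "The quarterly review ran over time due to overlapping roadmap presentations from three teams.",
    },
    {
        "instruction": "Answer only if the context contains the answer; otherwise reply 'I don't know'.",
        "input": "Context: The office is closed on Fridays. Question: Who is the CEO?",
        "output": "I don't know.",
    },
]

with open("instruction_tuning_data.jsonl", "w") as f:
    for record in instruction_examples:
        f.write(json.dumps(record) + "\n")
```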

Conclusion

Grounding is a solid way to keep AI hallucinations in check! 

Other techniques like RLHF and instruction tuning are also helping to make AI more reliable.

While GPT-4's 3% hallucination rate is great, it's not perfect. This is why it’s important to cross-verify AI outputs with human oversight.

Now, the final question is: Will we ever make the journey from 3% hallucinations to zero? Maybe one day we'll have an AI that can write blog posts without needing a human to check the facts—although, let's be honest, where's the fun in that?
