Core Concepts
Hallucinations: When AI models confidently generate false or fabricated information. This happens because LLMs predict the next word based on patterns in their training data—they don't access a truth database or verify facts.
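To make the "prediction, not verification" point concrete, here is a deliberately simplified sketch (the words and probabilities below are invented for illustration): the model scores possible next words and emits the highest-scoring one, with no step that checks the resulting claim against reality.

```python
# Toy illustration with made-up probabilities: the model simply picks the
# most likely next word given the text so far. Nothing in this loop checks
# whether the completed sentence is actually true.
next_word_probs = {
    "2021,": 0.34,     # sounds plausible, so it scores highest
    "2019,": 0.27,
    "unknown.": 0.02,  # admitting uncertainty is an unlikely continuation
}

prompt = "The landmark study on this topic was published in"
best_word = max(next_word_probs, key=next_word_probs.get)
print(prompt, best_word)  # chosen by statistical pattern, not by fact-checking
```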
RAG (Retrieval-Augmented Generation): A two-step process that reduces hallucinations by combining search with generation. First, the system retrieves relevant information from external sources. Second, it provides this retrieved information to the LLM as context when generating responses.
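For readers who want to see the two steps spelled out, here is a minimal sketch in Python. Everything in it is illustrative: the tiny document list and the keyword-overlap retriever are stand-ins for a real search index or vector database, and the actual LLM call is omitted.

```python
# Minimal RAG sketch. The three "documents" and the keyword-overlap retriever
# are stand-ins; real systems use web search or vector databases, and the
# final LLM call is left out here.

documents = [
    "Perplexity answers questions by searching the web and citing its sources.",
    "NotebookLM grounds its answers in the documents you upload.",
    "Retrieval-augmented generation retrieves relevant text before generating.",
]

def retrieve(query, docs, k=2):
    """Step 1: rank documents by how many words they share with the query."""
    query_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(query_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, context):
    """Step 2: hand the retrieved passages to the LLM as grounding context."""
    sources = "\n".join(f"- {passage}" for passage in context)
    return f"Answer using only these sources:\n{sources}\n\nQuestion: {query}"

query = "How does retrieval-augmented generation reduce hallucinations?"
print(build_prompt(query, retrieve(query, documents)))
```

The important point is the order of operations: retrieval happens first, and generation is then constrained to retrieved, citable text rather than training-data patterns alone.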
Key Difference: ChatGPT relies primarily on training patterns from its knowledge cutoff date, making it prone to hallucinations about recent events or obscure topics. Perplexity searches the web first and provides cited answers, reducing hallucinations. NotebookLM only uses your uploaded documents, grounding responses in your sources.
Exercise: Compare AI with and without RAG
Step 1: Ask ChatGPT about a recent, obscure study
Test ChatGPT's tendency to hallucinate by asking about something specific and recent:
"Tell me about the 2023 study on [insert your research area] published in [insert specific journal]. What were the main findings?"
Observe: ChatGPT may confidently provide details about a study that doesn't exist, or conflate multiple studies.
Step 2: Ask the same question in Perplexity
Now try the identical query in Perplexity, which uses RAG by searching the web first.
Observe: Perplexity searches current sources and provides citations. If it can't find the study, it will say so.
Step 3: Compare the outputs
For each response, consider:
- Does it provide specific citations or sources?
- Can you verify the information through the provided sources?
- How confident does each response sound?
- Which would you trust for academic work?
Step 4 (Advanced): Test NotebookLM with your own documents
Upload 3-5 research papers or course materials to NotebookLM, then ask questions about them.
Observe: responses are grounded only in your uploaded sources, with citations back to the relevant passages rather than outside knowledge.
Understanding the Difference
Here is how RAG changes AI behavior:
Without RAG: ChatGPT (Standard Mode)
- Pros: Fast, creative, good for brainstorming
- Cons: May hallucinate facts, no source verification
- Best for: Drafting, ideation, general knowledge
With RAG: Perplexity / NotebookLM
- Pros: Cited answers, verifiable sources, current info
- Cons: May miss nuance, limited to available sources
- Best for: Research, fact-checking, current events
Reflection Questions
- When would you use a tool without RAG versus one with RAG?
- How does this impact your evaluation of student work that may use AI?
- What strategies can you teach students to verify AI-generated information?
- How might NotebookLM change the way you approach literature reviews?
Key Takeaways
- Hallucinations occur because LLMs predict patterns, not truth.
- RAG reduces hallucinations by retrieving and citing sources before generating responses.
- ChatGPT is best for creative/drafting tasks; Perplexity/NotebookLM are best for research and fact-checking.
- Always verify AI-generated facts, especially for academic or research purposes.