Graded Quiz: Introduction to RAG — Build RAG Applications: Get Started (IBM RAG and Agentic AI Professional Certificate) Answers 2025
Question 1
How does RAG ensure more accurate responses for specialized queries?
❌ Retraining the model
❌ Increasing token count
❌ Embedding all queries in parameters
✅ Using external knowledge sources to augment the prompt
Explanation:
RAG retrieves relevant domain knowledge and injects it into the prompt, improving accuracy.
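The augmentation step can be sketched in a few lines. This is a minimal illustration, not a production retriever: the `retrieve` function, its keyword-overlap scoring, and the sample knowledge base are all hypothetical stand-ins for a real vector search.

```python
# Minimal sketch of RAG prompt augmentation (hypothetical retriever).
def retrieve(query, knowledge_base, top_k=1):
    """Naive keyword-overlap retrieval over a list of text chunks.
    Real systems rank by vector similarity instead."""
    def score(chunk):
        return len(set(query.lower().split()) & set(chunk.lower().split()))
    return sorted(knowledge_base, key=score, reverse=True)[:top_k]

def build_prompt(query, knowledge_base):
    """Inject the retrieved chunks into the prompt before calling the LLM."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

kb = [
    "Saturn has the most known moons of any planet.",
    "Jupiter is the largest planet in the Solar System.",
]
prompt = build_prompt("Which planet has the most moons?", kb)
```

The LLM then answers from the injected context rather than from its frozen training data, which is exactly why the correct option augments the prompt instead of retraining the model.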
Question 2
How should Marina ensure an accurate answer about the planet with the most moons?
✅ Use RAG to retrieve the latest information from a reliable source
❌ Embed all moon facts
❌ Rely on pre-trained parameters
❌ Use multiple models for consensus
Explanation:
RAG ensures the answer reflects the most up-to-date scientific data.
Question 3
How does retrieval improve LLM accuracy?
❌ Increases model training size
❌ Eliminates external input
❌ Modifies model parameters
✅ Augments the prompt with relevant, up-to-date information
Explanation:
Retrieval pulls relevant chunks from a knowledge base, boosting correctness.
Question 4
What should Alex focus on for accurate company policy answers?
❌ Rely only on pre-trained model
❌ Increase token count
✅ Keep the knowledge base up-to-date and well-organized
❌ Embed all possible queries
Explanation:
A clean, current knowledge base ensures accurate retrieval for RAG.
Question 5
How does RAG help with a chatbot for complex company mobile policies?
❌ Manual rule-based system
❌ Retrain model
❌ Pre-trained model magically knows
✅ Embed policy documents and retrieve relevant vectors to generate accurate responses
Explanation:
RAG embeds policy docs into vectors and retrieves the most relevant chunks during queries.
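The embed-then-retrieve flow can be sketched with a toy example. The bag-of-words "embedding" and cosine scoring below are illustrative placeholders: a real pipeline would use a neural embedding model and a vector database, and the sample policy texts are invented.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': word -> count.
    A real system would use a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

policies = [
    "Employees may use company mobile phones for work calls only.",
    "Vacation requests must be submitted two weeks in advance.",
]
# Embed once at ingestion time; at query time, only the query is embedded.
index = [(embed(p), p) for p in policies]

query = "Can I use my company mobile phone for personal calls?"
best = max(index, key=lambda item: cosine(embed(query), item[0]))[1]
```

The most similar policy chunk (`best`) is what gets injected into the chatbot's prompt, so answers stay grounded in the actual documents.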
Question 6
What is the role of neural networks in embedding?
❌ Identify chunks
❌ Convert text to tokens
✅ Transform token IDs into numeric vectors
❌ Encode tokens into IDs
Explanation:
Embedding models convert token IDs into dense numerical vectors.
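Conceptually, the embedding step is a lookup into a learned table of vectors, one row per token ID. The sketch below uses random values in place of learned weights, and the token IDs are hypothetical; it only illustrates the ID-to-vector mapping.

```python
import random

random.seed(0)
vocab_size, dim = 10, 4

# An embedding layer is a lookup table: one dense vector per token ID.
# In a trained model these rows are learned; here they are random.
embedding_matrix = [
    [random.uniform(-1, 1) for _ in range(dim)]
    for _ in range(vocab_size)
]

token_ids = [3, 7, 1]  # output of tokenization (hypothetical IDs)
vectors = [embedding_matrix[i] for i in token_ids]  # IDs -> dense vectors
```

Each input ID selects one row, so a sequence of token IDs becomes a sequence of dense numeric vectors — the form the model actually computes with.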
Question 7
What does tokenization accomplish?
❌ Splits docs into chunks + vectors
❌ Rearranges tokens
❌ Splits vectors into tokens
✅ Splits text into tokens and assigns numerical IDs
Explanation:
Tokenization maps text → tokens → token IDs, which embeddings convert into vectors.
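The text → tokens → IDs mapping can be shown with a whitespace tokenizer. Real tokenizers such as BPE split text into subwords rather than whole words, but the principle is the same; the vocabulary-building scheme below is a simplification for illustration.

```python
def tokenize(text, vocab):
    """Whitespace tokenizer that assigns each new token a numeric ID.
    Real tokenizers (e.g. BPE) split into subwords, but the mapping
    text -> tokens -> IDs is the same idea."""
    tokens = text.lower().split()
    ids = []
    for tok in tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)  # next unused ID
        ids.append(vocab[tok])
    return tokens, ids

vocab = {}
tokens, ids = tokenize("RAG augments the prompt", vocab)
# tokens: ['rag', 'augments', 'the', 'prompt'], ids: [0, 1, 2, 3]
```

The resulting IDs are what the embedding layer (Question 6) converts into dense vectors.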
🧾 Summary Table
| Q No. | Correct Answer | Key Concept |
|---|---|---|
| 1 | Use external knowledge sources | RAG purpose |
| 2 | Use RAG for latest info | Up-to-date answers |
| 3 | Retrieval augments prompt | Accuracy boost |
| 4 | Maintain knowledge base | RAG reliability |
| 5 | Embed + retrieve policy data | Domain-specific RAG |
| 6 | Convert token IDs → vectors | Embedding step |
| 7 | Split text into token IDs | Tokenization |