
Graded Quiz: Build RAG Apps with LlamaIndex | Build RAG Applications: Get Started (IBM RAG and Agentic AI Professional Certificate) Answers 2025

Question 1

Which step does LlamaIndex combine with prompt augmentation?

❌ User prompt acceptance
❌ Vector storage
❌ LLM response generation
✅ Document chunking

Explanation:
LlamaIndex tightly couples document chunking with prompt augmentation: documents are split into chunks (nodes) at indexing time, and the retrieved chunks are folded into the prompt during query processing.
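
For reference, a minimal sketch of that combined flow, assuming llama-index ≥ 0.10 and a configured default LLM/embedding model (e.g. an OpenAI API key); the sample text and query are made up:

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter

# Chunking happens at indexing time: the document is split into nodes.
doc = Document(text=(
    "LlamaIndex is a data framework for LLM applications. "  # hypothetical sample text
    "It handles loading, chunking, embedding, and retrieval."
))
index = VectorStoreIndex.from_documents(
    [doc],
    transformations=[SentenceSplitter(chunk_size=128, chunk_overlap=16)],
)

# query() retrieves the relevant chunks and augments the prompt internally
# before calling the LLM, so chunking and augmentation stay coupled.
print(index.as_query_engine().query("What does LlamaIndex handle?"))
```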


Question 2

Purpose of LangChainNodeParser?

❌ Load directories
✅ Wrap LangChain’s text splitters
❌ Embed prompts
❌ Store vectors

Explanation:
LangChainNodeParser allows LlamaIndex to use LangChain’s text-splitting tools.
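
A short sketch of the wrapper in use. Note the class is spelled LangchainNodeParser in recent llama-index releases, and the exact import paths vary by version; the sample text is made up:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from llama_index.core import Document
from llama_index.core.node_parser import LangchainNodeParser

# Wrap a LangChain splitter so LlamaIndex can use it as a node parser.
parser = LangchainNodeParser(
    RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=20)
)
nodes = parser.get_nodes_from_documents(
    [Document(text="Some long text to split ...")]  # hypothetical input
)
print(len(nodes))
```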


Question 3

Purpose of embedding the user’s prompt in LlamaIndex?

❌ Load into directory
❌ Split the prompt
❌ Store prompt
✅ Retrieve relevant chunks from the vector store

Explanation:
Embedding the prompt allows similarity search against the vector store to find relevant context.
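
A minimal retrieval sketch, assuming a default embedding model is configured; the sample text and query are made up:

```python
from llama_index.core import Document, VectorStoreIndex

index = VectorStoreIndex.from_documents(
    [Document(text="LlamaIndex embeds the user's prompt to search the vector store.")]
)

# The query string is embedded with the same model used at indexing time,
# then matched against stored chunk embeddings by similarity.
retriever = index.as_retriever(similarity_top_k=3)
for hit in retriever.retrieve("How is the user's prompt used?"):
    print(hit.score, hit.node.get_content()[:80])
```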


Question 4

Which LangChain splitter resembles LlamaIndex’s SentenceSplitter?

❌ SemanticChunker
❌ CharacterTextSplitter
❌ TokenTextSplitter
✅ RecursiveCharacterTextSplitter

Explanation:
Both splitters prefer natural boundaries (paragraphs, then sentences) and only fall back to coarser character-level splits when a chunk would exceed the size limit.
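
The two splitters side by side; note the units differ (SentenceSplitter's chunk_size counts tokens, RecursiveCharacterTextSplitter's counts characters), and the sample text is made up:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter

text = (  # hypothetical sample text
    "First paragraph about RAG.\n\n"
    "Second paragraph about chunking. It has two sentences."
)

# LlamaIndex: keeps sentences/paragraphs intact where the token budget allows.
li_nodes = SentenceSplitter(chunk_size=64, chunk_overlap=0).get_nodes_from_documents(
    [Document(text=text)]
)

# LangChain: recursively splits on "\n\n", "\n", " ", "" until chunks fit.
lc_chunks = RecursiveCharacterTextSplitter(chunk_size=64, chunk_overlap=0).split_text(text)

print([n.get_content() for n in li_nodes])
print(lc_chunks)
```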


Question 5

Purpose of LlamaIndex’s “prompt synthesizer”?

❌ Retrieve nodes
✅ Perform prompt augmentation internally & pass augmented prompt to LLM
❌ Combine augmentation + embedding
❌ Manually send prompt

Explanation:
The synthesizer builds the augmented prompt behind the scenes and sends it to the LLM, so the developer never assembles the final prompt manually.
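
In the LlamaIndex API this component surfaces as the response synthesizer; a sketch assuming llama-index ≥ 0.10 and a configured default LLM/embedding model, with made-up sample text:

```python
from llama_index.core import Document, VectorStoreIndex, get_response_synthesizer
from llama_index.core.query_engine import RetrieverQueryEngine

index = VectorStoreIndex.from_documents(
    [Document(text="Synthesizers build the final augmented prompt.")]
)

# The synthesizer stuffs retrieved context into a prompt template and calls the LLM.
synthesizer = get_response_synthesizer(response_mode="compact")
query_engine = RetrieverQueryEngine(
    retriever=index.as_retriever(),
    response_synthesizer=synthesizer,
)
print(query_engine.query("What do synthesizers do?"))
```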


Question 6

Key feature of VectorStoreIndex?

❌ No embedding support
❌ Only in-memory
✅ Integrates external vector DBs into a native class
❌ Requires manual metadata setup

Explanation:
VectorStoreIndex works with many vector DBs (Pinecone, FAISS, Chroma, etc.) under one unified interface.
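
A sketch using Chroma as the external backend (assumes the chromadb and llama-index-vector-stores-chroma packages are installed, plus a default embedding model); the collection name and text are made up:

```python
import chromadb
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Plug an external vector DB (Chroma here) into the same native VectorStoreIndex class.
client = chromadb.EphemeralClient()
collection = client.get_or_create_collection("quiz_demo")  # hypothetical name
storage_context = StorageContext.from_defaults(
    vector_store=ChromaVectorStore(chroma_collection=collection)
)
index = VectorStoreIndex.from_documents(
    [Document(text="VectorStoreIndex wraps many vector DB backends.")],
    storage_context=storage_context,
)
print(index.as_query_engine().query("What does VectorStoreIndex wrap?"))
```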


Question 7

Which framework suits Anna’s needs?

❌ LangChain (LlamaIndex incapable)
❌ LangChain (better native tools)
✅ LlamaIndex, because it provides superior native solutions for document loading & in-memory vector storage
❌ LlamaIndex (LangChain incapable)

Explanation:
LlamaIndex was built specifically to provide:

  • native document loaders
  • native embeddings
  • internal vector stores
  • robust RAG pipelines

with optional integration to external libraries.
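
A sketch of that native, in-memory path, where "./docs" is a placeholder directory and a default LLM/embedding model is assumed:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Native loader: picks a reader per file type (.txt, .pdf, .md, ...) automatically.
documents = SimpleDirectoryReader("./docs").load_data()  # hypothetical path

# With no storage_context supplied, chunks live in the default in-memory vector store.
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("Summarize these documents."))
```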


🧾 Summary Table

| Q No. | Correct Answer | Key Concept |
|-------|----------------|-------------|
| 1 | Document chunking | LlamaIndex RAG flow |
| 2 | Wraps LangChain’s text splitters | NodeParser purpose |
| 3 | Retrieve relevant chunks | Prompt embedding |
| 4 | RecursiveCharacterTextSplitter | Similar splitter |
| 5 | Internal prompt augmentation | Prompt synthesizer |
| 6 | External vector DB integration | VectorStoreIndex |
| 7 | LlamaIndex | Best native RAG utilities |