Contextual RAG View on GitHub | Improves retrieval by combating the loss of context that occurs when documents are split into isolated chunks. This technique uses an LLM to generate a succinct, chunk-specific context, then prepends that context to the chunk before embedding, leading to more accurate retrieval (see the contextual-indexing sketch below). |
Matryoshka Embeddings View on GitHub | Demonstrates a RAG pipeline using Matryoshka Embeddings with LanceDB and LlamaIndex. Because each embedding nests usable lower-dimensional prefixes, vectors can be truncated for cheaper storage and fast coarse search, then rescored at full dimension (see the truncation sketch below). |
HyDE (Hypothetical Document Embeddings) View on GitHub | An advanced RAG technique that uses an LLM to generate a “hypothetical” document in response to a query. That hypothetical document, rather than the raw query, is then embedded and used to retrieve actual, similar documents, improving relevance (see the HyDE sketch below). |
Late Chunking View on GitHub | An advanced RAG method where the full document is first passed through a long-context embedding model, and chunk embeddings are then produced by pooling the resulting token embeddings per chunk boundary. Because every chunk vector is computed with the whole document in view, context that naive pre-chunking would discard is preserved (see the late-chunking sketch below). |
Agentic RAG View on GitHub | This tutorial demonstrates how to build a RAG system in which multiple AI agents collaborate to retrieve information and generate answers, yielding a more robust pipeline than a single retrieve-then-generate pass (see the agent-loop sketch below). |
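
A minimal sketch of the contextual indexing step described for Contextual RAG above. `generate_context` and `embed` are hypothetical stand-ins for a real LLM call and embedding model, not the API of any specific library; the toy implementations exist only so the snippet runs end to end.

```python
# Contextual RAG indexing sketch: prepend LLM-generated context to each
# chunk before embedding, so the stored vector reflects the full document.

def generate_context(document: str, chunk: str) -> str:
    """Placeholder for an LLM prompt such as:
    'Situate this chunk within the overall document for retrieval purposes.'
    Here we fake it with the document's first sentence."""
    return document.split(".")[0].strip()

def embed(text: str) -> list[float]:
    """Placeholder embedding: character-frequency vector (toy only)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def index_with_context(document: str, chunks: list[str]) -> list[dict]:
    records = []
    for chunk in chunks:
        context = generate_context(document, chunk)
        contextualized = f"{context}\n\n{chunk}"   # prepend the generated context
        records.append({"text": chunk, "embedding": embed(contextualized)})
    return records

doc = "LanceDB is an embedded vector database. It stores embeddings on disk."
print(index_with_context(doc, ["It stores embeddings on disk."])[0]["text"])
```

Whether the stored text keeps the prepended context or only the original chunk is an implementation choice; this sketch keeps the raw chunk and uses the contextualized text only for the embedding.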
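
A minimal sketch of the coarse-to-fine retrieval pattern that Matryoshka embeddings enable, assuming the embedding model was trained with Matryoshka Representation Learning so that a normalized prefix of each vector is itself a usable embedding. The random vectors stand in for real model output; nothing here is LanceDB- or LlamaIndex-specific.

```python
# Matryoshka truncation sketch: search cheaply in a low-dimensional prefix,
# then rescore the shortlist with the full-dimensional vectors.
import numpy as np

rng = np.random.default_rng(0)
full_dim, small_dim = 768, 64

def truncate(vec: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize."""
    v = vec[:dim]
    return v / np.linalg.norm(v)

doc_embeddings = rng.normal(size=(1000, full_dim))   # stand-in corpus vectors
query_embedding = rng.normal(size=full_dim)

# Coarse search in the cheap 64-d space.
small_docs = np.stack([truncate(d, small_dim) for d in doc_embeddings])
small_query = truncate(query_embedding, small_dim)
top = np.argsort(small_docs @ small_query)[-20:]      # top-20 candidates

# Rescore the shortlist with the full 768-d dot product.
rescored = top[np.argsort(doc_embeddings[top] @ query_embedding)[::-1]]
print("best match:", rescored[0])
```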
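
A minimal HyDE sketch. `hypothetical_answer` stands in for an LLM prompt such as "write a passage that answers this question", and `embed` is a toy hashed bag-of-words vectorizer; both are assumptions used only to make the flow runnable.

```python
# HyDE sketch: embed a generated pseudo-answer instead of the raw query,
# then retrieve real documents nearest to that pseudo-answer.
import numpy as np

def hypothetical_answer(query: str) -> str:
    # In practice: prompt an LLM to draft a plausible answer passage.
    return f"A passage that plausibly answers the question: {query}"

def embed(text: str) -> np.ndarray:
    # Toy embedding: hashed bag-of-words, stand-in for a real model.
    vec = np.zeros(128)
    for token in text.lower().split():
        vec[hash(token) % 128] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

def hyde_retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    q_vec = embed(hypothetical_answer(query))         # embed the pseudo-document
    scores = [float(embed(doc) @ q_vec) for doc in corpus]
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

docs = ["LanceDB stores vectors on disk.", "HyDE embeds a generated answer.",
        "Matryoshka embeddings can be truncated."]
print(hyde_retrieve("How does HyDE improve retrieval?", docs))
```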
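
A minimal late-chunking sketch. `token_embeddings` is a placeholder for a long-context embedding model that returns one vector per token with the whole document in its attention window; the essential step is that chunk vectors are pooled from those token vectors after full-document encoding, not computed from pre-split text.

```python
# Late chunking sketch: encode the whole document once, then mean-pool the
# token embeddings that fall inside each chunk boundary.
import numpy as np

def token_embeddings(tokens: list[str]) -> np.ndarray:
    # Placeholder: deterministic random vectors, stand-in for a real model
    # that sees the entire document at once.
    rng = np.random.default_rng(abs(hash(" ".join(tokens))) % (2**32))
    return rng.normal(size=(len(tokens), 64))

def late_chunk(tokens: list[str], boundaries: list[tuple[int, int]]) -> np.ndarray:
    token_vecs = token_embeddings(tokens)             # encode the full document once
    chunk_vecs = [token_vecs[s:e].mean(axis=0)        # mean-pool tokens per chunk
                  for s, e in boundaries]
    return np.stack(chunk_vecs)

tokens = "LanceDB is an embedded vector database built on the Lance format".split()
print(late_chunk(tokens, [(0, 6), (6, len(tokens))]).shape)   # (2, 64)
```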
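
A minimal agent-loop sketch for Agentic RAG. `decide`, `retrieve`, and `answer` are hypothetical stubs for an LLM-backed planner, a retriever, and a grounded generator; a real pipeline would back them with actual models and tools, and could split them across several cooperating agents.

```python
# Agentic RAG sketch: a planner loop that decides, step by step, whether to
# gather more evidence or produce the final answer.

def decide(question: str, evidence: list[str]) -> str:
    # Placeholder policy: keep retrieving until two pieces of evidence exist.
    return "retrieve" if len(evidence) < 2 else "answer"

def retrieve(question: str, evidence: list[str]) -> str:
    # Placeholder retriever: returns a canned snippet per round.
    return f"snippet {len(evidence) + 1} relevant to: {question}"

def answer(question: str, evidence: list[str]) -> str:
    # Placeholder generator: in practice an LLM grounded on the evidence.
    return f"Answer to '{question}' based on {len(evidence)} snippets."

def agentic_rag(question: str, max_steps: int = 5) -> str:
    evidence: list[str] = []
    for _ in range(max_steps):
        if decide(question, evidence) == "answer":
            break
        evidence.append(retrieve(question, evidence))
    return answer(question, evidence)

print(agentic_rag("What is contextual retrieval?"))
```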