Quickstart

LlamaIndex is a well-known framework for building LLM-powered agents and workflows over your data. You can build your LlamaIndex pipeline and persist its metadata and embeddings in LanceDB via the LanceDBVectorStore class. First, install the LlamaIndex-LanceDB integration:
```bash
pip install llama-index-vector-stores-lancedb
```
Run the script below as an example. The vector store connector opens an existing LanceDB directory, or creates the directory if it does not exist.
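A minimal sketch of such a script, assuming documents live in a local `./data` folder, the table is stored under `./lancedb`, the default OpenAI embedding model is used (so `OPENAI_API_KEY` must be set), and the sample file name in the filter is a placeholder:

```python
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.lancedb import LanceDBVectorStore

# Load documents from a local folder ("./data" is a placeholder path).
documents = SimpleDirectoryReader("./data").load_data()

# Point the vector store at a LanceDB directory; it is created if it does not exist.
vector_store = LanceDBVectorStore(uri="./lancedb", mode="overwrite")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Build the index; embeddings and node metadata are persisted in LanceDB.
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Query with a Lance SQL-like string filter pushed down to LanceDB.
lance_filter = "metadata.file_name = 'paul_graham_essay.txt'"
retriever = index.as_retriever(vector_store_kwargs={"where": lance_filter})
nodes = retriever.retrieve("What did the author do growing up?")

# Or query without a filter through a plain query engine.
query_engine = index.as_query_engine()
print(query_engine.query("What did the author do growing up?"))
```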

Filtering

For metadata filtering, you can use a Lance SQL-like string filter, as demonstrated in the Quickstart example above. You can also filter using the MetadataFilters class from LlamaIndex (see the sketch below); refer to the official LlamaIndex documentation for the complete set of filter options.

In addition, you can change or specify the query_type when creating the engine or retriever to use a different search strategy, such as plain vector search or full-text search (FTS), and pass a reranker (for example, the ColBERT reranker) for hybrid search. Make sure to install the necessary dependencies for the reranker you choose; a sketch follows the filtering example below.
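A sketch of filtering with MetadataFilters, reusing the `index` built in the Quickstart above; the metadata key and value are placeholders:

```python
from llama_index.core.vector_stores import (
    MetadataFilter,
    MetadataFilters,
    FilterOperator,
)

# Restrict retrieval to nodes whose metadata matches the filter.
filters = MetadataFilters(
    filters=[
        MetadataFilter(
            key="file_name",
            operator=FilterOperator.EQ,
            value="paul_graham_essay.txt",
        )
    ]
)

retriever = index.as_retriever(filters=filters)
nodes = retriever.retrieve("What did the author do growing up?")
```

And a sketch of configuring hybrid search with a reranker when creating the vector store; the `query_type` and `reranker` constructor arguments are assumptions based on recent versions of the integration and may differ in yours:

```python
from lancedb.rerankers import ColbertReranker
from llama_index.vector_stores.lancedb import LanceDBVectorStore

# Hybrid search combines vector search and FTS, then reranks with ColBERT.
# query_type can also be "vector" (the default) or "fts".
vector_store = LanceDBVectorStore(
    uri="./lancedb",
    mode="overwrite",
    query_type="hybrid",
    reranker=ColbertReranker(),
)
```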

API reference

See the official LlamaIndex Vector Stores API reference for more details.