[Alpha] Vector Database

Warning: This is an experimental feature. To our knowledge, this is stable, but there are still rough edges in the experience. Contributions are welcome!

Overview

A vector database allows users to store and retrieve embeddings. Feast provides general APIs for storing and retrieving them.

Integration

Below are supported vector databases and implemented features:

| Vector Database | Retrieval | Indexing | V2 Support* | Online Read |
| --- | --- | --- | --- | --- |
| Pgvector | [x] | [ ] | [ ] | [ ] |
| Elasticsearch | [x] | [x] | [ ] | [ ] |
| Milvus | [x] | [x] | [x] | [x] |
| Faiss | [ ] | [ ] | [ ] | [ ] |
| SQLite | [x] | [ ] | [x] | [x] |
| Qdrant | [x] | [x] | [ ] | [ ] |

*Note: V2 Support means the SDK supports retrieval of features along with vector embeddings from vector similarity search.

Note: SQLite support is in limited access and currently works only on Python 3.10. It will be updated as sqlite_vec progresses.

Note: Milvus and SQLite implement the v2 retrieve_online_documents_v2 method in the SDK. This will be the longer-term solution so that Data Scientists can easily enable vector similarity search by just flipping a flag.

Examples

  • See the v0 Rag Demo for an example of how to use a vector database with the retrieve_online_documents method (migration and deprecation are planned).

  • See the v1 Milvus Quickstart for a guide on using Feast with Milvus via the retrieve_online_documents_v2 method.

Prepare offline embedding dataset

Run the following commands to prepare the embedding dataset:
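In the RAG demo repository the preparation looks like the following; the script names are taken from that repo and may change:

```shell
python pull_states.py
python batch_score_documents.py
```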

The output will be stored in data/city_wikipedia_summaries.csv.

Initialize Feast feature store and materialize the data to the online store

Use the feature_store.yaml file to initialize the feature store. This configuration uses local files as the offline store and Milvus as the online store.
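A minimal feature_store.yaml along these lines works for the Milvus demo; embedding_dim must match your embedding model's output size, and the index type and paths are illustrative:

```yaml
project: rag
provider: local
registry: data/registry.db
online_store:
  type: milvus
  path: data/online_store.db  # embedded Milvus-Lite database file
  vector_enabled: true
  embedding_dim: 384          # must match the embedding model's output size
  index_type: IVF_FLAT
offline_store:
  type: file
```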

Run the following command in terminal to apply the feature store configuration:
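From the feature repo directory:

```shell
feast apply
```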

Note that when you run feast apply, you register the following Feature View, which we will use for retrieval later:

Let's use the SDK to write a data frame of embeddings to the online store:
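A minimal sketch, assuming the Parquet file produced earlier contains columns matching the Feature View's schema:

```python
import pandas as pd

from feast import FeatureStore

store = FeatureStore(repo_path=".")

# The dataframe must contain the feature view's columns (item_id, vector,
# state, wiki_summary, sentence_chunks) plus an event_timestamp column.
df = pd.read_parquet("./data/city_wikipedia_summaries_with_embeddings.parquet")

store.write_to_online_store(feature_view_name="city_embeddings", df=df)
```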

Prepare a query embedding

During inference (e.g., when a user submits a chat message), we need to embed the input text. This can be thought of as a feature transformation of the input data. In this example, we'll do this with a small Sentence Transformer from Hugging Face.
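A sketch using the sentence-transformers library; all-MiniLM-L6-v2 is an assumed model choice here, and any model works as long as its output dimension matches embedding_dim in feature_store.yaml:

```python
from sentence_transformers import SentenceTransformer

# all-MiniLM-L6-v2 produces 384-dimensional embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")

question = "Which city has the largest population in New York?"
query_embedding = model.encode(question).tolist()
```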

Retrieve the top K similar documents

First create a feature store instance, and use the retrieve_online_documents_v2 API to retrieve the top 5 similar documents to the specified query.
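Putting that together; the feature references assume the city_embeddings Feature View from earlier, and query_embedding is the list produced in the embedding step:

```python
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Retrieve the top 5 most similar documents for the query embedding
# (query_embedding comes from the previous step).
context_data = store.retrieve_online_documents_v2(
    features=[
        "city_embeddings:vector",
        "city_embeddings:state",
        "city_embeddings:wiki_summary",
        "city_embeddings:sentence_chunks",
    ],
    query=query_embedding,
    top_k=5,
    distance_metric="COSINE",
).to_df()
print(context_data.head())
```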

Generate the Response

Let's assume we have a base prompt and a function that formats the retrieved documents called format_documents that we can then use to generate the response with OpenAI's chat completion API.
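A sketch of that flow; the base prompt, format_documents implementation, and model name are illustrative, and the sample dataframe stands in for the result of the retrieval step:

```python
import os

import pandas as pd

BASE_PROMPT = (
    "You are an assistant that answers questions about cities. "
    "Use only the provided context."
)

def format_documents(context_df: pd.DataFrame) -> str:
    """Join retrieved rows into a single context string for the prompt."""
    return "\n\n".join(
        f"State: {row['state']}\nSummary: {row['wiki_summary']}"
        for _, row in context_df.iterrows()
    )

# Stand-in for the dataframe returned by retrieve_online_documents_v2.
context_data = pd.DataFrame(
    {
        "state": ["New York"],
        "wiki_summary": ["New York City is the most populous city in the state."],
    }
)

question = "Which city has the largest population in New York?"
full_prompt = (
    f"{BASE_PROMPT}\n\nContext:\n{format_documents(context_data)}"
    f"\n\nQuestion: {question}"
)

# Only call the chat completion API when a key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": full_prompt}],
    )
    print(response.choices[0].message.content)
```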

Configuration and Installation

We offer Milvus, PGVector, SQLite, Elasticsearch and Qdrant as Online Store options for Vector Databases.

Milvus offers a convenient local implementation for vector similarity search. To use Milvus, you can install the Feast package with the Milvus extra.

Installation with Milvus

Installation with Elasticsearch

Installation with Qdrant

Installation with SQLite

If you are using pyenv to manage your Python versions, you can install the SQLite extension with the following command:
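For example (the Homebrew paths apply to macOS, and the exact Python patch version is illustrative):

```shell
PYTHON_CONFIGURE_OPTS="--enable-loadable-sqlite-extensions" \
LDFLAGS="-L/opt/homebrew/opt/sqlite/lib" \
CPPFLAGS="-I/opt/homebrew/opt/sqlite/include" \
pyenv install 3.10.14
```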

You can then install the Feast package via:
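```shell
pip install 'feast[sqlite_vec]'
```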
