[Alpha] Vector Database

Warning: This is an experimental feature. To our knowledge, this is stable, but there are still rough edges in the experience. Contributions are welcome!

Overview

A vector database allows users to store and retrieve embeddings. Feast provides general APIs for storing and retrieving them.
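
At a glance, the two core calls look like this (a condensed preview of the worked example below; the city_embeddings names come from that example):

store.write_to_online_store(feature_view_name="city_embeddings", df=df)

results = store.retrieve_online_documents_v2(
    features=["city_embeddings:vector", "city_embeddings:sentence_chunks"],
    query=query_embedding,  # a list of floats produced by your embedding model
    top_k=3,
).to_df()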

Integration

Below are the supported vector databases and the features each implements:

| Vector Database | Retrieval | Indexing | V2 Support* | Online Read |
| --------------- | --------- | -------- | ----------- | ----------- |
| Pgvector        | [x]       | [ ]      | [ ]         | [ ]         |
| Elasticsearch   | [x]       | [x]      | [ ]         | [ ]         |
| Milvus          | [x]       | [x]      | [x]         | [x]         |
| Faiss           | [ ]       | [ ]      | [ ]         | [ ]         |
| SQLite          | [x]       | [ ]      | [x]         | [x]         |
| Qdrant          | [x]       | [x]      | [ ]         | [ ]         |

*Note: V2 Support means the SDK supports retrieval of features along with vector embeddings from vector similarity search.

Note: SQLite support is in limited access and currently works only on Python 3.10. It will be updated as work progresses.

We will be deprecating the retrieve_online_documents method in the SDK in the future. We recommend using the retrieve_online_documents_v2 method instead, which offers easier vector index configuration directly in the Feature View and the ability to retrieve standard features alongside your vector embeddings for richer context injection.

Long term, we will collapse the two methods into one, but for now we recommend using the retrieve_online_documents_v2 method. Beyond that, we will have retrieve_online_documents and retrieve_online_documents_v2 simply point to get_online_features for backwards compatibility, and adopt industry-standard naming conventions.

Note: Milvus and SQLite implement the retrieve_online_documents_v2 method in the SDK. This will be the longer-term solution, so that Data Scientists can enable vector similarity search by just flipping a flag.

Examples

Prepare offline embedding dataset

Run the following commands to prepare the embedding dataset:

python pull_states.py
python batch_score_documents.py

The output will be stored in data/city_wikipedia_summaries.csv.
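
To take a quick look at the generated dataset (a hedged sketch; it assumes pandas is installed and the scripts above ran successfully):

import pandas as pd

df = pd.read_csv("data/city_wikipedia_summaries.csv")
print(df.shape)
print(df.head())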

Initialize Feast feature store and materialize the data to the online store

Use the feature_store.yaml file to initialize the feature store. This configuration uses a local file as the offline store and Milvus as the online store.

project: local_rag
provider: local
registry: data/registry.db
online_store:
  type: milvus
  path: data/online_store.db
  vector_enabled: true
  embedding_dim: 384
  index_type: "IVF_FLAT"


offline_store:
  type: file
entity_key_serialization_version: 3
# By default, no_auth is used for authentication and authorization; other possible values are kubernetes and oidc. Refer to the documentation for more details.
auth:
  type: no_auth

Run the following command in a terminal to apply the feature store configuration:

feast apply

Note that when you run feast apply, you apply the following Feature View, which we will use for retrieval later:

from datetime import timedelta

from feast import FeatureView, Field
from feast.types import Array, Float32, Int64, String, UnixTimestamp

# `item`, `author`, and `rag_documents_source` are defined elsewhere in the example repo.
document_embeddings = FeatureView(
    name="city_embeddings",  # the name used for writes and retrieval below
    entities=[item, author],
    schema=[
        Field(
            name="vector",
            dtype=Array(Float32),
            # Look how easy it is to enable RAG!
            vector_index=True,
            vector_search_metric="COSINE",
        ),
        Field(name="item_id", dtype=Int64),
        Field(name="author_id", dtype=String),
        Field(name="created_timestamp", dtype=UnixTimestamp),
        Field(name="sentence_chunks", dtype=String),
        Field(name="event_timestamp", dtype=UnixTimestamp),
    ],
    source=rag_documents_source,
    ttl=timedelta(hours=24),
)

Let's use the SDK to write a data frame of embeddings to the online store:

store.write_to_online_store(feature_view_name='city_embeddings', df=df)
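
For reference, the DataFrame's columns must match the Feature View schema above. A minimal hedged sketch (the values are invented for illustration, and store is assumed to be an initialized FeatureStore, as created in the retrieval step below):

import pandas as pd
from datetime import datetime

df = pd.DataFrame(
    {
        "item_id": [0],
        "author_id": ["wikipedia"],
        "vector": [[0.1] * 384],  # must match embedding_dim: 384 in feature_store.yaml
        "sentence_chunks": ["New York City is the most populous city in ..."],
        "created_timestamp": [datetime.utcnow()],
        "event_timestamp": [datetime.utcnow()],
    }
)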

Prepare a query embedding

During inference (e.g., when a user submits a chat message), we need to embed the input text. This can be thought of as a feature transformation of the input data. In this example, we'll do this with a small Sentence Transformer from Hugging Face.

import torch
import torch.nn.functional as F
from feast import FeatureStore
from pymilvus import MilvusClient, DataType, FieldSchema
from transformers import AutoTokenizer, AutoModel
from example_repo import city_embeddings_feature_view, item

TOKENIZER = "sentence-transformers/all-MiniLM-L6-v2"
MODEL = "sentence-transformers/all-MiniLM-L6-v2"

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[
        0
    ]  # First element of model_output contains all token embeddings
    input_mask_expanded = (
        attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    )
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(
        input_mask_expanded.sum(1), min=1e-9
    )

def run_model(sentences, tokenizer, model):
    encoded_input = tokenizer(
        sentences, padding=True, truncation=True, return_tensors="pt"
    )
    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input)

    sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
    sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
    return sentence_embeddings

question = "Which city has the largest population in New York?"

tokenizer = AutoTokenizer.from_pretrained(TOKENIZER)
model = AutoModel.from_pretrained(MODEL)
query_embedding = run_model(question, tokenizer, model).detach().cpu().numpy().tolist()[0]
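
As a quick sanity check (our addition, not part of the original example), the query embedding's length should match the embedding_dim configured in feature_store.yaml:

assert len(query_embedding) == 384  # all-MiniLM-L6-v2 produces 384-dimensional embeddings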

Retrieve the top K similar documents

First create a feature store instance, then use the retrieve_online_documents_v2 API to retrieve the top 3 documents most similar to the specified query.

store = FeatureStore(repo_path=".")  # assumes the feature repository is in the current directory
context_data = store.retrieve_online_documents_v2(
    features=[
        "city_embeddings:vector",
        "city_embeddings:item_id",
        "city_embeddings:state",
        "city_embeddings:sentence_chunks",
        "city_embeddings:wiki_summary",
    ],
    query=query_embedding,
    top_k=3,
    distance_metric='COSINE',
).to_df()
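
To inspect what came back (the columns correspond to the features requested above):

print(context_data.columns.tolist())
print(context_data.head())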

Generate the Response

Let's assume we have a base prompt and a function, format_documents, that formats the retrieved documents; we can then use them to generate the response with OpenAI's chat completion API.
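
Such a formatting function is not shown in this guide; here is a minimal hedged sketch (BASE_PROMPT and the choice of column are assumptions for illustration):

BASE_PROMPT = "Answer the question using only the context provided below.\n\nContext:\n"

def format_documents(df, base_prompt):
    # Concatenate the retrieved text chunks below the base prompt.
    chunks = "\n".join(df["sentence_chunks"].astype(str).tolist())
    return base_prompt + chunks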

FULL_PROMPT = format_documents(context_data, BASE_PROMPT)

import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": FULL_PROMPT},
        {"role": "user", "content": question}
    ],
)

# And this will print the content. Look at the examples/rag/milvus-quickstart.ipynb for an end-to-end example.
print('\n'.join([c.message.content for c in response.choices]))

Configuration and Installation

Milvus offers a convenient local implementation for vector similarity search. To use Milvus, you can install the Feast package with the Milvus extra.

Installation with Milvus

pip install feast[milvus]

Installation with Elasticsearch

pip install feast[elasticsearch]

Installation with Qdrant

pip install feast[qdrant]

Installation with SQLite

If you are using pyenv to manage your Python versions, you can build a Python with loadable SQLite extensions enabled via the following command:

PYTHON_CONFIGURE_OPTS="--enable-loadable-sqlite-extensions" \
    LDFLAGS="-L/opt/homebrew/opt/sqlite/lib" \
    CPPFLAGS="-I/opt/homebrew/opt/sqlite/include" \
    pyenv install 3.10.14

Then you can install the Feast package via:

pip install feast[sqlite_vec]
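
To verify that your Python build supports loadable SQLite extensions (a quick check we suggest, not from the original docs; sqlite-vec requires this), the following should run without raising an AttributeError:

python -c "import sqlite3; conn = sqlite3.connect(':memory:'); conn.enable_load_extension(True); print('loadable extensions OK')"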

See the v0 Rag Demo for an example of how to use a vector database via the retrieve_online_documents method (planned for migration and deprecation).

See the v1 Milvus Quickstart for a quickstart guide on how to use Feast with Milvus via the retrieve_online_documents_v2 method.

We offer Milvus, PGVector, SQLite, Elasticsearch, and Qdrant as Online Store options for Vector Databases.
