[Alpha] Vector Database

Warning: This is an experimental feature. To our knowledge, this is stable, but there are still rough edges in the experience. Contributions are welcome!

Overview

A vector database allows users to store and retrieve embeddings. Feast provides general APIs for storing and retrieving them.

Integration

Below are supported vector databases and implemented features:

Vector Database    Retrieval    Indexing
Pgvector           [x]          [ ]
Elasticsearch      [x]          [x]
Milvus             [ ]          [ ]
Faiss              [ ]          [ ]
SQLite             [x]          [ ]
Qdrant             [x]          [x]

Note: SQLite support is in limited access and currently works only on Python 3.10. It will be updated as the feature progresses.

Example

See https://github.com/feast-dev/feast-workshop/blob/rag/module_4_rag for an example of how to use a vector database.

Prepare offline embedding dataset

Run the following commands to prepare the embedding dataset:

python pull_states.py
python batch_score_documents.py

The output will be stored in data/city_wikipedia_summaries.csv.
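
If you want to sanity-check the generated dataset before loading it, you can inspect the CSV with pandas. This is an optional check and not part of the workshop scripts:

import pandas as pd

# Peek at the schema and the first few rows of the generated embeddings file.
df = pd.read_csv("data/city_wikipedia_summaries.csv")
print(df.columns.tolist())
print(df.head())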

Initialize Feast feature store and materialize the data to the online store

Use the feature_store.yaml file to initialize the feature store. This configuration uses a local file as the offline store and Pgvector (PostgreSQL with the pgvector extension) as the online store.

project: feast_demo_local
provider: local
registry:
  registry_type: sql
  path: postgresql://@localhost:5432/feast
online_store:
  type: postgres
  vector_enabled: true
  vector_len: 384
  host: 127.0.0.1
  port: 5432
  database: feast
  user: ""
  password: ""

offline_store:
  type: file
entity_key_serialization_version: 2

Run the following command in terminal to apply the feature store configuration:

feast apply

Note that when you run feast apply, you register the following feature view, which we will use for retrieval later:

from datetime import timedelta

from feast import FeatureView, Field
from feast.types import Array, Float32

# `item` (the entity) and `source` (the data source) are defined elsewhere
# in the feature repository.
city_embeddings_feature_view = FeatureView(
    name="city_embeddings",
    entities=[item],
    schema=[
        Field(name="Embeddings", dtype=Array(Float32)),
    ],
    source=source,
    ttl=timedelta(hours=2),
)
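
For context, the entity and source referenced above could be defined roughly as follows. This is an illustrative assumption (the entity name, join key, and parquet path are placeholders), not the workshop's exact code:

from feast import Entity, FileSource

# Illustrative only: the workshop repository defines its own entity and source.
item = Entity(name="item", join_keys=["item_id"])

source = FileSource(
    path="data/city_wikipedia_summaries_with_embeddings.parquet",  # placeholder path
    timestamp_field="event_timestamp",
)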

Then run the following command in the terminal to materialize the data to the online store:

CURRENT_TIME=$(date -u +"%Y-%m-%dT%H:%M:%S")  
feast materialize-incremental $CURRENT_TIME  
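
Equivalently, you can trigger the same incremental materialization from Python via the FeatureStore API instead of the CLI:

from datetime import datetime, timezone

from feast import FeatureStore

store = FeatureStore(repo_path=".")
# Materialize all feature views up to the current UTC time.
store.materialize_incremental(end_date=datetime.now(timezone.utc))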

Prepare a query embedding

from batch_score_documents import run_model, TOKENIZER, MODEL
from transformers import AutoTokenizer, AutoModel

question = "the most populous city in the U.S. state of Texas?"

# Load the same tokenizer and model used for the offline batch scoring,
# then embed the query and convert it to a plain Python list.
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER)
model = AutoModel.from_pretrained(MODEL)
query_embedding = run_model(question, tokenizer, model)
query = query_embedding.detach().cpu().numpy().tolist()[0]
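
For reference, run_model is the workshop's helper that turns text into an embedding. A minimal sketch of what such a helper typically does is shown below (mean-pooling the encoder's token embeddings and normalizing); the actual implementation in batch_score_documents.py may differ:

import torch

def run_model(text, tokenizer, model):
    # Tokenize, encode, mean-pool the token embeddings, and L2-normalize.
    inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    embeddings = outputs.last_hidden_state.mean(dim=1)
    return torch.nn.functional.normalize(embeddings, p=2, dim=1)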

Retrieve the top 5 similar documents

First, create a feature store instance, then use the retrieve_online_documents API to retrieve the top 5 documents most similar to the query.

from feast import FeatureStore
store = FeatureStore(repo_path=".")
features = store.retrieve_online_documents(
    feature="city_embeddings:Embeddings",
    query=query,
    top_k=5
).to_dict()

def print_online_features(features):
    for key, value in sorted(features.items()):
        print(key, " : ", value)

print_online_features(features)
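
If you prefer tabular output, the same response can be converted to a pandas DataFrame, assuming your Feast version exposes to_df() on the retrieval response:

features_df = store.retrieve_online_documents(
    feature="city_embeddings:Embeddings",
    query=query,
    top_k=5,
).to_df()
print(features_df)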

Configuration

Installation with SQLite

If you use pyenv to manage your Python versions, you can build Python 3.10 with loadable SQLite extensions enabled using the following command:

PYTHON_CONFIGURE_OPTS="--enable-loadable-sqlite-extensions" \
    LDFLAGS="-L/opt/homebrew/opt/sqlite/lib" \
    CPPFLAGS="-I/opt/homebrew/opt/sqlite/include" \
    pyenv install 3.10.14

Then install Feast with SQLite vector support via:

pip install feast[sqlite_vec]

Installation with Elasticsearch

pip install feast[elasticsearch]

Installation with Qdrant

pip install feast[qdrant]

We offer PGVector, SQLite, Elasticsearch, and Qdrant as online store options for vector databases.
