Introduction

What is Feast?

Feast (Feature Store) is a customizable operational data system that reuses existing infrastructure to manage and serve machine learning features to real-time models.

Feast allows ML platform teams to:

  • Make features consistently available for training and serving by managing an offline store (to process historical data for scale-out batch scoring or model training), a low-latency online store (to power real-time prediction), and a battle-tested feature server (to serve pre-computed features online).

  • Avoid data leakage by generating point-in-time correct feature sets, so data scientists can focus on feature engineering rather than debugging error-prone dataset joining logic. This ensures that future feature values do not leak into models during training (see the retrieval sketch below).

  • Decouple ML from data infrastructure by providing a single data access layer that abstracts feature storage from feature retrieval, ensuring models remain portable as you move from training to serving, from batch to real-time models, and from one data infrastructure system to another.

Note: Feast today primarily addresses timestamped structured data.
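
For a concrete picture of how the same feature definitions back both training and serving, here is a minimal retrieval sketch. It assumes an existing feature repository that defines a driver entity and a driver_hourly_stats feature view (hypothetical names); get_historical_features and get_online_features are the standard Feast SDK entry points.

```python
from datetime import datetime, timedelta

import pandas as pd
from feast import FeatureStore

# Point at an existing feature repository (assumed to define a "driver"
# entity and a "driver_hourly_stats" feature view).
store = FeatureStore(repo_path=".")

# Training: build a point-in-time correct dataset. Each row of the entity
# dataframe is joined against feature values as of its event_timestamp,
# so future values never leak into the training data.
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": [
            datetime.utcnow() - timedelta(days=2),
            datetime.utcnow() - timedelta(days=1),
        ],
    }
)
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
).to_df()

# Serving: read the same features from the low-latency online store.
online_features = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()
```

Pre-computed values reach the online store through Feast's materialization commands (e.g. feast materialize-incremental) or the push API, and can also be served over HTTP by the feature server.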

Who is Feast for?

Feast helps ML platform teams with DevOps experience productionize real-time models. Feast can also help these teams build towards a feature platform that improves collaboration between engineers and data scientists.

Feast is likely not the right tool if you

  • are in an organization that’s just getting started with ML and is not yet sure what the business impact of ML is

  • rely primarily on unstructured data

  • need very low latency feature retrieval (e.g. p99 feature retrieval << 10ms)

  • have a small team to support a large number of use cases

What Feast is not

Feast is not

  • an ETL / ELT system: Feast is not (and does not plan to become) a general purpose data transformation or pipelining system. Users often leverage tools like dbt to manage upstream data transformations.

  • a data orchestration tool: Feast does not manage or orchestrate complex workflow DAGs. It relies on upstream data pipelines to produce feature values and integrations with tools like Airflow to make features consistently available.

  • a data warehouse: Feast is not a replacement for your data warehouse or the source of truth for all transformed data in your organization. Rather, Feast is a light-weight downstream layer that can serve data from an existing data warehouse (or other data sources) to models in production.

  • a database: Feast is not a database, but it helps manage data stored in other systems (e.g. BigQuery, Snowflake, DynamoDB, Redis) to make features consistently available at training / serving time (see the sketch below).
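
To make the "light-weight downstream layer" point concrete, the sketch below registers a feature view over an existing warehouse table. The table, column, and feature names are illustrative, and the API shown follows recent Feast releases; Feast stores only the definitions and metadata, while the feature values remain in the underlying systems.

```python
from datetime import timedelta

from feast import BigQuerySource, Entity, FeatureView, Field
from feast.types import Float32, Int64

# Feature values live in an existing warehouse table; Feast only registers
# metadata and knows how to read and serve from it. The table and column
# names here are illustrative.
driver_stats_source = BigQuerySource(
    table="my_project.my_dataset.driver_hourly_stats",
    timestamp_field="event_timestamp",
)

driver = Entity(name="driver", join_keys=["driver_id"])

driver_hourly_stats = FeatureView(
    name="driver_hourly_stats",
    entities=[driver],
    ttl=timedelta(days=1),
    schema=[
        Field(name="conv_rate", dtype=Float32),
        Field(name="avg_daily_trips", dtype=Int64),
    ],
    source=driver_stats_source,
)
```

Switching the offline store (e.g. BigQuery to Snowflake) or the online store (e.g. Redis to DynamoDB) is then a feature_store.yaml configuration change rather than a change to model code.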

Feast does not fully solve

  • reproducible model training / model backtesting / experiment management: Feast captures feature and model metadata, but does not version-control datasets / labels or manage train / test splits. Other tools like DVC, MLflow, and Kubeflow are better suited for this.

  • batch + streaming feature engineering: Feast primarily processes already transformed feature values (though it offers experimental light-weight transformations). Users usually integrate Feast with upstream systems (e.g. existing ETL/ELT pipelines). Tecton is a more fully featured feature platform which addresses these needs.

  • native streaming feature integration: Feast enables users to push streaming features, but does not pull from streaming sources or manage streaming pipelines (see the push sketch after this list). Tecton is a more fully featured feature platform which orchestrates end-to-end streaming pipelines.

  • feature sharing: Feast has experimental functionality to enable discovery and cataloguing of feature metadata with a Feast web UI (alpha). Feast also has community-contributed plugins for DataHub and Amundsen. Tecton addresses these needs more robustly.

  • lineage: Feast helps tie feature values to model versions, but is not a complete solution for capturing end-to-end lineage from raw data sources to model versions. Feast also has community-contributed plugins for DataHub and Amundsen. Tecton captures more end-to-end lineage by also managing feature transformations.

  • data quality / drift detection: Feast has experimental integrations with Great Expectations, but is not purpose-built to solve data drift / data quality issues. This requires more sophisticated monitoring across data pipelines, served feature values, labels, and model versions.
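
As noted above, Feast's streaming support is push-based: a stream processing job that you own computes feature values and pushes them into Feast. A minimal sketch, assuming the feature repository declares a push source named driver_stats_push_source (a hypothetical name) attached to the feature view:

```python
from datetime import datetime

import pandas as pd
from feast import FeatureStore
from feast.data_source import PushMode

store = FeatureStore(repo_path=".")

# Feature values computed by your own streaming job; Feast does not pull
# from Kafka/Kinesis or manage the streaming pipeline itself.
event_df = pd.DataFrame(
    {
        "driver_id": [1001],
        "event_timestamp": [datetime.utcnow()],
        "conv_rate": [0.85],
        "avg_daily_trips": [14],
    }
)

# Push into the online store via the push source declared in the feature
# repository (PushMode.ONLINE_AND_OFFLINE also writes to the offline store).
store.push("driver_stats_push_source", event_df, to=PushMode.ONLINE)
```

Feast writes the pushed rows so they are immediately available for online retrieval, but the streaming pipeline itself is managed outside Feast.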

Example use cases

Many companies have used Feast to power real-world ML use cases such as:

  • Personalizing online recommendations by leveraging pre-computed historical user or item features.

  • Online fraud detection, using features that compare against (pre-computed) historical transaction patterns.

  • Churn prediction (an offline model), generating feature values for all users at a fixed cadence in batch.

  • Credit scoring, using pre-computed historical features to compute the probability of default.

How can I get started?

The best way to learn Feast is to use it. Head over to our Quickstart and try it out!

Then explore the rest of the documentation to keep learning about Feast.
