In this tutorial, we use feature stores to generate training data and power online model inference for a ride-sharing driver satisfaction prediction model. Feast solves several common issues in this flow:
Training-serving skew and complex data joins: Feature values often exist across multiple tables. Joining these datasets can be complicated, slow, and error-prone.
Feast joins these tables with battle-tested logic that ensures point-in-time correctness so future feature values do not leak to models.
*Upcoming: Feast alerts users to offline / online skew with data quality monitoring.
Online feature availability: At inference time, models often need access to features that aren't readily available and need to be precomputed from other data sources.
Feast manages deployment to a variety of online stores (e.g. DynamoDB, Redis, Google Cloud Datastore) and ensures necessary features are consistently available and freshly computed at inference time.
Feature reusability and model versioning: Different teams within an organization are often unable to reuse features across projects, resulting in duplicate feature creation logic. Models have data dependencies that need to be versioned, for example when running A/B tests on model versions.
Feast enables discovery of and collaboration on previously used features and enables versioning of sets of features (via feature services).
Feast enables feature transformation so users can reuse transformation logic across online / offline use cases and across models.
Step 1: Install Feast
Install the Feast SDK and CLI using pip:
In this tutorial, we focus on a local deployment. For a more in-depth guide on how to use Feast with GCP or AWS deployments, see Running Feast with GCP/AWS.
pip install feast
Step 2: Create a feature repository
Bootstrap a new feature repository using feast init from the command line.
feast init feature_repo
cd feature_repo
Creating a new Feast repository in /home/Jovyan/feature_repo.
Let's take a look at the resulting demo repo. It breaks down into:
data/ contains raw demo parquet data
example.py contains demo feature definitions
feature_store.yaml contains a demo setup configuring where data sources are
# This is an example feature definition file

from google.protobuf.duration_pb2 import Duration

from feast import Entity, Feature, FeatureView, FileSource, ValueType

# Read data from parquet files. Parquet is convenient for local development mode. For
# production, you can use your favorite DWH, such as BigQuery. See Feast documentation
# for more info.
driver_hourly_stats = FileSource(
    path="/content/feature_repo/data/driver_stats.parquet",
    event_timestamp_column="event_timestamp",
    created_timestamp_column="created",
)

# Define an entity for the driver. You can think of an entity as a primary key used to
# fetch features.
driver = Entity(name="driver_id", value_type=ValueType.INT64, description="driver id")

# Our parquet files contain sample data that includes a driver_id column, timestamps and
# three feature columns. Here we define a Feature View that will allow us to serve this
# data to our model online.
driver_hourly_stats_view = FeatureView(
    name="driver_hourly_stats",
    entities=["driver_id"],
    ttl=Duration(seconds=86400 * 1),
    features=[
        Feature(name="conv_rate", dtype=ValueType.FLOAT),
        Feature(name="acc_rate", dtype=ValueType.FLOAT),
        Feature(name="avg_daily_trips", dtype=ValueType.INT64),
    ],
    online=True,
    batch_source=driver_hourly_stats,
    tags={},
)
The key line defining the overall architecture of the feature store is the provider. This defines where the raw data exists (for generating training data and feature values for serving), and where to materialize feature values in the online store (for serving).
Valid values for provider in feature_store.yaml are:
local: use file source / SQLite
gcp: use BigQuery / Google Cloud Datastore
aws: use Redshift / DynamoDB
To use a custom provider, see adding a custom provider. There are also several plugins maintained by the community: Azure, Postgres, and Hive. Note that the choice of provider gives sensible defaults but does not enforce those choices; for example, if you choose the AWS provider, you can use Redis as an online store alongside Redshift as an offline store.
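For reference, the feature_store.yaml generated for this local setup looks roughly like the sketch below; the registry and online store paths shown are assumed defaults and may differ in your generated repo:

project: feature_repo
registry: data/registry.db        # assumed default path to the local feature registry
provider: local
online_store:
    path: data/online_store.db    # assumed default path to the SQLite online store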
Step 3: Register feature definitions and deploy your feature store
The apply command scans Python files in the current directory for feature view/entity definitions, registers the objects, and deploys infrastructure. In this example, it reads example.py (shown again below for convenience) and sets up SQLite online store tables. Note that we specified SQLite as the default online store by using the local provider in feature_store.yaml.
feast apply
# This is an example feature definition file

from google.protobuf.duration_pb2 import Duration

from feast import Entity, Feature, FeatureView, FileSource, ValueType

# Read data from parquet files. Parquet is convenient for local development mode. For
# production, you can use your favorite DWH, such as BigQuery. See Feast documentation
# for more info.
driver_hourly_stats = FileSource(
    path="/content/feature_repo/data/driver_stats.parquet",
    event_timestamp_column="event_timestamp",
    created_timestamp_column="created",
)

# Define an entity for the driver. You can think of an entity as a primary key used to
# fetch features.
driver = Entity(name="driver_id", value_type=ValueType.INT64, description="driver id")

# Our parquet files contain sample data that includes a driver_id column, timestamps and
# three feature columns. Here we define a Feature View that will allow us to serve this
# data to our model online.
driver_hourly_stats_view = FeatureView(
    name="driver_hourly_stats",
    entities=["driver_id"],
    ttl=Duration(seconds=86400 * 1),
    features=[
        Feature(name="conv_rate", dtype=ValueType.FLOAT),
        Feature(name="acc_rate", dtype=ValueType.FLOAT),
        Feature(name="avg_daily_trips", dtype=ValueType.INT64),
    ],
    online=True,
    batch_source=driver_hourly_stats,
    tags={},
)
Step 4: Generate training data
To train a model, we need features and labels. Often, this label data is stored separately (e.g. you have one table storing user survey results and another set of tables with feature values).
The user can query that table of labels with timestamps and pass it into Feast as an entity dataframe for training data generation. In many cases, Feast will also intelligently join the relevant feature tables to create these feature vectors.
Note that we include timestamps because we want the features for the same driver at various points in time to be used in a model.
from datetime import datetime, timedelta

import pandas as pd

from feast import FeatureStore

# The entity dataframe is the dataframe we want to enrich with feature values
entity_df = pd.DataFrame.from_dict(
    {
        "driver_id": [1001, 1002, 1003],
        "label_driver_reported_satisfaction": [1, 5, 3],
        "event_timestamp": [
            datetime.now() - timedelta(minutes=11),
            datetime.now() - timedelta(minutes=36),
            datetime.now() - timedelta(minutes=73),
        ],
    }
)

store = FeatureStore(repo_path=".")

training_df = store.get_historical_features(
    entity_df=entity_df,
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
).to_df()

print("----- Feature schema -----\n")
print(training_df.info())
print()
print("----- Example features -----\n")
print(training_df.head())
Step 5: Load features into your online store
We now serialize the latest values of features since the beginning of time to prepare for serving. (Note: materialize-incremental serializes all new features since the last materialize call.)
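A minimal sketch of this step using the Feast CLI; the shell variable below is just one way to pass the current time as the end timestamp up to which features are materialized:

CURRENT_TIME=$(date -u +"%Y-%m-%dT%H:%M:%S")
feast materialize-incremental $CURRENT_TIME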
Step 6: Fetch feature vectors for inference
At inference time, we need to quickly read the latest feature values for different drivers (which otherwise might have existed only in batch sources) from the online feature store using get_online_features(). These feature vectors can then be fed to the model.
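A minimal sketch of this lookup, assuming the feature repo defined above and two example driver IDs (1004 and 1005) that are expected to exist in the demo data:

from pprint import pprint

from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Read the latest feature values for the given drivers from the online store.
feature_vector = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[
        # Example driver IDs; assumed to exist in the demo parquet data.
        {"driver_id": 1004},
        {"driver_id": 1005},
    ],
).to_dict()

pprint(feature_vector)

The values returned here come from the SQLite online store populated by the materialization step above.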