Quickstart
In this tutorial we will:
  1. Deploy a local feature store with a Parquet file offline store and SQLite online store.
  2. Build a training dataset using our time-series features from our Parquet files.
  3. Materialize feature values from the offline store into the online store.
  4. Read the latest features from the online store for inference.
You can run this tutorial in Google Colab or run it on your localhost, following the guided steps below.

Overview

In this tutorial, we use feature stores to generate training data and power online model inference for a ride-sharing driver satisfaction prediction model. Feast solves several common issues in this flow:
  1. Training-serving skew and complex data joins: Feature values often exist across multiple tables. Joining these datasets can be complicated, slow, and error-prone.
    • Feast joins these tables with battle-tested logic that ensures point-in-time correctness so future feature values do not leak to models.
    • Feast alerts users to offline / online skew with data quality monitoring.
  2. Online feature availability: At inference time, models often need access to features that aren't readily available and need to be precomputed from other data sources.
    • Feast manages deployment to a variety of online stores (e.g. DynamoDB, Redis, Google Cloud Datastore) and ensures necessary features are consistently available and freshly computed at inference time.
  3. Feature reusability and model versioning: Different teams within an organization are often unable to reuse features across projects, resulting in duplicated feature creation logic. Models have data dependencies that need to be versioned, for example when running A/B tests on model versions.
    • Feast enables discovery of and collaboration on previously used features and enables versioning of sets of features (via feature services).
    • Feast enables feature transformations so users can reuse transformation logic across online / offline use cases and across models.
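
Point-in-time correctness (issue 1 above) means each training example may only see feature values that were known at that example's timestamp. A minimal pandas sketch of the idea, using toy data and `merge_asof` as a stand-in for Feast's actual join logic:

```python
import pandas as pd

# Hypothetical feature values observed at different times for one driver
features = pd.DataFrame({
    "event_timestamp": pd.to_datetime(["2021-08-23 10:00", "2021-08-23 12:00"]),
    "driver_id": [1001, 1001],
    "conv_rate": [0.1, 0.9],
})

# Labels (the entity dataframe) with their own timestamps
labels = pd.DataFrame({
    "event_timestamp": pd.to_datetime(["2021-08-23 11:00"]),
    "driver_id": [1001],
    "label": [5],
})

# A point-in-time join picks the latest feature value at or before each
# label timestamp, so the 12:00 value never leaks into an 11:00 example.
joined = pd.merge_asof(
    labels.sort_values("event_timestamp"),
    features.sort_values("event_timestamp"),
    on="event_timestamp",
    by="driver_id",
)
print(joined["conv_rate"].tolist())  # [0.1], not 0.9
```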

Step 1: Install Feast

Install the Feast SDK and CLI using pip:
Bash
pip install feast

Step 2: Create a feature repository

Bootstrap a new feature repository using feast init from the command line.
Bash
feast init feature_repo
cd feature_repo
Output
Creating a new Feast repository in /home/Jovyan/feature_repo.
Let's take a look at the resulting demo repo. It breaks down into:
  • data/ contains raw demo parquet data
  • example.py contains demo feature definitions
  • feature_store.yaml contains a demo setup configuring where data sources are
feature_store.yaml
project: my_project
registry: data/registry.db
provider: local
online_store:
    path: data/online_store.db
example.py
# This is an example feature definition file

from datetime import timedelta

from feast import Entity, FeatureService, FeatureView, Field, FileSource, ValueType
from feast.types import Float32, Int64

# Read data from parquet files. Parquet is convenient for local development mode. For
# production, you can use your favorite DWH, such as BigQuery. See Feast documentation
# for more info.
driver_hourly_stats = FileSource(
    path="/content/feature_repo/data/driver_stats.parquet",
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)

# Define an entity for the driver. You can think of an entity as a primary key used to
# fetch features.
# An entity has a name used for later reference (in a feature view, e.g.)
# and a join key to identify the physical field name used in storage.
driver = Entity(name="driver", value_type=ValueType.INT64, join_keys=["driver_id"], description="driver id",)

# Our parquet files contain sample data that includes a driver_id column, timestamps and
# three feature columns. Here we define a Feature View that will allow us to serve this
# data to our model online.
driver_hourly_stats_view = FeatureView(
    name="driver_hourly_stats",
    entities=["driver"],  # reference entity by name
    ttl=timedelta(seconds=86400 * 1),
    schema=[
        Field(name="conv_rate", dtype=Float32),
        Field(name="acc_rate", dtype=Float32),
        Field(name="avg_daily_trips", dtype=Int64),
    ],
    online=True,
    source=driver_hourly_stats,
    tags={},
)

driver_stats_fs = FeatureService(
    name="driver_activity",
    features=[driver_hourly_stats_view]
)
The key line defining the overall architecture of the feature store is the provider. This defines where the raw data exists (for generating training data & feature values for serving), and where to materialize feature values to in the online store (for serving).
Valid values for provider in feature_store.yaml are:
  • local: use file source with SQLite/Redis
  • gcp: use BigQuery/Snowflake with Google Cloud Datastore/Redis
  • aws: use Redshift/Snowflake with DynamoDB/Redis
Note that there are many other sources Feast works with, including Azure, Hive, Trino, and PostgreSQL via community plugins. See Third party integrations for all supported datasources.
A custom setup can also be created by following the adding a custom provider guide.
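For illustration, a hypothetical feature_store.yaml for the aws provider might look like the following (a sketch only; the exact offline_store / online_store options, such as Redshift cluster settings, depend on your Feast version and deployment):

```yaml
project: my_project
registry: data/registry.db
provider: aws
online_store:
    type: dynamodb
    region: us-west-2
offline_store:
    type: redshift
    region: us-west-2
```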

Inspecting the raw data

The raw feature data we have in this demo is stored in a local parquet file. The dataset captures hourly stats of a driver in a ride-sharing app.
Python
import pandas as pd

pd.read_parquet("data/driver_stats.parquet")
Demo parquet data: data/driver_stats.parquet

Step 3: Register feature definitions and deploy your feature store

The apply command scans Python files in the current directory for feature view / entity definitions, registers the objects, and deploys infrastructure. In this example, it reads example.py (shown above) and sets up SQLite online store tables. Note that we specified SQLite as the default online store by using the local provider in feature_store.yaml.
Bash
feast apply
Output
Registered entity driver_id
Registered feature view driver_hourly_stats
Deploying infrastructure for driver_hourly_stats

Step 4: Generating training data

To train a model, we need features and labels. Often, this label data is stored separately (e.g. you have one table storing user survey results and another set of tables with feature values).
The user can query that table of labels with timestamps and pass that into Feast as an entity dataframe for training data generation. In many cases, Feast will also intelligently join relevant tables to create the relevant feature vectors.
  • Note that we include timestamps because we want the features for the same driver at various timestamps to be used in a model.
Python
from datetime import datetime, timedelta
import pandas as pd

from feast import FeatureStore

# The entity dataframe is the dataframe we want to enrich with feature values
entity_df = pd.DataFrame.from_dict(
    {
        # entity's join key -> entity values
        "driver_id": [1001, 1002, 1003],

        # label name -> label values
        "label_driver_reported_satisfaction": [1, 5, 3],

        # "event_timestamp" (reserved key) -> timestamps
        "event_timestamp": [
            datetime.now() - timedelta(minutes=11),
            datetime.now() - timedelta(minutes=36),
            datetime.now() - timedelta(minutes=73),
        ],
    }
)

store = FeatureStore(repo_path=".")

training_df = store.get_historical_features(
    entity_df=entity_df,
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
).to_df()

print("----- Feature schema -----\n")
print(training_df.info())

print()
print("----- Example features -----\n")
print(training_df.head())
Output
----- Feature schema -----

<class 'pandas.core.frame.DataFrame'>
Int64Index: 3 entries, 0 to 2
Data columns (total 6 columns):
 #   Column                              Non-Null Count  Dtype
---  ------                              --------------  -----
 0   event_timestamp                     3 non-null      datetime64[ns, UTC]
 1   driver_id                           3 non-null      int64
 2   label_driver_reported_satisfaction  3 non-null      int64
 3   conv_rate                           3 non-null      float32
 4   acc_rate                            3 non-null      float32
 5   avg_daily_trips                     3 non-null      int32
dtypes: datetime64[ns, UTC](1), float32(2), int32(1), int64(2)
memory usage: 132.0 bytes
None

----- Example features -----

                   event_timestamp  driver_id  ...  acc_rate  avg_daily_trips
0 2021-08-23 15:12:55.489091+00:00       1003  ...  0.120588              938
1 2021-08-23 15:49:55.489089+00:00       1002  ...  0.504881              635
2 2021-08-23 16:14:55.489075+00:00       1001  ...  0.138416              606

[3 rows x 6 columns]
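
From here, training_df can be split into model inputs and the label column with ordinary pandas (a sketch with made-up values; the column names mirror the output above):

```python
import pandas as pd

# Hypothetical training_df with the same columns as the output above
training_df = pd.DataFrame({
    "event_timestamp": pd.to_datetime(["2021-08-23 15:12", "2021-08-23 15:49"]),
    "driver_id": [1003, 1002],
    "label_driver_reported_satisfaction": [3, 5],
    "conv_rate": [0.7, 0.4],
    "acc_rate": [0.12, 0.50],
    "avg_daily_trips": [938, 635],
})

# Split into model inputs X and target y; timestamps and join keys are
# dropped since they are not model features.
target = "label_driver_reported_satisfaction"
X = training_df.drop(columns=["event_timestamp", "driver_id", target])
y = training_df[target]
print(list(X.columns))  # ['conv_rate', 'acc_rate', 'avg_daily_trips']
```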

Step 5: Load features into your online store

We now serialize the latest values of features since the beginning of time to prepare for serving (note: materialize-incremental serializes all new features since the last materialize call).
Bash
CURRENT_TIME=$(date -u +"%Y-%m-%dT%H:%M:%S")
feast materialize-incremental $CURRENT_TIME
Output
Materializing 1 feature views to 2021-08-23 16:25:46+00:00 into the sqlite online store.

driver_hourly_stats from 2021-08-22 16:25:47+00:00 to 2021-08-23 16:25:46+00:00:
100%|████████████████████████████████████████████| 5/5 [00:00<00:00, 592.05it/s]
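
Materialization writes only the most recent feature row per entity key into the online store. A toy pandas sketch of that "latest value wins" behavior (hypothetical rows, not Feast's actual implementation):

```python
import pandas as pd

# Several feature rows per driver, observed at different times
rows = pd.DataFrame({
    "driver_id": [1001, 1001, 1002],
    "event_timestamp": pd.to_datetime(
        ["2021-08-22 10:00", "2021-08-23 10:00", "2021-08-23 09:00"]
    ),
    "conv_rate": [0.2, 0.8, 0.5],
})

# Keep only the newest row per driver_id, mirroring what lands in the
# online store after materialization.
latest = (
    rows.sort_values("event_timestamp")
    .groupby("driver_id", as_index=False)
    .last()
)
print(latest["conv_rate"].tolist())  # [0.8, 0.5]
```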

Step 6: Fetching feature vectors for inference

At inference time, we need to quickly read the latest feature values for different drivers (which otherwise might have existed only in batch sources) from the online feature store using get_online_features(). These feature vectors can then be fed to the model.
Python
from pprint import pprint
from feast import FeatureStore

store = FeatureStore(repo_path=".")

feature_vector = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[
        # {join_key: entity_value}
        {"driver_id": 1004},
        {"driver_id": 1005},
    ],
).to_dict()

pprint(feature_vector)
Output
{
    'acc_rate': [0.5732735991477966, 0.7828438878059387],
    'avg_daily_trips': [33, 984],
    'conv_rate': [0.15498852729797363, 0.6263588070869446],
    'driver_id': [1004, 1005]
}
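
The to_dict() result is column-oriented (one list per feature). If your model expects one dict per entity, a small reshaping step converts it (plain Python; the values below are copied from the output above):

```python
# Column-oriented feature vector, as returned by to_dict() above
feature_vector = {
    "driver_id": [1004, 1005],
    "conv_rate": [0.15498852729797363, 0.6263588070869446],
    "acc_rate": [0.5732735991477966, 0.7828438878059387],
    "avg_daily_trips": [33, 984],
}

# Transpose into one dict per entity, ready to feed to a model row by row
rows = [
    {name: values[i] for name, values in feature_vector.items()}
    for i in range(len(feature_vector["driver_id"]))
]
print(rows[0])
```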

Step 7: Using a feature service to fetch online features instead

You can also use feature services to manage multiple features and to decouple feature view definitions from the features needed by end applications. The feature store can also be used to fetch either online or historical features using the same API below. More information can be found here.
Python
from feast import FeatureStore

feature_store = FeatureStore('.')  # Initialize the feature store

feature_service = feature_store.get_feature_service("driver_activity")
features = feature_store.get_online_features(
    features=feature_service,
    entity_rows=[
        # {join_key: entity_value}
        {"driver_id": 1004},
        {"driver_id": 1005},
    ],
).to_dict()
Output
{
    'acc_rate': [0.5732735991477966, 0.7828438878059387],
    'avg_daily_trips': [33, 984],
    'conv_rate': [0.15498852729797363, 0.6263588070869446],
    'driver_id': [1004, 1005]
}

Step 8: Browse your features with the Web UI (experimental)

View all registered features, data sources, entities, and feature services with the Web UI.
One of the ways to view this is with the feast ui command.

Next steps