Speak to us: Have a question, feature request, idea, or just looking to speak to a real person? Set up a meeting with a Feast maintainer over here!
Slack: Feel free to ask questions or say hello!
Mailing list: We have both a user and developer mailing list.
Feast users should join feast-discuss@googlegroups.com group by clicking here.
Feast developers should join feast-dev@googlegroups.com group by clicking here.
Google Folder: This folder is used as a central repository for all Feast resources. For example:
Design proposals in the form of Request for Comments (RFC).
User surveys and meeting minutes.
Slide decks of conferences our contributors have spoken at.
Feast GitHub Repository: Find the complete Feast codebase on GitHub.
Feast Linux Foundation Wiki: Our LFAI wiki page contains links to resources for contributors and maintainers.
Slack: Need to speak to a human? Come ask a question in our Slack channel (link above).
GitHub Issues: Found a bug or need a feature? Create an issue on GitHub.
StackOverflow: Need to ask a question on how to use Feast? We also monitor and respond to StackOverflow.
We have a user and contributor community call every two weeks (Asia & US friendly).
Please join the above Feast user groups in order to see calendar invites to the community calls.
Tuesday 18:00 to 18:30 (US, Asia)
Tuesday 10:00 am to 10:30 am (US, Europe)
Meeting notes: https://bit.ly/feast-notes
The list below contains the functionality that contributors are planning to develop for Feast.
Items below that are in development (or planned for development) will be indicated in parentheses.
We welcome contribution to all items in the roadmap!
Want to influence our roadmap and prioritization? Submit your feedback to this form.
Want to speak to a Feast contributor? We are more than happy to jump on a call. Please schedule a time using Calendly.
Data Sources
Offline Stores
Online Stores
Streaming
Feature Engineering
Deployments
Feature Serving
Data Quality Management
Feature Discovery and Governance
Feast (Feature Store) is an operational data system for managing and serving machine learning features to models in production. Feast is able to serve feature data to models from a low-latency online store (for real-time prediction) or from an offline store (for scale-out batch scoring or model training).
Models need consistent access to data: Machine Learning (ML) systems built on traditional data infrastructure are often coupled to databases, object stores, streams, and files. A result of this coupling, however, is that any change in data infrastructure may break dependent ML systems. Another challenge is that dual implementations of data retrieval for training and serving can lead to inconsistencies in data, which in turn can lead to training-serving skew.
Feast decouples your models from your data infrastructure by providing a single data access layer that abstracts feature storage from feature retrieval. Feast also provides a consistent means of referencing feature data for retrieval, and therefore ensures that models remain portable when moving from training to serving.
Deploying new features into production is difficult: Many ML teams consist of members with different objectives. Data scientists, for example, aim to deploy features into production as soon as possible, while engineers want to ensure that production systems remain stable. These differing objectives can create organizational friction that slows time-to-market for new features.
Feast addresses this friction by providing both a centralized registry to which data scientists can publish features and a battle-hardened serving layer. Together, these enable non-engineering teams to ship features into production with minimal oversight.
Models need point-in-time correct data: ML models in production require a view of data consistent with the one on which they are trained, otherwise the accuracy of these models could be compromised. Despite this need, many data science projects suffer from inconsistencies introduced by future feature values being leaked to models during training.
Feast solves the challenge of data leakage by providing point-in-time correct feature retrieval when exporting feature datasets for model training.
Features aren't reused across projects: Different teams within an organization are often unable to reuse features across projects. The siloed nature of development and the monolithic design of end-to-end ML systems contribute to duplication of feature creation and usage across teams and projects.
Feast addresses this problem by introducing feature reuse through a centralized registry. This registry enables multiple teams working on different projects not only to contribute features, but also to reuse these same features. With Feast, data scientists can start new ML projects by selecting previously engineered features from a centralized registry, and are no longer required to develop new features for each project.
Feature engineering: We aim for Feast to support light-weight feature engineering as part of our API.
Feature discovery: We also aim for Feast to include a first-class user interface for exploring and discovering entities and features.
Feature validation: We additionally aim for Feast to improve support for statistics generation of feature data and subsequent validation of these statistics. Current support is limited.
ETL or ELT system: Feast is not (and does not plan to become) a general purpose data transformation or pipelining system. Feast plans to include a light-weight feature engineering toolkit, but we encourage teams to integrate Feast with upstream ETL/ELT systems that are specialized in transformation.
Data warehouse: Feast is not a replacement for your data warehouse or the source of truth for all transformed data in your organization. Rather, Feast is a light-weight downstream layer that can serve data from an existing data warehouse (or other data sources) to models in production.
Data catalog: Feast is not a general purpose data catalog for your organization. Feast is purely focused on cataloging features for use in ML pipelines or systems, and only to the extent of facilitating the reuse of features.
The best way to learn Feast is to use it. Head over to our Quickstart and try it out!
Explore the following resources to get started with Feast:
Quickstart is the fastest way to get started with Feast
Concepts describes all important Feast API concepts
Architecture describes Feast's overall architecture.
Tutorials shows full examples of using Feast in machine learning applications.
Running Feast with GCP/AWS provides a more in-depth guide to using Feast.
Reference contains detailed API and design documents.
Contributing contains resources for anyone who wants to contribute to Feast.
The data source refers to raw underlying data (e.g. a table in BigQuery).
Feast uses a time-series data model to represent data. This data model is used to interpret feature data in data sources in order to build training datasets or when materializing features into an online store.
Below is an example data source with a single entity (driver) and two features (trips_today and rating).
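As a hedged illustration, the sketch below shows how such a data source and a feature view over it might be declared in Python; the file path, TTL, and exact constructor arguments are assumptions and can differ between Feast versions.

```python
from datetime import timedelta

from feast import Entity, Feature, FeatureView, FileSource, ValueType

# Hypothetical parquet file containing the time-series data described above.
driver_stats_source = FileSource(
    path="data/driver_stats.parquet",
    event_timestamp_column="event_timestamp",
    created_timestamp_column="created",
)

driver = Entity(name="driver", value_type=ValueType.INT64, description="Driver ID")

driver_stats_view = FeatureView(
    name="driver_stats",
    entities=["driver"],
    ttl=timedelta(days=1),
    features=[
        Feature(name="trips_today", dtype=ValueType.INT64),
        Feature(name="rating", dtype=ValueType.FLOAT),
    ],
    batch_source=driver_stats_source,
)
```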
The top-level namespace within Feast is a project. Users define one or more feature views within a project. Each feature view contains one or more features that relate to a specific entity. A feature view must always have a data source, which in turn is used during the generation of training datasets and when materializing feature values into the online store.
Projects provide complete isolation of feature stores at the infrastructure level. This is accomplished through resource namespacing, e.g., prefixing table names with the associated project. Each project should be considered a completely separate universe of entities and features. It is not possible to retrieve features from multiple projects in a single request. We recommend having a single feature store and a single project per environment (dev, staging, prod).
Projects are currently being supported for backward compatibility reasons. Projects may change in the future as we simplify the Feast API.
In this tutorial we will:
Deploy a local feature store with a Parquet file offline store and SQLite online store.
Build a training dataset using our time series features from our Parquet files.
Materialize feature values from the offline store into the online store.
Read the latest features from the online store for inference.
You can run this tutorial in Google Colab or run it on your localhost, following the guided steps below.
In this tutorial, we use feature stores to generate training data and power online model inference for a ride-sharing driver satisfaction prediction model. Feast solves several common issues in this flow:
Training-serving skew and complex data joins: Feature values often exist across multiple tables. Joining these datasets can be complicated, slow, and error-prone.
Feast joins these tables with battle-tested logic that ensures point-in-time correctness so future feature values do not leak to models.
*Upcoming: Feast alerts users to offline / online skew with data quality monitoring.
Online feature availability: At inference time, models often need access to features that aren't readily available and need to be precomputed from other data sources.
Feast manages deployment to a variety of online stores (e.g. DynamoDB, Redis, Google Cloud Datastore) and ensures necessary features are consistently available and freshly computed at inference time.
Feature reusability and model versioning: Different teams within an organization are often unable to reuse features across projects, resulting in duplicate feature creation logic. Models have data dependencies that need to be versioned, for example when running A/B tests on model versions.
Feast enables discovery of and collaboration on previously used features and enables versioning of sets of features (via feature services).
*Upcoming: Feast enables feature transformation so users can re-use transformation logic across online / offline use cases and across models.
Install the Feast SDK and CLI using pip:
Bootstrap a new feature repository using feast init from the command line.
Let's take a look at the resulting demo repo itself. It breaks down into:
data/ contains raw demo parquet data
example.py contains demo feature definitions
feature_store.yaml contains a demo setup configuring where data sources are
The key line defining the overall architecture of the feature store is the provider. This defines where the raw data exists (for generating training data & feature values for serving), and where to materialize feature values to in the online store (for serving).
Valid values for provider in feature_store.yaml are:
local: use file source / SQLite
gcp: use BigQuery / Google Cloud Datastore
aws: use Redshift / DynamoDB
A custom setup (e.g. using the built-in support for Redis) can be made by following Creating a custom provider.
The apply command scans Python files in the current directory for feature view/entity definitions, registers the objects, and deploys infrastructure. In this example, it reads example.py (shown again below for convenience) and sets up SQLite online store tables. Note that we had specified SQLite as the default online store by using the local provider in feature_store.yaml.
To train a model, we need features and labels. Often, this label data is stored separately (e.g. you have one table storing user survey results and another set of tables with feature values).
The user can query that table of labels with timestamps and pass that into Feast as an entity dataframe for training data generation. In many cases, Feast will also intelligently join relevant tables to create the relevant feature vectors.
Note that we include timestamps because we want the features for the same driver at various timestamps to be used in a model.
We now serialize the latest values of features since the beginning of time to prepare for serving (note: materialize-incremental serializes all new features since the last materialize call).
At inference time, we need to quickly read the latest feature values for different drivers (which otherwise might have existed only in batch sources) from the online feature store using get_online_features(). These feature vectors can then be fed to the model.
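A minimal sketch of this lookup is shown below; the feature names assume the driver_hourly_stats demo feature view created by feast init, and the exact parameter names may vary slightly between Feast versions.

```python
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Look up the latest feature values for two drivers from the online store.
feature_vector = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
    ],
    entity_rows=[{"driver_id": 1001}, {"driver_id": 1002}],
).to_dict()

print(feature_vector)
```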
In this tutorial, we focus on a local deployment. For a more in-depth guide on how to use Feast with GCP or AWS deployments, see Running Feast with GCP/AWS.
Read the Concepts page to understand the Feast data model.
Read the Architecture page.
Check out our Tutorials section for more examples on how to use Feast.
Follow our Running Feast with GCP/AWS guide for a more in-depth tutorial on using Feast.
Join other Feast users and contributors in Slack and become part of the community!
A feature service is an object that represents a logical group of features from one or more feature views. Feature services allow features from within a feature view to be used as needed by an ML model. Users can expect to create one feature service per model, allowing for tracking of the features used by models.
Feature services are used during:
The generation of training datasets when querying feature views in order to find historical feature values. A single training dataset may consist of features from multiple feature views.
Retrieval of features from the online store. The features retrieved from the online store may also belong to multiple feature views.
Applying a feature service does not result in an actual service being deployed.
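As a hedged sketch, a feature service is defined next to your feature views and registered with feast apply; the imported feature view name below assumes the feast init demo repo and is otherwise hypothetical.

```python
from feast import FeatureService

# Assumes a FeatureView named driver_hourly_stats_view is defined in the
# repo's example.py; adjust the import to match your own repository.
from example import driver_hourly_stats_view

driver_model_v1 = FeatureService(
    name="driver_model_v1",
    features=[driver_hourly_stats_view],
)
```

At retrieval time, the feature service can then be requested by name instead of listing individual feature references (the exact retrieval call depends on your Feast version).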
A dataset is a collection of rows that is produced by a historical retrieval from Feast in order to train a model. A dataset is produced by a join from one or more feature views onto an entity dataframe. Therefore, a dataset may consist of features from multiple feature views.
Dataset vs Feature View: Feature views contain the schema of data and a reference to where data can be found (through its data source). Datasets are the actual data manifestation of querying those data sources.
Dataset vs Data Source: Datasets are the output of historical retrieval, whereas data sources are the inputs. One or more data sources can be used in the creation of a dataset.
Feature references uniquely identify feature values in Feast. The structure of a feature reference in string form is as follows: <feature_view>:<feature>
Feature references are used for the retrieval of features from Feast:
It is possible to retrieve features from multiple feature views with a single request, and Feast is able to join features from multiple tables in order to build a training dataset. However, it is not possible to reference (or retrieve) features from multiple projects at the same time.
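For example, a feature list built from references spanning two feature views might look like the sketch below (the view and feature names are illustrative):

```python
# Each reference has the form "<feature_view>:<feature>". Both views must
# relate to the same entity and belong to the same project.
features = [
    "driver_hourly_stats:conv_rate",
    "driver_hourly_stats:avg_daily_trips",
    "driver_trips:average_daily_rides",
]
```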
The timestamp on which an event occurred, as found in a feature view's data source. The event timestamp describes the event time at which a feature was observed or generated.
Event timestamps are used during point-in-time joins to ensure that the latest feature values are joined from feature views onto entity rows. Event timestamps are also used to ensure that old feature values aren't served to models during online serving.
A feature view is an object that represents a logical group of time-series feature data as it is found in a data source. Feature views consist of one or more entities, features, and a data source. Feature views allow Feast to model your existing feature data in a consistent way in both an offline (training) and online (serving) environment.
Feature views are used during:
The generation of training datasets by querying the data source of feature views in order to find historical feature values. A single training dataset may consist of features from multiple feature views.
Loading of feature values into an online store. Feature views determine the storage schema in the online store.
Retrieval of features from the online store. Feature views provide the schema definition to Feast in order to look up features from the online store.
Feast does not generate feature values. It acts as the ingestion and serving system. The data sources described within feature views should reference feature values in their already computed form.
A feature is an individual measurable property observed on an entity. For example, a feature of a customer entity could be the number of transactions they have made in an average month.
Features are defined as part of feature views. Since Feast does not transform data, a feature is essentially a schema that only contains a name and a type:
Together with data sources, they indicate to Feast where to find your feature values, e.g., in a specific parquet file or BigQuery table. Feature definitions are also used when reading features from the feature store, using feature references.
Feature names must be unique within a feature view.
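A hedged sketch of a feature declaration inside a feature view (the value type enum may differ between Feast versions):

```python
from feast import Feature, ValueType

# A feature is just a named, typed column; Feast does not compute its values.
trips_today = Feature(name="trips_today", dtype=ValueType.INT64)
```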
Create Batch Features: ELT/ETL systems like Spark and SQL are used to transform data in the batch store.
Feast Apply: The user (or CI) publishes version-controlled feature definitions using feast apply. This CLI command updates infrastructure and persists definitions in the object store registry.
Feast Materialize: The user (or scheduler) executes feast materialize, which loads features from the offline store into the online store.
Model Training: A model training pipeline is launched. It uses the Feast Python SDK to retrieve a training dataset and trains a model.
Get Historical Features: Feast exports a point-in-time correct training dataset based on the list of features and entity dataframe provided by the model training pipeline.
Deploy Model: The trained model binary (and list of features) are deployed into a model serving system. This step is not executed by Feast.
Prediction: A backend system makes a request for a prediction from the model serving service.
Get Online Features: The model serving service makes a request to the Feast Online Serving service for online features using a Feast SDK.
A complete Feast deployment contains the following components:
Feast Registry: An object store (GCS, S3) based registry used to persist feature definitions that are registered with the feature store. Systems can discover feature data by interacting with the registry through the Feast SDK.
Feast Python SDK/CLI: The primary user facing SDK. Used to:
Manage version controlled feature definitions.
Materialize (load) feature values into the online store.
Build and retrieve training datasets from the offline store.
Retrieve online features.
Online Store: The online store is a database that stores only the latest feature values for each entity. The online store is populated by materialization jobs.
Offline Store: The offline store persists batch data that has been ingested into Feast. This data is used for producing training datasets. Feast does not manage the offline store directly, but runs queries against it.
Java and Go Clients are also available for online feature retrieval.
Feast users use Feast to manage two important sets of configuration:
Configuration about how to run Feast on your infrastructure
Feature definitions
With Feast, the above configuration can be written declaratively and stored as code in a central location. This central location is called a feature repository. The feature repository is the declarative source of truth for what the desired state of a feature store should be.
The Feast CLI uses the feature repository to configure, deploy, and manage your feature store.
An example structure of a feature repository is shown below:
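A minimal sketch of such a repository, using the file names produced by the feast init demo (the exact contents will depend on your project):

```
feature_repo/
├── feature_store.yaml        # feature store configuration (provider, registry, stores)
├── example.py                # feature definitions (entities, feature views)
└── data/
    └── driver_stats.parquet  # sample raw data referenced by the demo FileSource
```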
For more details, see the reference.
The Feast online store is used for low-latency online feature value lookups. Feature values are loaded into the online store from data sources in feature views using the materialize command.
The storage schema of features within the online store mirrors that of the data source used to populate the online store. One key difference between the online store and data sources is that only the latest feature values are stored per entity key. No historical values are stored.
Example batch data source
Once the above data source is materialized into Feast (using feast materialize), the feature values will be stored as follows:
Feast uses offline stores as storage and compute systems. Offline stores store historic time-series feature values. Feast does not generate these features, but instead uses the offline store as the interface for querying existing features in your organization.
Offline stores are used primarily for two reasons:
Building training datasets from time-series features.
Materializing (loading) features from the offline store into an online store in order to serve those features at low latency for prediction.
Offline stores are configured through the feature_store.yaml file. When building training datasets or materializing features into an online store, Feast will use the configured offline store along with the data sources you have defined as part of feature views to execute the necessary data operations.
It is not possible to query all data sources from all offline stores, and only a single offline store can be used at a time. For example, it is not possible to query a BigQuery table from a File offline store, nor is it possible for a BigQuery offline store to query files from your local file system.
Please see the reference for more details on configuring offline stores.
The Feast feature registry is a central catalog of all the feature definitions and their related metadata. It allows data scientists to search, discover, and collaborate on new features.
Each Feast deployment has a single feature registry. Feast only supports file-based registries today, but supports three different backends:
Local: Used as a local backend for storing the registry during development
S3: Used as a centralized backend for storing the registry on AWS
GCS: Used as a centralized backend for storing the registry on GCP
The feature registry is updated during different operations when using Feast. More specifically, objects within the registry (entities, feature views, feature services) are updated when running apply from the Feast CLI, but metadata about objects can also be updated during operations like materialization.
Users interact with a feature registry through the Feast SDK. Listing all feature views:
Or retrieving a specific feature view:
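A hedged sketch of both operations using the Python SDK (the repo path and feature view name are illustrative):

```python
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# List all feature views registered in the registry.
for fv in store.list_feature_views():
    print(fv.name)

# Retrieve a specific feature view by name.
driver_stats_view = store.get_feature_view("driver_hourly_stats")
```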
The feature registry is a Protobuf representation of Feast metadata. This Protobuf file can be read programmatically from other programming languages, but no compatibility guarantees are made on the internal structure of the registry.
A provider is an implementation of a feature store using specific feature store components (e.g. offline store, online store) targeting a specific environment (e.g. GCP stack).
Providers orchestrate various components (offline store, online store, infrastructure, compute) inside an environment. For example, the gcp provider supports BigQuery as an offline store and Datastore as an online store, ensuring that these components can work together seamlessly. Feast has three built-in providers (local, gcp, and aws) with default configurations that make it easy for users to start a feature store in a specific environment. These default configurations can be overridden easily. For instance, you can use the gcp provider but use Redis as the online store instead of Datastore.
If the built-in providers are not sufficient, you can create your own custom provider. Please see Creating a custom provider for more details.
Please see the reference for configuring providers.
The quickstart is the easiest way to learn about Feast. For more detailed tutorials, please check out the tutorials page.
Feature tables from Feast 0.9 have been renamed to feature views in Feast 0.10+. For more details, please see the discussion here.
Feast currently does not support any access control other than the access control required for the Provider's environment (for example, GCP and AWS permissions).
Feast is actively working on this right now. Please reach out to the Feast team if you're interested in giving feedback!
A feature view can be defined with multiple entities. Since each entity has a unique join_key, using multiple entities will achieve the effect of a composite key.
Please see a detailed comparison of Feast vs. Tecton here. For another comparison, please see here.
Feast is designed to work at scale and support low latency online serving. Benchmarks will be released soon, and active work is underway to support very latency-sensitive use cases.
Yes. Specifically:
Simple lists / dense embeddings:
BigQuery supports list types natively
Redshift does not support list types, so you'll need to serialize these features into strings (e.g. json or protocol buffers)
Feast's implementation of online stores serializes features into Feast protocol buffers and supports list types (see reference)
Sparse embeddings (e.g. one hot encodings)
One way to do this efficiently is to have a protobuf or string representation of a sparse tensor (https://www.tensorflow.org/guide/sparse_tensor).
The list of supported offline and online stores can be found here and here, respectively. The roadmap indicates the stores for which we are planning to add support. Finally, our Provider abstraction is built to be extensible, so you can plug in your own implementations of offline and online stores. Please see more details about custom providers here.
Please follow the instructions here.
Yes. There are two ways to use S3 in Feast:
Using Redshift as a data source via Spectrum (AWS tutorial), and then continuing with the Running Feast with GCP/AWS guide. See a presentation we did on this at our apply() meetup.
Using the s3_endpoint_override in a FileSource data source. This endpoint is more suitable for quick proof of concepts that won't necessarily scale for production use cases.
Feast does not support Spark natively. However, you can create a custom provider that will support Spark, which can help with more scalable materialization and ingestion.
Please see the roadmap.
Feast 0.10+ is much lighter weight and more extensible than Feast 0.9. It is designed to be simple to install and use. Please see this document for more details.
Please see this document. If you have any questions or suggestions, feel free to leave a comment on the document!
For more details on contributing to the Feast community, see here and here.
Feast Core and Feast Serving were both part of Feast Java. We plan to support Feast Serving. We will not support Feast Core; instead we will support our object store based registry. We will not support Feast Spark. For more details on what we plan on supporting, please see the roadmap.
Don't see your question?
We encourage you to ask questions on Slack or Github. Even better, once you get an answer, add the answer to this FAQ via a pull request!
These Feast tutorials showcase how to use Feast to simplify end-to-end model training / serving.
Credit scoring models are used to approve or reject loan applications. In this tutorial we will build a real-time credit scoring system on AWS.
When individuals apply for loans from banks and other credit providers, the decision to approve a loan application is often made through a statistical model. This model uses information about a customer to determine the likelihood that they will repay or default on a loan, in a process called credit scoring.
In this example, we will demonstrate how a real-time credit scoring system can be built using Feast and Scikit-Learn on AWS, using feature data from S3.
This real-time system accepts a loan request from a customer and responds within 100ms with a decision on whether their loan has been approved or rejected.
This end-to-end tutorial will take you through the following steps:
Deploying S3 with Parquet as your primary data source, containing both loan features and zip code features
Deploying Redshift as the interface Feast uses to build training datasets
Registering your features with Feast and configuring DynamoDB for online serving
Building a training dataset with Feast to train your credit scoring model
Loading feature values from S3 into DynamoDB
Making online predictions with your credit scoring model using features from DynamoDB
Making a prediction using a linear regression model is a common use case in ML. This model predicts if a driver will complete a trip based on features ingested into Feast.
In this example, you'll learn how to use some of the key functionality in Feast. The tutorial runs in both local mode and on the Google Cloud Platform (GCP). For GCP, you must have access to a GCP project already, including read and write permissions to BigQuery.
This tutorial guides you on how to use Feast with Scikit-learn. You will learn how to:
Train a model locally (on your laptop) using data from BigQuery
Test the model for online inference using SQLite (for fast iteration)
Test the model for online inference using Firestore (for production use)
Try it and let us know what you think!
A common use case in machine learning, this tutorial is an end-to-end, production-ready fraud prediction system. It predicts in real-time whether a transaction made by a user is fraudulent.
Throughout this tutorial, we’ll walk through the creation of a production-ready fraud prediction system. A prediction is made in real-time as the user makes the transaction, so we need to be able to generate a prediction at low latency.
Our end-to-end example will perform the following workflows:
Computing and backfilling feature data from raw data
Building point-in-time correct training datasets from feature data and training a model
Making online predictions from feature data
Here's a high-level picture of our system architecture on Google Cloud Platform (GCP):
A feature repository is a directory that contains the configuration of the feature store and individual features. This configuration is written as code (Python/YAML) and it's highly recommended that teams track it centrally using git. See Feature Repository for a detailed explanation of feature repositories.
The easiest way to create a new feature repository is to use the feast init command:
The init command creates a Python file with feature definitions, sample data, and a Feast configuration file for local development:
Enter the directory:
You can now use this feature repository for development. You can try the following:
Run feast apply to apply these definitions to Feast.
Edit the example feature definitions in example.py and run feast apply again to change feature definitions.
Initialize a git repository in the same directory and check the feature repository into version control.
Install Feast using pip:
Install Feast with GCP dependencies (required when using BigQuery or Firestore):
Install Feast with AWS dependencies (required when using Redshift or DynamoDB):
The Feast CLI can be used to deploy a feature store to your infrastructure, spinning up any necessary persistent resources like buckets or tables in data stores. The deployment target and effects depend on the provider that has been configured in your feature_store.yaml file, as well as the feature definitions found in your feature repository.
Here we'll be using the example repository we created in the previous guide, Create a feature repository. You can re-create it by running feast init in a new directory.
To have Feast deploy your infrastructure, run feast apply from your command line while inside a feature repository:
Depending on whether the feature repository is configured to use a local provider or one of the cloud providers like GCP or AWS, it may take from a couple of seconds to a minute to run to completion.
At this point, no data has been materialized to your online store. feast apply simply registers the feature definitions with Feast and spins up any necessary infrastructure such as tables. To load data into the online store, run feast materialize. See Load data into the online store for more details.
If you need to clean up the infrastructure created by feast apply, use the teardown command.
Warning: teardown is an irreversible command and will remove all feature store infrastructure. Proceed with caution!
Feast allows users to build a training dataset from time-series feature data that already exists in an offline store. Users are expected to provide a list of features to retrieve (which may span multiple feature views), and a dataframe to join the resulting features onto. Feast will then execute a point-in-time join of multiple feature views onto the provided dataframe, and return the full resulting dataframe.
Please ensure that you have created a feature repository and that you have registered (applied) your feature views with Feast.
Start by defining the feature references (e.g., driver_trips:average_daily_rides) for the features that you would like to retrieve from the offline store. These features can come from multiple feature views. The only requirement is that the feature views that make up the feature references have the same entity (or composite entity), and that they are located in the same offline store.
3. Create an entity dataframe
An entity dataframe is the target dataframe on which you would like to join feature values. The entity dataframe must contain a timestamp column called event_timestamp and all entities (primary keys) necessary to join feature tables onto. All entities found in feature views that are being joined onto the entity dataframe must be found as a column on the entity dataframe.
It is possible to provide entity dataframes as either a Pandas dataframe or a SQL query.
Pandas:
In the example below we create a Pandas based entity dataframe that has a single row with an event_timestamp column and a driver_id entity column. Pandas based entity dataframes may need to be uploaded into an offline store, which may result in longer wait times compared to a SQL based entity dataframe.
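A hedged sketch of such an entity dataframe (the driver id and timestamp are illustrative values):

```python
from datetime import datetime

import pandas as pd

entity_df = pd.DataFrame(
    {
        "event_timestamp": [datetime(2021, 4, 12, 10, 59, 42)],
        "driver_id": [1001],
    }
)
```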
SQL (Alternative):
Below is an example of an entity dataframe built from a BigQuery SQL query. It is only possible to use this query when all feature views being queried are available in the same offline store (BigQuery).
4. Launch historical retrieval
Once the feature references and an entity dataframe are defined, it is possible to call get_historical_features(). This method launches a job that executes a point-in-time join of features from the offline store onto the entity dataframe. Once completed, a job reference will be returned. This job reference can then be converted to a Pandas dataframe by calling to_df().
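A hedged end-to-end sketch is shown below; the feature references are illustrative and the parameter names can differ slightly between Feast versions.

```python
from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Entity dataframe with the entities and timestamps to join features onto.
entity_df = pd.DataFrame(
    {"event_timestamp": [datetime(2021, 4, 12, 10, 59, 42)], "driver_id": [1001]}
)

# Launch the point-in-time join and convert the result to a Pandas dataframe.
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
).to_df()

print(training_df.head())
```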
Feast allows users to load their feature data into an online store in order to serve the latest features to models for online prediction.
Before proceeding, please ensure that you have applied (registered) the feature views that should be materialized.
The materialize command allows users to materialize features over a specific historical time range into the online store.
The above command will query the batch sources for all feature views over the provided time range, and load the latest feature values into the configured online store.
It is also possible to materialize for specific feature views by using the -v / --views argument.
The materialize command is completely stateless. It requires the user to provide the time ranges that will be loaded into the online store. This command is best used from a scheduler that tracks state, like Airflow.
For simplicity, Feast also provides a materialize command that will only ingest new data that has arrived in the offline store. Unlike materialize, materialize-incremental will track the state of previous ingestion runs inside of the feature registry.
The example command below will load only new data that has arrived for each feature view up to the end date and time (2021-04-08T00:00:00).
The materialize-incremental command functions similarly to materialize in that it loads data over a specific time range for all feature views (or the selected feature views) into the online store.
Unlike materialize, materialize-incremental automatically determines the start time from which to load features from batch sources of each feature view. The first time materialize-incremental is executed it will set the start time to the oldest timestamp of each data source, and the end time as the one provided by the user. For each run of materialize-incremental, the end timestamp will be tracked.
Subsequent runs of materialize-incremental will then set the start time to the end time of the previous run, thus only loading new data that has arrived into the online store. Note that the end time that is tracked for each run is at the feature view level, not globally for all feature views, i.e., different feature views may have different periods that have been materialized into the online store.
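Both operations are also available from the Python SDK; a hedged sketch (the dates and repo path are illustrative):

```python
from datetime import datetime, timedelta

from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Equivalent of `feast materialize`: load a fixed historical window.
store.materialize(
    start_date=datetime.utcnow() - timedelta(days=1),
    end_date=datetime.utcnow(),
)

# Equivalent of `feast materialize-incremental`: load everything new since
# the previous run, up to the given end date.
store.materialize_incremental(end_date=datetime.utcnow())
```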
The Feast Python SDK allows users to retrieve feature values from an online store. This API is used to look up feature values at low latency during model serving in order to make online predictions.
Online stores only maintain the current state of features, i.e. the latest feature values. No historical data is stored or served.
Please ensure that you have materialized (loaded) your feature values into the online store before starting.
Create a list of features that you would like to retrieve. This list typically comes from the model training step and should accompany the model binary.
Next, we will create a feature store object and call get_online_features(), which reads the relevant feature values directly from the online store.
Please see the Data Source page for an explanation of data sources.
An entity is a collection of semantically related features. Users define entities to map to the domain of their use case. For example, a ride-hailing service could have customers and drivers as their entities, which group related features that correspond to these customers and drivers.
Entities are defined as part of feature views. Entities are used to identify the primary key on which feature values should be stored and retrieved. These keys are used during the lookup of feature values from the online store and the join process in point-in-time joins. It is possible to define composite entities (more than one entity object) in a feature view.
Entities should be reused across feature views.
A related concept is an entity key. These are one or more entity values that uniquely describe a feature view record. In the case of an entity (like a driver) that only has a single entity field, the entity is an entity key. However, it is also possible for an entity key to consist of multiple entity values. For example, a feature view with the composite entity of (customer, country) might have an entity key of (1001, 5).
Entity keys act as primary keys. They are used during the lookup of features from the online store, and they are also used to match feature rows across feature views during point-in-time joins.
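A hedged sketch of an entity definition (the join key and value type are illustrative, and the constructor arguments can differ between Feast versions):

```python
from feast import Entity, ValueType

# The join_key is the column used to store and look up feature values for
# this entity, and to match rows during point-in-time joins.
driver = Entity(
    name="driver",
    join_key="driver_id",
    value_type=ValueType.INT64,
    description="Driver identifier",
)
```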
All Feast operations execute through a provider. This includes operations like materializing data from the offline store into the online store, updating infrastructure like databases, launching streaming ingestion jobs, building training datasets, and reading features from the online store.
Custom providers allow Feast users to extend Feast to execute any custom logic. Examples include:
Launching custom streaming ingestion jobs (Spark, Beam)
Launching custom batch ingestion (materialization) jobs (Spark, Beam)
Adding custom validation to feature repositories during feast apply
Adding custom infrastructure setup logic which runs during feast apply
Extending Feast commands with in-house metrics, logging, or tracing
Feast comes with built-in providers, e.g., LocalProvider, GcpProvider, and AwsProvider. However, users can develop their own providers by creating a class that implements the contract in the Provider base class.
This guide also comes with a fully functional example repository. Please have a look at the repository for a representative example of what a custom provider looks like, or fork the repository when creating your own provider.
The fastest way to add custom logic to Feast is to extend an existing provider. The most generic provider is the LocalProvider, which contains no cloud-specific logic. The guide that follows will extend the LocalProvider with operations that print text to the console. It is up to you as a developer to add your custom code to the provider methods, but the guide below will provide the necessary scaffolding to get you started.
The first step is to define a custom provider class. We've created the MyCustomProvider below.
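A hedged sketch of such a class is shown below. The real method signatures are defined by the provider interface in your Feast version, so *args/**kwargs is used here to stay version-agnostic; the print statements stand in for your custom logic.

```python
from feast.infra.local import LocalProvider


class MyCustomProvider(LocalProvider):
    def update_infra(self, *args, **kwargs):
        # Custom infrastructure setup logic (e.g. launching an idempotent
        # streaming job) would go here.
        print("Launching custom streaming jobs is pretty easy...")
        super().update_infra(*args, **kwargs)

    def materialize_single_feature_view(self, *args, **kwargs):
        # Custom batch ingestion (materialization) logic would go here.
        print("Launching custom batch jobs is pretty easy...")
        super().materialize_single_feature_view(*args, **kwargs)
```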
Notice how in the above provider we have only overwritten two of the methods on the LocalProvider, namely update_infra and materialize_single_feature_view. These two methods are convenient to replace if you are planning to launch custom batch or streaming jobs. update_infra can be used for launching idempotent streaming jobs, and materialize_single_feature_view can be used for launching batch ingestion jobs.
Notice how the provider field above points to the module and class where your provider can be found.
Now you should be able to use your provider by running a Feast command:
It may also be necessary to add the module root path to your PYTHONPATH as follows:
That's it. You should now have a fully functional custom provider!
In this guide we will show you how to:
Deploy your feature store and keep your infrastructure in sync with your feature repository
Keep the data in your online store up to date
Use Feast for model training and serving
The first step to setting up a deployment of Feast is to create a Git repository that contains your feature definitions. The recommended way to version and track your feature definitions is by committing them to a repository and tracking changes through commits.
Most teams will need to have a feature store deployed to more than one environment. We have created an example repository which contains two Feast projects, one per environment.
The contents of this repository are shown below:
The repository contains three sub-folders:
staging/: This folder contains the staging feature_store.yaml and Feast objects. Users that want to make changes to the Feast deployment in the staging environment will commit changes to this directory.
production/: This folder contains the production feature_store.yaml and Feast objects. Typically users would first test changes in staging before copying the feature definitions into the production folder and committing the changes.
.github: This folder is an example of a CI system that applies the changes in either the staging or production repositories using feast apply. This operation saves your feature definitions to a shared registry (for example, on GCS) and configures your infrastructure for serving features.
The feature_store.yaml contains the following:
Notice how the registry has been configured to use a Google Cloud Storage bucket. All changes made to infrastructure using feast apply are tracked in the registry.db. This registry will be accessed later by the Feast SDK in your training pipelines or model serving services in order to read features.
It is important to note that the CI system above must have access to create, modify, or remove infrastructure in your production environment. This is unlike clients of the feature store, who will only have read access.
In summary, once you have set up a Git based repository with CI that runs feast apply on changes, your infrastructure (offline store, online store, and cloud environment) will automatically be updated to support loading of data into the feature store or retrieval of data.
In order to keep your online store up to date, you need to run a job that loads feature data from your feature view sources into your online store. In Feast, this loading operation is called materialization.
The simplest way to schedule materialization is to run an incremental materialization using the Feast CLI:
The above command will load all feature values from all feature view sources into the online store up to the time 2022-01-01T00:00:00.
A timestamp is required to set the end date for materialization. If your source is fully up to date then the end date would be the current time. However, if you are querying a source where data is not yet available, then you do not want to set the timestamp to the current time. You would want to use a timestamp that ends at a date for which data is available. The next time materialize-incremental is run, Feast will load data that starts from the previous end date, so it is important to ensure that the materialization interval does not overlap with time periods for which data has not been made available. This is commonly the case when your source is an ETL pipeline that is scheduled on a daily basis.
An alternative approach to incremental materialization (where Feast tracks the intervals of data that need to be ingested), is to call Feast directly from your scheduler like Airflow. In this case Airflow is the system that tracks the intervals that have been ingested.
In the above example we are materializing the source data from the driver_hourly_stats feature view over a day. This command can be scheduled as the final operation in your Airflow ETL, which runs after you have computed your features and stored them in the source location. Feast will then load your feature data into your online store.
The timestamps above should match the interval of data that has been computed by the data transformation system.
Now that you have deployed a registry, provisioned your feature store, and loaded your data into your online store, your clients can start to consume features for training and inference.
For both model training and inference your clients will use the Feast Python SDK to retrieve features. In both cases it is necessary to create a FeatureStore object.
One way to ensure your production clients have access to the feature store is to provide a copy of the feature_store.yaml to those pipelines. This feature_store.yaml file will have a reference to the feature store registry, which allows clients to retrieve features from offline or online stores.
Then, training data can be retrieved as follows:
The most common way to productionize ML models is by storing and versioning models in a "model store", and then deploying these models into production. When using Feast, it is recommended that the list of feature references also be saved alongside the model. This ensures that models and the features they are trained on are paired together when being shipped into production:
At inference time, you can simply create a FeatureStore object, fetch the features, and then make a prediction:
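A hedged sketch of the serving path; the feature list, repo path, and entity value are illustrative, and the model itself is assumed to come from your model store.

```python
import pandas as pd
from feast import FeatureStore

# Feature references saved alongside the trained model binary.
feast_features = [
    "driver_hourly_stats:conv_rate",
    "driver_hourly_stats:acc_rate",
]

store = FeatureStore(repo_path="production/")

online_features = store.get_online_features(
    features=feast_features,
    entity_rows=[{"driver_id": 1001}],
).to_dict()

# Turn the response into a feature vector and score it with your model.
feature_vector = pd.DataFrame.from_dict(online_features)
# prediction = model.predict(feature_vector)  # model loaded from your model store
```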
It is important to note that both the training pipeline and model serving service only need read access to the feature registry and associated infrastructure. This prevents clients from accidentally making changes to the feature store.
Feast makes adding support for a new offline store (database) easy. Developers can simply implement the interface to add support for a new store (other than the existing stores like Parquet files, Redshift, and BigQuery).
In this guide, we will show you how to extend the existing File offline store and use it in a feature repo. While we will be implementing a specific store, this guide should be representative for adding support for any new offline store.
The full working code for this guide can be found in the accompanying example repository.
The process for using a custom offline store consists of 4 steps:
Defining an OfflineStore class.
Defining an OfflineStoreConfig class.
Defining a RetrievalJob class for this offline store.
Referencing the OfflineStore in a feature repo's feature_store.yaml file.
OfflineStore class names must end with the OfflineStore suffix!
The OfflineStore class contains a couple of methods to read features from the offline store. Unlike the OnlineStore class, Feast does not manage any infrastructure for the offline store.
There are two methods that deal with reading data from the offline stores: get_historical_features and pull_latest_from_table_or_query.
pull_latest_from_table_or_query is invoked when running materialization (using the feast materialize or feast materialize-incremental commands, or the corresponding FeatureStore.materialize() method). This method pulls data from the offline store, and the FeatureStore class takes care of writing this data into the online store.
get_historical_features is invoked when reading values from the offline store using the FeatureStore.get_historical_features() method. Typically, this method is used to retrieve features when training ML models.
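A hedged skeleton that extends the built-in File offline store is shown below. The module path and class name are hypothetical, and the real method signatures come from the OfflineStore interface in your Feast version, so *args/**kwargs is used to stay version-agnostic.

```python
from feast.infra.offline_stores.file import FileOfflineStore


class CustomFileOfflineStore(FileOfflineStore):
    @staticmethod
    def get_historical_features(*args, **kwargs):
        # Called when building training datasets.
        print("Getting historical features from my offline store")
        return FileOfflineStore.get_historical_features(*args, **kwargs)

    @staticmethod
    def pull_latest_from_table_or_query(*args, **kwargs):
        # Called during materialization to pull the latest rows per entity.
        print("Pulling latest features from my offline store")
        return FileOfflineStore.pull_latest_from_table_or_query(*args, **kwargs)
```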
Additional configuration may be needed to allow the OfflineStore to talk to the backing store. For example, Redshift needs configuration information like the connection information for the Redshift instance, credentials for connecting to the database, etc.
This config class must contain a type field, which contains the fully qualified class name of its corresponding OfflineStore class.
Additionally, the name of the config class must be the same as the OfflineStore class, with the Config suffix.
An example of the config class for the custom file offline store:
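A hedged sketch of such a config class; the module path used in the type string is hypothetical and should match wherever your OfflineStore class actually lives.

```python
from feast.repo_config import FeastConfigBaseModel


class CustomFileOfflineStoreConfig(FeastConfigBaseModel):
    """Config for the custom file offline store."""

    # Fully qualified class name of the corresponding OfflineStore class.
    type: str = "feast_custom_offline_store.file.CustomFileOfflineStore"
```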
This configuration can be specified in the feature_store.yaml as follows:
This configuration information is available to the methods of the OfflineStore, via the config: RepoConfig parameter which is passed into the methods of the OfflineStore interface, specifically at the config.offline_store field of the config parameter.
The offline store methods aren't expected to perform their read operations eagerly. Instead, they are expected to execute lazily, and they do so by returning a RetrievalJob instance, which represents the execution of the actual query against the underlying store.
Custom offline stores may need to implement their own instances of the RetrievalJob interface.
The RetrievalJob interface exposes two methods - to_df and to_arrow. The expectation is for the retrieval job to be able to return the rows read from the offline store as a Pandas DataFrame, or as an Arrow table, respectively.
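A hedged sketch of a lazy retrieval job (the class name is illustrative; newer Feast versions add further methods to this interface):

```python
import pandas as pd
import pyarrow
from feast.infra.offline_stores.offline_store import RetrievalJob


class CustomFileRetrievalJob(RetrievalJob):
    def __init__(self, evaluation_function):
        # Defer execution: store a callable that runs the actual query.
        self.evaluation_function = evaluation_function

    def to_df(self) -> pd.DataFrame:
        # Run the query and return the result as a Pandas DataFrame.
        return self.evaluation_function()

    def to_arrow(self) -> pyarrow.Table:
        # Run the query and return the result as an Arrow table.
        return pyarrow.Table.from_pandas(self.evaluation_function())
```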
After implementing these classes, the custom offline store can be used by referencing it in a feature repo's feature_store.yaml file, specifically in the offline_store field. The value specified should be the fully qualified class name of the OfflineStore.
As long as your OfflineStore class is available in your Python environment, it will be imported by Feast dynamically at runtime.
To use our custom file offline store, we can use the following feature_store.yaml:
If additional configuration for the offline store is not required, then we can omit the other fields and only specify the type of the offline store class as the value for the offline_store.
Feast makes adding support for a new online store (database) easy. Developers can simply implement the interface to add support for a new store (other than the existing stores like Redis, DynamoDB, SQLite, and Datastore).
In this guide, we will show you how to integrate with MySQL as an online store. While we will be implementing a specific store, this guide should be representative for adding support for any new online store.
The full working code for this guide can be found in the accompanying example repository.
The process of using a custom online store consists of 3 steps:
Defining the OnlineStore class.
Defining the OnlineStoreConfig class.
Referencing the OnlineStore in a feature repo's feature_store.yaml file.
OnlineStore class names must end with the OnlineStore suffix!
The OnlineStore class broadly contains two sets of methods:
One set deals with managing infrastructure that the online store needs for operations.
One set deals with writing data into the store, and reading data from the store.
There are two methods that deal with managing infrastructure for online stores: update and teardown.
update is invoked when users run feast apply as a CLI command, or the FeatureStore.apply() SDK method.
The update method should be used to perform any operations necessary before data can be written to or read from the store. The update method can be used to create MySQL tables in preparation for reads and writes to new feature views.
teardown is invoked when users run feast teardown or FeatureStore.teardown().
The teardown method should be used to perform any clean-up operations. teardown can be used to drop MySQL indices and tables corresponding to the feature views being deleted.
There are two methods that deal with writing data to and reading data from the online store: online_write_batch and online_read.
online_write_batch is invoked when running materialization (using the feast materialize or feast materialize-incremental commands, or the corresponding FeatureStore.materialize() method).
online_read is invoked when reading values from the online store using the FeatureStore.get_online_features() method.
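A hedged skeleton of the class is shown below. The exact method signatures are defined by the OnlineStore interface in your Feast version, so *args/**kwargs is used here; the class and module names are hypothetical.

```python
from feast.infra.online_stores.online_store import OnlineStore


class MySQLOnlineStore(OnlineStore):
    def update(self, *args, **kwargs):
        # Create or alter the MySQL tables backing the feature views being applied.
        raise NotImplementedError

    def teardown(self, *args, **kwargs):
        # Drop the MySQL tables for the feature views being deleted.
        raise NotImplementedError

    def online_write_batch(self, *args, **kwargs):
        # Upsert the latest feature values for each entity key into MySQL.
        raise NotImplementedError

    def online_read(self, *args, **kwargs):
        # Look up the latest feature values for the requested entity keys.
        raise NotImplementedError
```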
Additional configuration may be needed to allow the OnlineStore to talk to the backing store. For example, MySQL may need configuration information like the host at which the MySQL instance is running, credentials for connecting to the database, etc.
This config class must contain a type field, which contains the fully qualified class name of its corresponding OnlineStore class.
Additionally, the name of the config class must be the same as the OnlineStore class, with the Config suffix.
An example of the config class for MySQL:
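A hedged sketch of such a config class; the field names and the module path in the type string are assumptions.

```python
from typing import Optional

from feast.repo_config import FeastConfigBaseModel


class MySQLOnlineStoreConfig(FeastConfigBaseModel):
    """Config for the MySQL online store."""

    # Fully qualified class name of the corresponding OnlineStore class.
    type: str = "feast_custom_online_store.mysql.MySQLOnlineStore"

    host: Optional[str] = None
    user: Optional[str] = None
    password: Optional[str] = None
    database: Optional[str] = None
```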
This configuration can be specified in the feature_store.yaml as follows:
This configuration information is available to the methods of the OnlineStore, via the config: RepoConfig parameter which is passed into all the methods of the OnlineStore interface, specifically at the config.online_store field of the config parameter.
After implementing both these classes, the custom online store can be used by referencing it in a feature repo's feature_store.yaml file, specifically in the online_store field. The value specified should be the fully qualified class name of the OnlineStore.
As long as your OnlineStore class is available in your Python environment, it will be imported by Feast dynamically at runtime.
To use our MySQL online store, we can use the following feature_store.yaml:
If additional configuration for the online store is not required, then we can omit the other fields and only specify the type of the online store class as the value for the online_store.
It is possible to overwrite all the methods on the provider class. In fact, it isn't even necessary to subclass an existing provider like LocalProvider. The only requirement for the provider class is that it follows the Provider contract.
Configure your feature_store.yaml file to point to your new provider class:
Have a look at the example repository for a fully functional example of a custom provider. Feel free to fork it when creating your own custom provider!
To facilitate configuration, all OfflineStore implementations are required to also define a corresponding OfflineStoreConfig class in the same file. This OfflineStoreConfig class should inherit from the FeastConfigBaseModel class, which is defined by Feast.
The FeastConfigBaseModel is a pydantic class, which parses YAML configuration into Python objects. Pydantic also allows the model classes to define validators for the config classes, to make sure that the config classes are correctly defined.
To facilitate configuration, all OnlineStore implementations are required to also define a corresponding OnlineStoreConfig class in the same file. This OnlineStoreConfig class should inherit from the FeastConfigBaseModel class, which is defined by Feast.
The FeastConfigBaseModel is a pydantic class, which parses YAML configuration into Python objects. Pydantic also allows the model classes to define validators for the config classes, to make sure that the config classes are correctly defined.
Redshift data sources allow for the retrieval of historical feature values from Redshift for building training datasets as well as materializing features into an online store.
Either a table name or a SQL query can be provided.
No performance guarantees can be provided over SQL query-based sources. Please use table references where possible.
Using a table name
Using a query
Configuration options are available here.
File data sources allow for the retrieval of historical feature values from files on disk for building training datasets, as well as for materializing features into an online store.
FileSource is meant for development purposes only and is not optimized for production use.
Configuration options are available here.
The File offline store provides support for reading FileSources.
Only Parquet files are currently supported.
All data is downloaded and joined using Python and may not scale to production workloads.
Configuration options are available here.
BigQuery data sources allow for the retrieval of historical feature values from BigQuery for building training datasets as well as materializing features into an online store.
Either a table reference or a SQL query can be provided.
No performance guarantees can be provided over SQL query-based sources. Please use table references where possible.
Using a table reference
Using a query
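A sketch of both forms is shown below. Project, dataset, table, and column names are placeholders; the table_ref parameter name follows the Feast 0.10-era API and may differ in later versions.

```python
from feast import BigQuerySource

# Using a table reference
driver_stats_source = BigQuerySource(
    table_ref="gcp_project:bq_dataset.driver_stats",
    event_timestamp_column="event_timestamp",
)

# Using a query
driver_stats_source = BigQuerySource(
    query="SELECT event_timestamp, driver_id, conv_rate FROM `gcp_project.bq_dataset.driver_stats`",
    event_timestamp_column="event_timestamp",
)
```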
Configuration options are available here.
The BigQuery offline store provides support for reading BigQuerySources.
BigQuery tables and views are allowed as sources.
All joins happen within BigQuery.
Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. Pandas dataframes will be uploaded to BigQuery in order to complete join operations.
A BigQueryRetrievalJob is returned when calling get_historical_features().
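A sketch of retrieving historical features against this offline store; the entity dataframe, feature references, and repository path are placeholders, and older Feast versions use feature_refs= instead of features=.

```python
import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Pandas entity dataframes are uploaded to BigQuery to complete the join;
# alternatively, entity_df can be a SQL query string.
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": pd.to_datetime(
            ["2021-04-12 10:59:42", "2021-04-12 08:12:10"]
        ),
    }
)

training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate"],
).to_df()
```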
Configuration options are available here.
Please see Offline Store for an explanation of offline stores.
The DynamoDB online store provides support for materializing feature values into AWS DynamoDB.
Configuration options are available here.
Feast requires the following permissions in order to execute commands for the DynamoDB online store:
The following inline policy can be used to grant Feast the necessary permissions:
Lastly, this IAM role needs to be associated with the desired Redshift cluster. Please follow the official AWS guide for the necessary steps.
| Command | Permissions | Resources |
| --- | --- | --- |
| Apply | dynamodb:CreateTable dynamodb:DescribeTable dynamodb:DeleteTable | arn:aws:dynamodb:&lt;region&gt;:&lt;account_id&gt;:table/* |
| Materialize | dynamodb:BatchWriteItem | arn:aws:dynamodb:&lt;region&gt;:&lt;account_id&gt;:table/* |
| Get Online Features | dynamodb:GetItem | arn:aws:dynamodb:&lt;region&gt;:&lt;account_id&gt;:table/* |
The Redis online store provides support for materializing feature values into Redis.
Both Redis and Redis Cluster are supported
The data model used to store feature values in Redis is described in more detail here.
Connecting to a single Redis instance
Connecting to a Redis Cluster with SSL enabled and password authentication
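Sketches of both configurations are shown below; hosts, ports, and the password are placeholders, and the exact option names may vary by Feast version.

```yaml
# Connecting to a single Redis instance
online_store:
  type: redis
  connection_string: "localhost:6379"
```

```yaml
# Connecting to a Redis Cluster with SSL enabled and password authentication
online_store:
  type: redis
  redis_type: redis_cluster
  connection_string: "redis1:6379,redis2:6379,ssl=true,password=my_password"
```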
Configuration options are available here.
The SQLite online store provides support for materializing feature values into an SQLite database for serving online features.
All feature values are stored in an on-disk SQLite database
Only the latest feature values are persisted
Configuration options are available here.
The Redshift offline store provides support for reading RedshiftSources.
Redshift tables and views are allowed as sources.
All joins happen within Redshift.
Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. Pandas dataframes will be uploaded to Redshift in order to complete join operations.
A RedshiftRetrievalJob is returned when calling get_historical_features().
Configuration options are available here.
Feast requires the following permissions in order to execute commands for the Redshift offline store:
The following inline policy can be used to grant Feast the necessary permissions:
In addition to this, the Redshift offline store requires an IAM role that will be used by Redshift itself to interact with S3. More concretely, Redshift has to use this IAM role to run UNLOAD and COPY commands. Once created, this IAM role needs to be configured in the feature_store.yaml file as offline_store: iam_role.
The following inline policy can be used to grant Redshift necessary permissions to access S3:
The following trust relationship is necessary to make sure that Redshift, and only Redshift, can assume this role:
Please see Online Store for an explanation of online stores.
Please see Provider for an explanation of providers.
| Command | Permissions | Resources |
| --- | --- | --- |
| Apply | redshift-data:DescribeTable redshift:GetClusterCredentials | arn:aws:redshift:&lt;region&gt;:&lt;account_id&gt;:dbuser:&lt;redshift_cluster_id&gt;/&lt;redshift_username&gt; arn:aws:redshift:&lt;region&gt;:&lt;account_id&gt;:dbname:&lt;redshift_cluster_id&gt;/&lt;redshift_database_name&gt; arn:aws:redshift:&lt;region&gt;:&lt;account_id&gt;:cluster:&lt;redshift_cluster_id&gt; |
| Materialize | redshift-data:ExecuteStatement | arn:aws:redshift:&lt;region&gt;:&lt;account_id&gt;:cluster:&lt;redshift_cluster_id&gt; |
| Materialize | redshift-data:DescribeStatement | * |
| Materialize | s3:ListBucket s3:GetObject s3:DeleteObject | arn:aws:s3:::&lt;bucket_name&gt; arn:aws:s3:::&lt;bucket_name&gt;/* |
| Get Historical Features | redshift-data:ExecuteStatement redshift:GetClusterCredentials | arn:aws:redshift:&lt;region&gt;:&lt;account_id&gt;:dbuser:&lt;redshift_cluster_id&gt;/&lt;redshift_username&gt; arn:aws:redshift:&lt;region&gt;:&lt;account_id&gt;:dbname:&lt;redshift_cluster_id&gt;/&lt;redshift_database_name&gt; arn:aws:redshift:&lt;region&gt;:&lt;account_id&gt;:cluster:&lt;redshift_cluster_id&gt; |
| Get Historical Features | redshift-data:DescribeStatement | * |
| Get Historical Features | s3:ListBucket s3:GetObject s3:PutObject s3:DeleteObject | arn:aws:s3:::&lt;bucket_name&gt; arn:aws:s3:::&lt;bucket_name&gt;/* |
feature_store.yaml is used to configure a feature store. The file must be located at the root of a feature repository. An example feature_store.yaml is shown below:
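A sketch of a typical file (project name, registry path, and online store path are placeholders):

```yaml
project: my_project
registry: data/registry.db
provider: local
online_store:
  type: sqlite
  path: data/online_store.db
```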
The following top-level configuration options exist in the feature_store.yaml file.
provider — Configures the environment in which Feast will deploy and operate.
registry — Configures the location of the feature registry.
online_store — Configures the online store.
offline_store — Configures the offline store.
project — Defines a namespace for the entire feature store. Can be used to isolate multiple deployments in a single installation of Feast. Should only contain letters, numbers, and underscores.
Please see the RepoConfig API reference for the full list of configuration options.
Feast users use Feast to manage two important sets of configuration:
Configuration about how to run Feast on your infrastructure
Feature definitions
With Feast, the above configuration can be written declaratively and stored as code in a central location. This central location is called a feature repository. The feature repository is the declarative source of truth for what the desired state of a feature store should be.
The Feast CLI uses the feature repository to configure, deploy, and manage your feature store.
A feature repository consists of:
A collection of Python files containing feature declarations.
A feature_store.yaml file containing infrastructural configuration.
A .feastignore file containing paths in the feature repository to ignore.
Typically, users store their feature repositories in a Git repository, especially when working in teams. However, using Git is not a requirement.
The structure of a feature repository is as follows:
The root of the repository should contain a feature_store.yaml file and may contain a .feastignore file.
The repository should contain Python files that contain feature definitions.
The repository can contain other files as well, including documentation and potentially data files.
An example structure of a feature repository is shown below:
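A sketch of what such a repository might look like (file names are placeholders):

```
feature_repo/
├── feature_store.yaml
├── .feastignore
├── driver_features.py
└── data/
    └── driver_stats.parquet
```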
A couple of things to note about the feature repository:
Feast reads all Python files recursively when feast apply is run, including subdirectories, even if they don't contain feature definitions.
It's recommended to add a .feastignore file and add paths to any imperative scripts if you need to store them inside the feature repository.
The configuration for a feature store is stored in a file named feature_store.yaml, which must be located at the root of a feature repository. An example feature_store.yaml file is shown below:
The feature_store.yaml file configures how the feature store should run. See feature_store.yaml for more details.
This file contains paths that should be ignored when running feast apply. An example .feastignore is shown below:
See .feastignore for more details.
A feature repository can also contain one or more Python files that contain feature definitions. An example feature definition file is shown below:
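A sketch of a feature definition file is shown below. The entity, feature view, and source names are placeholders, and exact parameter names vary slightly across early Feast versions (for example, older releases use input= instead of batch_source=).

```python
from datetime import timedelta

from feast import Entity, Feature, FeatureView, FileSource, ValueType

# Batch source backing this feature view (path is a placeholder).
driver_stats = FileSource(
    path="data/driver_stats.parquet",
    event_timestamp_column="event_timestamp",
    created_timestamp_column="created",
)

# Entity that the features are keyed on.
driver = Entity(name="driver_id", value_type=ValueType.INT64, description="Driver ID")

# Feature view grouping the features served from the source above.
driver_hourly_stats_view = FeatureView(
    name="driver_hourly_stats",
    entities=["driver_id"],
    ttl=timedelta(days=1),
    features=[
        Feature(name="conv_rate", dtype=ValueType.FLOAT),
        Feature(name="acc_rate", dtype=ValueType.FLOAT),
    ],
    batch_source=driver_stats,
)
```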
To declare new feature definitions, just add code to the feature repository, either in existing files or in a new file. For more information on how to define features, see Feature Views.
See Create a feature repository to get started with an example feature repository.
See feature_store.yaml, .feastignore, or Feature Views for more information on the configuration files that live in a feature repository.
.feastignore is a file that is placed at the root of the feature repository. This file contains paths that should be ignored when running feast apply. An example .feastignore is shown below:
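A sketch consistent with the pattern rules in the table below (paths are placeholders):

```
# Ignore virtual environment
venv

# Ignore a specific Python file
scripts/foo.py

# Ignore all Python files directly under the scripts directory
scripts/*.py

# Ignore foo.py at any depth under scripts
scripts/**/foo.py
```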
The .feastignore file is optional. If the file cannot be found, every Python file in the feature repo directory will be parsed by feast apply.
| Command | Component | Permissions | Recommended Role |
| --- | --- | --- | --- |
| Apply | BigQuery (source) | bigquery.jobs.create bigquery.readsessions.create bigquery.readsessions.getData | roles/bigquery.user |
| Apply | Datastore (destination) | datastore.entities.allocateIds datastore.entities.create datastore.entities.delete datastore.entities.get datastore.entities.list datastore.entities.update | roles/datastore.owner |
| Materialize | BigQuery (source) | bigquery.jobs.create | roles/bigquery.user |
| Materialize | Datastore (destination) | datastore.entities.allocateIds datastore.entities.create datastore.entities.delete datastore.entities.get datastore.entities.list datastore.entities.update datastore.databases.get | roles/datastore.owner |
| Get Online Features | Datastore | datastore.entities.get | roles/datastore.user |
| Get Historical Features | BigQuery (source) | bigquery.datasets.get bigquery.tables.get bigquery.tables.create bigquery.tables.updateData bigquery.tables.update bigquery.tables.delete bigquery.tables.getData | roles/bigquery.dataEditor |
| Pattern | Example matches | Explanation |
| --- | --- | --- |
| venv | venv/foo.py venv/a/foo.py | You can specify a path to a specific directory. Everything in that directory will be ignored. |
| scripts/foo.py | scripts/foo.py | You can specify a path to a specific file. Only that file will be ignored. |
| scripts/*.py | scripts/foo.py scripts/bar.py | You can specify an asterisk (*) anywhere in the expression. An asterisk matches zero or more characters, except "/". |
| scripts/**/foo.py | scripts/foo.py scripts/a/foo.py scripts/a/b/foo.py | You can specify a double asterisk (**) anywhere in the expression. A double asterisk matches zero or more directories. |
The Feast CLI comes bundled with the Feast Python package. It is immediately available after installing Feast.
The Feast CLI provides one global top-level option that can be used with other commands:
chdir (-c, --chdir)
This option allows users to run Feast CLI commands in a different folder from the current working directory.
Creates or updates a feature store deployment
What does Feast apply do?
Feast will scan Python files in your feature repository and find all Feast object definitions, such as feature views, entities, and data sources.
Feast will validate your feature definitions.
Feast will sync the metadata about Feast objects to the registry. If a registry does not exist, then it will be instantiated. The standard registry is a simple protobuf binary file that is stored on disk (locally or in an object store).
Feast CLI will create all necessary feature store infrastructure. The exact infrastructure that is deployed or configured depends on the provider configuration that you have set in feature_store.yaml. For example, setting local as your provider will result in a sqlite online store being created.
feast apply (when configured to use a cloud provider like gcp or aws) will create cloud infrastructure. This may incur costs.
List all registered entities
List all registered feature views
Creates a new feature repository
It's also possible to use other templates
or to set the name of the new project
Load data from feature views into the online store between two dates
Load data for specific feature views into the online store between two dates
Load data from feature views into the online store, beginning from either the previous materialize or materialize-incremental end date, or the beginning of time.
Tear down deployed feature store infrastructure
Print the current Feast version
The Feast project logs anonymous usage statistics and errors in order to inform our planning. Several client methods are tracked, beginning in Feast 0.9. Users are assigned a UUID which is sent along with the name of the method, the Feast version, the OS (using sys.platform), and the current time.
The source code is available here.
Set the environment variable FEAST_USAGE to False.
We use RFCs and GitHub issues to communicate development ideas. The simplest way to contribute to Feast is to leave comments in our RFCs in the Feast Google Drive or in our GitHub issues. You will need to join our Google Group in order to get access.
We follow a process of lazy consensus. If you believe you know what the project needs, then just start development. If you are unsure about which direction to take with development, then please communicate your ideas through a GitHub issue or through our Slack channel before starting development.
Please submit a pull request to the master branch of the Feast repository once you are ready to submit your contribution. Code submissions to Feast (including submissions from project maintainers) require review and approval from maintainers or code owners.
PRs that are submitted by the general public need to be identified as ok-to-test. Once enabled, Prow will run a range of tests to verify the submission, after which community members will help to review the pull request.
Please sign the Contributor License Agreement (CLA) in order to have your code merged into the Feast repository.
For Feast maintainers, these are the concrete steps for making a new release.
For new major or minor release, create and check out the release branch for the new stream, e.g. v0.6-branch
. For a patch version, check out the stream's release branch.
Update the CHANGELOG.md. See the Creating a change log guide, and commit the update.
Make sure to review each PR in the changelog to flag any breaking changes and deprecations.
Update versions for the release/release candidate with a commit:
In the root pom.xml, remove -SNAPSHOT from the <revision> property, update versions, and commit.
Tag the commit with the release version, using the v and sdk/go/v prefixes:
For a release candidate, create tags vX.Y.Z-rc.N and sdk/go/vX.Y.Z-rc.N.
For a stable release X.Y.Z, create tags vX.Y.Z and sdk/go/vX.Y.Z.
Check that versions are updated with make lint-versions.
If the version lint flags any required changes, make the changes, amend the commit, and move the tag to the new commit.
Push the commits and tags. Make sure the CI passes.
If the CI does not pass, or if there are new patches for the release fix, repeat steps 2 and 3 with release candidates until a stable release is achieved.
Bump to the next patch version in the release branch, append -SNAPSHOT in pom.xml, and push.
Create a PR against master to:
Bump to the next major/minor version and append -SNAPSHOT.
Add the change log by applying the change log commit created in step 2.
Check that versions are updated with env TARGET_MERGE_BRANCH=master make lint-versions
Create a GitHub release which includes a summary of important changes as well as any artifacts associated with the release. Make sure to include the same change log as added in CHANGELOG.md. Use Feast vX.Y.Z
as the title.
Update the Upgrade Guide to include the action required instructions for users to upgrade to this new release. Instructions should include a migration for each breaking change made to this release.
When a tag that matches a Semantic Version string is pushed, CI will automatically build and push the relevant artifacts to their repositories or package managers (docker images, Python wheels, etc). JVM artifacts are promoted from Sonatype OSSRH to Maven Central, but it sometimes takes some time for them to be available. The sdk/go/v tag
is required to version the Go SDK go module so that users can go get a specific tagged release of the Go SDK.
We use an open source change log generator to generate change logs. The process still requires a little bit of manual effort.
Create a GitHub token as per these instructions. The token is used as an input argument (-t
) to the change log generator.
The change log generator configuration below will look for unreleased changes on a specific branch. The branch will be master for a major/minor release, or a release branch (e.g., v0.4-branch) for a patch release. You will need to set the branch using the --release-branch argument.
You should also set the --future-release argument. This is the version you are releasing. The version can still be changed at a later date.
Update the arguments below and run the command to generate the change log to the console.
Review each change log item.
Make sure that sentences are grammatically correct and well formatted (although we will try to enforce this at the PR review stage).
Make sure that each item is categorised correctly. You will see the following categories: Breaking changes, Implemented enhancements, Fixed bugs, and Merged pull requests. Any unlabelled PRs will be found in Merged pull requests. It's important to make sure that any breaking changes, enhancements, or bug fixes are pulled up out of merged pull requests into the correct category. Housekeeping, tech debt clearing, infra changes, or refactoring do not count as enhancements. Only enhancements a user benefits from should be listed in that category.
Make sure that the "Full Change log" link is actually comparing the correct tags (normally your released version against the previously version).
Make sure that release notes and breaking changes are present.
It's important to flag breaking changes and deprecation to the API for each release so that we can maintain API compatibility.
Developers should have flagged PRs with breaking changes with the compat/breaking label. However, it's important to double check each PR's release notes and contents for changes that will break API compatibility, and to manually apply the compat/breaking label to PRs with undeclared breaking changes. The change log will have to be regenerated if any new labels have to be added.
Versioning policies and status of Feast components
Feast uses semantic versioning.
Contributors are encouraged to understand our branch workflow described below, for choosing where to branch when making a change (and thus the merge base for a pull request).
Major and minor releases are cut from the master branch.
Each major and minor release has a long-lived maintenance branch, e.g., v0.3-branch. This is called a "release branch".
From the release branch, pre-release release candidates are tagged, e.g., v0.3.0-rc.1.
From the release candidates, the stable patch version releases are tagged, e.g., v0.3.0.
A release branch should be substantially feature complete with respect to the intended release. Code that is committed to master may be merged or cherry-picked on to a release branch, but code that is directly committed to a release branch should be solely applicable to that release (and should not be committed back to master).
In general, unless you're committing code that only applies to a particular release stream (for example, temporary hot-fixes, back-ported security fixes, or image hashes), you should base changes from master and then merge or cherry-pick to the release branch.
The following table shows the status (stable, beta, or alpha) of Feast components.
Application status indicators for Feast:
Stable means that the component has reached a sufficient level of stability and adoption that the Feast community has deemed the component stable. Please see the stability criteria below.
Beta means that the component is working towards a version 1.0 release. Beta does not mean a component is unstable, it simply means the component has not met the full criteria of stability.
Alpha means that the component is in the early phases of development and/or integration into Feast.
Criteria for reaching stable status:
Contributors from at least two organizations
Complete end-to-end test suite
Scalability and load testing if applicable
Automated release process (docker images, PyPI packages, etc)
API reference documentation
No deprecative changes
Must include logging and monitoring
Criteria for reaching beta status:
Contributors from at least two organizations
End-to-end test suite
API reference documentation
Deprecative changes must span multiple minor versions and allow for an upgrade path.
Feast components have various levels of support based on the component status.
Feast has an active and helpful community of users and contributors.
The Feast community offers support on a best-effort basis for stable and beta applications. Best-effort support means that there’s no formal agreement or commitment to solve a problem but the community appreciates the importance of addressing the problem as soon as possible. The community commits to helping you diagnose and address the problem if all the following are true:
The cause falls within the technical framework that Feast controls. For example, the Feast community may not be able to help if the problem is caused by a specific network configuration within your organization.
Community members can reproduce the problem.
The reporter of the problem can help with further diagnosis and troubleshooting.
Please see the Community page for channels through which support can be requested.
This guide is targeted at developers looking to contribute to Feast:
Learn how the Feast contributing process works.
Feast is composed of multiple components distributed across multiple repositories:
See also the CONTRIBUTING.md in the corresponding GitHub repository (e.g. main repo doc).
Our preference is the use of git rebase instead of git merge: git pull -r
Commits have to be signed before they are allowed to be merged into the Feast codebase:
Fill in the description based on the default template configured when you first open the PR
What this PR does/why we need it
Which issue(s) this PR fixes
Does this PR introduce a user-facing change
Include a kind label when opening the PR.
Add WIP: to the PR name if more work needs to be done prior to review.
Avoid force-pushing as it makes reviewing difficult.
Managing CI-test failures
GitHub runner tests
Click the checks tab to analyse failed tests.
Prow tests
Visit Prow status page to analyse failed tests
Feast data storage contracts are documented in the following locations:
Feast Offline Storage Format: Used by BigQuery, Snowflake (Future), Redshift (Future).
Feast Online Storage Format: Used by Redis, Google Datastore.
Feast Protobuf API defines the common API used by Feast's Components:
Feast Protobuf API specifications are written in proto3 in the Main Feast Repository.
Changes to the API should be proposed via a GitHub Issue for discussion first.
The language specific bindings have to be regenerated when changes are made to the Feast Protobuf API:
Feast 0.10 brought about major changes to the way Feast is architected and how the software is intended to be deployed, extended, and operated.
Please see Upgrading from Feast 0.9 for a guide on how to upgrade to the latest Feast version.
Feast contributors identified various design challenges in Feast 0.9 that made deploying, operating, extending, and maintaining it challenging. These challenges applied both to users and contributors.
Our goal is to make ML practitioners immediately productive in operationalizing data for machine learning. To that end, Feast 0.10+ made the following improvements on Feast 0.9:
Where Feast 0.9 was a large stack of components that needed to be deployed to Kubernetes, Feast 0.10 is simply a lightweight SDK and CLI. It doesn’t need any long-running processes to operate. This SDK/CLI can deploy and configure your feature store to your infrastructure, and execute workflows like building training datasets or reading features from an online feature store.
Feast 0.10 introduces local mode: Local mode allows users to try out Feast in a completely local environment (without using any cloud technologies). This provides users with a responsive means of trying out the software before deploying it into a production environment.
Feast comes with opinionated defaults: As much as possible we are attempting to make Feast a batteries-included feature store that removes the need for users to configure infinite configuration options (as with Feast 0.9). Feast 0.10 comes with sane default configuration options to deploy Feast on your infrastructure.
Feast Core was replaced by a file-based (S3, GCS) registry: Feast Core is a metadata server that maintains and exposes an API of feature definitions. With Feast 0.10, we’ve moved this entire service into a single flat file that can be stored on either the local disk or in a central object store like S3 or GCS. The benefit of this change is that users don’t need to maintain a database and a registry service, yet they can still access all the metadata they had before.
Materialization is a CLI operation: Instead of having ingestion jobs be managed by a job service, users can now schedule a batch ingestion job themselves by calling “materialize”. This change was introduced because most teams already have schedulers like Airflow in their organization. By starting ingestion jobs from Airflow, teams are now able to easily track state outside of Feast and to debug failures synchronously. Similarly, streaming ingestion jobs can be launched through the “apply” command
Doubling down on data warehouses: Most modern data teams are doubling down on data warehouses like BigQuery, Snowflake, and Redshift. Feast doubles down on these big data technologies as the primary interfaces through which it launches batch operations (like training dataset generation). This reduces the development burden on Feast contributors (since they only need to reason about SQL), provides users with a more responsive experience, avoids moving data from the warehouse (to compute joins using Spark), and provides a more serverless and scalable experience to users.
Temporary loss of streaming support: Unfortunately, Feast 0.10, 0.11, and 0.12 do not support streaming feature ingestion out of the box. It is entirely possible to launch streaming ingestion jobs using these Feast versions, but it requires the use of a Feast extension point to launch these ingestion jobs. It is still a core design goal for Feast to support streaming ingestion, so this change is in the development backlog for the Feast project.
Addition of extension points: Feast 0.10+ introduces various extension points. Teams can override all feature store behavior by writing (or extending) a provider. It is also possible for teams to add their own data storage connectors for both an offline and online store using a plugin interface that Feast provides.
Please see the Feast 0.9 Upgrade Guide.
| Repository | Description | Component(s) |
| --- | --- | --- |
| Main Feast repository | Hosts all required code to run Feast. This includes the Feast Python SDK and Protobuf definitions. For legacy reasons this repository still contains Terraform config and a Go Client for Feast. | Python SDK / CLI, Protobuf APIs, Documentation, Go Client, Terraform |
| Feast Java | Java-specific Feast components. Includes the Feast Core Registry, Feast Serving for serving online feature values, and the Feast Java Client for retrieving feature values. | Core, Serving, Java Client |
| Feast Spark | Feast Spark SDK & Feast Job Service for launching ingestion jobs and for building training datasets with Spark. | Spark SDK, Job Service |
| Feast Helm Chart | Helm Chart for deploying Feast on Kubernetes & Spark. | Helm Chart |
| Repository | Language | Regenerating Language Bindings |
| --- | --- | --- |
| Main Feast repository | Python | Run make compile-protos-python to generate bindings |
| Main Feast repository | Golang | Run make compile-protos-go to generate bindings |
| Feast Java | Java | No action required: bindings are generated automatically during compilation. |
Application
Status
Notes
Beta
APIs are considered stable and will not have breaking changes within 3 minor versions.
Beta
At risk of deprecation
Beta
Beta
Beta
Alpha
Alpha
Alpha
At risk of deprecation
Beta
| Application status | Level of support |
| --- | --- |
| Stable | The Feast community offers best-effort support for stable applications. Stable components will be offered long term support. |
| Beta | The Feast community offers best-effort support for beta applications. Beta applications will be supported for at least 2 more minor releases. |
| Alpha | The response differs per application in alpha status, depending on the size of the community for that application and the current level of active development of the application. |
| Challenges in Feast 0.9 (Before) | Changed in Feast 0.10+ (After) |
| --- | --- |
| Hard to install because it was a heavy-weight system with many components requiring a lot of configuration | Easy to install via pip install. Opinionated default configurations. No Helm charts necessary. |
| Engineering support needed to deploy/operate reliably | Feast moves from a stack of services to a CLI/SDK. No need for Kubernetes or Spark. No long running processes or orchestrators. Leverages globally available managed services where possible. |
| Hard to develop/debug with tightly coupled components, async operations, and hard to debug components like Spark | Easy to develop and debug. Modular components. Clear extension points. Fewer background operations. Faster feedback. Local mode. |
| Inability to benefit from cloud-native technologies because of focus on reusable technologies like Kubernetes and Spark | Leverages best-in-class cloud technologies so users can enjoy scalable + powerful tech stacks without managing open source stacks themselves |
| Component | Feast 0.9 | Feast 0.10, 0.11, 0.12+ |
| --- | --- | --- |
| Architecture | Service-oriented architecture. Containers and services deployed to Kubernetes. | SDK/CLI centric software. Feast is able to deploy or configure infrastructure for use as a feature store. |
| Installation | Terraform and Helm | Pip to install SDK/CLI. Provider used to deploy Feast components to GCP, AWS, or other environments during apply. |
| Required infrastructure | Kubernetes, Postgres, Spark, Docker, Object Store | None |
| Batch compute | Yes (Spark based) | Python native (client-side) for batch data loading. Data warehouse for batch compute. |
| Streaming support | Yes (Spark based) | Planned. Streaming jobs will be launched using apply. |
| Offline store | None (can source data from any source Spark supports) | BigQuery, Snowflake (planned), Redshift, or custom implementations |
| Online store | Redis | DynamoDB, Firestore, Redis, and more planned. |
| Job Manager | Yes | No |
| Registry | gRPC service with Postgres backend | File-based registry with accompanying SDK for exploration |
| Local Mode | No | Yes |