Overview

Using Feast

Feast development happens through three key workflows:

Defining feature tables and ingesting data into Feast

Feature creators model their organization's data in Feast by defining feature tables that reference data sources. A feature table is both a schema and a pointer to the data sources behind its features; it tells Feast how to interpret your data and where to find it.

After registering a feature table with Feast, users can trigger ingestion from its data source. Ingestion jobs load feature values from the upstream data source into Feast stores.

Visit feature tables to learn more about them.

See: Define and ingest features
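
The sketch below illustrates this workflow in the style of the Feast 0.x Python SDK. The entity, feature table, file locations, and Client URLs (driver_id, driver_statistics, localhost ports) are illustrative assumptions, and exact constructor arguments vary between SDK releases, so treat this as the shape of the API rather than a definitive example.

```python
import pandas as pd
from feast import Client, Entity, Feature, FeatureTable, ValueType
from feast.data_format import ParquetFormat
from feast.data_source import FileSource

# Connect to Feast Core and Online Serving (illustrative URLs).
client = Client(core_url="localhost:6565", serving_url="localhost:6566")

# An entity is the key that feature values are joined and looked up on.
driver_id = Entity(
    name="driver_id",
    description="Driver identifier",
    value_type=ValueType.INT64,
)

# A feature table: a schema (entities + features) plus its batch data source.
driver_statistics = FeatureTable(
    name="driver_statistics",
    entities=["driver_id"],
    features=[
        Feature(name="avg_daily_trips", dtype=ValueType.INT32),
        Feature(name="conv_rate", dtype=ValueType.FLOAT),
    ],
    batch_source=FileSource(
        event_timestamp_column="event_timestamp",
        created_timestamp_column="created",
        file_format=ParquetFormat(),
        file_url="file:///data/driver_statistics",  # assumed location
    ),
)

# Register the entity and feature table with Feast.
client.apply(driver_id)
client.apply(driver_statistics)

# Trigger ingestion: load feature values from a dataframe into the
# feature table's batch source. Materializing data into the online store
# is handled by a separate ingestion job.
stats_df = pd.read_parquet("driver_stats_sample.parquet")  # assumed sample data
client.ingest(driver_statistics, stats_df)
```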

Retrieving historical features for training

To generate a training dataset, provide an entity dataframe and a list of feature references to the Feast SDK and retrieve historical features. For historical serving, Feast requires the entities and event timestamps for which the corresponding feature data should be looked up. Feast then produces a point-in-time correct dataset containing the requested features, which can be drawn from any number of feature tables.

See: Getting training features
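
A minimal sketch of historical retrieval in the style of the Feast 0.x Python SDK follows. The entity source location and feature references carry over the assumptions from the earlier example, and details such as the feature_refs argument name and the method for collecting results differ between SDK versions.

```python
from feast import Client
from feast.data_format import ParquetFormat
from feast.data_source import FileSource

client = Client(core_url="localhost:6565", serving_url="localhost:6566")

# The entity rows: entities and event timestamps to look features up for,
# supplied here as a Parquet file (an entity dataframe on disk).
entity_source = FileSource(
    event_timestamp_column="event_timestamp",
    file_format=ParquetFormat(),
    file_url="file:///data/entity_rows",  # assumed location
)

# Request a point-in-time correct join of the listed features.
job = client.get_historical_features(
    feature_refs=[
        "driver_statistics:avg_daily_trips",
        "driver_statistics:conv_rate",
    ],
    entity_source=entity_source,
)

# The retrieval job runs asynchronously; fetch the output location when done
# (the exact result-retrieval method depends on the SDK version).
print(job.get_output_file_uri())
```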

Retrieving online features for online serving

Online retrieval uses feature references to fetch the latest feature values through the Feast Online Serving API. Online serving is designed for low-latency, high-throughput access to feature data.

See: Getting online features
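
The sketch below shows online retrieval with the same assumed feature references and entity key (driver_id); as above, argument names such as feature_refs are illustrative and should be checked against the SDK version in use.

```python
from feast import Client

# Online retrieval goes through Feast Online Serving (illustrative URL).
client = Client(serving_url="localhost:6566")

# Look up the latest feature values for each entity row.
response = client.get_online_features(
    feature_refs=[
        "driver_statistics:avg_daily_trips",
        "driver_statistics:conv_rate",
    ],
    entity_rows=[{"driver_id": 1001}, {"driver_id": 1002}],
)

print(response.to_dict())
```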
