Here are the methods exposed by the `OfflineStore` interface, along with the core functionality supported by each method:

- `get_historical_features`: performs a point-in-time correct join to retrieve historical features
- `pull_latest_from_table_or_query`: retrieves the latest feature values for materialization into the online store
- `pull_all_from_table_or_query`: retrieves a saved dataset
- `offline_write_batch`: persists dataframes to the offline store, primarily for push sources
- `write_logged_features`: persists logged features to the offline store, for feature logging
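For illustration, these methods are typically reached through the `FeatureStore` API, which delegates to the configured offline store. A minimal sketch, assuming a hypothetical `driver_hourly_stats` feature view with a `conv_rate` feature:

```python
from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Hypothetical entity dataframe: one row per (entity key, event timestamp).
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": [datetime(2024, 1, 1), datetime(2024, 1, 2)],
    }
)

# Under the hood this calls the offline store's get_historical_features,
# which performs the point-in-time correct join.
job = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
)
training_df = job.to_df()
```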
The first three of these methods all return a `RetrievalJob` specific to an offline store, such as a `SnowflakeRetrievalJob`. Here is a list of functionality supported by `RetrievalJob`s:
- export to dataframe
- export to arrow table
- export to arrow batches (to handle large datasets in memory)
- export to SQL
- export to data lake (S3, GCS, etc.)
- export to data warehouse
- export as Spark dataframe
- local execution of Python-based on-demand transforms
- remote execution of Python-based on-demand transforms
- persist results in the offline store
- preview the query plan before execution (`RetrievalJob`s are lazily executed)
- read partitioned data
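Because `RetrievalJob`s are lazily executed, building a job is cheap and nothing runs until an export is requested. Continuing the sketch above (`to_df` and `to_arrow` are the common export methods; others vary by store):

```python
# Building the job does not execute any query; it only describes the retrieval.
job = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
)

# Execution is triggered by an export:
df = job.to_df()        # export to a Pandas dataframe
table = job.to_arrow()  # export to an Arrow table
```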
There are currently four core offline store implementations: `FileOfflineStore`, `BigQueryOfflineStore`, `SnowflakeOfflineStore`, and `RedshiftOfflineStore`. There are several additional implementations contributed by the Feast community (`PostgreSQLOfflineStore`, `SparkOfflineStore`, and `TrinoOfflineStore`), which are not guaranteed to be stable or to match the functionality of the core implementations. Details for each specific offline store, such as how to configure it in `feature_store.yaml`, can be found here.
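The offline store is selected and configured via the `offline_store` block of `feature_store.yaml`. A minimal sketch, assuming the default file-based store (project name and paths are placeholders):

```yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
  type: file
```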
Below is a matrix indicating which offline stores support which methods.
Below is a matrix indicating which `RetrievalJob`s support what functionality.
The Snowflake offline store provides support for reading SnowflakeSources.
All joins happen within Snowflake.
Entity dataframes can be provided as a SQL query or as a Pandas dataframe. A Pandas dataframe will be uploaded to Snowflake as a temporary table in order to complete join operations.
The full set of configuration options is available in SnowflakeOfflineStoreConfig.
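A sketch of a `feature_store.yaml` configuration for the Snowflake offline store; all connection values below are placeholders:

```yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
  type: snowflake.offline
  account: SNOWFLAKE_DEPLOYMENT_URL
  user: USERNAME
  password: PASSWORD
  role: ROLE_NAME
  warehouse: WAREHOUSE_NAME
  database: DATABASE_NAME
```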
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Snowflake offline store.
Below is a matrix indicating which functionality is supported by `SnowflakeRetrievalJob`.
To compare this set of functionality against other offline stores, please see the full functionality matrix.
The BigQuery offline store provides support for reading BigQuerySources.
All joins happen within BigQuery.
Entity dataframes can be provided as a SQL query or as a Pandas dataframe. A Pandas dataframe will be uploaded to BigQuery as a table (marked for expiration) in order to complete join operations.
The full set of configuration options is available in BigQueryOfflineStoreConfig.
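A sketch of a `feature_store.yaml` configuration for the BigQuery offline store; the dataset name is a placeholder:

```yaml
project: my_project
registry: data/registry.db
provider: gcp
offline_store:
  type: bigquery
  dataset: feast_bq_dataset
```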
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the BigQuery offline store.
Below is a matrix indicating which functionality is supported by `BigQueryRetrievalJob`.
*See GitHub issue for details on proposed solutions for enabling the BigQuery offline store to understand tables that use `_PARTITIONTIME` as the partition column.
To compare this set of functionality against other offline stores, please see the full functionality matrix.
The Redshift offline store provides support for reading RedshiftSources. All joins happen within Redshift. Entity dataframes can be provided as a SQL query or as a Pandas dataframe. A Pandas dataframe will be uploaded to Redshift temporarily in order to complete join operations.
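A sketch of a `feature_store.yaml` configuration for the Redshift offline store; the cluster, database, and S3 values are placeholders:

```yaml
project: my_project
registry: data/registry.db
provider: aws
offline_store:
  type: redshift
  region: us-west-2
  cluster_id: feast-cluster
  database: feast-database
  user: redshift-user
  s3_staging_location: s3://feast-bucket/redshift
  iam_role: arn:aws:iam::123456789012:role/redshift_s3_access_role
```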
Below is a matrix indicating which functionality is supported by `RedshiftRetrievalJob`.
Feast requires permissions to execute statements through the Redshift Data API and to retrieve cluster credentials. The following inline policy can be used to grant Feast the necessary permissions:
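A sketch of such a policy; the region, account, cluster, user, and database values are placeholders that should be scoped to your own deployment:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "redshift-data:ExecuteStatement",
                "redshift-data:DescribeStatement",
                "redshift:GetClusterCredentials"
            ],
            "Resource": [
                "arn:aws:redshift:<region>:<account_id>:cluster:<cluster_id>",
                "arn:aws:redshift:<region>:<account_id>:dbuser:<cluster_id>/<user>",
                "arn:aws:redshift:<region>:<account_id>:dbname:<cluster_id>/<database>"
            ]
        }
    ]
}
```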
The following inline policy can be used to grant Redshift the necessary permissions to access S3:
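A sketch, assuming a placeholder staging bucket; `s3:*` grants full access, and narrower actions (e.g. `s3:GetObject`, `s3:PutObject`, `s3:ListBucket`) may suffice:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::feast-bucket",
                "arn:aws:s3:::feast-bucket/*"
            ]
        }
    ]
}
```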
The following trust relationship is necessary to ensure that Redshift, and only Redshift, can assume this role:
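A sketch of that trust relationship:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "redshift.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```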