Please see Offline Store for a conceptual explanation of offline stores.
The BigQuery offline store provides support for reading BigQuerySources.
All joins happen within BigQuery.
Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. A Pandas dataframe will be uploaded to BigQuery as a table (marked for expiration) in order to complete join operations.
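As a minimal sketch (the project, dataset, table, and feature names below are hypothetical, not taken from this page), retrieving historical features with a SQL entity dataframe might look like this:

```python
from feast import FeatureStore

# Assumes a feature repo configured with the BigQuery offline store, as in the
# example feature_store.yaml further below. All table and feature names here
# are hypothetical.
store = FeatureStore(repo_path=".")

# Passing a SQL query as the entity dataframe keeps the join entirely in BigQuery.
training_df = store.get_historical_features(
    entity_df="SELECT driver_id, event_timestamp FROM my_gcp_project.my_dataset.entity_rows",
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
).to_df()
```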
In order to use this offline store, you'll need to run pip install 'feast[gcp]'. You can get started by then running feast init -t gcp.
The full set of configuration options is available in BigQueryOfflineStoreConfig.
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the BigQuery offline store.

| | BigQuery |
| :--- | :--- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | yes |
| write_logged_features (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by BigQueryRetrievalJob.

| | BigQuery |
| :--- | :--- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | yes |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data* | partial |

*See here for details on proposed solutions for enabling the BigQuery offline store to understand tables that use _PARTITIONTIME as the partition column.

To compare this set of functionality against other offline stores, please see the full functionality matrix.
An example feature_store.yaml configuration for the BigQuery offline store:

```yaml
project: my_feature_repo
registry: gs://my-bucket/data/registry.db
provider: gcp
offline_store:
  type: bigquery
  dataset: feast_bq_dataset
```

The Spark offline store provides support for reading SparkSources.
Entity dataframes can be provided as a SQL query, a Pandas dataframe, or a PySpark dataframe. A Pandas dataframe will be converted to a Spark dataframe and processed as a temporary view.
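For illustration, a sketch of passing a PySpark dataframe as the entity dataframe (the Spark session, table, and feature names are assumed, not taken from this page):

```python
from feast import FeatureStore
from pyspark.sql import SparkSession

# Hypothetical setup: a feature repo configured with the Spark offline store
# (see the example feature_store.yaml below) and an existing entity table.
store = FeatureStore(repo_path=".")
spark = SparkSession.builder.getOrCreate()

# A PySpark dataframe can be used directly as the entity dataframe; a Pandas
# dataframe would instead be converted to a Spark dataframe internally.
entity_df = spark.sql("SELECT driver_id, event_timestamp FROM driver_entities")

training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
).to_df()
```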
The Spark offline store does not achieve full test coverage. Please do not assume complete stability.
In order to use this offline store, you'll need to run pip install 'feast[spark]'. You can get started by then running feast init -t spark.
The full set of configuration options is available in SparkOfflineStoreConfig.
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Spark offline store.

| | Spark |
| :--- | :--- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | no |
| write_logged_features (persist logged features to offline store) | no |

Below is a matrix indicating which functionality is supported by SparkRetrievalJob.

| | Spark |
| :--- | :--- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | no |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| export as Spark dataframe | yes |
| local execution of Python-based on-demand transforms | no |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
An example feature_store.yaml configuration for the Spark offline store:

```yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
  type: spark
  spark_conf:
    spark.master: "local[*]"
    spark.ui.enabled: "false"
    spark.eventLog.enabled: "false"
    spark.sql.catalogImplementation: "hive"
    spark.sql.parser.quotedRegexColumnNames: "true"
    spark.sql.session.timeZone: "UTC"
    spark.sql.execution.arrow.fallback.enabled: "true"
    spark.sql.execution.arrow.pyspark.enabled: "true"
online_store:
  path: data/online_store.db
```

The Remote Offline Store is an Arrow Flight client for the offline store that implements the RemoteOfflineStore class using the existing OfflineStore interface. The client implements various methods, including get_historical_features, pull_latest_from_table_or_query, write_logged_features, and offline_write_batch.
The user needs to create a client-side feature_store.yaml file, set the offline_store type to remote, and provide the server connection configuration, including the host and the port (default 8815) that the Arrow Flight client needs in order to connect to the Arrow Flight server.
The complete example can be found here. Please see the offline feature server documentation for details on how to configure the server.
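Assuming a client-side configuration like the one shown below, client code is unchanged; the offline store calls are simply forwarded to the Arrow Flight server (the host, port, entity columns, and feature names in this sketch are illustrative):

```python
from datetime import datetime

import pandas as pd
from feast import FeatureStore

# The repo at this path contains the client-side feature_store.yaml shown below
# (offline_store type "remote"); the query itself is executed by the offline
# feature server reachable at localhost:8815.
store = FeatureStore(repo_path=".")

entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": [datetime(2024, 1, 1), datetime(2024, 1, 1)],
    }
)

training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],  # hypothetical feature view
).to_df()
```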
An example client-side feature_store.yaml offline_store configuration:

```yaml
offline_store:
  type: remote
  host: localhost
  port: 8815
```

The DuckDB offline store provides support for reading FileSources. It can read both Parquet and Delta formats. The DuckDB offline store uses ibis under the hood to convert offline store operations to DuckDB queries.
Entity dataframes can be provided as a Pandas dataframe.
In order to use this offline store, you'll need to run pip install 'feast[duckdb]'.
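As an illustrative sketch (the file path, entity, and feature names are hypothetical), a Parquet-backed FileSource that the DuckDB offline store can then query:

```python
from feast import Entity, FeatureView, Field, FileSource
from feast.types import Float32

# Hypothetical Parquet file; as noted above, the DuckDB offline store can also
# read the Delta format.
driver_stats_source = FileSource(
    name="driver_hourly_stats_source",
    path="data/driver_stats.parquet",
    timestamp_field="event_timestamp",
)

driver = Entity(name="driver", join_keys=["driver_id"])

driver_hourly_stats = FeatureView(
    name="driver_hourly_stats",
    entities=[driver],
    schema=[Field(name="conv_rate", dtype=Float32)],
    source=driver_stats_source,
)
```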
The set of functionality supported by offline stores is described in detail here. Matrices indicating which functionality is supported by the DuckDB offline store and by IbisRetrievalJob are shown further below, after the general overview of offline store functionality. To compare this set of functionality against other offline stores, please see the full functionality matrix.
Here are the methods exposed by the OfflineStore interface, along with the core functionality supported by the method:
* get_historical_features: point-in-time correct join to retrieve historical features
* pull_latest_from_table_or_query: retrieve latest feature values for materialization into the online store
* pull_all_from_table_or_query: retrieve a saved dataset
* offline_write_batch: persist dataframes to the offline store, primarily for push sources
* write_logged_features: persist logged features to the offline store, for feature logging
The first three of these methods all return a RetrievalJob specific to an offline store, such as a SnowflakeRetrievalJob. Here is a list of functionality supported by RetrievalJobs:
* export to dataframe
* export to arrow table
* export to arrow batches (to handle large datasets in memory)
* export to SQL
* export to data lake (S3, GCS, etc.)
* export to data warehouse
* export as Spark dataframe
* local execution of Python-based on-demand transforms
* remote execution of Python-based on-demand transforms
* persist results in the offline store
* preview the query plan before execution (RetrievalJobs are lazily executed)
* read partitioned data

There are currently four core offline store implementations: FileOfflineStore, BigQueryOfflineStore, SnowflakeOfflineStore, and RedshiftOfflineStore. There are several additional implementations contributed by the Feast community (PostgreSQLOfflineStore, SparkOfflineStore, and TrinoOfflineStore), which are not guaranteed to be stable or to match the functionality of the core implementations. Details for each specific offline store, such as how to configure it in a feature_store.yaml, can be found here.
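As a brief, non-authoritative sketch of how a RetrievalJob is typically consumed (the repo path, entity columns, and feature references below are placeholders):

```python
from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")  # placeholder repo path

entity_df = pd.DataFrame(
    {
        "driver_id": [1001],
        "event_timestamp": [datetime(2024, 1, 1)],
    }
)

# get_historical_features returns a RetrievalJob; it is lazily executed, so no
# query runs against the offline store until an export method is called.
job = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],  # placeholder feature reference
)

df = job.to_df()        # export to a Pandas dataframe
table = job.to_arrow()  # export to an Arrow table
```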
Below is a matrix indicating which offline stores support which methods.

| | File | BigQuery | Snowflake | Redshift | Postgres (contrib) | Spark (contrib) | Trino (contrib) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| get_historical_features | yes | yes | yes | yes | yes | yes | yes |
| pull_latest_from_table_or_query | yes | yes | yes | yes | yes | yes | yes |
| pull_all_from_table_or_query | yes | yes | yes | yes | yes | yes | yes |
| offline_write_batch | yes | yes | yes | yes | no | no | no |
| write_logged_features | yes | yes | yes | yes | no | no | no |

Below is a matrix indicating which RetrievalJobs support what functionality.

| | File | BigQuery | Snowflake | Redshift | Postgres (contrib) | Spark (contrib) | Trino (contrib) | DuckDB |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| export to dataframe | yes | yes | yes | yes | yes | yes | yes | yes |
| export to arrow table | yes | yes | yes | yes | yes | yes | yes | yes |
| export to arrow batches | no | no | no | yes | no | no | no | no |
| export to SQL | no | yes | yes | yes | yes | no | yes | no |
| export to data lake (S3, GCS, etc.) | no | no | yes | no | yes | no | no | no |
| export to data warehouse | no | yes | yes | yes | yes | no | no | no |
| export as Spark dataframe | no | no | yes | no | no | yes | no | no |
| local execution of Python-based on-demand transforms | yes | yes | yes | yes | yes | no | yes | yes |
| remote execution of Python-based on-demand transforms | no | no | no | no | no | no | no | no |
| persist results in the offline store | yes | yes | yes | yes | yes | yes | no | yes |
| preview the query plan before execution | yes | yes | yes | yes | yes | yes | yes | no |
| read partitioned data | yes | yes | yes | yes | yes | yes | yes | yes |

Below is a matrix indicating which functionality is supported by the DuckDB offline store.

| | DuckDB |
| :--- | :--- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | yes |
| write_logged_features (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by IbisRetrievalJob.

| | DuckDB |
| :--- | :--- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | no |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | no |
| read partitioned data | yes |

An example feature_store.yaml configuration for the DuckDB offline store:

```yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
  type: duckdb
online_store:
  path: data/online_store.db
```

In order to use the Snowflake offline store, you'll need to run pip install 'feast[snowflake]'.
If you're using a file based registry, then you'll also need to install the relevant cloud extra (pip install 'feast[snowflake, CLOUD]' where CLOUD is one of aws, gcp, azure)
You can get started by then running feast init -t snowflake.
The full set of configuration options is available in SnowflakeOfflineStoreConfig.
Please be aware that there is a restriction/limitation when using SQL query strings in Feast with Snowflake: avoid the use of single quotes in the SQL query string. For example, a query string containing WHERE other_column = 'value' will fail; that 'value' will fail in Snowflake. Instead, use pairs of dollar signs, like $$value$$, as described in the Snowflake documentation. (The full failing query string is shown after the configuration examples below.)
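For instance, a hedged sketch of an entity SQL query that uses Snowflake dollar-quoting instead of a single-quoted literal (the table, column, and feature names are made up):

```python
from feast import FeatureStore

# Assumes a feature repo configured with the Snowflake offline store.
store = FeatureStore(repo_path=".")

# Dollar-quoting the string literal avoids the single-quote limitation above.
entity_sql = """
SELECT driver_id, event_timestamp
FROM my_database.my_schema.entity_rows
WHERE region = $$EMEA$$
"""

training_df = store.get_historical_features(
    entity_df=entity_sql,
    features=["driver_hourly_stats:conv_rate"],
).to_df()
```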
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Snowflake offline store.

| | Snowflake |
| :--- | :--- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | yes |
| write_logged_features (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by SnowflakeRetrievalJob.

| | Snowflake |
| :--- | :--- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | yes |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | yes |
| export to data warehouse | yes |
| export as Spark dataframe | yes |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
The PostgreSQL offline store does not achieve full test coverage. Please do not assume complete stability.
In order to use this offline store, you'll need to run pip install 'feast[postgres]'. You can get started by then running feast init -t postgres.
Note that sslmode, sslkey_path, sslcert_path, and sslrootcert_path are optional parameters. The full set of configuration options is available in PostgreSQLOfflineStoreConfig.
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the PostgreSQL offline store.

| | Postgres |
| :--- | :--- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | no |
| write_logged_features (persist logged features to offline store) | no |

Below is a matrix indicating which functionality is supported by PostgreSQLRetrievalJob.

| | Postgres |
| :--- | :--- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | yes |
| export to data warehouse | yes |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
In order to use this offline store, you'll need to run pip install 'feast[azure]'. You can get started by then following this tutorial.
The MsSQL offline store does not achieve full test coverage. Please do not assume complete stability.
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the MsSQL offline store.

| | MsSQL |
| :--- | :--- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | no |
| write_logged_features (persist logged features to offline store) | no |

Below is a matrix indicating which functionality is supported by MsSqlServerRetrievalJob.

| | MsSQL |
| :--- | :--- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | no |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| local execution of Python-based on-demand transforms | no |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
The full set of configuration options is available in FileOfflineStoreConfig.
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the file offline store.

| | File |
| :--- | :--- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | yes |
| write_logged_features (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by FileRetrievalJob.

| | File |
| :--- | :--- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | no |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
An example feature_store.yaml configuration for the Snowflake offline store:

```yaml
project: my_feature_repo
registry: data/registry.db
provider: local
offline_store:
  type: snowflake.offline
  account: snowflake_deployment.us-east-1
  user: user_login
  password: user_password
  role: SYSADMIN
  warehouse: COMPUTE_WH
  database: FEAST
  schema: PUBLIC
```

The example of a query string that will fail because of the single-quoted 'value', as noted in the Snowflake section above:

```sql
SELECT
    some_column
FROM
    some_table
WHERE
    other_column = 'value'
```

An example feature_store.yaml configuration for the PostgreSQL offline store:

```yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
  type: postgres
  host: DB_HOST
  port: DB_PORT
  database: DB_NAME
  db_schema: DB_SCHEMA
  user: DB_USERNAME
  password: DB_PASSWORD
  sslmode: verify-ca
  sslkey_path: /path/to/client-key.pem
  sslcert_path: /path/to/client-cert.pem
  sslrootcert_path: /path/to/server-ca.pem
online_store:
  path: data/online_store.db
```

An example feature_store.yaml configuration for the MsSQL (Azure Synapse + Azure SQL) offline store:

```yaml
registry:
  registry_store_type: AzureRegistryStore
  path: ${REGISTRY_PATH} # Environment Variable
project: production
provider: azure
online_store:
  type: redis
  connection_string: ${REDIS_CONN} # Environment Variable
offline_store:
  type: mssql
  connection_string: ${SQL_CONN} # Environment Variable
```

An example feature_store.yaml configuration for the file offline store:

```yaml
project: my_feature_repo
registry: data/registry.db
provider: local
offline_store:
  type: file
```
The Redshift offline store provides support for reading RedshiftSources.
All joins happen within Redshift.
Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. A Pandas dataframe will be uploaded to Redshift temporarily in order to complete join operations.
In order to use this offline store, you'll need to run pip install 'feast[aws]'. You can get started by then running feast init -t aws.
The full set of configuration options is available in RedshiftOfflineStoreConfig.
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Redshift offline store.

| | Redshift |
| :--- | :--- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | yes |
| write_logged_features (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by RedshiftRetrievalJob.

| | Redshift |
| :--- | :--- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | yes |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | yes |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.

Feast requires the following permissions in order to execute commands for the Redshift offline store:

| Command | Permissions | Resources |
| :--- | :--- | :--- |
| Apply | redshift-data:DescribeTable, redshift:GetClusterCredentials | arn:aws:redshift:<region>:<account_id>:dbuser:<redshift_cluster_id>/<redshift_username>, arn:aws:redshift:<region>:<account_id>:dbname:<redshift_cluster_id>/<redshift_database_name>, arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id> |
| Materialize | redshift-data:ExecuteStatement | arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id> |
| Materialize | redshift-data:DescribeStatement | * |
| Materialize | s3:ListBucket, s3:GetObject, s3:DeleteObject | arn:aws:s3:::<bucket_name>, arn:aws:s3:::<bucket_name>/* |
| Get Historical Features | redshift-data:ExecuteStatement, redshift:GetClusterCredentials | arn:aws:redshift:<region>:<account_id>:dbuser:<redshift_cluster_id>/<redshift_username>, arn:aws:redshift:<region>:<account_id>:dbname:<redshift_cluster_id>/<redshift_database_name>, arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id> |
| Get Historical Features | redshift-data:DescribeStatement | * |
| Get Historical Features | s3:ListBucket, s3:GetObject, s3:PutObject, s3:DeleteObject | arn:aws:s3:::<bucket_name>, arn:aws:s3:::<bucket_name>/* |

The inline policy shown below (after the example configuration) can be used to grant Feast the necessary permissions.

In addition to this, the Redshift offline store requires an IAM role that will be used by Redshift itself to interact with S3. More concretely, Redshift has to use this IAM role to run UNLOAD and COPY commands. Once created, this IAM role needs to be configured in the feature_store.yaml file as offline_store: iam_role. A second inline policy, also shown below, can be used to grant Redshift the necessary permissions to access S3, while the accompanying trust relationship is necessary to make sure that Redshift, and only Redshift, can assume this role.

In order to use Redshift Serverless, specify a workgroup instead of a cluster_id and user (see the last example configuration below). Please note that the IAM policies will need the serverless versions, rather than the standard ones.
An example feature_store.yaml configuration for the Redshift offline store:

```yaml
project: my_feature_repo
registry: data/registry.db
provider: aws
offline_store:
  type: redshift
  region: us-west-2
  cluster_id: feast-cluster
  database: feast-database
  user: redshift-user
  s3_staging_location: s3://feast-bucket/redshift
  iam_role: arn:aws:iam::123456789012:role/redshift_s3_access_role
```

The inline policy that grants Feast the permissions listed above:

```json
{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::<bucket_name>/*",
        "arn:aws:s3:::<bucket_name>"
      ]
    },
    {
      "Action": [
        "redshift-data:DescribeTable",
        "redshift:GetClusterCredentials",
        "redshift-data:ExecuteStatement"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:redshift:<region>:<account_id>:dbuser:<redshift_cluster_id>/<redshift_username>",
        "arn:aws:redshift:<region>:<account_id>:dbname:<redshift_cluster_id>/<redshift_database_name>",
        "arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id>"
      ]
    },
    {
      "Action": [
        "redshift-data:DescribeStatement"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ],
  "Version": "2012-10-17"
}
```

The inline policy that grants Redshift the necessary permissions to access S3:

```json
{
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::feast-int-bucket",
        "arn:aws:s3:::feast-int-bucket/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
```

The trust relationship that makes sure only Redshift can assume this role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "redshift.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

An example feature_store.yaml configuration for Redshift Serverless, using a workgroup instead of a cluster_id and user:

```yaml
project: my_feature_repo
registry: data/registry.db
provider: aws
offline_store:
  type: redshift
  region: us-west-2
  workgroup: feast-workgroup
  database: feast-database
  s3_staging_location: s3://feast-bucket/redshift
  iam_role: arn:aws:iam::123456789012:role/redshift_s3_access_role
```

The Trino offline store provides support for reading TrinoSources.
Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. A Pandas dataframe will be uploaded to Trino as a table in order to complete join operations.
The Trino offline store does not achieve full test coverage. Please do not assume complete stability.
In order to use this offline store, you'll need to run pip install 'feast[trino]'. You can then run feast init, then swap out feature_store.yaml with the below example to connect to Trino.
The full set of configuration options is available in TrinoOfflineStoreConfig.
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Trino offline store.

| | Trino |
| :--- | :--- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | no |
| write_logged_features (persist logged features to offline store) | no |

Below is a matrix indicating which functionality is supported by TrinoRetrievalJob.

| | Trino |
| :--- | :--- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | no |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
An example feature_store.yaml configuration for the Trino offline store:

```yaml
project: feature_repo
registry: data/registry.db
provider: local
offline_store:
  type: feast_trino.trino.TrinoOfflineStore
  host: localhost
  port: 8080
  catalog: memory
  connector:
    type: memory
  user: trino
  source: feast-trino-offline-store
  http-scheme: https
  ssl-verify: false
  x-trino-extra-credential-header: foo=bar, baz=qux
  # enables authentication in Trino connections, pick the one you need
  # if you don't need authentication, you can safely remove the whole auth block
  auth:
    # Basic Auth
    type: basic
    config:
      username: foo
      password: $FOO
    # Certificate
    type: certificate
    config:
      cert-file: /path/to/cert/file
      key-file: /path/to/key/file
    # JWT
    type: jwt
    config:
      token: $JWT_TOKEN
    # OAuth2 (no config required)
    type: oauth2
    # Kerberos
    type: kerberos
    config:
      config-file: /path/to/kerberos/config/file
      service-name: foo
      mutual-authentication: true
      force-preemptive: true
      hostname-override: custom-hostname
      sanitize-mutual-error-response: true
      principal: principal-name
      delegate: true
      ca_bundle: /path/to/ca/bundle/file
online_store:
  path: data/online_store.db
```