Offline stores

Please see Offline Store for a conceptual explanation of offline stores.


BigQuery

Description

The BigQuery offline store provides support for reading BigQuerySources.

  • All joins happen within BigQuery.

  • Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. A Pandas dataframe will be uploaded to BigQuery as a table (marked for expiration) in order to complete join operations.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[gcp]'. You can get started by then running feast init -t gcp.

Example

feature_store.yaml
project: my_feature_repo
registry: gs://my-bucket/data/registry.db
provider: gcp
offline_store:
  type: bigquery
  dataset: feast_bq_dataset

The full set of configuration options is available in BigQueryOfflineStoreConfig.
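Once the repository above is configured, a historical retrieval runs inside BigQuery. Below is a minimal sketch; the project, dataset, feature view, and column names are hypothetical placeholders:

from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Option 1: the entity dataframe as a SQL query, executed inside BigQuery.
entity_sql = """
    SELECT driver_id, event_timestamp
    FROM `my_project.my_dataset.entity_rows`
"""

# Option 2: the entity dataframe as Pandas; Feast uploads it to BigQuery
# as a table (marked for expiration) before running the join.
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": [datetime(2023, 1, 1), datetime(2023, 1, 2)],
    }
)

training_df = store.get_historical_features(
    entity_df=entity_sql,  # or entity_df=entity_df
    features=["driver_hourly_stats:conv_rate"],
).to_df()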

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the BigQuery offline store.

|                                                                   | BigQuery |
| ----------------------------------------------------------------- | -------- |
| get_historical_features (point-in-time correct join)              | yes      |
| pull_latest_from_table_or_query (retrieve latest feature values)  | yes      |
| pull_all_from_table_or_query (retrieve a saved dataset)           | yes      |
| offline_write_batch (persist dataframes to offline store)         | yes      |
| write_logged_features (persist logged features to offline store)  | yes      |

Below is a matrix indicating which functionality is supported by BigQueryRetrievalJob.

|                                                        | BigQuery |
| ------------------------------------------------------ | -------- |
| export to dataframe                                    | yes      |
| export to arrow table                                  | yes      |
| export to arrow batches                                | no       |
| export to SQL                                          | yes      |
| export to data lake (S3, GCS, etc.)                    | no       |
| export to data warehouse                               | yes      |
| export as Spark dataframe                              | no       |
| local execution of Python-based on-demand transforms   | yes      |
| remote execution of Python-based on-demand transforms  | no       |
| persist results in the offline store                   | yes      |
| preview the query plan before execution                | yes      |
| read partitioned data*                                 | partial  |

*See GitHub issue for details on proposed solutions for enabling the BigQuery offline store to understand tables that use _PARTITIONTIME as the partition column.

To compare this set of functionality against other offline stores, please see the full functionality matrix.
Snowflake

Description

The Snowflake offline store provides support for reading SnowflakeSources.

  • All joins happen within Snowflake.

  • Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. A Pandas dataframe will be uploaded to Snowflake as a temporary table in order to complete join operations.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[snowflake]'.

If you're using a file-based registry, then you'll also need to install the relevant cloud extra (pip install 'feast[snowflake, CLOUD]', where CLOUD is one of aws, gcp, or azure).

You can get started by then running feast init -t snowflake.

Example

feature_store.yaml
project: my_feature_repo
registry: data/registry.db
provider: local
offline_store:
  type: snowflake.offline
  account: snowflake_deployment.us-east-1
  user: user_login
  password: user_password
  role: SYSADMIN
  warehouse: COMPUTE_WH
  database: FEAST
  schema: PUBLIC

The full set of configuration options is available in SnowflakeOfflineStoreConfig.
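As a minimal sketch of retrieval against this store (the feature view, entity, and column names are hypothetical), a Pandas entity dataframe is uploaded as a temporary table and the join runs in Snowflake:

from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

entity_df = pd.DataFrame(
    {
        "customer_id": [101, 102],
        "event_timestamp": [datetime(2023, 1, 1), datetime(2023, 1, 2)],
    }
)

job = store.get_historical_features(
    entity_df=entity_df,
    features=["customer_stats:total_purchases"],
)

# SnowflakeRetrievalJob can export to a dataframe or an Arrow table.
arrow_table = job.to_arrow()
df = job.to_df()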

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Snowflake offline store.

|                                                                   | Snowflake |
| ----------------------------------------------------------------- | --------- |
| get_historical_features (point-in-time correct join)              | yes       |
| pull_latest_from_table_or_query (retrieve latest feature values)  | yes       |
| pull_all_from_table_or_query (retrieve a saved dataset)           | yes       |
| offline_write_batch (persist dataframes to offline store)         | yes       |
| write_logged_features (persist logged features to offline store)  | yes       |

Below is a matrix indicating which functionality is supported by SnowflakeRetrievalJob.

|                                                        | Snowflake |
| ------------------------------------------------------ | --------- |
| export to dataframe                                    | yes       |
| export to arrow table                                  | yes       |
| export to arrow batches                                | yes       |
| export to SQL                                          | yes       |
| export to data lake (S3, GCS, etc.)                    | yes       |
| export to data warehouse                               | yes       |
| export as Spark dataframe                              | yes       |
| local execution of Python-based on-demand transforms   | yes       |
| remote execution of Python-based on-demand transforms  | no        |
| persist results in the offline store                   | yes       |
| preview the query plan before execution                | yes       |
| read partitioned data                                  | yes       |

To compare this set of functionality against other offline stores, please see the full functionality matrix.

Azure Synapse + Azure SQL (contrib)

Description

The MsSQL offline store provides support for reading MsSQL Sources. Specifically, it is developed to read from Synapse SQL on Microsoft Azure.

  • Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe.


Getting started

In order to use this offline store, you'll need to run pip install 'feast[azure]'. You can get started by then following this tutorial.

Disclaimer

The MsSQL offline store does not achieve full test coverage. Please do not assume complete stability.

Example

feature_store.yaml
registry:
  registry_store_type: AzureRegistryStore
  path: ${REGISTRY_PATH} # Environment Variable
project: production
provider: azure
online_store:
    type: redis
    connection_string: ${REDIS_CONN} # Environment Variable
offline_store:
    type: mssql
    connection_string: ${SQL_CONN}  # Environment Variable
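The ${REGISTRY_PATH}, ${REDIS_CONN}, and ${SQL_CONN} placeholders are resolved from environment variables when the configuration is loaded, which keeps credentials out of version control. Below is a minimal sketch of wiring them up before instantiating the store; the connection-string format is an assumption and should be adapted to your Azure SQL / Synapse deployment:

import os

from feast import FeatureStore

# Hypothetical values; adapt to your deployment.
os.environ["REGISTRY_PATH"] = "data/registry.db"
os.environ["REDIS_CONN"] = "localhost:6379"
os.environ["SQL_CONN"] = (
    "mssql+pyodbc://user:password@myserver.database.windows.net:1433/feast"
    "?driver=ODBC+Driver+17+for+SQL+Server"  # assumed format
)

# Reads the feature_store.yaml above and substitutes the variables.
store = FeatureStore(repo_path=".")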

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the MsSQL offline store.

|                                                                   | MsSql |
| ----------------------------------------------------------------- | ----- |
| get_historical_features (point-in-time correct join)              | yes   |
| pull_latest_from_table_or_query (retrieve latest feature values)  | yes   |
| pull_all_from_table_or_query (retrieve a saved dataset)           | yes   |
| offline_write_batch (persist dataframes to offline store)         | no    |
| write_logged_features (persist logged features to offline store)  | no    |

Below is a matrix indicating which functionality is supported by MsSqlServerRetrievalJob.

|                                                        | MsSql |
| ------------------------------------------------------ | ----- |
| export to dataframe                                    | yes   |
| export to arrow table                                  | yes   |
| export to arrow batches                                | no    |
| export to SQL                                          | no    |
| export to data lake (S3, GCS, etc.)                    | no    |
| export to data warehouse                               | no    |
| local execution of Python-based on-demand transforms   | no    |
| remote execution of Python-based on-demand transforms  | no    |
| persist results in the offline store                   | yes   |

To compare this set of functionality against other offline stores, please see the full functionality matrix.

PostgreSQL (contrib)

Description

The PostgreSQL offline store provides support for reading PostgreSQLSources.

  • Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. A Pandas dataframe will be uploaded to Postgres as a table in order to complete join operations.

Disclaimer

The PostgreSQL offline store does not achieve full test coverage. Please do not assume complete stability.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[postgres]'. You can get started by then running feast init -t postgres.

Example

feature_store.yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
  type: postgres
  host: DB_HOST
  port: DB_PORT
  database: DB_NAME
  db_schema: DB_SCHEMA
  user: DB_USERNAME
  password: DB_PASSWORD
  sslmode: verify-ca
  sslkey_path: /path/to/client-key.pem
  sslcert_path: /path/to/client-cert.pem
  sslrootcert_path: /path/to/server-ca.pem
online_store:
    path: data/online_store.db

Note that sslmode, sslkey_path, sslcert_path, and sslrootcert_path are optional parameters. The full set of configuration options is available in PostgreSQLOfflineStoreConfig.
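With the store configured, feature views can be backed by Postgres tables or queries. Below is a minimal sketch of a source definition; the table and column names are hypothetical, the import path reflects the contrib package layout, and argument names may differ across Feast versions:

from feast.infra.offline_stores.contrib.postgres_offline_store.postgres_source import (
    PostgreSQLSource,
)

# Hypothetical source reading from a Postgres query.
driver_stats_source = PostgreSQLSource(
    name="driver_hourly_stats_source",
    query="SELECT * FROM driver_hourly_stats",
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)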

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the PostgreSQL offline store.

|                                                                   | Postgres |
| ----------------------------------------------------------------- | -------- |
| get_historical_features (point-in-time correct join)              | yes      |
| pull_latest_from_table_or_query (retrieve latest feature values)  | yes      |
| pull_all_from_table_or_query (retrieve a saved dataset)           | yes      |
| offline_write_batch (persist dataframes to offline store)         | no       |
| write_logged_features (persist logged features to offline store)  | no       |

Below is a matrix indicating which functionality is supported by PostgreSQLRetrievalJob.

|                                                        | Postgres |
| ------------------------------------------------------ | -------- |
| export to dataframe                                    | yes      |
| export to arrow table                                  | yes      |
| export to arrow batches                                | no       |
| export to SQL                                          | yes      |
| export to data lake (S3, GCS, etc.)                    | yes      |
| export to data warehouse                               | yes      |
| export as Spark dataframe                              | no       |
| local execution of Python-based on-demand transforms   | yes      |
| remote execution of Python-based on-demand transforms  | no       |
| persist results in the offline store                   | yes      |
| preview the query plan before execution                | yes      |
| read partitioned data                                  | yes      |

To compare this set of functionality against other offline stores, please see the full functionality matrix.

Spark (contrib)

Description

The Spark offline store provides support for reading SparkSources.

  • Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. A Pandas dataframe will be converted to a Spark dataframe and processed as a temporary view.

Disclaimer

The Spark offline store does not achieve full test coverage. Please do not assume complete stability.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[spark]'. You can get started by then running feast init -t spark.

Example

feature_store.yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
    type: spark
    spark_conf:
        spark.master: "local[*]"
        spark.ui.enabled: "false"
        spark.eventLog.enabled: "false"
        spark.sql.catalogImplementation: "hive"
        spark.sql.parser.quotedRegexColumnNames: "true"
        spark.sql.session.timeZone: "UTC"
online_store:
    path: data/online_store.db

The full set of configuration options is available in SparkOfflineStoreConfig.
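Beyond the store configuration, feature views read from SparkSources. Below is a minimal sketch; the table and column names are hypothetical, and the import path reflects the contrib package layout:

from feast.infra.offline_stores.contrib.spark_offline_store.spark_source import (
    SparkSource,
)

# Hypothetical source; a SparkSource can point at a table in the
# configured catalog, a file path, or a SQL query.
driver_stats_source = SparkSource(
    name="driver_hourly_stats_source",
    table="driver_hourly_stats",
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)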

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Spark offline store.

|                                                                   | Spark |
| ----------------------------------------------------------------- | ----- |
| get_historical_features (point-in-time correct join)              | yes   |
| pull_latest_from_table_or_query (retrieve latest feature values)  | yes   |
| pull_all_from_table_or_query (retrieve a saved dataset)           | yes   |
| offline_write_batch (persist dataframes to offline store)         | no    |
| write_logged_features (persist logged features to offline store)  | no    |

Below is a matrix indicating which functionality is supported by SparkRetrievalJob.

|                                                        | Spark |
| ------------------------------------------------------ | ----- |
| export to dataframe                                    | yes   |
| export to arrow table                                  | yes   |
| export to arrow batches                                | no    |
| export to SQL                                          | no    |
| export to data lake (S3, GCS, etc.)                    | no    |
| export to data warehouse                               | no    |
| export as Spark dataframe                              | yes   |
| local execution of Python-based on-demand transforms   | no    |
| remote execution of Python-based on-demand transforms  | no    |
| persist results in the offline store                   | yes   |
| preview the query plan before execution                | yes   |
| read partitioned data                                  | yes   |

To compare this set of functionality against other offline stores, please see the full functionality matrix.


Redshift

Description

The Redshift offline store provides support for reading RedshiftSources.

  • All joins happen within Redshift.

  • Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. A Pandas dataframe will be uploaded to Redshift temporarily in order to complete join operations.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[aws]'. You can get started by then running feast init -t aws.

Example

feature_store.yaml
project: my_feature_repo
registry: data/registry.db
provider: aws
offline_store:
  type: redshift
  region: us-west-2
  cluster_id: feast-cluster
  database: feast-database
  user: redshift-user
  s3_staging_location: s3://feast-bucket/redshift
  iam_role: arn:aws:iam::123456789012:role/redshift_s3_access_role

The full set of configuration options is available in RedshiftOfflineStoreConfig.
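Feature views backed by this store read from RedshiftSources. Below is a minimal sketch of a source definition; the table and column names are hypothetical:

from feast import RedshiftSource

# Hypothetical source reading from a Redshift table.
driver_stats_source = RedshiftSource(
    name="driver_hourly_stats_source",
    table="driver_hourly_stats",
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)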

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Redshift offline store.

|                                                                   | Redshift |
| ----------------------------------------------------------------- | -------- |
| get_historical_features (point-in-time correct join)              | yes      |
| pull_latest_from_table_or_query (retrieve latest feature values)  | yes      |
| pull_all_from_table_or_query (retrieve a saved dataset)           | yes      |
| offline_write_batch (persist dataframes to offline store)         | yes      |
| write_logged_features (persist logged features to offline store)  | yes      |

Below is a matrix indicating which functionality is supported by RedshiftRetrievalJob.

|                                                        | Redshift |
| ------------------------------------------------------ | -------- |
| export to dataframe                                    | yes      |
| export to arrow table                                  | yes      |
| export to arrow batches                                | yes      |
| export to SQL                                          | yes      |
| export to data lake (S3, GCS, etc.)                    | no       |
| export to data warehouse                               | yes      |
| export as Spark dataframe                              | no       |
| local execution of Python-based on-demand transforms   | yes      |
| remote execution of Python-based on-demand transforms  | no       |
| persist results in the offline store                   | yes      |
| preview the query plan before execution                | yes      |
| read partitioned data                                  | yes      |

To compare this set of functionality against other offline stores, please see the full functionality matrix.

Permissions

Feast requires the following permissions in order to execute commands for the Redshift offline store:

| Command | Permissions | Resources |
| ------- | ----------- | --------- |
| Apply | redshift-data:DescribeTable, redshift:GetClusterCredentials | arn:aws:redshift:<region>:<account_id>:dbuser:<redshift_cluster_id>/<redshift_username>, arn:aws:redshift:<region>:<account_id>:dbname:<redshift_cluster_id>/<redshift_database_name>, arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id> |
| Materialize | redshift-data:ExecuteStatement | arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id> |
| Materialize | redshift-data:DescribeStatement | * |
| Materialize | s3:ListBucket, s3:GetObject, s3:DeleteObject | arn:aws:s3:::<bucket_name>, arn:aws:s3:::<bucket_name>/* |
| Get Historical Features | redshift-data:ExecuteStatement, redshift:GetClusterCredentials | arn:aws:redshift:<region>:<account_id>:dbuser:<redshift_cluster_id>/<redshift_username>, arn:aws:redshift:<region>:<account_id>:dbname:<redshift_cluster_id>/<redshift_database_name>, arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id> |
| Get Historical Features | redshift-data:DescribeStatement | * |
| Get Historical Features | s3:ListBucket, s3:GetObject, s3:PutObject, s3:DeleteObject | arn:aws:s3:::<bucket_name>, arn:aws:s3:::<bucket_name>/* |

The following inline policy can be used to grant Feast the necessary permissions:

{
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::<bucket_name>/*",
                "arn:aws:s3:::<bucket_name>"
            ]
        },
        {
            "Action": [
                "redshift-data:DescribeTable",
                "redshift:GetClusterCredentials",
                "redshift-data:ExecuteStatement"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:redshift:<region>:<account_id>:dbuser:<redshift_cluster_id>/<redshift_username>",
                "arn:aws:redshift:<region>:<account_id>:dbname:<redshift_cluster_id>/<redshift_database_name>",
                "arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id>"
            ]
        },
        {
            "Action": [
                "redshift-data:DescribeStatement"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ],
    "Version": "2012-10-17"
}

In addition to this, the Redshift offline store requires an IAM role that will be used by Redshift itself to interact with S3. More concretely, Redshift has to use this IAM role to run UNLOAD and COPY commands. Once created, this IAM role needs to be configured in the feature_store.yaml file as offline_store: iam_role.

The following inline policy can be used to grant Redshift the necessary permissions to access S3:

{
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::feast-integration-tests",
                "arn:aws:s3:::feast-integration-tests/*"
            ]
        }
    ],
    "Version": "2012-10-17"
}

While the following trust relationship is necessary to make sure that Redshift, and only Redshift, can assume this role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "redshift.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Redshift Serverless

In order to use AWS Redshift Serverless, specify a workgroup instead of a cluster_id and user.

feature_store.yaml
project: my_feature_repo
registry: data/registry.db
provider: aws
offline_store:
  type: redshift
  region: us-west-2
  workgroup: feast-workgroup
  database: feast-database
  s3_staging_location: s3://feast-bucket/redshift
  iam_role: arn:aws:iam::123456789012:role/redshift_s3_access_role

Please note that the IAM policies above will need the redshift-serverless version, rather than the standard redshift.

File

Description

The file offline store provides support for reading FileSources. It uses Dask as the compute engine.

All data is downloaded and joined using Python and therefore may not scale to production workloads.

Example

feature_store.yaml
project: my_feature_repo
registry: data/registry.db
provider: local
offline_store:
  type: file

The full set of configuration options is available in FileOfflineStoreConfig.
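Since FileRetrievalJob supports persisting results, a retrieval can be stored as a saved dataset and later retrieved via pull_all_from_table_or_query. Below is a minimal sketch; the feature view, entity, and path are hypothetical:

from datetime import datetime

import pandas as pd
from feast import FeatureStore
from feast.infra.offline_stores.file_source import SavedDatasetFileStorage

store = FeatureStore(repo_path=".")

entity_df = pd.DataFrame(
    {
        "driver_id": [1001],
        "event_timestamp": [datetime(2023, 1, 1)],
    }
)

job = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
)

# Persist the retrieval result as a saved dataset backed by a Parquet file.
dataset = store.create_saved_dataset(
    from_=job,
    name="driver_training_data",
    storage=SavedDatasetFileStorage(path="data/driver_training_data.parquet"),
)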

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the file offline store.

|                                                                   | File |
| ----------------------------------------------------------------- | ---- |
| get_historical_features (point-in-time correct join)              | yes  |
| pull_latest_from_table_or_query (retrieve latest feature values)  | yes  |
| pull_all_from_table_or_query (retrieve a saved dataset)           | yes  |
| offline_write_batch (persist dataframes to offline store)         | yes  |
| write_logged_features (persist logged features to offline store)  | yes  |

Below is a matrix indicating which functionality is supported by FileRetrievalJob.

|                                                        | File |
| ------------------------------------------------------ | ---- |
| export to dataframe                                    | yes  |
| export to arrow table                                  | yes  |
| export to arrow batches                                | no   |
| export to SQL                                          | no   |
| export to data lake (S3, GCS, etc.)                    | no   |
| export to data warehouse                               | no   |
| export as Spark dataframe                              | no   |
| local execution of Python-based on-demand transforms   | yes  |
| remote execution of Python-based on-demand transforms  | no   |
| persist results in the offline store                   | yes  |
| preview the query plan before execution                | yes  |
| read partitioned data                                  | yes  |

To compare this set of functionality against other offline stores, please see the full functionality matrix.

Trino (contrib)

Description

The Trino offline store provides support for reading TrinoSources.

  • Entity dataframes can be provided as a SQL query or can be provided as a Pandas dataframe. A Pandas dataframe will be uploaded to Trino as a table in order to complete join operations.

Disclaimer

The Trino offline store does not achieve full test coverage. Please do not assume complete stability.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[trino]'. You can then run feast init, then swap out feature_store.yaml with the below example to connect to Trino.

Example

feature_store.yaml
project: feature_repo
registry: data/registry.db
provider: local
offline_store:
    type: feast_trino.trino.TrinoOfflineStore
    host: localhost
    port: 8080
    catalog: memory
    connector:
        type: memory
    user: trino
    source: feast-trino-offline-store
    http-scheme: https
    ssl-verify: false
    x-trino-extra-credential-header: foo=bar, baz=qux

    # enables authentication in Trino connections, pick the one you need
    # if you don't need authentication, you can safely remove the whole auth block
    auth:
        # Basic Auth
        type: basic
        config:
            username: foo
            password: $FOO

        # Certificate
        type: certificate
        config:
            cert-file: /path/to/cert/file
            key-file: /path/to/key/file

        # JWT
        type: jwt
        config:
            token: $JWT_TOKEN

        # OAuth2 (no config required)
        type: oauth2

        # Kerberos
        type: kerberos
        config:
            config-file: /path/to/kerberos/config/file
            service-name: foo
            mutual-authentication: true
            force-preemptive: true
            hostname-override: custom-hostname
            sanitize-mutual-error-response: true
            principal: principal-name
            delegate: true
            ca_bundle: /path/to/ca/bundle/file
online_store:
    path: data/online_store.db

The full set of configuration options is available in TrinoOfflineStoreConfig.

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Trino offline store.

|                                                                   | Trino |
| ----------------------------------------------------------------- | ----- |
| get_historical_features (point-in-time correct join)              | yes   |
| pull_latest_from_table_or_query (retrieve latest feature values)  | yes   |
| pull_all_from_table_or_query (retrieve a saved dataset)           | yes   |
| offline_write_batch (persist dataframes to offline store)         | no    |
| write_logged_features (persist logged features to offline store)  | no    |

Below is a matrix indicating which functionality is supported by TrinoRetrievalJob.

|                                                        | Trino |
| ------------------------------------------------------ | ----- |
| export to dataframe                                    | yes   |
| export to arrow table                                  | yes   |
| export to arrow batches                                | no    |
| export to SQL                                          | yes   |
| export to data lake (S3, GCS, etc.)                    | no    |
| export to data warehouse                               | no    |
| export as Spark dataframe                              | no    |
| local execution of Python-based on-demand transforms   | yes   |
| remote execution of Python-based on-demand transforms  | no    |
| persist results in the offline store                   | no    |
| preview the query plan before execution                | yes   |
| read partitioned data                                  | yes   |

To compare this set of functionality against other offline stores, please see the full functionality matrix.


Overview

Functionality

Here are the methods exposed by the OfflineStore interface, along with the core functionality supported by each method:

  • get_historical_features: point-in-time correct join to retrieve historical features

  • pull_latest_from_table_or_query: retrieve latest feature values for materialization into the online store

  • pull_all_from_table_or_query: retrieve a saved dataset

  • offline_write_batch: persist dataframes to the offline store, primarily for push sources

  • write_logged_features: persist logged features to the offline store, for feature logging

The first three of these methods all return a RetrievalJob specific to an offline store, such as a SnowflakeRetrievalJob. Here is a list of functionality supported by RetrievalJobs:

  • export to dataframe

  • export to arrow table

  • export to arrow batches (to handle large datasets in memory)

  • export to SQL

  • export to data lake (S3, GCS, etc.)

  • export to data warehouse

  • export as Spark dataframe

  • local execution of Python-based on-demand transforms

  • remote execution of Python-based on-demand transforms

  • persist results in the offline store

  • preview the query plan before execution (RetrievalJobs are lazily executed)

  • read partitioned data
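As a minimal sketch of that workflow (the entity and feature names are hypothetical): get_historical_features returns a RetrievalJob immediately, and the underlying query only runs when an export method such as to_df or to_arrow is called.

from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

entity_df = pd.DataFrame(
    {
        "driver_id": [1001],
        "event_timestamp": [datetime(2023, 1, 1)],
    }
)

# Returns a store-specific RetrievalJob (e.g. SnowflakeRetrievalJob);
# nothing is executed yet.
job = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
)

df = job.to_df()        # executes the query, exports to a Pandas dataframe
table = job.to_arrow()  # exports to an Arrow table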

Functionality Matrix

There are currently four core offline store implementations: FileOfflineStore, BigQueryOfflineStore, SnowflakeOfflineStore, and RedshiftOfflineStore. There are several additional implementations contributed by the Feast community (PostgreSQLOfflineStore, SparkOfflineStore, and TrinoOfflineStore), which are not guaranteed to be stable or to match the functionality of the core implementations. Details for each specific offline store, such as how to configure it in a feature_store.yaml, can be found here.

Below is a matrix indicating which offline stores support which methods.

|                                  | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino |
| -------------------------------- | ---- | -------- | --------- | -------- | -------- | ----- | ----- |
| get_historical_features          | yes  | yes      | yes       | yes      | yes      | yes   | yes   |
| pull_latest_from_table_or_query  | yes  | yes      | yes       | yes      | yes      | yes   | yes   |
| pull_all_from_table_or_query     | yes  | yes      | yes       | yes      | yes      | yes   | yes   |
| offline_write_batch              | yes  | yes      | yes       | yes      | no       | no    | no    |
| write_logged_features            | yes  | yes      | yes       | yes      | no       | no    | no    |

Below is a matrix indicating which RetrievalJobs support what functionality.

|                                                        | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino |
| ------------------------------------------------------ | ---- | -------- | --------- | -------- | -------- | ----- | ----- |
| export to dataframe                                    | yes  | yes      | yes       | yes      | yes      | yes   | yes   |
| export to arrow table                                  | yes  | yes      | yes       | yes      | yes      | yes   | yes   |
| export to arrow batches                                | no   | no       | no        | yes      | no       | no    | no    |
| export to SQL                                          | no   | yes      | yes       | yes      | yes      | no    | yes   |
| export to data lake (S3, GCS, etc.)                    | no   | no       | yes       | no       | yes      | no    | no    |
| export to data warehouse                               | no   | yes      | yes       | yes      | yes      | no    | no    |
| export as Spark dataframe                              | no   | no       | yes       | no       | no       | yes   | no    |
| local execution of Python-based on-demand transforms   | yes  | yes      | yes       | yes      | yes      | no    | yes   |
| remote execution of Python-based on-demand transforms  | no   | no       | no        | no       | no       | no    | no    |
| persist results in the offline store                   | yes  | yes      | yes       | yes      | yes      | yes   | no    |
| preview the query plan before execution                | yes  | yes      | yes       | yes      | yes      | yes   | yes   |
| read partitioned data                                  | yes  | yes      | yes       | yes      | yes      | yes   | yes   |