Offline stores

Please see Offline Store for a conceptual explanation of offline stores.

This section covers the following pages: Overview, File, Snowflake, BigQuery, Redshift, DuckDB, Spark (contrib), PostgreSQL (contrib), Trino (contrib), and Azure Synapse + Azure SQL (contrib).

BigQuery

Description

The BigQuery offline store provides support for reading BigQuerySources.

  • All joins happen within BigQuery.

  • Entity dataframes can be provided as a SQL query or as a Pandas dataframe. A Pandas dataframe will be uploaded to BigQuery as a table (marked for expiration) in order to complete join operations.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[gcp]'. You can get started by then running feast init -t gcp.

Example

feature_store.yaml
project: my_feature_repo
registry: gs://my-bucket/data/registry.db
provider: gcp
offline_store:
  type: bigquery
  dataset: feast_bq_dataset

The full set of configuration options is available in BigQueryOfflineStoreConfig.

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the BigQuery offline store.

| Functionality | BigQuery |
| --- | --- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | yes |
| write_logged_features (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by BigQueryRetrievalJob.

| Functionality | BigQuery |
| --- | --- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | yes |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data* | partial |

*See this GitHub issue for details on proposed solutions for enabling the BigQuery offline store to understand tables that use _PARTITIONTIME as the partition column.

To compare this set of functionality against other offline stores, please see the full functionality matrix.
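As an illustration, a minimal sketch of retrieving training data with the configuration from the example above; the entity column, timestamps, and feature reference (driver_hourly_stats:conv_rate) are hypothetical, and because the entity dataframe is a Pandas dataframe it is uploaded to BigQuery as an expiring table before the join runs.

from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")  # directory containing the feature_store.yaml above

# Hypothetical entity rows; the point-in-time join itself runs inside BigQuery.
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": [datetime(2024, 1, 1), datetime(2024, 1, 2)],
    }
)

training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
).to_df()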

Spark (contrib)

Description

The Spark offline store provides support for reading SparkSources.

  • Entity dataframes can be provided as a SQL query, a Pandas dataframe, or a PySpark dataframe. A Pandas dataframe will be converted to a Spark dataframe and processed as a temporary view.

Disclaimer

The Spark offline store does not achieve full test coverage. Please do not assume complete stability.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[spark]'. You can get started by then running feast init -t spark.

Example

feature_store.yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
    type: spark
    spark_conf:
        spark.master: "local[*]"
        spark.ui.enabled: "false"
        spark.eventLog.enabled: "false"
        spark.sql.catalogImplementation: "hive"
        spark.sql.parser.quotedRegexColumnNames: "true"
        spark.sql.session.timeZone: "UTC"
        spark.sql.execution.arrow.fallback.enabled: "true"
        spark.sql.execution.arrow.pyspark.enabled: "true"
online_store:
    path: data/online_store.db

The full set of configuration options is available in SparkOfflineStoreConfig.

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Spark offline store.

| Functionality | Spark |
| --- | --- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | no |
| write_logged_features (persist logged features to offline store) | no |

Below is a matrix indicating which functionality is supported by SparkRetrievalJob.

| Functionality | Spark |
| --- | --- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | no |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| export as Spark dataframe | yes |
| local execution of Python-based on-demand transforms | no |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
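As an illustration of the PySpark option mentioned in the description above, a minimal sketch; the SparkSession setup, entity columns, and feature reference are hypothetical and should be consistent with the spark_conf in feature_store.yaml.

from datetime import datetime

from pyspark.sql import SparkSession
from feast import FeatureStore

store = FeatureStore(repo_path=".")
spark = SparkSession.builder.master("local[*]").getOrCreate()

# Hypothetical entity rows passed directly as a PySpark dataframe; a Pandas
# dataframe would instead be converted to a Spark dataframe and registered
# as a temporary view.
entity_df = spark.createDataFrame(
    [(1001, datetime(2024, 1, 1)), (1002, datetime(2024, 1, 2))],
    ["driver_id", "event_timestamp"],
)

training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
).to_df()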

Remote Offline

Description

The Remote Offline Store is an Arrow Flight client for the offline store that implements the RemoteOfflineStore class using the existing OfflineStore interface. The client implements various methods, including get_historical_features, pull_latest_from_table_or_query, write_logged_features, and offline_write_batch.

How to configure the client

The user needs to create a client-side feature_store.yaml file, set the offline_store type to remote, and provide the server connection configuration, including the host and the port (default is 8815) required by the Arrow Flight client to connect to the Arrow Flight server.

feature_store.yaml
offline_store:
  type: remote
  host: localhost
  port: 8815

Client Example

The complete example can be found under remote-offline-store-example.

How to configure the server

Please see offline-feature-server.md for details on how to configure the offline feature server.
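On the client side, retrieval goes through the regular FeatureStore API and is forwarded to the Arrow Flight server; a minimal sketch, with hypothetical entity and feature names:

from datetime import datetime

import pandas as pd
from feast import FeatureStore

# Points at the repo containing the client-side feature_store.yaml shown above.
store = FeatureStore(repo_path=".")

entity_df = pd.DataFrame(
    {"driver_id": [1001], "event_timestamp": [datetime(2024, 1, 1)]}
)

# The query is executed by the offline feature server; results are streamed
# back to the client as Arrow data.
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
).to_df()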

DuckDB

Description

The DuckDB offline store provides support for reading FileSources. It can read both Parquet and Delta formats. The DuckDB offline store uses ibis under the hood to convert offline store operations to DuckDB queries.

  • Entity dataframes can be provided as a Pandas dataframe.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[duckdb]'.

Example

feature_store.yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
    type: duckdb
online_store:
    path: data/online_store.db

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the DuckDB offline store.

| Functionality | DuckDB |
| --- | --- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | yes |
| write_logged_features (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by IbisRetrievalJob.

| Functionality | DuckDB |
| --- | --- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | no |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | no |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
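Since the DuckDB offline store reads FileSources, data sources are declared the same way as for the file offline store; a brief sketch with a hypothetical Parquet path and field names (Delta tables are also supported):

from feast import FileSource

# Hypothetical Parquet file containing feature data.
driver_stats_source = FileSource(
    name="driver_hourly_stats_source",
    path="data/driver_stats.parquet",
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)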

Overview

Functionality

Here are the methods exposed by the OfflineStore interface, along with the core functionality supported by the method:

  • get_historical_features: point-in-time correct join to retrieve historical features

  • pull_latest_from_table_or_query: retrieve latest feature values for materialization into the online store

  • pull_all_from_table_or_query: retrieve a saved dataset

  • offline_write_batch: persist dataframes to the offline store, primarily for push sources

  • write_logged_features: persist logged features to the offline store, for feature logging

The first three of these methods all return a RetrievalJob specific to an offline store, such as a SnowflakeRetrievalJob. Here is a list of functionality supported by RetrievalJobs:

  • export to dataframe

  • export to arrow table

  • export to arrow batches (to handle large datasets in memory)

  • export to SQL

  • export to data lake (S3, GCS, etc.)

  • export to data warehouse

  • export as Spark dataframe

  • local execution of Python-based on-demand transforms

  • remote execution of Python-based on-demand transforms

  • persist results in the offline store

  • preview the query plan before execution (RetrievalJobs are lazily executed)

  • read partitioned data
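To make the interface concrete, a minimal sketch of how these methods surface through the SDK: get_historical_features returns a RetrievalJob, which is lazily executed and can then be exported in the formats listed above (the entity and feature names here are hypothetical).

from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": [datetime(2024, 1, 1), datetime(2024, 1, 2)],
    }
)

# Returns a RetrievalJob; no query is executed until an export method is called.
job = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
)

df = job.to_df()        # export to dataframe
table = job.to_arrow()  # export to arrow table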

Functionality Matrix

There are currently four core offline store implementations: FileOfflineStore, BigQueryOfflineStore, SnowflakeOfflineStore, and RedshiftOfflineStore. There are several additional implementations contributed by the Feast community (PostgreSQLOfflineStore, SparkOfflineStore, and TrinoOfflineStore), which are not guaranteed to be stable or to match the functionality of the core implementations. Details for each specific offline store, such as how to configure it in a feature_store.yaml, can be found here.

Below is a matrix indicating which offline stores support which methods.

| Method | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino |
| --- | --- | --- | --- | --- | --- | --- | --- |
| get_historical_features | yes | yes | yes | yes | yes | yes | yes |
| pull_latest_from_table_or_query | yes | yes | yes | yes | yes | yes | yes |
| pull_all_from_table_or_query | yes | yes | yes | yes | yes | yes | yes |
| offline_write_batch | yes | yes | yes | yes | no | no | no |
| write_logged_features | yes | yes | yes | yes | no | no | no |

Below is a matrix indicating which RetrievalJobs support what functionality.

| Functionality | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino | DuckDB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| export to dataframe | yes | yes | yes | yes | yes | yes | yes | yes |
| export to arrow table | yes | yes | yes | yes | yes | yes | yes | yes |
| export to arrow batches | no | no | yes | yes | no | no | no | no |
| export to SQL | no | yes | yes | yes | yes | no | yes | no |
| export to data lake (S3, GCS, etc.) | no | no | yes | no | yes | no | no | no |
| export to data warehouse | no | yes | yes | yes | yes | no | no | no |
| export as Spark dataframe | no | no | yes | no | no | yes | no | no |
| local execution of Python-based on-demand transforms | yes | yes | yes | yes | yes | no | yes | yes |
| remote execution of Python-based on-demand transforms | no | no | no | no | no | no | no | no |
| persist results in the offline store | yes | yes | yes | yes | yes | yes | no | yes |
| preview the query plan before execution | yes | yes | yes | yes | yes | yes | yes | no |
| read partitioned data | yes | yes | yes | yes | yes | yes | yes | yes |


Snowflake

Description

The Snowflake offline store provides support for reading SnowflakeSources.

  • All joins happen within Snowflake.

  • Entity dataframes can be provided as a SQL query or as a Pandas dataframe. A Pandas dataframe will be uploaded to Snowflake as a temporary table in order to complete join operations.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[snowflake]'.

If you're using a file-based registry, then you'll also need to install the relevant cloud extra (pip install 'feast[snowflake, CLOUD]' where CLOUD is one of aws, gcp, azure).

You can get started by then running feast init -t snowflake.

Example

feature_store.yaml
project: my_feature_repo
registry: data/registry.db
provider: local
offline_store:
  type: snowflake.offline
  account: snowflake_deployment.us-east-1
  user: user_login
  password: user_password
  role: SYSADMIN
  warehouse: COMPUTE_WH
  database: FEAST
  schema: PUBLIC

The full set of configuration options is available in SnowflakeOfflineStoreConfig.

Limitation

Please be aware that there is a restriction when using SQL query strings in Feast with Snowflake: avoid the use of single quotes in the SQL query string. For example, the following query string will fail:

SELECT
    some_column
FROM
    some_table
WHERE
    other_column = 'value'

The 'value' will fail in Snowflake. Instead, please use pairs of dollar signs, like $$value$$, as described in the Snowflake documentation.

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Snowflake offline store.

| Functionality | Snowflake |
| --- | --- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | yes |
| write_logged_features (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by SnowflakeRetrievalJob.

| Functionality | Snowflake |
| --- | --- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | yes |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | yes |
| export to data warehouse | yes |
| export as Spark dataframe | yes |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
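As an illustration of the workaround from the Limitation section above, a sketch of passing a dollar-quoted entity SQL query through the SDK; the table, columns, and feature reference are hypothetical.

from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Dollar-quoting ($$value$$) avoids the single-quote limitation described above.
entity_query = """
SELECT
    driver_id,
    event_timestamp
FROM
    entity_events
WHERE
    region = $$value$$
"""

training_df = store.get_historical_features(
    entity_df=entity_query,
    features=["driver_hourly_stats:conv_rate"],
).to_df()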
PostgreSQL (contrib)

Description

The PostgreSQL offline store provides support for reading PostgreSQLSources.

  • Entity dataframes can be provided as a SQL query or as a Pandas dataframe. A Pandas dataframe will be uploaded to Postgres as a table in order to complete join operations.

Disclaimer

The PostgreSQL offline store does not achieve full test coverage. Please do not assume complete stability.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[postgres]'. You can get started by then running feast init -t postgres.

Example

feature_store.yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
  type: postgres
  host: DB_HOST
  port: DB_PORT
  database: DB_NAME
  db_schema: DB_SCHEMA
  user: DB_USERNAME
  password: DB_PASSWORD
  sslmode: verify-ca
  sslkey_path: /path/to/client-key.pem
  sslcert_path: /path/to/client-cert.pem
  sslrootcert_path: /path/to/server-ca.pem
online_store:
    path: data/online_store.db

Note that sslmode, sslkey_path, sslcert_path, and sslrootcert_path are optional parameters. The full set of configuration options is available in PostgreSQLOfflineStoreConfig.

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the PostgreSQL offline store.

| Functionality | Postgres |
| --- | --- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | no |
| write_logged_features (persist logged features to offline store) | no |

Below is a matrix indicating which functionality is supported by PostgreSQLRetrievalJob.

| Functionality | Postgres |
| --- | --- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | yes |
| export to data warehouse | yes |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
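As an illustration of providing the entity dataframe as a SQL query, as noted in the description above, a minimal sketch; the table, columns, and feature reference are hypothetical.

from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Hypothetical entity query; it is executed inside Postgres, so no Pandas
# dataframe needs to be uploaded as a table first.
entity_query = "SELECT driver_id, event_timestamp FROM entity_events"

training_df = store.get_historical_features(
    entity_df=entity_query,
    features=["driver_hourly_stats:conv_rate"],
).to_df()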
Azure Synapse + Azure SQL (contrib)

Description

The MsSQL offline store provides support for reading MsSQL Sources. Specifically, it is developed to read from Synapse SQL on Microsoft Azure.

  • Entity dataframes can be provided as a SQL query or as a Pandas dataframe.

Disclaimer

The MsSQL offline store does not achieve full test coverage. Please do not assume complete stability.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[azure]'. You can get started by then following this tutorial.

Example

feature_store.yaml
registry:
  registry_store_type: AzureRegistryStore
  path: ${REGISTRY_PATH} # Environment Variable
project: production
provider: azure
online_store:
    type: redis
    connection_string: ${REDIS_CONN} # Environment Variable
offline_store:
    type: mssql
    connection_string: ${SQL_CONN}  # Environment Variable

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the MsSQL offline store.

| Functionality | MsSql |
| --- | --- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | no |
| write_logged_features (persist logged features to offline store) | no |

Below is a matrix indicating which functionality is supported by MsSqlServerRetrievalJob.

| Functionality | MsSql |
| --- | --- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | no |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| local execution of Python-based on-demand transforms | no |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
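The configuration above reads its settings from environment variables, which therefore need to be set before the feature store is instantiated; a brief sketch, where the values are purely illustrative and the exact connection-string formats depend on your Azure SQL / Synapse deployment and driver:

import os

from feast import FeatureStore

# Illustrative values only; substitute the values for your deployment.
os.environ["REGISTRY_PATH"] = "data/registry.db"
os.environ["REDIS_CONN"] = "my-redis-host:6379"
os.environ["SQL_CONN"] = "mssql+pyodbc://user:password@my-server.database.windows.net:1433/feast?driver=ODBC+Driver+17+for+SQL+Server"

store = FeatureStore(repo_path=".")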
File

Description

The file offline store provides support for reading FileSources. It uses Dask as the compute engine.

All data is downloaded and joined using Python and therefore may not scale to production workloads.

Example

feature_store.yaml
project: my_feature_repo
registry: data/registry.db
provider: local
offline_store:
  type: file

The full set of configuration options is available in FileOfflineStoreConfig.

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the file offline store.

| Functionality | File |
| --- | --- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | yes |
| write_logged_features (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by FileRetrievalJob.

| Functionality | File |
| --- | --- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | no |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.
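As an illustration of persisting a retrieval result with the file offline store, a sketch of saving the output of get_historical_features as a saved dataset; the dataset name, path, entity dataframe, and feature reference are hypothetical, and the storage class shown is the file-based saved dataset storage.

from datetime import datetime

import pandas as pd
from feast import FeatureStore
from feast.infra.offline_stores.file_source import SavedDatasetFileStorage

store = FeatureStore(repo_path=".")

entity_df = pd.DataFrame(
    {"driver_id": [1001], "event_timestamp": [datetime(2024, 1, 1)]}
)

job = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
)

# Persist the result so it can later be retrieved as a saved dataset.
dataset = store.create_saved_dataset(
    from_=job,
    name="driver_training_dataset",
    storage=SavedDatasetFileStorage(path="data/driver_training_dataset.parquet"),
)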


    Redshift

Description

The Redshift offline store provides support for reading RedshiftSources.

  • All joins happen within Redshift.

  • Entity dataframes can be provided as a SQL query or as a Pandas dataframe. A Pandas dataframe will be uploaded to Redshift temporarily in order to complete join operations.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[aws]'. You can get started by then running feast init -t aws.

Example

The full set of configuration options is available in RedshiftOfflineStoreConfig.

    feature_store.yaml
    project: my_feature_repo
    registry: data/registry.db
    provider: aws
    offline_store:
      type: redshift
      region: us-west-2
      cluster_id: feast-cluster
      database: feast-database
      user: redshift-user
      s3_staging_location: s3://feast-bucket/redshift
      iam_role: arn:aws:iam::123456789012:role/redshift_s3_access_role
Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Redshift offline store.

| Functionality | Redshift |
| --- | --- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | yes |
| write_logged_features (persist logged features to offline store) | yes |

Below is a matrix indicating which functionality is supported by RedshiftRetrievalJob.

| Functionality | Redshift |
| --- | --- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | yes |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | yes |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.

Permissions

Feast requires the following permissions in order to execute commands for the Redshift offline store:

| Command | Permissions | Resources |
| --- | --- | --- |
| Apply | redshift-data:DescribeTable, redshift:GetClusterCredentials | arn:aws:redshift:<region>:<account_id>:dbuser:<redshift_cluster_id>/<redshift_username>, arn:aws:redshift:<region>:<account_id>:dbname:<redshift_cluster_id>/<redshift_database_name>, arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id> |
| Materialize | redshift-data:ExecuteStatement | arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id> |
| Materialize | redshift-data:DescribeStatement | * |
| Materialize | s3:ListBucket, s3:GetObject, s3:DeleteObject | arn:aws:s3:::<bucket_name>, arn:aws:s3:::<bucket_name>/* |
| Get Historical Features | redshift-data:ExecuteStatement, redshift:GetClusterCredentials | arn:aws:redshift:<region>:<account_id>:dbuser:<redshift_cluster_id>/<redshift_username>, arn:aws:redshift:<region>:<account_id>:dbname:<redshift_cluster_id>/<redshift_database_name>, arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id> |
| Get Historical Features | redshift-data:DescribeStatement | * |
| Get Historical Features | s3:ListBucket, s3:GetObject, s3:PutObject, s3:DeleteObject | arn:aws:s3:::<bucket_name>, arn:aws:s3:::<bucket_name>/* |

The following inline policy can be used to grant Feast the necessary permissions:

    {
        "Statement": [
            {
                "Action": [
                    "s3:ListBucket",
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObject"
                ],
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::<bucket_name>/*",
                    "arn:aws:s3:::<bucket_name>"
                ]
            },
            {
                "Action": [
                    "redshift-data:DescribeTable",
                    "redshift:GetClusterCredentials",
                    "redshift-data:ExecuteStatement"
                ],
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:redshift:<region>:<account_id>:dbuser:<redshift_cluster_id>/<redshift_username>",
                    "arn:aws:redshift:<region>:<account_id>:dbname:<redshift_cluster_id>/<redshift_database_name>",
                    "arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id>"
                ]
            },
            {
                "Action": [
                    "redshift-data:DescribeStatement"
                ],
                "Effect": "Allow",
                "Resource": "*"
            }
        ],
        "Version": "2012-10-17"
    }
In addition to this, the Redshift offline store requires an IAM role that will be used by Redshift itself to interact with S3. More concretely, Redshift has to use this IAM role to run UNLOAD and COPY commands. Once created, this IAM role needs to be configured in the feature_store.yaml file under offline_store: iam_role.

The following inline policy can be used to grant Redshift the necessary permissions to access S3:

    {
        "Statement": [
            {
                "Action": "s3:*",
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::feast-int-bucket",
                    "arn:aws:s3:::feast-int-bucket/*"
                ]
            }
        ],
        "Version": "2012-10-17"
    }
The following trust relationship is necessary to make sure that Redshift, and only Redshift, can assume this role:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "redshift.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
Redshift Serverless

In order to use AWS Redshift Serverless, specify a workgroup instead of a cluster_id and user. Please note that the IAM policies above will need the redshift-serverless version, rather than the standard redshift.

    feature_store.yaml
    project: my_feature_repo
    registry: data/registry.db
    provider: aws
    offline_store:
      type: redshift
      region: us-west-2
      workgroup: feast-workgroup
      database: feast-database
      s3_staging_location: s3://feast-bucket/redshift
      iam_role: arn:aws:iam::123456789012:role/redshift_s3_access_role

    Trino (contrib)

Description

The Trino offline store provides support for reading TrinoSources.

  • Entity dataframes can be provided as a SQL query or as a Pandas dataframe. A Pandas dataframe will be uploaded to Trino as a table in order to complete join operations.

Disclaimer

The Trino offline store does not achieve full test coverage. Please do not assume complete stability.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[trino]'. You can then run feast init and swap out the generated feature_store.yaml with the example below to connect to Trino.

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Trino offline store.

| Functionality | Trino |
| --- | --- |
| get_historical_features (point-in-time correct join) | yes |
| pull_latest_from_table_or_query (retrieve latest feature values) | yes |
| pull_all_from_table_or_query (retrieve a saved dataset) | yes |
| offline_write_batch (persist dataframes to offline store) | no |
| write_logged_features (persist logged features to offline store) | no |

Below is a matrix indicating which functionality is supported by TrinoRetrievalJob.

| Functionality | Trino |
| --- | --- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | no |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | no |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | no |
| preview the query plan before execution | yes |
| read partitioned data | yes |

To compare this set of functionality against other offline stores, please see the full functionality matrix.

Example

The full set of configuration options is available in TrinoOfflineStoreConfig.

    feature_store.yaml
    project: feature_repo
    registry: data/registry.db
    provider: local
    offline_store:
        type: feast_trino.trino.TrinoOfflineStore
        host: localhost
        port: 8080
        catalog: memory
        connector:
            type: memory
        user: trino
        source: feast-trino-offline-store
        http-scheme: https
        ssl-verify: false
        x-trino-extra-credential-header: foo=bar, baz=qux
    
        # enables authentication in Trino connections, pick the one you need
        # if you don't need authentication, you can safely remove the whole auth block
        auth:
            # Basic Auth
            type: basic
            config:
                username: foo
                password: $FOO
    
            # Certificate
            type: certificate
            config:
                cert-file: /path/to/cert/file
                key-file: /path/to/key/file
    
            # JWT
            type: jwt
            config:
                token: $JWT_TOKEN
    
            # OAuth2 (no config required)
            type: oauth2
    
            # Kerberos
            type: kerberos
            config:
                config-file: /path/to/kerberos/config/file
                service-name: foo
                mutual-authentication: true
                force-preemptive: true
                hostname-override: custom-hostname
                sanitize-mutual-error-response: true
                principal: principal-name
                delegate: true
                ca_bundle: /path/to/ca/bundle/file
    online_store:
        path: data/online_store.db