Spark (contrib)

Description

The Spark offline store provides support for reading SparkSources.

  • Entity dataframes can be provided as a SQL query, a Pandas dataframe, or a PySpark dataframe. A Pandas dataframe will be converted to a Spark dataframe and processed as a temporary view; a sketch follows below.

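For illustration, here is a minimal sketch of retrieving historical features with a Pandas entity dataframe and with a SQL query. The entity keys, timestamps, table name, and feature references are hypothetical:

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Entity dataframe as a Pandas dataframe: converted to a Spark dataframe
# and registered as a temporary view before the point-in-time join.
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],  # hypothetical entity keys
        "event_timestamp": pd.to_datetime(["2022-05-01", "2022-05-02"]),
    }
)
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],  # hypothetical feature reference
).to_df()

# Entity dataframe as a SQL query executed by Spark (hypothetical table name).
training_df = store.get_historical_features(
    entity_df="SELECT driver_id, event_timestamp FROM driver_entities",
    features=["driver_hourly_stats:conv_rate"],
).to_df()
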
Disclaimer

The Spark offline store does not achieve full test coverage. Please do not assume complete stability.

Getting started

To use this offline store, run pip install 'feast[spark]'. You can then get started by running feast init -t spark.

Example

feature_store.yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
    type: spark
    spark_conf:
        spark.master: "local[*]"
        spark.ui.enabled: "false"
        spark.eventLog.enabled: "false"
        spark.sql.catalogImplementation: "hive"
        spark.sql.parser.quotedRegexColumnNames: "true"
        spark.sql.session.timeZone: "UTC"
        spark.sql.execution.arrow.fallback.enabled: "true"
        spark.sql.execution.arrow.pyspark.enabled: "true"
online_store:
    path: data/online_store.db

The full set of configuration options is available in SparkOfflineStoreConfig.
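
As a rough sketch, the same options can also be constructed programmatically. The import path below follows the contrib package layout and may differ between Feast versions:

from feast.infra.offline_stores.contrib.spark_offline_store.spark import (
    SparkOfflineStoreConfig,
)

# Mirrors the YAML example above; spark_conf entries are passed through
# to the SparkSession configuration.
offline_store_config = SparkOfflineStoreConfig(
    type="spark",
    spark_conf={
        "spark.master": "local[*]",
        "spark.sql.catalogImplementation": "hive",
        "spark.sql.session.timeZone": "UTC",
    },
)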

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Spark offline store.

Below is a matrix indicating which functionality is supported by SparkRetrievalJob.
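
For context, SparkRetrievalJob is the object returned by get_historical_features when this offline store is configured. A minimal usage sketch, assuming the standard RetrievalJob interface (the feature reference and table name are hypothetical):

from feast import FeatureStore

store = FeatureStore(repo_path=".")

job = store.get_historical_features(
    entity_df="SELECT driver_id, event_timestamp FROM driver_entities",
    features=["driver_hourly_stats:conv_rate"],
)

df = job.to_df()        # materialize the result as a Pandas dataframe
table = job.to_arrow()  # or as a pyarrow.Table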

To compare this set of functionality against other offline stores, please see the full functionality matrix.
