Snowflake data sources are Snowflake tables or views. These can be specified either by a table reference or a SQL query.
Using a table reference:
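For illustration, a minimal sketch of a table-backed source (the database, schema, and table names are placeholders, and parameter names may vary slightly across Feast versions):

```python
from feast import SnowflakeSource

my_snowflake_source = SnowflakeSource(
    database="FEAST",
    schema="PUBLIC",
    table="FEATURE_TABLE",
    timestamp_field="event_timestamp",
)
```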
Using a query:
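A query-based source can be sketched like this (the query, quoted identifiers, and column names are illustrative):

```python
from feast import SnowflakeSource

my_snowflake_source = SnowflakeSource(
    query="""
    SELECT
        "ts" AS "event_timestamp",
        "created",
        "f1",
        "f2"
    FROM "FEAST"."PUBLIC"."FEATURE_TABLE"
    """,
    timestamp_field="event_timestamp",
)
```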
Be careful about how Snowflake handles table and column name conventions. In particular, you can read more about quoted identifiers here.
The full set of configuration options is available here.
Snowflake data sources support all eight primitive types, but currently do not support array types. For a comparison against other batch data sources, please see here.
Redshift data sources are Redshift tables or views. These can be specified either by a table reference or a SQL query. However, no performance guarantees can be provided for SQL query-based sources, so table references are recommended.
Using a table name:
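A minimal sketch, assuming a table named driver_stats in the configured Redshift database (parameter names may vary across Feast versions):

```python
from feast import RedshiftSource

my_redshift_source = RedshiftSource(
    table="driver_stats",
    timestamp_field="event_timestamp",
)
```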
Using a query:
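A query-based variant, again with illustrative table and column names:

```python
from feast import RedshiftSource

my_redshift_source = RedshiftSource(
    query="SELECT event_timestamp, created, f1, f2 FROM driver_stats",
    timestamp_field="event_timestamp",
)
```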
The full set of configuration options is available here.
Redshift data sources support all eight primitive types, but currently do not support array types. For a comparison against other batch data sources, please see here.
Please see Data Source for a conceptual explanation of data sources.
In Feast, each batch data source is associated with a corresponding offline store. For example, a `SnowflakeSource` can only be processed by the Snowflake offline store. Otherwise, the primary difference between batch data sources is the set of supported types. Feast has an internal type system, and aims to support eight primitive types (`bytes`, `string`, `int32`, `int64`, `float32`, `float64`, `bool`, and `timestamp`) along with the corresponding array types. However, not every batch data source supports all of these types.
For more details on the Feast type system, see here.
There are currently four core batch data source implementations: `FileSource`, `BigQuerySource`, `SnowflakeSource`, and `RedshiftSource`. There are several additional implementations contributed by the Feast community (`PostgreSQLSource`, `SparkSource`, and `TrinoSource`), which are not guaranteed to be stable or to match the functionality of the core implementations. Details for each specific data source can be found here.
Below is a matrix indicating which data sources support which types.
| | File | BigQuery | Snowflake | Redshift | Postgres | Spark | Trino |
|---|---|---|---|---|---|---|---|
| `bytes` | yes | yes | yes | yes | yes | yes | yes |
| `string` | yes | yes | yes | yes | yes | yes | yes |
| `int32` | yes | yes | yes | yes | yes | yes | yes |
| `int64` | yes | yes | yes | yes | yes | yes | yes |
| `float32` | yes | yes | yes | yes | yes | yes | yes |
| `float64` | yes | yes | yes | yes | yes | yes | yes |
| `bool` | yes | yes | yes | yes | yes | yes | yes |
| `timestamp` | yes | yes | yes | yes | yes | yes | yes |
| array types | yes | yes | no | no | yes | yes | no |
File data sources are files on disk or on S3. Currently only Parquet files are supported.
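As a rough sketch, a Parquet-backed file source can be declared like this (the path and column names are placeholders):

```python
from feast import FileSource
from feast.data_format import ParquetFormat

parquet_file_source = FileSource(
    file_format=ParquetFormat(),
    path="data/driver_stats.parquet",
    timestamp_field="event_timestamp",
)
```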
`FileSource` is meant for development purposes only and is not optimized for production use.
The full set of configuration options is available here.
File data sources support all eight primitive types and their corresponding array types. For a comparison against other batch data sources, please see here.
Spark data sources are tables or files that can be loaded from some Spark store (e.g. Hive or in-memory). They can also be specified by a SQL query.
The Spark data source does not achieve full test coverage. Please do not assume complete stability.
Using a table reference from SparkSession (for example, either in-memory or a Hive Metastore):
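A minimal sketch, assuming the table is already registered with the SparkSession (the contrib import path reflects recent Feast releases and may differ in older ones):

```python
from feast.infra.offline_stores.contrib.spark_offline_store.spark_source import SparkSource

my_spark_source = SparkSource(
    name="driver_stats",
    table="driver_stats_table",
    timestamp_field="event_timestamp",
)
```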
Using a query:
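A query-based variant (the SQL and column names are illustrative):

```python
from feast.infra.offline_stores.contrib.spark_offline_store.spark_source import SparkSource

my_spark_source = SparkSource(
    name="driver_stats",
    query="SELECT event_timestamp, driver_id, conv_rate FROM driver_stats_table",
    timestamp_field="event_timestamp",
)
```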
Using a file reference:
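A file-backed variant (the path and format are placeholders):

```python
from feast.infra.offline_stores.contrib.spark_offline_store.spark_source import SparkSource

my_spark_source = SparkSource(
    name="driver_stats",
    path="data/driver_stats.parquet",
    file_format="parquet",
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)
```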
BigQuery data sources are BigQuery tables or views. These can be specified either by a table reference or a SQL query. However, no performance guarantees can be provided for SQL query-based sources, so table references are recommended.
Using a table reference:
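A minimal sketch (the project, dataset, and table names are placeholders; older Feast releases use `table_ref` instead of `table`):

```python
from feast import BigQuerySource

my_bigquery_source = BigQuerySource(
    table="my_project.my_dataset.driver_stats",
    timestamp_field="event_timestamp",
)
```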
Using a query:
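A query-based variant with an illustrative query:

```python
from feast import BigQuerySource

my_bigquery_source = BigQuerySource(
    query="SELECT event_timestamp, driver_id, conv_rate FROM my_project.my_dataset.driver_stats",
    timestamp_field="event_timestamp",
)
```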
The full set of configuration options is available here.
Trino data sources are Trino tables or views. These can be specified either by a table reference or a SQL query.
The Trino data source does not achieve full test coverage. Please do not assume complete stability.
Defining a Trino source:
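A rough sketch (the contrib import path and parameter names follow recent releases and may differ across Feast versions; the catalog and table names are placeholders):

```python
from feast.infra.offline_stores.contrib.trino_offline_store.trino_source import TrinoSource

trino_source = TrinoSource(
    name="driver_stats",
    table="feast.driver_stats",
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)
```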
The full set of configuration options is available here.
Push sources allow feature values to be pushed to the online store and offline store in real time. This allows fresh feature values to be made available to applications. Push sources supersede the `FeatureStore.write_to_online_store` API.
Push sources can be used by multiple feature views. When data is pushed to a push source, Feast propagates the feature values to all the consuming feature views.
Push sources must have a batch source specified. The batch source will be used for retrieving historical features. Thus users are also responsible for pushing data to a batch data source such as a data warehouse table. When using a push source as a stream source in the definition of a feature view, a batch source doesn't need to be specified in the feature view definition explicitly.
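A minimal sketch of a push source backed by a batch source and consumed by a feature view (the entity, file path, and feature names are hypothetical):

```python
from feast import Entity, FeatureView, Field, FileSource, PushSource
from feast.types import Int64

user = Entity(name="user", join_keys=["user_id"])

user_stats_push_source = PushSource(
    name="user_stats_push_source",
    batch_source=FileSource(
        path="data/user_stats.parquet",
        timestamp_field="event_timestamp",
    ),
)

user_stats_fv = FeatureView(
    name="user_stats",
    entities=[user],
    schema=[Field(name="life_time_value", dtype=Int64)],
    source=user_stats_push_source,
)
```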
Streaming data sources are important sources of feature values. A typical setup with streaming data looks like:
1. Raw events come in (stream 1)
2. Streaming transformations applied (e.g. generating features like `last_N_purchased_categories`) (stream 2)
3. Write stream 2 values to an offline store as a historical log for training (optional)
4. Write stream 2 values to an online store for low latency feature serving
5. Periodically materialize feature values from the offline store into the online store for decreased training-serving skew and improved model performance
Feast allows users to push features previously registered in a feature view to the online store for fresher features. It also allows users to push batches of stream data to the offline store by specifying that the push be directed to the offline store. This will push the data to the offline store declared in the repository configuration used to initialize the feature store.
Note that the push schema needs to also include the entity.
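A minimal sketch of pushing a row to both stores (the push source name and columns match the hypothetical definition above):

```python
import pandas as pd

from feast import FeatureStore
from feast.data_source import PushMode

fs = FeatureStore(repo_path=".")

# The pushed frame must include the entity join key and the timestamp field.
event_df = pd.DataFrame.from_dict(
    {
        "user_id": [1001],
        "life_time_value": [120],
        "event_timestamp": [pd.Timestamp.now(tz="UTC")],
    }
)

fs.push("user_stats_push_source", event_df, to=PushMode.ONLINE_AND_OFFLINE)
```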
Note that the `to` parameter is optional and defaults to online, but we can specify these options: `PushMode.ONLINE`, `PushMode.OFFLINE`, or `PushMode.ONLINE_AND_OFFLINE`.
The default option to write features from a stream is to add the Python SDK into your existing PySpark pipeline.
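As a rough sketch of that pattern (the `streaming_df` DataFrame and the feature view name are hypothetical; this is one way to wire it up, not the only one):

```python
from feast import FeatureStore

store = FeatureStore(repo_path=".")

def write_batch_to_feast(batch_df, batch_id):
    # Convert each micro-batch to pandas and write it to the online store.
    store.write_to_online_store("driver_hourly_stats", batch_df.toPandas())

# streaming_df is an existing Spark structured-streaming DataFrame.
query = streaming_df.writeStream.foreachBatch(write_batch_to_feast).start()
```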
Warning: This is an experimental feature. It's intended for early testing and feedback, and could change without warnings in future releases.
Kafka sources allow users to register Kafka streams as data sources. Feast currently does not launch or monitor jobs to ingest data from Kafka. Users are responsible for launching and monitoring their own ingestion jobs, which should write feature values to the online store through `FeatureStore.write_to_online_store`. An example of how to launch such a job with Spark can be found here. Feast also provides functionality to write to the offline store using `write_to_offline_store`.
Kafka sources must have a batch source specified. The batch source will be used for retrieving historical features. Thus users are also responsible for writing data from their Kafka streams to a batch data source such as a data warehouse table. When using a Kafka source as a stream source in the definition of a feature view, a batch source doesn't need to be specified in the feature view definition explicitly.
Streaming data sources are important sources of feature values. A typical setup with streaming data looks like:
1. Raw events come in (stream 1)
2. Streaming transformations applied (e.g. generating features like `last_N_purchased_categories`) (stream 2)
3. Write stream 2 values to an offline store as a historical log for training (optional)
4. Write stream 2 values to an online store for low latency feature serving
5. Periodically materialize feature values from the offline store into the online store for decreased training-serving skew and improved model performance
Note that the Kafka source has a batch source.
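A minimal sketch of a Kafka source with its batch source (the topic, bootstrap servers, and schema are placeholders):

```python
from datetime import timedelta

from feast import FileSource, KafkaSource
from feast.data_format import JsonFormat

driver_stats_batch_source = FileSource(
    name="driver_stats_source",
    path="data/driver_stats.parquet",
    timestamp_field="event_timestamp",
)

driver_stats_stream_source = KafkaSource(
    name="driver_stats_stream",
    kafka_bootstrap_servers="localhost:9092",
    topic="drivers",
    timestamp_field="event_timestamp",
    batch_source=driver_stats_batch_source,
    message_format=JsonFormat(
        schema_json="driver_id integer, event_timestamp timestamp, conv_rate double"
    ),
    watermark_delay_threshold=timedelta(minutes=5),
)
```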
The Kafka source can be used in a stream feature view.
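For example (the entity, schema, and transformation are illustrative, and the decorator's import path may vary by Feast version):

```python
from datetime import timedelta

from feast import Entity, Field
from feast.stream_feature_view import stream_feature_view
from feast.types import Float32

driver = Entity(name="driver", join_keys=["driver_id"])

@stream_feature_view(
    entities=[driver],
    ttl=timedelta(days=1),
    mode="spark",
    schema=[Field(name="conv_rate", dtype=Float32)],
    timestamp_field="event_timestamp",
    online=True,
    source=driver_stats_stream_source,  # the KafkaSource defined above
)
def driver_hourly_stats_stream(df):
    # Optional transformation applied to the incoming micro-batches.
    return df
```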
Warning: This is an experimental feature. It's intended for early testing and feedback, and could change without warnings in future releases.
Kinesis sources allow users to register Kinesis streams as data sources. Feast currently does not launch or monitor jobs to ingest data from Kinesis. Users are responsible for launching and monitoring their own ingestion jobs, which should write feature values to the online store through `FeatureStore.write_to_online_store`. An example of how to launch such a job with Spark to ingest from Kafka can be found here; by using a different plugin, the example can be adapted to Kinesis. Feast also provides functionality to write to the offline store using `write_to_offline_store`.
Kinesis sources must have a batch source specified. The batch source will be used for retrieving historical features. Thus users are also responsible for writing data from their Kinesis streams to a batch data source such as a data warehouse table. When using a Kinesis source as a stream source in the definition of a feature view, a batch source doesn't need to be specified in the feature view definition explicitly.
Streaming data sources are important sources of feature values. A typical setup with streaming data looks like:
1. Raw events come in (stream 1)
2. Streaming transformations applied (e.g. generating features like `last_N_purchased_categories`) (stream 2)
3. Write stream 2 values to an offline store as a historical log for training (optional)
4. Write stream 2 values to an online store for low latency feature serving
5. Periodically materialize feature values from the offline store into the online store for decreased training-serving skew and improved model performance
Note that the Kinesis source has a batch source.
The Kinesis source can be used in a stream feature view.
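A minimal sketch covering both points (the region, stream name, and schema are placeholders, and parameter names may vary across Feast versions):

```python
from datetime import timedelta

from feast import FileSource, KinesisSource
from feast.data_format import JsonFormat

driver_stats_batch_source = FileSource(
    name="driver_stats_source",
    path="data/driver_stats.parquet",
    timestamp_field="event_timestamp",
)

driver_stats_stream_source = KinesisSource(
    name="driver_stats_stream",
    region="us-east-1",
    stream_name="drivers",
    timestamp_field="event_timestamp",
    batch_source=driver_stats_batch_source,
    record_format=JsonFormat(
        schema_json="driver_id integer, event_timestamp timestamp, conv_rate double"
    ),
    watermark_delay_threshold=timedelta(minutes=5),
)

# The Kinesis source can then be passed as the source of a stream feature view,
# exactly as in the Kafka example above.
```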
The full set of configuration options is available here.
Spark data sources support all eight primitive types and their corresponding array types. For a comparison against other batch data sources, please see here.
BigQuery data sources support all eight primitive types and their corresponding array types. For a comparison against other batch data sources, please see here.
Trino data sources support all eight primitive types, but currently do not support array types. For a comparison against other batch data sources, please see here.
The full set of configuration options is available here.
PostgreSQL data sources support all eight primitive types and their corresponding array types. For a comparison against other batch data sources, please see here.
See also the feature server documentation for instructions on how to push data to a deployed feature server.
This can also be used under the hood by a contrib stream processor (see here).
See here for an example of how to ingest data from a Kafka source into Feast.
See here for an example of how to ingest data from a Kafka source into Feast. The approach used in the tutorial can be easily adapted to work for Kinesis as well.