Bytewax
Description
The Bytewax batch materialization engine provides an execution engine for batch materialization operations (materialize and materialize-incremental).
Guide
In order to use the Bytewax materialization engine, you will need a Kubernetes cluster running version 1.22.10 or greater.
Kubernetes Authentication
The Bytewax materialization engine loads authentication and cluster information from the kubeconfig file. By default, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable.
Resource Authentication
Bytewax jobs can be configured to read Kubernetes secrets as environment variables so that they can access online and offline stores during job runs.
To configure secrets, first create them using kubectl:
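For example, a secret holding AWS credentials could be created like this; the secret name aws-credentials and the key names are illustrative placeholders, and the secret should live in the same namespace the materialization jobs run in:

kubectl create secret generic aws-credentials \
  --from-literal=aws-access-key-id='<access key id>' \
  --from-literal=aws-secret-access-key='<secret access key>'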
Then configure them in the batch_engine section of feature_store.yaml:
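A sketch of what this might look like, assuming the secret created above is named aws-credentials and that your Feast version exposes Kubernetes-style env entries on the engine (verify the field names against your version):

batch_engine:
  type: bytewax
  namespace: bytewax
  env:
    - name: AWS_ACCESS_KEY_ID
      valueFrom:
        secretKeyRef:
          name: aws-credentials
          key: aws-access-key-id
    - name: AWS_SECRET_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: aws-credentials
          key: aws-secret-access-key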
Configuration
The Bytewax materialization engine is configured through the feature_store.yaml configuration file:
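A minimal sketch of that section, assuming the engine is registered under the type name bytewax; the namespace and image values are placeholders to adapt to your cluster:

batch_engine:
  type: bytewax
  namespace: bytewax
  image: <materialization container image>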
The namespace configuration directive specifies which Kubernetes namespace jobs, services, and configuration maps will be created in.
Building a custom Bytewax Docker image
The image configuration directive specifies which container image to use when running the materialization job. To create a custom image based on this container, run the following command:
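The Dockerfile location, image name, and tag below are placeholders; a build and push might look roughly like this:

docker build -t <registry>/<image>:<tag> -f <path to Dockerfile> .
docker push <registry>/<image>:<tag>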
Once that image is built and pushed to a registry, it can be specified as a part of the batch engine configuration:
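For example, reusing the placeholder image name from the build step above:

batch_engine:
  type: bytewax
  namespace: bytewax
  image: <registry>/<image>:<tag>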
Spark (alpha)
Description
The Spark batch materialization engine is considered alpha status. It relies on the offline store to output feature values to S3 via to_remote_storage, and then loads them into the online store.
Example
feature_store.yaml
...
offline_store:
  type: snowflake.offline
...
batch_engine:
  type: spark.engine
  partitions: [optional num partitions to use to write to online store]
feature_store.py
from feast import FeatureStore, RepoConfig
from feast.repo_config import RegistryConfig
from feast.infra.online_stores.dynamodb import DynamoDBOnlineStoreConfig
from feast.infra.offline_stores.contrib.spark_offline_store.spark import SparkOfflineStoreConfig

repo_config = RepoConfig(
    registry="s3://[YOUR_BUCKET]/feast-registry.db",
    project="feast_repo",
    provider="aws",
    offline_store=SparkOfflineStoreConfig(
        spark_conf={
            "spark.ui.enabled": "false",
            "spark.eventLog.enabled": "false",
            "spark.sql.catalogImplementation": "hive",
            "spark.sql.parser.quotedRegexColumnNames": "true",
            "spark.sql.session.timeZone": "UTC"
        }
    ),
    batch_engine={
        "type": "spark.engine",
        "partitions": 10
    },
    online_store=DynamoDBOnlineStoreConfig(region="us-west-1"),
    entity_key_serialization_version=2
)

store = FeatureStore(config=repo_config)
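With the store configured this way, materialization can also be triggered programmatically rather than from the CLI; for example, an incremental run up to the current time using the standard FeatureStore API:

from datetime import datetime

store.materialize_incremental(end_date=datetime.utcnow())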
AWS Lambda (alpha)
Description
The AWS Lambda batch materialization engine is considered alpha status. It relies on the offline store to output feature values to S3 via to_remote_storage, and then loads them into the online store.
See also the example Dockerfile, which can be used below with materialization_image.
Example
feature_store.yaml
...
offline_store:
  type: snowflake.offline
...
batch_engine:
  type: lambda
  lambda_role: [your iam role]
  materialization_image: [image uri of above Docker image]
Snowflake
Description
The Snowflake batch materialization engine provides a highly scalable and parallel execution engine using a Snowflake Warehouse for batch materialization operations (materialize and materialize-incremental) when using a SnowflakeSource.
The engine requires no additional configuration other than for you to supply Snowflake's standard login and context details. The engine leverages custom (automatically deployed for you) Python UDFs to do the proper serialization of your offline store data to your online serving tables.
When all three options, snowflake.offline, snowflake.engine, and snowflake.online, are used together, you get the full benefit of Snowflake's scale and performance along with its governance and data security.
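A sketch of the corresponding batch_engine section in feature_store.yaml; every value is a placeholder, and the exact field names should be verified against your Feast version:

batch_engine:
  type: snowflake.engine
  account: <snowflake account>
  user: <username>
  password: <password>
  role: <role>
  warehouse: <warehouse>
  database: <database>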