Redshift
Description
The Redshift offline store provides support for reading RedshiftSources.
All joins happen within Redshift.
Entity dataframes can be provided as a SQL query or as a Pandas dataframe. A Pandas dataframe will be uploaded to Redshift temporarily in order to complete join operations.
Getting started
In order to use this offline store, you'll need to run `pip install 'feast[aws]'`. You can get started by then running `feast init -t aws`.
Example
The full set of configuration options is available in `RedshiftOfflineStoreConfig`.
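A minimal `feature_store.yaml` sketch for a cluster-based setup (all values below are placeholders; the field names follow `RedshiftOfflineStoreConfig`):

```yaml
project: my_feature_repo
registry: data/registry.db
provider: aws
offline_store:
  type: redshift
  region: us-west-2
  cluster_id: feast-cluster
  database: feast-database
  user: redshift-user
  s3_staging_location: s3://feast-bucket/redshift
  iam_role: arn:aws:iam::123456789012:role/s3_access_role
```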
Functionality Matrix
The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Redshift offline store.
| | Redshift |
| --- | --- |
| `get_historical_features` (point-in-time correct join) | yes |
| `pull_latest_from_table_or_query` (retrieval of latest historical values) | yes |
| `pull_all_from_table_or_query` (retrieval of all historical values) | yes |
| `offline_write_batch` (persist dataframes to offline store) | yes |
| `write_logged_features` (persist logged features to offline store) | yes |
Below is a matrix indicating which functionality is supported by `RedshiftRetrievalJob`.
| | Redshift |
| --- | --- |
| export to dataframe | yes |
| export to arrow table | yes |
| export to arrow batches | yes |
| export to SQL | yes |
| export to data lake (S3, GCS, etc.) | no |
| export to data warehouse | yes |
| export as Spark dataframe | no |
| local execution of Python-based on-demand transforms | yes |
| remote execution of Python-based on-demand transforms | no |
| persist results in the offline store | yes |
| preview the query plan before execution | yes |
| read partitioned data | yes |
To compare this set of functionality against other offline stores, please see the full functionality matrix.
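To make the export paths concrete, here is a short Python sketch using the standard Feast retrieval API (the `driver_id` entity and `driver_hourly_stats:conv_rate` feature reference are hypothetical; `to_df` and `to_arrow` correspond to the dataframe and Arrow-table rows in the matrix above):

```python
import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# The entity dataframe can be a Pandas dataframe (uploaded to Redshift
# temporarily) or a SQL query executed inside Redshift.
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": pd.to_datetime(["2024-01-01", "2024-01-02"]),
    }
)

job = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],  # hypothetical feature view
)

df = job.to_df()        # export to dataframe
table = job.to_arrow()  # export to arrow table
```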
Permissions
Feast requires the following permissions in order to execute commands for the Redshift offline store:
| Command | Permissions | Resources |
| --- | --- | --- |
| Apply | redshift-data:DescribeTable<br>redshift:GetClusterCredentials | arn:aws:redshift:\<region\>:\<account_id\>:dbuser:\<redshift_cluster_id\>/\<redshift_username\><br>arn:aws:redshift:\<region\>:\<account_id\>:dbname:\<redshift_cluster_id\>/\<redshift_database_name\><br>arn:aws:redshift:\<region\>:\<account_id\>:cluster:\<redshift_cluster_id\> |
| Materialize | redshift-data:ExecuteStatement | arn:aws:redshift:\<region\>:\<account_id\>:cluster:\<redshift_cluster_id\> |
| Materialize | redshift-data:DescribeStatement | * |
| Materialize | s3:ListBucket<br>s3:GetObject<br>s3:DeleteObject | arn:aws:s3:::\<bucket_name\><br>arn:aws:s3:::\<bucket_name\>/* |
| Get Historical Features | redshift-data:ExecuteStatement<br>redshift:GetClusterCredentials | arn:aws:redshift:\<region\>:\<account_id\>:dbuser:\<redshift_cluster_id\>/\<redshift_username\><br>arn:aws:redshift:\<region\>:\<account_id\>:dbname:\<redshift_cluster_id\>/\<redshift_database_name\><br>arn:aws:redshift:\<region\>:\<account_id\>:cluster:\<redshift_cluster_id\> |
| Get Historical Features | redshift-data:DescribeStatement | * |
| Get Historical Features | s3:ListBucket<br>s3:GetObject<br>s3:PutObject<br>s3:DeleteObject | arn:aws:s3:::\<bucket_name\><br>arn:aws:s3:::\<bucket_name\>/* |
The following inline policy can be used to grant Feast the necessary permissions:
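A sketch assembled from the permissions table above, assuming the placeholders (`<region>`, `<account_id>`, `<redshift_cluster_id>`, `<redshift_username>`, `<redshift_database_name>`, `<bucket_name>`) are substituted with your own values:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket_name>",
                "arn:aws:s3:::<bucket_name>/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift-data:DescribeTable",
                "redshift-data:ExecuteStatement",
                "redshift:GetClusterCredentials"
            ],
            "Resource": [
                "arn:aws:redshift:<region>:<account_id>:dbuser:<redshift_cluster_id>/<redshift_username>",
                "arn:aws:redshift:<region>:<account_id>:dbname:<redshift_cluster_id>/<redshift_database_name>",
                "arn:aws:redshift:<region>:<account_id>:cluster:<redshift_cluster_id>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "redshift-data:DescribeStatement",
            "Resource": "*"
        }
    ]
}
```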
In addition to this, the Redshift offline store requires an IAM role that will be used by Redshift itself to interact with S3. More concretely, Redshift has to use this IAM role to run UNLOAD and COPY commands. Once created, this IAM role needs to be configured as `offline_store: iam_role` in the `feature_store.yaml` file.
The following inline policy can be used to grant Redshift necessary permissions to access S3:
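A minimal sketch granting the role read/write access to the staging bucket (it assumes `<bucket_name>` is the bucket from `s3_staging_location`; UNLOAD writes objects there and COPY reads them back):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket_name>",
                "arn:aws:s3:::<bucket_name>/*"
            ]
        }
    ]
}
```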
The following trust relationship is necessary to make sure that Redshift, and only Redshift, can assume this role:
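A standard sketch that limits `sts:AssumeRole` to the Redshift service principal:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "redshift.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```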
Redshift Serverless
In order to use AWS Redshift Serverless, specify a `workgroup` instead of a `cluster_id` and `user`.
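For illustration, a hypothetical `offline_store` block for Serverless (field names are assumed to mirror the cluster-based configuration, with `workgroup` replacing `cluster_id` and `user`; all values are placeholders):

```yaml
offline_store:
  type: redshift
  region: us-west-2
  workgroup: feast-workgroup
  database: feast-database
  s3_staging_location: s3://feast-bucket/redshift
  iam_role: arn:aws:iam::123456789012:role/s3_access_role
```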
Please note that the IAM policies above will need the `redshift-serverless` variants of the actions and ARNs, rather than the standard `redshift` ones.