Development guide
Overview
This guide is targeted at developers looking to contribute to Feast components in the main Feast repository.
Please see this page for more details on the structure of the entire codebase.
Compatibility
The compatibility policy for Feast can be found here, and should be followed for all changes proposed, by maintainers or contributors.
Community
See Contribution process and Community for details on how to get more involved in the community.
Making a pull request
We use the convention that the assignee of a PR is the person with the next action.
If the assignee is empty it means that no reviewer has been found yet. If a reviewer has been found, they should also be assigned the PR. Finally, if there are comments to be addressed, the PR author should be the one assigned the PR.
PRs that are submitted by the general public need to be identified as ok-to-test. Once enabled, Prow will run a range of tests to verify the submission, after which community members will help to review the pull request.
Pull request checklist
A quick list of things to keep in mind as you're making changes:
As you make changes
Make your changes in a forked repo (instead of making a branch on the main Feast repo)
Sign your commits as you go (to avoid DCO checks failing)
Rebase from master instead of using git pull on your PR branch
Install pre-commit hooks to ensure all the default linters / formatters are run when you push.
When you make the PR
Make a pull request from the forked repo you made
Ensure the title of the PR matches semantic release conventions (e.g. it starts with feat:, fix:, ci:, chore:, or docs:). Keep in mind that any PR with feat: or fix: will directly make it into the change log of a release, so make sure they are understandable!
Ensure you add a GitHub label (i.e. a kind tag such as kind/bug or kind/housekeeping) to the PR, or else checks will fail.
Ensure you leave a release note for any user-facing changes in the PR. There is a field automatically generated in the PR. You can write NONE in that field if there are no user-facing changes.
Please run tests locally before submitting a PR (e.g. for Python, the local integration tests)
Try to keep PRs smaller. This makes them easier to review.
Good practices to keep in mind
Fill in the description based on the default template configured when you first open the PR
What this PR does/why we need it
Which issue(s) this PR fixes
Does this PR introduce a user-facing change
Add WIP: to the PR name if more work needs to be done prior to review
Forking the repo
Fork the Feast GitHub repo and clone your fork locally. Then make changes in a local branch of the fork.
See Creating a pull request from a fork
Pre-commit Hooks
Setup pre-commit to automatically lint and format the codebase on commit:
Ensure that you have Python (3.7 or above) and pip installed.
Install pre-commit with pip and install the pre-push hooks (a sketch of these commands follows this list).
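A minimal sketch of those two steps, assuming pre-commit is installed from PyPI and that the repository's pre-commit configuration defines pre-push hooks:
pip install pre-commit
pre-commit install --hook-type pre-push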
On push, the pre-commit hook will run. This runs make format and make lint.
Signing off commits
Use git signoffs to sign your commits. See https://docs.github.com/en/github/authenticating-to-github/managing-commit-signature-verification for details.
Then, you can sign off commits with the -s flag:
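For example (the commit message here is only a placeholder):
git commit -s -m "fix: correct typo in docs"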
GPG-signing commits with -S is optional.
Incorporating upstream changes from master
Our preference is the use of git rebase [master] instead of git merge: git pull -r.
Note that this means if you are midway through working through a PR and rebase, you'll have to force push: git push --force-with-lease origin [branch name]
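A sketch of that flow, assuming your fork's remote is named origin and the upstream Feast remote is named upstream (adjust the remote and branch names to your setup):
git checkout my-feature-branch
git pull -r upstream master
git push --force-with-lease origin my-feature-branch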
Feast Python SDK and CLI
Environment Setup
Tools
Docker: Docker is used to provision service dependencies during testing, and build images for feature servers and other components.
Please note that we use Docker with BuildKit.
make is used to run various scripts
uv for managing Python dependencies (installation instructions)
(M1 Mac only): Follow the dev guide if you have issues
(Optional): Node & Yarn (needed for building the feast UI)
(Optional): Pixi for recompiling Python lock files. Only needed when you make changes to requirements or simply want to update the lock files to reflect the latest versions.
Quick start
Create a new virtual env: uv venv --python 3.11 (replace the Python version with your desired version)
Activate the venv: source .venv/bin/activate (uv creates the environment in .venv by default)
Install dependencies: make install-python-dependencies-dev
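Put together, the quick start looks roughly like this (assuming uv is already installed):
uv venv --python 3.11
source .venv/bin/activate
make install-python-dependencies-dev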
Building the UI
Recompiling python lock files
Recompile the Python lock files. This only needs to be run when you make changes to requirements or simply want to update the lock files to reflect the latest versions.
Building protos
Building a docker image for development
Code Style and Linting
Feast Python SDK and CLI codebase:
Conforms to Black code style
Has type annotations as enforced by mypy
Has imports sorted by ruff (see isort (I) rules)
Is lintable by ruff
To ensure your Python code conforms to Feast Python code standards:
Autoformat your code to conform to the code style, and lint your Python code before submitting it for review (both commands are sketched below).
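Both steps use the make targets that the pre-commit hook also runs on push:
make format
make lint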
Setup pre-commit hooks to automatically format and lint on commit.
Unit Tests
Unit tests (pytest) for the Feast Python SDK and CLI can be run as follows:
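A sketch of the invocation; the exact make target name is an assumption here, so check the Makefile in your checkout:
make test-python-unit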
Ensure no AWS configuration is present and no AWS credentials can be accessed by boto3
Ensure the Feast Python SDK and CLI are not configured with configuration overrides (i.e. ~/.feast/config should be empty).
Integration Tests
There are two sets of tests you can run:
Local integration tests (for faster development, tests file offline store & key online stores)
Full integration tests (requires cloud environment setups)
Local integration tests
For this approach of running tests, you'll need to have docker set up locally: Get Docker
It leverages a file based offline store to test against emulated versions of Datastore, DynamoDB, and Redis, using ephemeral containers.
These tests create new temporary tables / datasets locally only, and they are cleaned up when the containers are torn down.
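A sketch of a typical invocation; the exact make target name is an assumption here, so check the Makefile in your checkout:
make test-python-integration-local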
(Advanced) Full integration tests
To test across clouds, on top of setting up Redis, you also need GCP / AWS / Snowflake setup.
Note: you can manually control what tests are run today by inspecting RepoConfiguration and commenting out tests that are added to DEFAULT_FULL_REPO_CONFIGS
GCP
You can get free credits here.
You will need to set up a service account, enable the BigQuery API, and create a staging location (a bucket).
Setup your service account and project using steps 1-5 here.
Remember to save your PROJECT_ID and your key.json. These will be the secrets that you will need to configure in GitHub Actions, namely secrets.GCP_PROJECT_ID and secrets.GCP_SA_KEY. The GCP_SA_KEY value is the contents of your key.json file.
Follow these instructions in your project to create a bucket for running GCP tests and remember to save the bucket name.
Make sure to add the service account email that you created in the previous step to the users that can access your bucket. Then, make sure to give the account the correct access roles, namely objectCreator, objectViewer, objectAdmin, and admin, so that your tests can use the bucket.
Install the Cloud SDK.
Login to gcloud if you haven't already:
When you run gcloud auth application-default login, you should see some output indicating where the application default credentials file was saved. You should add export GOOGLE_APPLICATION_CREDENTIALS="$HOME/.config/gcloud/application_default_credentials.json" to your .zshrc or .bashrc.
Add export GCLOUD_PROJECT=[your project id from step 2] to your .zshrc or .bashrc.
Running gcloud config list should give you something like this:
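Illustrative output only (your account and project values will differ):
[core]
account = you@example.com
project = your-project-id

Your active configuration is: [default]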
Export GCP-specific environment variables in your workflow (a sketch follows).
NOTE: Your GCS_STAGING_LOCATION should be in the form gs://<bucket name>, where the bucket name is from step 2.
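A sketch of the exports; GCLOUD_PROJECT and GCS_STAGING_LOCATION are the variables referenced elsewhere in this guide, and any additional variables required by specific tests are not covered here:
export GCLOUD_PROJECT=[your project id from step 2]
export GCS_STAGING_LOCATION=gs://[your bucket name from step 2]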
Once authenticated, you should be able to run the integration tests for BigQuery without any failures.
AWS
Setup AWS by creating an account, database, and cluster. You will need to enable Redshift and Dynamo.
You can get free credits here.
To run the AWS Redshift and Dynamo integration tests you will have to export your own AWS credentials (a sketch follows).
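A sketch using the standard AWS credential variable names; the exact set of variables the tests read is an assumption, so check the test configuration:
export AWS_ACCESS_KEY_ID=[your access key id]
export AWS_SECRET_ACCESS_KEY=[your secret access key]
export AWS_DEFAULT_REGION=[your region]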
Snowflake
See https://signup.snowflake.com/ to setup a trial.
Setup your account and, if you are not an ACCOUNTADMIN (if you created your own account, you should be), give yourself the SYSADMIN role.
Also remember to save your account name, username, and role.
Your account name can be found under Create Dashboard and add a Tile.
Create a warehouse and database named FEAST with the schemas OFFLINE and ONLINE.
You will need to create a data unloading location (either on S3, GCP, or Azure). Detailed instructions here. You will need to save the storage export location and the storage export name. You will also need to create a storage integration in your warehouse to make this work; name this storage integration FEAST_S3.
Then, to run successfully, you'll need some environment variables setup:
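A sketch only; the variable names below are illustrative placeholders rather than the exact names the test suite reads, so check the integration test configuration for the real ones:
# placeholder variable names, see the note above
export SNOWFLAKE_CI_ACCOUNT=[your account name]
export SNOWFLAKE_CI_USER=[your username]
export SNOWFLAKE_CI_PASSWORD=[your password]
export SNOWFLAKE_CI_ROLE=[your role]
export SNOWFLAKE_CI_WAREHOUSE=FEAST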
Once everything is setup, running snowflake integration tests should pass without failures.
Note that for Snowflake / GCP / AWS, running make test-python-integration will create new temporary tables / datasets in your cloud storage and data warehouses.
(Advanced) Running specific provider tests or running your test against specific online or offline stores
If you don't need to have your test run against all of the providers(
gcp
,aws
, andsnowflake
) or don't need to run against all of the online stores, you can tag your test with specific providers or stores that you need(@pytest.mark.universal_online_stores
or@pytest.mark.universal_online_stores
with theonly
parameter). Theonly
parameter selects specific offline providers and online stores that your test will test against. Example:
You can also filter tests to run by using pytest's CLI filtering. Instead of using the make commands to test Feast, you can filter tests by name with the -k parameter. The parametrized integration tests are all uniquely identified by their provider and online store, so the -k option can select only the tests that you need to run. For example, to run only Redshift related tests, you can use the following command:
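A sketch of such an invocation; the sdk/python/tests path and the --integration flag are assumptions about the repository layout and pytest configuration, so adjust them to your checkout:
python -m pytest --integration -k Redshift sdk/python/tests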
(Experimental) Run full integration tests against containerized services
Testing across clouds requires existing accounts on GCP / AWS / Snowflake, and may incur costs when using these services.
For this approach of running tests, you'll need to have docker set up locally: Get Docker
It's possible to run some integration tests against emulated local versions of these services, using ephemeral containers. These tests create new temporary tables / datasets locally only, and they are cleaned up when the containers are torn down.
The services with containerized replacements currently implemented are:
Datastore
DynamoDB
Redis
Trino
HBase
Postgres
Cassandra
You can run make test-python-integration-container to run tests against the containerized versions of dependencies.
Contrib integration tests
(Contrib) Running tests for Spark offline store
You can run make test-python-universal-spark to run all tests against the Spark offline store. (Note: you'll have to run pip install -e ".[dev]" first.)
Not all tests are passing yet.
(Contrib) Running tests for Trino offline store
You can run make test-python-universal-trino to run all tests against the Trino offline store. (Note: you'll have to run pip install -e ".[dev]" first.)
(Contrib) Running tests for Postgres offline store
You can run make test-python-universal-postgres-offline to run all tests against the Postgres offline store. (Note: you'll have to run pip install -e ".[dev]" first.)
(Contrib) Running tests for Postgres online store
You can run make test-python-universal-postgres-online to run all tests against the Postgres online store. (Note: you'll have to run pip install -e ".[dev]" first.)
(Contrib) Running tests for HBase online store
TODO
(Experimental) Feast UI
Feast Java Serving
See also the development instructions related to the Helm chart below at Developing the Feast Helm charts.
Developing the Feast Helm charts
There are 2 helm charts:
Feast Java feature server
Feast Python feature server
Generally, you can override the images in the helm charts with locally built Docker images, and install the local helm chart.
All READMEs for the Helm charts are generated using helm-docs. You can install it (e.g. with brew install norwoodj/tap/helm-docs) and then run make build-helm-docs.
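For example, on macOS with Homebrew:
brew install norwoodj/tap/helm-docs
make build-helm-docs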
Feast Java Feature Server Helm Chart
See the Java demo example (it also has development instructions using minikube) here
It will:
run make build-java-docker-dev to build local Java feature server binaries
configure the included application-override.yaml to override the image tag to use the locally built dev images
install the local chart with helm install feast-release ../../../infra/charts/feast --values application-override.yaml
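A consolidated sketch of those steps (run from the demo's directory so the relative chart path resolves):
make build-java-docker-dev
helm install feast-release ../../../infra/charts/feast --values application-override.yaml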
Feast Python Feature Server Helm Chart
See the Python demo example (it also has development instructions using minikube) here
It will:
run make build-feature-server-dev to build a local Python feature server binary
install the local chart with helm install feast-release ../../../infra/charts/feast-feature-server --set image.tag=dev --set feature_store_yaml_base64=$(base64 feature_store.yaml)
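A consolidated sketch of those steps (run from the demo's directory so the relative chart path and feature_store.yaml resolve):
make build-feature-server-dev
helm install feast-release ../../../infra/charts/feast-feature-server --set image.tag=dev --set feature_store_yaml_base64=$(base64 feature_store.yaml)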
Testing with Github Actions workflows
Please refer to the maintainers doc if you would like to locally test out the GitHub Actions workflow changes. This document will help you set up your fork to test the CI integration tests and other workflows without needing to make a pull request against feast-dev master.
Feast Data Storage Format
Feast data storage contracts are documented in the following locations:
Feast Offline Storage Format: Used by BigQuery, Snowflake (Future), Redshift (Future).
Feast Online Storage Format: Used by Redis, Google Datastore.