feature_repos has setup files for most tests in the test suite, as well as pytest fixtures for other tests. These fixtures parametrize on different offline stores, online stores, etc. and thus abstract away store-specific implementations, so tests don't need to reimplement store-specific setup such as uploading dataframes to a particular store.
Understanding an example test
Let's look at a sample test using the universal repo:
The key fixtures are the environment and universal_data_sources fixtures, which are defined in the feature_repos directories. By default, these pull in a standard dataset with driver and customer entities, certain feature views, and feature values. By including environment as a parameter, the test is automatically parametrized across the configured offline / online store combinations.
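A minimal sketch of such a test follows. The names here (the feature view construction helper, the driver and customer entity helpers, and the test body) are illustrative assumptions, not exact Feast identifiers; see the real universal test files for working imports.

```python
import pytest


@pytest.mark.integration
def test_universal_retrieval(environment, universal_data_sources):
    # `environment` wraps a FeatureStore configured for one
    # offline / online store combination per parametrization.
    store = environment.feature_store

    # The universal fixture hands back pre-built entities, dataframes,
    # and data sources for the standard driver / customer dataset.
    entities, datasets, data_sources = universal_data_sources

    # Hypothetical helper that builds feature views over those sources.
    feature_views = construct_universal_feature_views(data_sources)
    store.apply([driver(), customer(), *feature_views.values()])
    # ... retrieve features and assert on the results
```

Because the fixtures own all store-specific setup, the test body stays identical across every offline / online store combination.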
Writing a new test or reusing existing tests
To add a new test to an existing test file
Use the same function signatures as an existing test (e.g. use environment as an argument) to include the relevant test fixtures.
If possible, expand an individual test instead of writing a new test, due to the cost of standing up offline / online stores.
To test a new offline / online store from a plugin repo
Install Feast in editable mode with pip install -e .
The core tests for offline / online store behavior are parametrized by the FULL_REPO_CONFIGS variable defined in feature_repos/repo_configuration.py. To overwrite this variable without modifying the Feast repo, create your own file that contains a FULL_REPO_CONFIGS (which will require adding a new IntegrationTestRepoConfig or two) and set the environment variable FULL_REPO_CONFIGS_MODULE to point to that file. Then the core offline / online store tests can be run with make test-python-universal.
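A sketch of such a module, assuming a hypothetical plugin package my_plugin (the import path for IntegrationTestRepoConfig may differ between Feast versions):

```python
# my_plugin/feast_tests.py -- a sketch; my_plugin and MyOfflineStoreCreator
# are hypothetical names for your plugin's package and DataSourceCreator
# subclass, and the import path below may vary by Feast version.
from tests.integration.feature_repos.integration_test_repo_config import (
    IntegrationTestRepoConfig,
)

from my_plugin.tests.data_source_creator import MyOfflineStoreCreator

FULL_REPO_CONFIGS = [
    IntegrationTestRepoConfig(
        provider="local",
        offline_store_creator=MyOfflineStoreCreator,
    ),
]
```

Then point the test suite at it, e.g. export FULL_REPO_CONFIGS_MODULE=my_plugin.feast_tests before running make test-python-universal.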
To include a new offline / online store in the main Feast repo
Extend data_source_creator.py for your offline store.
In repo_configuration.py, add a new IntegrationTestRepoConfig or two (depending on how many online stores you want to test).
Run the full test suite with make test-python-integration.
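The additions described above might look roughly like the following fragment of repo_configuration.py. MyOfflineStoreCreator is a placeholder for your DataSourceCreator subclass, and the existing-entry comments stand in for the configs already defined there:

```python
# repo_configuration.py (fragment) -- MyOfflineStoreCreator is a
# hypothetical DataSourceCreator subclass for the new offline store.
FULL_REPO_CONFIGS = [
    # ... existing entries ...
    # New offline store against the default (sqlite) online store:
    IntegrationTestRepoConfig(offline_store_creator=MyOfflineStoreCreator),
    # Same offline store against a second online store, e.g. Redis:
    IntegrationTestRepoConfig(
        offline_store_creator=MyOfflineStoreCreator,
        online_store=REDIS_CONFIG,
    ),
]
```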
To include a new online store
In repo_configuration.py, add a new config that maps to a serialized version of the configuration you would need in feature_store.yaml to set up the online store.
In repo_configuration.py, add a new IntegrationTestRepoConfig for each offline store you want to test against.
Run the full test suite with make test-python-integration.
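As a concrete illustration of the serialized config described above, it is simply a dictionary mirroring the online_store block of feature_store.yaml. The connection details below are assumptions for a locally running Redis:

```python
# Hypothetical serialized online store config; the dictionary keys mirror
# the online_store section of feature_store.yaml. The connection string
# assumes a local Redis instance.
REDIS_CONFIG = {
    "type": "redis",
    "connection_string": "localhost:6379,db=0",
}
```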
To use custom data in a new test
Check test_universal_types.py for an example of how to do this.
```python
@pytest.mark.integration
def your_test(environment: Environment):
    df = ...  # your custom dataframe
    data_source = environment.data_source_creator.create_data_source(
        df, destination_name=environment.feature_store.project
    )
    your_fv = driver_feature_view(data_source)
    entity = driver(value_type=ValueType.UNKNOWN)
    environment.feature_store.apply([your_fv, entity])
    # ... run test
```
Running your own redis cluster for testing
Install redis on your computer. If you are a macOS user, you should be able to run brew install redis.
Running redis-server --help and redis-cli --help should show corresponding help menus.
Change into scripts/create-cluster, then run ./create-cluster start followed by ./create-cluster create to start the cluster. You should see output that looks like this:
You should be able to run the integration tests and have the redis cluster tests pass.
If you would like to run your own redis cluster, you can run the above commands with your own specified ports and connect to the newly configured cluster.
To stop the cluster, run ./create-cluster stop and then ./create-cluster clean.