Adding or reusing tests

Overview

This guide will go over:
  1. how Feast tests are set up
  2. how to extend the test suite to test new functionality
  3. how to use the existing test suite to test a new custom offline / online store

Test suite overview

Let's inspect the test setup in sdk/python/tests/integration:
```
$ tree
.
├── e2e
│   └── test_universal_e2e.py
├── feature_repos
│   ├── repo_configuration.py
│   └── universal
│       ├── data_source_creator.py
│       ├── data_sources
│       │   ├── bigquery.py
│       │   ├── file.py
│       │   └── redshift.py
│       ├── entities.py
│       └── feature_views.py
├── offline_store
│   ├── test_s3_custom_endpoint.py
│   └── test_universal_historical_retrieval.py
├── online_store
│   ├── test_e2e_local.py
│   ├── test_feature_service_read.py
│   ├── test_online_retrieval.py
│   └── test_universal_online.py
├── registration
│   ├── test_cli.py
│   ├── test_cli_apply_duplicated_featureview_names.py
│   ├── test_cli_chdir.py
│   ├── test_feature_service_apply.py
│   ├── test_feature_store.py
│   ├── test_inference.py
│   ├── test_registry.py
│   ├── test_universal_odfv_feature_inference.py
│   └── test_universal_types.py
└── scaffolding
    ├── test_init.py
    ├── test_partial_apply.py
    ├── test_repo_config.py
    └── test_repo_operations.py

8 directories, 27 files
```
feature_repos contains setup files for most tests in the test suite, along with pytest fixtures for other tests. These fixtures parametrize over different offline stores, online stores, etc., abstracting away store-specific implementations so that individual tests don't need to reimplement setup logic such as uploading dataframes to a specific store.
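The abstraction can be sketched in plain Python. The class and method names below are illustrative stand-ins, not Feast's actual API; they only show why store-agnostic setup code is possible once each store supplies its own creator:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FakeDataSourceCreator:
    """Stand-in for a per-store data source creator (names are illustrative)."""
    scheme: str
    uploaded: List[str] = field(default_factory=list)

    def create_data_source(self, table_name: str) -> str:
        # A real creator would upload a dataframe to BigQuery, Redshift, etc.
        # and return a store-specific DataSource object.
        self.uploaded.append(table_name)
        return f"{self.scheme}://{table_name}"

def setup_universal_data(creator: FakeDataSourceCreator) -> Dict[str, str]:
    # The same setup code serves every store, because the creator
    # hides the store-specific upload logic.
    return {name: creator.create_data_source(name)
            for name in ("customer", "driver", "orders")}

file_creator = FakeDataSourceCreator("file")
sources = setup_universal_data(file_creator)
```

Swapping in a different creator (say, one backed by BigQuery) changes where the data lands without touching `setup_universal_data` or the tests that consume its output.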

Understanding an example test

Let's look at a sample test using the universal repo:
```python
@pytest.mark.integration
@pytest.mark.parametrize("full_feature_names", [True, False], ids=lambda v: str(v))
def test_historical_features(environment, universal_data_sources, full_feature_names):
    store = environment.feature_store

    (entities, datasets, data_sources) = universal_data_sources
    feature_views = construct_universal_feature_views(data_sources)

    customer_df, driver_df, orders_df, global_df, entity_df = (
        datasets["customer"],
        datasets["driver"],
        datasets["orders"],
        datasets["global"],
        datasets["entity"],
    )
    # ... more test code

    customer_fv, driver_fv, driver_odfv, order_fv, global_fv = (
        feature_views["customer"],
        feature_views["driver"],
        feature_views["driver_odfv"],
        feature_views["order"],
        feature_views["global"],
    )

    feature_service = FeatureService(
        "convrate_plus100",
        features=[
            feature_views["driver"][["conv_rate"]],
            feature_views["driver_odfv"],
        ],
    )

    feast_objects = []
    feast_objects.extend(
        [
            customer_fv,
            driver_fv,
            driver_odfv,
            order_fv,
            global_fv,
            driver(),
            customer(),
            feature_service,
        ]
    )
    store.apply(feast_objects)
    # ... more test code

    job_from_df = store.get_historical_features(
        entity_df=entity_df_with_request_data,
        features=[
            "driver_stats:conv_rate",
            "driver_stats:avg_daily_trips",
            "customer_profile:current_balance",
            "customer_profile:avg_passenger_count",
            "customer_profile:lifetime_trip_count",
            "conv_rate_plus_100:conv_rate_plus_100",
            "conv_rate_plus_100:conv_rate_plus_val_to_add",
            "order:order_is_success",
            "global_stats:num_rides",
            "global_stats:avg_ride_length",
        ],
        full_feature_names=full_feature_names,
    )
    actual_df_from_df_entities = job_from_df.to_df()
    # ... more test code

    assert_frame_equal(
        expected_df, actual_df_from_df_entities, check_dtype=False,
    )
    # ... more test code
```
The key fixtures are the environment and universal_data_sources fixtures, which are defined in the feature_repos directories. By default, these pull in a standard dataset with driver and customer entities, certain feature views, and feature values. By including environment as a parameter, the test is automatically parametrized across the configured offline / online store combinations.
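The fan-out produced by this parametrization can be illustrated with a cross-product of backends. The store lists below are examples only; the authoritative set lives in feature_repos/repo_configuration.py:

```python
from itertools import product

# Example backends -- illustrative, not the authoritative list.
offline_stores = ["file", "bigquery", "redshift"]
online_stores = ["sqlite", "redis", "datastore"]

# Each (offline, online) pair becomes one parametrized run of a test
# that takes the `environment` fixture.
test_ids = [f"{offline}-{online}"
            for offline, online in product(offline_stores, online_stores)]
```

With three offline and three online stores, one test body runs nine times, which is why standing up stores dominates the cost of the suite.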

Writing a new test or reusing existing tests

To add a new test to an existing test file

  • Use the same function signatures as an existing test (e.g. use environment as an argument) to include the relevant test fixtures.
  • If possible, expand an individual test instead of writing a new test, due to the cost of standing up offline / online stores.

To test a new offline / online store from a plugin repo

  • Install Feast in editable mode with pip install -e . (note the trailing dot, which is the path to the repo root).
  • The core tests for offline / online store behavior are parametrized by the FULL_REPO_CONFIGS variable defined in feature_repos/repo_configuration.py. To override this variable without modifying the Feast repo, create your own module that defines a FULL_REPO_CONFIGS variable (which will require adding a new IntegrationTestRepoConfig or two) and set the environment variable FULL_REPO_CONFIGS_MODULE to point to that module. Then the core offline / online store tests can be run with make test-python-universal.
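For example, a hypothetical override module might look like the following. This is a sketch of the shape only: the import paths, the plugin creator name, and the IntegrationTestRepoConfig arguments are assumptions; check feature_repos/repo_configuration.py in your Feast checkout for the real signatures.

```python
# my_repo_configs.py -- hypothetical override module; not runnable as-is.
from tests.integration.feature_repos.repo_configuration import (  # path may differ
    IntegrationTestRepoConfig,
)
from my_plugin.tests.data_source_creator import MyPluginDataSourceCreator  # hypothetical

FULL_REPO_CONFIGS = [
    IntegrationTestRepoConfig(
        provider="local",
        offline_store_creator=MyPluginDataSourceCreator,
        online_store="sqlite",
    ),
]
```

Then point the test suite at it, e.g. FULL_REPO_CONFIGS_MODULE=my_repo_configs make test-python-universal.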

To include a new offline / online store in the main Feast repo

  • Extend data_source_creator.py for your offline store.
  • In repo_configuration.py, add a new IntegrationTestRepoConfig or two (depending on how many online stores you want to test).
  • Run the full test suite with make test-python-integration.
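The extension point is a creator class. The skeleton below is a hypothetical sketch of the kind of methods involved: the base class here is a local stand-in, and the method names are modeled on data_source_creator.py but should be verified against the actual abstract base in the Feast repo:

```python
from abc import ABC, abstractmethod

class DataSourceCreator(ABC):
    """Local stand-in for the real abstract base in data_source_creator.py."""

    @abstractmethod
    def create_data_source(self, df, destination_name, **kwargs): ...

    @abstractmethod
    def create_offline_store_config(self): ...

    @abstractmethod
    def teardown(self): ...

class MyStoreDataSourceCreator(DataSourceCreator):
    """Hypothetical creator for an offline store called 'my_store'."""

    def __init__(self, project_name: str):
        self.project_name = project_name
        self.tables = []

    def create_data_source(self, df, destination_name, **kwargs):
        # A real implementation uploads `df` to the store and returns a
        # matching DataSource object; here we just record the table name.
        table = f"{self.project_name}_{destination_name}"
        self.tables.append(table)
        return table

    def create_offline_store_config(self):
        # Return the offline store config used to build feature_store.yaml.
        return {"type": "my_store.offline"}

    def teardown(self):
        # Clean up any tables created during the test run.
        self.tables.clear()
```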

To include a new offline / online store from an external plugin with community maintainers

  • Place the APIs in feast/infra/offline_stores/contrib/; this folder is for plugins that are officially maintained with community owners.
  • Extend data_source_creator.py for your offline store and implement the required APIs.
  • In contrib_repo_configuration.py, add a new IntegrationTestRepoConfig or two (depending on how many online stores you want to test).
  • Run the contrib test suite with make test-python-contrib-universal.

To include a new online store

  • In repo_configuration.py, add a new config that maps to the serialized configuration you would need in feature_store.yaml to set up the online store.
  • In repo_configuration.py, add a new IntegrationTestRepoConfig for each offline store you want to test against.
  • Run the full test suite with make test-python-integration.
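To make the first step concrete, here is a minimal sketch of an online store config dict and how it maps onto the online_store block of a generated feature_store.yaml. The exact keys depend on the store; treat the values below as illustrative:

```python
# Illustrative online store config entry; in repo_configuration.py such
# dicts are serialized into the feature_store.yaml used by each test repo.
REDIS_CONFIG = {"type": "redis", "connection_string": "localhost:6379,db=0"}

def to_feature_store_yaml_lines(online_store: dict) -> list:
    # Minimal serialization sketch; the real code would use a YAML library.
    return ["online_store:"] + [
        f"  {key}: {value}" for key, value in online_store.items()
    ]

lines = to_feature_store_yaml_lines(REDIS_CONFIG)
```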

To use custom data in a new test

  • Check test_universal_types.py for an example of how to do this.
```python
@pytest.mark.integration
def test_your_feature(environment: Environment):
    df = ...  # construct a dataframe with your custom data
    data_source = environment.data_source_creator.create_data_source(
        df,
        destination_name=environment.feature_store.project,
    )
    your_fv = driver_feature_view(data_source)
    entity = driver(value_type=ValueType.UNKNOWN)
    environment.feature_store.apply([your_fv, entity])

    # ... run test
```

Running your own redis cluster for testing

  • Install Redis on your computer. On a Mac, brew install redis should work.
    • Running redis-server --help and redis-cli --help should show the corresponding help menus.
  • cd into scripts/create-cluster, then run ./create-cluster start followed by ./create-cluster create to start the server. You should see output that looks like this:
```
Starting 6001
Starting 6002
Starting 6003
Starting 6004
Starting 6005
Starting 6006
```
  • You should be able to run the integration tests and have the redis cluster tests pass.
  • To use different ports, run the above commands with your own specified ports and connect to the newly configured cluster.
  • To stop the cluster, run ./create-cluster stop and then ./create-cluster clean.