This guide installs Feast on an existing IBM Cloud Kubernetes cluster or Red Hat OpenShift on IBM Cloud, and ensures that the following services are running:
Feast Core
Feast Online Serving
Postgres
Redis
Kafka (Optional)
Feast Jupyter (Optional)
Prometheus (Optional)
Install kubectl that matches the major.minor version of your IKS cluster, or install the OpenShift CLI (oc) that matches your local operating system and OpenShift cluster version.
Install Helm 3
Install Kustomize
:warning: If you have a Red Hat OpenShift cluster on IBM Cloud, skip to this section.
By default, an IBM Cloud Kubernetes cluster uses IBM Cloud File Storage, which is based on NFS, as the default storage class, and non-root users do not have write permission on the volume mount path for NFS-backed storage. Some common container images used by Feast, such as Redis, Postgres, and Kafka, specify a non-root user to access the mount path in the images. When containers are deployed using these images, they fail to start because the non-root user does not have permission to create folders on the mount path.
IBM Cloud Block Storage provides raw block storage volumes with faster performance and without the permission restriction of NFS-backed storage.
Therefore, to deploy Feast, set up IBM Cloud Block Storage as the default storage class so that all components work correctly and you get the best experience from Feast.
Follow the instructions to install the Helm version 3 client on your local machine.
Add the IBM Cloud Helm chart repository to the cluster where you want to use the IBM Cloud Block Storage plug-in.
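For example, a minimal sketch of adding the repository (the repository name and URL below are assumptions; verify them against the current IBM Cloud documentation):

```bash
# Add the IBM Cloud Helm chart repository and refresh the local index
helm repo add iks-charts https://icr.io/helm/iks-charts
helm repo update
```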
Install the IBM Cloud Block Storage plug-in. When you install the plug-in, pre-defined block storage classes are added to your cluster.
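For example, assuming the chart is named ibmcloud-block-storage-plugin in the repository added above (the release name is arbitrary):

```bash
# Install the Block Storage plug-in into the kube-system namespace
helm install ibmcloud-block-storage-plugin iks-charts/ibmcloud-block-storage-plugin -n kube-system
```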
Example output:
Verify that all block storage plugin pods are in a "Running" state.
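One way to check, assuming the plug-in pods are prefixed with ibmcloud-block-storage:

```bash
# All plug-in and driver pods should report STATUS "Running"
kubectl get pods -n kube-system | grep ibmcloud-block-storage
```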
Verify that the storage classes for Block Storage were added to your cluster.
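For example, assuming the Block Storage classes use the ibmc-block prefix:

```bash
# Pre-defined Block Storage classes such as ibmc-block-gold should be listed
kubectl get storageclasses | grep ibmc-block
```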
Set Block Storage as the default storage class.
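A sketch of the change, assuming ibmc-block-gold is the class you want as the default and ibmc-file-gold is the current default (substitute the class names used in your cluster):

```bash
# Mark the Block Storage class as the default...
kubectl patch storageclass ibmc-block-gold -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# ...and remove the default annotation from the File Storage class
kubectl patch storageclass ibmc-file-gold -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```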
Example output:
Security Context Constraint Setup (OpenShift only)
By default, in OpenShift, all pods and containers use the Restricted SCC, which limits the UIDs pods can run with and causes the Feast installation to fail. To overcome this, you can allow Feast pods to run with any UID by executing the following:
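For example, assuming Feast is deployed into a namespace named feast and its pods run under the default service account (both names are assumptions; adjust them to your deployment):

```bash
# Allow pods using the default service account in the feast namespace to run with any UID
oc adm policy add-scc-to-user anyuid -z default -n feast
```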
Install Feast using kustomize. The pods may take a few minutes to initialize.
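A minimal sketch, assuming the kustomize manifests ship with the Feast source checkout under manifests/contrib/feast and that you deploy into a namespace named feast (repository layout and namespace are assumptions):

```bash
# Clone the Feast repository and change into it
git clone https://github.com/feast-dev/feast.git && cd feast

# Create the target namespace and apply the generated manifests
kubectl create namespace feast
kustomize build manifests/contrib/feast | kubectl apply -n feast -f -
```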
You may optionally enable the Feast Jupyter component, which contains code examples that demonstrate Feast. Some examples require Kafka to stream real-time features to Feast Online Serving. To enable them, edit the following properties in the values.yaml under the manifests/contrib/feast folder:
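For illustration, assuming the chart exposes enabled flags for these components (check the property names in your copy of values.yaml):

```yaml
# Enable the optional Kafka and Jupyter components (flag names are assumptions)
kafka:
  enabled: true
feast-jupyter:
  enabled: true
```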
Then regenerate the resource manifests and deploy:
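One way to do this, assuming the same kustomize base used for the initial install:

```bash
# Rebuild the manifests with the updated values and re-apply them
kustomize build manifests/contrib/feast | kubectl apply -n feast -f -
```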
After all the pods are in a RUNNING state, port-forward to the Jupyter Notebook Server in the cluster:
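A sketch, assuming the Jupyter pod carries the label app=feast-jupyter and the server listens on port 8888 (adjust the selector and namespace to your deployment):

```bash
# Forward local port 8888 to the Jupyter Notebook Server pod in the feast namespace
kubectl port-forward -n feast \
  $(kubectl get pod -n feast -l app=feast-jupyter -o jsonpath='{.items[0].metadata.name}') 8888:8888
```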
You can now connect to the bundled Jupyter Notebook Server at localhost:8888 and follow the example Jupyter notebook.
When running the minimal_ride_hailing_example Jupyter notebook, the following errors may occur:
When running job = client.get_historical_features(...):
or
Add the following environment variable:
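The exact variable depends on the error; as an illustration only, the names and paths below are assumptions about the Spark job configuration and should be replaced with the values from your own deployment:

```python
import os

# Hypothetical staging and output locations for the Spark historical retrieval job
os.environ["FEAST_SPARK_STAGING_LOCATION"] = "file:///home/jovyan/test_data"
os.environ["FEAST_HISTORICAL_FEATURE_OUTPUT_LOCATION"] = "file:///home/jovyan/historical_feature_output"
```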
When running job.get_status():
Add the following environment variable:
When running job = client.start_stream_to_online_ingestion(...):
Add the following environment variable: