
IBM Cloud Kubernetes Service (IKS) and Red Hat OpenShift (with Kustomize)


Last updated 3 years ago


Overview

This guide installs Feast on an existing IBM Cloud Kubernetes Service (IKS) cluster or Red Hat OpenShift on IBM Cloud cluster, and ensures the following services are running:

  • Feast Core

  • Feast Online Serving

  • Postgres

  • Redis

  • Kafka (Optional)

  • Feast Jupyter (Optional)

  • Prometheus (Optional)

1. Prerequisites

  1. An existing IBM Cloud Kubernetes Service cluster or Red Hat OpenShift on IBM Cloud cluster

  2. Install kubectl that matches the major.minor version of your IKS cluster, or install the OpenShift CLI (oc) that matches your local operating system and OpenShift cluster version.

  3. Install Helm 3

  4. Install Kustomize
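As a quick sanity check before proceeding (a convenience sketch, not part of the official guide; oc is only needed for OpenShift clusters), you can confirm the prerequisite tools are on your PATH:

```shell
# Report which of the prerequisite CLI tools are installed locally.
missing=0
for tool in kubectl oc helm kustomize; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: not found"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```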

2. Preparation

By default, an IBM Cloud Kubernetes cluster uses IBM Cloud File Storage based on NFS as the default storage class, and non-root users do not have write permission on the volume mount path for NFS-backed storage. Some common container images in Feast, such as Redis, Postgres, and Kafka, specify a non-root user to access the mount path in the images. When containers are deployed using these images, the containers fail to start because the non-root user has insufficient permissions to create folders on the mount path.

IBM Cloud Block Storage allows the creation of raw storage volumes and provides faster performance without the permission restriction of NFS-backed storage.

Therefore, to deploy Feast, set up IBM Cloud Block Storage as the default storage class so that all of the functionality works and you get the best experience from Feast.

:warning: If you have a Red Hat OpenShift cluster on IBM Cloud, skip to the Security Context Constraint Setup section below.

IBM Cloud Block Storage Setup (IKS only)

Follow the instructions to install the Helm version 3 client on your local machine.

  1. Add the IBM Cloud Helm chart repository to the cluster where you want to use the IBM Cloud Block Storage plug-in.

     helm repo add iks-charts https://icr.io/helm/iks-charts
     helm repo update
  2. Install the IBM Cloud Block Storage plug-in. When you install the plug-in, pre-defined block storage classes are added to your cluster.

     helm install v2.0.2 iks-charts/ibmcloud-block-storage-plugin -n kube-system

    Example output:

    NAME: v2.0.2
    LAST DEPLOYED: Fri Feb  5 12:29:50 2021
    NAMESPACE: kube-system
    STATUS: deployed
    REVISION: 1
    NOTES:
    Thank you for installing: ibmcloud-block-storage-plugin.   Your release is named: v2.0.2
     ...
  3. Verify that all block storage plug-in pods are in a "Running" state.

     kubectl get pods -n kube-system | grep ibmcloud-block-storage
  4. Verify that the storage classes for Block Storage were added to your cluster.

     kubectl get storageclasses | grep ibmc-block
  5. Set Block Storage as the default storage class.

     kubectl patch storageclass ibmc-block-gold -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
     kubectl patch storageclass ibmc-file-gold -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    
     # Check that the default storage class is Block Storage
     kubectl get storageclass | grep \(default\)

    Example output:

     ibmc-block-gold (default)   ibm.io/ibmc-block   65s

Security Context Constraint Setup (OpenShift only)

By default, in OpenShift, all pods and containers use the Restricted SCC, which limits the UIDs pods can run with, causing the Feast installation to fail. To overcome this, allow Feast pods to run with any UID by executing the following:

oc adm policy add-scc-to-user anyuid -z default,kf-feast-kafka -n feast

3. Installation

Install Feast using Kustomize. The pods may take a few minutes to initialize.

git clone https://github.com/kubeflow/manifests
cd manifests/contrib/feast/
kustomize build feast/base | kubectl apply -n feast -f -

Optional: Enable Feast Jupyter and Kafka

You may optionally enable the Feast Jupyter component, which contains code examples that demonstrate Feast. Some examples require Kafka to stream real-time features to Feast Online Serving. To enable them, edit the following properties in values.yaml under the manifests/contrib/feast folder:

kafka.enabled: true
feast-jupyter.enabled: true
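Note that these dotted properties are paths into the chart values; in the values.yaml file itself they correspond to nested keys, roughly as follows (a sketch, assuming the standard Helm values layout used by the Feast chart):

```yaml
kafka:
  enabled: true

feast-jupyter:
  enabled: true
```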

Then regenerate the resource manifests and deploy:

make feast/base
kustomize build feast/base | kubectl apply -n feast -f -

4. Use Feast Jupyter Notebook Server to connect to Feast

After all the pods are in a RUNNING state, port-forward to the Jupyter Notebook Server in the cluster:

kubectl port-forward \
$(kubectl get pod -n feast -l app=feast-jupyter -o custom-columns=:metadata.name) 8888:8888 -n feast
Forwarding from 127.0.0.1:8888 -> 8888
Forwarding from [::1]:8888 -> 8888

You can now connect to the bundled Jupyter Notebook Server at localhost:8888 and follow the example Jupyter notebook.

5. Uninstall Feast

kustomize build feast/base | kubectl delete -n feast -f -

6. Troubleshooting

When running the minimal_ride_hailing_example Jupyter Notebook, the following errors may occur:

  1. When running job = client.get_historical_features(...):

     KeyError: 'historical_feature_output_location'

    or

     KeyError: 'spark_staging_location'

    Add the following environment variables:

     os.environ["FEAST_HISTORICAL_FEATURE_OUTPUT_LOCATION"] = "file:///home/jovyan/historical_feature_output"
     os.environ["FEAST_SPARK_STAGING_LOCATION"] = "file:///home/jovyan/test_data"
  2. When running job.get_status()

     <SparkJobStatus.FAILED: 2>

    Add the following environment variable:

     os.environ["FEAST_REDIS_HOST"] = "feast-release-redis-master"
  3. When running job = client.start_stream_to_online_ingestion(...)

     org.apache.kafka.vendor.common.KafkaException: Failed to construct kafka consumer

    Add the following environment variable:

     os.environ["DEMO_KAFKA_BROKERS"] = "feast-release-kafka:9092"
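If you hit several of these errors at once, the same workarounds can be applied together at the top of the notebook, before the Feast client is created. The values below are taken from the troubleshooting steps above; the hostnames assume a Helm release named feast-release, so adjust them to match your deployment:

```python
import os

# Consolidated workaround: set all of the variables from the
# troubleshooting steps above before constructing the Feast client.
os.environ.update({
    "FEAST_HISTORICAL_FEATURE_OUTPUT_LOCATION": "file:///home/jovyan/historical_feature_output",
    "FEAST_SPARK_STAGING_LOCATION": "file:///home/jovyan/test_data",
    "FEAST_REDIS_HOST": "feast-release-redis-master",
    "DEMO_KAFKA_BROKERS": "feast-release-kafka:9092",
})
```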

