Azure AKS (with Helm)


Last updated 3 years ago

Overview

This guide installs Feast on an Azure Kubernetes Service (AKS) cluster and ensures the following services are running:

  • Feast Core

  • Feast Online Serving

  • Postgres

  • Redis

  • Spark

  • Kafka

  • Feast Jupyter (Optional)

  • Prometheus (Optional)

1. Requirements

  1. Install and configure the Azure CLI

  2. Install and configure kubectl

  3. Install Helm 3

2. Preparation

Create an AKS cluster with the Azure CLI. The detailed steps can be found in the Azure documentation, and a high-level walkthrough includes:
az group create --name myResourceGroup  --location eastus
az acr create --resource-group myResourceGroup  --name feast-AKS-ACR --sku Basic
az aks create -g myResourceGroup  -n feast-AKS --location eastus --attach-acr feast-AKS-ACR --generate-ssh-keys

Install kubectl and fetch credentials for the cluster:

az aks install-cli
az aks get-credentials --resource-group myResourceGroup  --name  feast-AKS

Add the Feast Helm repository and download the latest charts:

helm version # make sure you have the latest Helm installed
helm repo add feast-charts https://feast-helm-charts.storage.googleapis.com
helm repo update

Feast includes a Helm chart that installs all necessary components to run Feast Core, Feast Online Serving, and an example Jupyter notebook.

Feast Core requires Postgres to run, and the database password must be provided as a Kubernetes secret:

kubectl create secret generic feast-postgresql --from-literal=postgresql-password=password
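
Kubernetes stores secret values base64-encoded, so the command above is equivalent to creating a secret whose `postgresql-password` key holds the encoded password. A quick sketch of what that encoding looks like (the secret name and key match what the Feast Helm chart expects):

```python
import base64

# The chart reads the Postgres password from a secret named
# "feast-postgresql" under the key "postgresql-password".
# Kubernetes stores the value base64-encoded:
plaintext = "password"
encoded = base64.b64encode(plaintext.encode()).decode()
print(encoded)  # cGFzc3dvcmQ=
```

You can confirm the stored value with `kubectl get secret feast-postgresql -o jsonpath='{.data.postgresql-password}'`.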

3. Feast installation

Install Feast using Helm. The pods may take a few minutes to initialize.

helm install feast-release feast-charts/feast

4. Spark operator installation

Follow the documentation to install the Spark operator on Kubernetes, and the Feast documentation to configure Spark roles:
helm repo add spark-operator https://googlecloudplatform.github.io/spark-on-k8s-operator 
helm install my-release spark-operator/spark-operator  --set serviceAccounts.spark.name=spark --set image.tag=v1beta2-1.1.2-2.4.5

Ensure the service account used by Feast has permissions to manage SparkApplication resources. This depends on your Kubernetes setup, but you would typically configure a Role and a RoleBinding like the ones below:

cat <<EOF | kubectl apply -f -
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: use-spark-operator
  namespace: <REPLACE ME>
rules:
- apiGroups: ["sparkoperator.k8s.io"]
  resources: ["sparkapplications"]
  verbs: ["create", "delete", "deletecollection", "get", "list", "update", "watch", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: use-spark-operator
  namespace: <REPLACE ME>
roleRef:
  kind: Role
  name: use-spark-operator
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: default
EOF
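
If you apply this RBAC pair to several namespaces, generating the manifests programmatically avoids copy-paste mistakes. A minimal sketch, stdlib only; the resource names mirror the manifest above, and the namespace and the final `kubectl apply` step are up to you (kubectl accepts JSON as well as YAML):

```python
import json

def spark_operator_rbac(namespace: str) -> list:
    """Build a Role and RoleBinding granting access to SparkApplications."""
    role = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "use-spark-operator", "namespace": namespace},
        "rules": [{
            "apiGroups": ["sparkoperator.k8s.io"],
            "resources": ["sparkapplications"],
            "verbs": ["create", "delete", "deletecollection", "get",
                      "list", "update", "watch", "patch"],
        }],
    }
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "use-spark-operator", "namespace": namespace},
        "roleRef": {
            "kind": "Role",
            "name": "use-spark-operator",
            "apiGroup": "rbac.authorization.k8s.io",
        },
        "subjects": [{"kind": "ServiceAccount", "name": "default"}],
    }
    return [role, binding]

# Print as JSON, which kubectl can apply directly.
for doc in spark_operator_rbac("default"):
    print(json.dumps(doc, indent=2))
```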

5. Use Jupyter to connect to Feast

After all the pods are in a RUNNING state, port-forward to the Jupyter Notebook Server in the cluster:

kubectl port-forward \
$(kubectl get pod -o custom-columns=:metadata.name | grep jupyter) 8888:8888
Forwarding from 127.0.0.1:8888 -> 8888
Forwarding from [::1]:8888 -> 8888

You can now connect to the bundled Jupyter Notebook Server at localhost:8888 and follow the example Jupyter notebook.

6. Environment variables

If you are running the Minimal Ride Hailing Example, you may want to make sure the following environment variables are correctly set:
demo_data_location = "wasbs://<container_name>@<storage_account_name>.blob.core.windows.net/"
os.environ["FEAST_AZURE_BLOB_ACCOUNT_NAME"] = "<storage_account_name>"
os.environ["FEAST_AZURE_BLOB_ACCOUNT_ACCESS_KEY"] = "<Insert your key here>"
os.environ["FEAST_HISTORICAL_FEATURE_OUTPUT_LOCATION"] = "wasbs://<container_name>@<storage_account_name>.blob.core.windows.net/out/"
os.environ["FEAST_SPARK_STAGING_LOCATION"] = "wasbs://<container_name>@<storage_account_name>.blob.core.windows.net/artifacts/"
os.environ["FEAST_SPARK_LAUNCHER"] = "k8s"
os.environ["FEAST_SPARK_K8S_NAMESPACE"] = "default"
os.environ["FEAST_HISTORICAL_FEATURE_OUTPUT_FORMAT"] = "parquet"
os.environ["FEAST_REDIS_HOST"] = "feast-release-redis-master.default.svc.cluster.local"
os.environ["DEMO_KAFKA_BROKERS"] = "feast-release-kafka.default.svc.cluster.local:9092"
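
The three `wasbs://` locations differ only in their path suffix, so it can help to build them in one place. A small sketch; the container and storage account names below are placeholders you must replace with your own:

```python
def wasbs_url(container: str, account: str, path: str = "") -> str:
    """Build an Azure Blob Storage (wasbs) URL for a container and account."""
    return f"wasbs://{container}@{account}.blob.core.windows.net/{path}"

# Placeholders: substitute your real container and storage account names.
print(wasbs_url("mycontainer", "mystorageaccount"))
# wasbs://mycontainer@mystorageaccount.blob.core.windows.net/
print(wasbs_url("mycontainer", "mystorageaccount", "out/"))
# wasbs://mycontainer@mystorageaccount.blob.core.windows.net/out/
```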

7. Further Reading

  • Feast Concepts

  • Feast Examples/Tutorials

  • Feast Helm Chart Documentation

  • Configuring Feast components

  • Feast and Spark