Azure AKS (with Terraform)


Last updated 3 years ago


Overview

This guide installs Feast on Azure using our reference Terraform configuration.

The Terraform configuration used here is a greenfield installation that neither assumes anything about, nor integrates with, existing resources in your Azure account. The Terraform configuration presents an easy way to get started, but you may want to customize this setup before using Feast in production.

This Terraform configuration creates the following resources:

  • Kubernetes cluster on Azure AKS

  • Kafka managed by HDInsight

  • Postgres database for Feast metadata, running as a pod on AKS

  • Redis cluster, using Azure Cache for Redis

  • spark-on-k8s-operator to run Spark

  • Staging Azure blob storage container to store temporary data

1. Requirements

  • Create an Azure account and configure credentials locally

  • Install Terraform (tested with 0.13.5)

  • Install Helm (tested with v3.4.2)

2. Configure Terraform

Create a .tfvars file under feast/infra/terraform/azure. In our example, we name the file my_feast.tfvars. You can see the full list of configuration variables in variables.tf. At a minimum, you need to set name_prefix and resource_group:

my_feast.tfvars
name_prefix = "feast"
resource_group = "Feast" # pre-existing resource group
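
A slightly fuller sketch of the same file is below. Only name_prefix and resource_group are required; any other variable shown is an illustrative placeholder — check variables.tf for the actual variable names and their defaults before overriding anything.

```hcl
# my_feast.tfvars — minimal sketch.
# name_prefix is prepended to the names of the resources Terraform creates;
# resource_group must already exist in your Azure subscription.
name_prefix    = "feast"
resource_group = "Feast"

# Hypothetical optional override (the variable name here is illustrative;
# see variables.tf for the real names and defaults):
# region = "eastus"
```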

3. Apply

After completing the configuration, initialize Terraform and apply:

$ cd feast/infra/terraform/azure
$ terraform init
$ terraform apply -var-file=my_feast.tfvars

4. Connect to Feast using Jupyter

After all pods are running, connect to the Jupyter Notebook Server running in the cluster.

To connect to the remote Feast server you just created, forward a port from the remote k8s cluster to your local machine.

kubectl port-forward $(kubectl get pod -o custom-columns=:metadata.name | grep jupyter) 8888:8888
Forwarding from 127.0.0.1:8888 -> 8888
Forwarding from [::1]:8888 -> 8888
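
The port-forward command above embeds a sub-command that looks up the Jupyter pod's name: kubectl prints one pod name per line (custom-columns with an empty column title suppresses the header), and grep keeps the line containing "jupyter". A minimal simulation of that pipeline, using printf in place of kubectl and made-up pod names:

```shell
# Stand-in for: kubectl get pod -o custom-columns=:metadata.name
# (one bare pod name per line); grep then selects the Jupyter pod.
printf 'feast-core-0\nfeast-jupyter-7d9fb\nfeast-redis-master-0\n' | grep jupyter
# -> feast-jupyter-7d9fb
```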

You can now connect to the bundled Jupyter Notebook Server at localhost:8888 and follow the example Jupyter notebook.
