Hazelcast (contrib)

Description

The Hazelcast online store is in alpha development.

The online store provides support for materializing feature values into a Hazelcast cluster for serving online features in real time. In order to use Hazelcast as an online store, you need to have a running Hazelcast cluster. You can create a cluster using Hazelcast Viridian Serverless. See this page for more details.

  • Each feature view is mapped one-to-one to a specific Hazelcast IMap.

  • This implementation inherits Hazelcast's strengths, such as high availability, fault tolerance, and data distribution.

  • Secure TLS/SSL connections are supported by the Hazelcast online store.

  • You can configure a TTL (Time-To-Live) for your features in the Hazelcast cluster.

Each feature view corresponds to an IMap in the Hazelcast cluster, and the entries in that IMap correspond to features of entities. Each feature value is stored separately and can be retrieved individually.
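For illustration, here is a minimal sketch of reading those individually stored feature values back through the Feast Python SDK once they have been materialized into Hazelcast. The feature view name, feature names, and entity key below are hypothetical, and the repository is assumed to contain the feature_store.yaml shown in the Examples section.

```python
from feast import FeatureStore

# Point the SDK at the feature repository containing feature_store.yaml
# (repo_path, feature view, and entity key below are hypothetical).
store = FeatureStore(repo_path=".")

# Reads the requested feature values for the given entity key from the
# Hazelcast-backed online store.
features = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()

print(features)
```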

Getting started

In order to use the Hazelcast online store, you'll need to run pip install 'feast[hazelcast]'. You can then get started with the command feast init REPO_NAME -t hazelcast.
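Taken together, the two commands above look like this in a shell; REPO_NAME is a placeholder for the name of your feature repository.

```bash
# Install Feast with the Hazelcast extra
pip install 'feast[hazelcast]'

# Scaffold a new feature repository from the Hazelcast template
feast init REPO_NAME -t hazelcast
```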

Examples

feature_store.yaml

```yaml
project: my_feature_repo
registry: data/registry.db
provider: local
online_store:
    type: hazelcast
    cluster_name: dev
    cluster_members: ["localhost:5701"]
    key_ttl_seconds: 36000
```
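The description above also mentions secure TLS/SSL connections, for example when connecting to a Hazelcast Viridian Serverless cluster. Below is a hedged sketch of what such a configuration might look like; the discovery_token and ssl_* option names are assumptions about the Hazelcast online store config and should be checked against the current configuration reference before use.

```yaml
project: my_feature_repo
registry: data/registry.db
provider: local
online_store:
    type: hazelcast
    cluster_name: YOUR_CLUSTER_ID          # assumption: Viridian cluster ID
    discovery_token: YOUR_DISCOVERY_TOKEN  # assumption: Viridian discovery token
    ssl_cafile_path: /path/to/ca.pem       # assumption: TLS/SSL option names
    ssl_certfile_path: /path/to/cert.pem
    ssl_keyfile_path: /path/to/key.pem
    ssl_password: KEYSTORE_PASSWORD
    key_ttl_seconds: 36000
```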

Functionality Matrix

|  | Hazelcast |
| --- | --- |
| write feature values to the online store | yes |
| read feature values from the online store | yes |
| update infrastructure (e.g. tables) in the online store | yes |
| teardown infrastructure (e.g. tables) in the online store | yes |
| generate a plan of infrastructure changes | no |
| support for on-demand transforms | yes |
| readable by Python SDK | yes |
| readable by Java | no |
| readable by Go | no |
| support for entityless feature views | yes |
| support for concurrent writing to the same key | yes |
| support for ttl (time to live) at retrieval | yes |
| support for deleting expired data | yes |
| collocated by feature view | no |
| collocated by feature service | no |
| collocated by entity key | yes |

To compare this set of functionality against other online stores, please see the full functionality matrix.
