Rockset (contrib)

Description

In Alpha Development.

The Rockset online store provides support for materializing feature values into a Rockset collection in order to serve features in real time.

  • Each document is uniquely identified by its '_id' value. Repeated inserts into the same document '_id' will result in an upsert.

Rockset indexes all columns, allowing for quick per-feature lookups, and supports a dynamically typed schema that can change to meet new requirements. API keys can be found in the Rockset console. Host URLs can be found on the same tab by clicking "View Region Endpoint Urls".
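
As noted in the example configuration below, the driver falls back to the ROCKSET_APIKEY and ROCKSET_APISERVER environment variables when api_key and host are left blank. A minimal sketch of supplying credentials that way from Python (the values are placeholders):

import os

# Placeholder credentials; the driver reads these environment variables
# when api_key and host are not set in feature_store.yaml.
os.environ["ROCKSET_APIKEY"] = "<your_api_key_here>"
os.environ["ROCKSET_APISERVER"] = "<your_region_endpoint_here>"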

Data Model Used Per Doc

{
  "_id": (STRING) Unique Identifier for the feature document.
  <key_name>: (STRING) Feature Values Mapped by Feature Name. Feature
                       values stored as a serialized hex string.
  ....
  "event_ts": (STRING) ISO Stringified Timestamp.
  "created_ts": (STRING) ISO Stringified Timestamp.
}
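
For illustration, a single materialized document following this model might look like the sketch below. The identifier format, feature names, and serialized values are hypothetical placeholders, not taken from a real collection.

# Hypothetical materialized document, shown as a Python dict.
example_doc = {
    "_id": "driver_hourly_stats:1001",   # placeholder unique identifier
    "conv_rate": "0a0b0c...",            # feature value serialized as a hex string (truncated)
    "acc_rate": "0d0e0f...",             # feature value serialized as a hex string (truncated)
    "event_ts": "2024-01-01T00:00:00",   # ISO-formatted event timestamp
    "created_ts": "2024-01-01T00:05:00", # ISO-formatted creation timestamp
}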

Example

project: my_feature_app
registry: data/registry.db
provider: local
online_store:
    ## Basic Configs ##

    # If api_key or host is left blank, the driver will try to pull
    # these values from the environment variables ROCKSET_APIKEY and
    # ROCKSET_APISERVER respectively.
    type: rockset
    api_key: <your_api_key_here>
    host: <your_region_endpoint_here>
  
    ## Advanced Configs ## 

    # Batch size of records that will be returned per page when
    # paginating a batched read.
    #
    # read_pagination_batch_size: 100

    # The amount of time, in seconds, we will wait for the
    # collection to become visible to the API.
    #
    # collection_created_timeout_secs: 60

    # The amount of time, in seconds, we will wait for the
    # collection to enter READY state.
    #
    # collection_ready_timeout_secs: 1800

    # Whether to wait for all writes to be flushed from the log
    # and queryable before returning the write as completed. If
    # False, documents that are written may not be seen
    # immediately in subsequent reads.
    #
    # fence_all_writes: True

    # The amount of time, in seconds, we will wait for the
    # write fence to be passed.
    #
    # fence_timeout_secs: 600

    # Initial backoff, in seconds, we will wait between
    # requests when polling for a response.
    #
    # initial_request_backoff_secs: 2

    # Maximum backoff, in seconds, we will wait between
    # requests when polling for a response.
    #
    # max_request_backoff_secs: 30

    # The maximum number of times we will retry a failed request.
    #
    # max_request_attempts: 10000
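
With the configuration above in place, materialization and online reads go through the standard Feast APIs. The sketch below assumes a feature view named driver_hourly_stats keyed by driver_id has already been defined and applied in the repository; those names are illustrative and not part of this integration.

from datetime import datetime, timedelta

from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Write the latest feature values from the offline store into the
# Rockset collection backing the online store.
store.materialize(
    start_date=datetime.utcnow() - timedelta(days=1),
    end_date=datetime.utcnow(),
)

# Read the materialized values back for a single entity.
features = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()
print(features)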