Couchbase Columnar (contrib)

Description

The Couchbase Columnar offline store provides support for reading CouchbaseColumnarSources. Note that Couchbase Columnar is available through Couchbase Capella.

  • Entity dataframes can be provided as a SQL++ query or as a Pandas dataframe. A Pandas dataframe will be uploaded to Couchbase Capella Columnar as a collection.
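
For illustration, here is a minimal, hedged sketch of both entity dataframe forms. The entity (driver_id), feature view (driver_hourly_stats), and SQL++ collection path are placeholders, not part of the Feast docs.

from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Option 1: entity rows as a Pandas dataframe; Feast uploads it to Capella
# Columnar as a collection before performing the point-in-time join.
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": [datetime(2024, 1, 1), datetime(2024, 1, 2)],
    }
)

# Option 2: entity rows as a SQL++ query evaluated by Capella Columnar
# (placeholder collection path).
entity_sql = """
SELECT driver_id, event_timestamp
FROM Default.Default.driver_entities
"""

training_df = store.get_historical_features(
    entity_df=entity_df,  # or entity_df=entity_sql
    features=["driver_hourly_stats:conv_rate"],
).to_df()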

Disclaimer

The Couchbase Columnar offline store does not achieve full test coverage. Please do not assume complete stability.

Getting started

In order to use this offline store, you'll need to run pip install 'feast[couchbase]'. You can then get started by running feast init -t couchbase.

To get started with Couchbase Capella Columnar:

  1. Sign up for a Couchbase Capella account
  2. Deploy a Columnar cluster
  3. Create an Access Control Account
    • This account should be able to read and write.
    • For testing purposes, it is recommended to assign all roles to avoid any permission issues.
  4. Configure allowed IP addresses
    • You must allow the IP address of the machine running Feast.

Example

feature_store.yaml
project: my_project
registry: data/registry.db
provider: local
offline_store:
  type: couchbase.offline
  connection_string: COUCHBASE_COLUMNAR_CONNECTION_STRING # Copied from Settings > Connection String page in Capella Columnar console, starts with couchbases://
  user: COUCHBASE_COLUMNAR_USER # Couchbase cluster access name from Settings > Access Control page in Capella Columnar console
  password: COUCHBASE_COLUMNAR_PASSWORD # Couchbase password from Settings > Access Control page in Capella Columnar console
  timeout: 120 # Timeout in seconds for Columnar operations, optional
online_store:
  path: data/online_store.db

Note that timeout is an optional parameter. The full set of configuration options is available in CouchbaseColumnarOfflineStoreConfig.
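
As a minimal, hedged sketch of how this configuration might be exercised, the snippet below loads the repository and materializes a window of feature values from Capella Columnar into the SQLite online store configured above. The one-day window is an arbitrary example.

from datetime import datetime, timedelta

from feast import FeatureStore

# Load the repository containing the feature_store.yaml above.
store = FeatureStore(repo_path=".")

# Read the latest feature values from the Couchbase Columnar offline store and
# write them to the configured online store.
store.materialize(
    start_date=datetime.utcnow() - timedelta(days=1),
    end_date=datetime.utcnow(),
)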

Functionality Matrix

The set of functionality supported by offline stores is described in detail here. Below is a matrix indicating which functionality is supported by the Couchbase Columnar offline store.

Couchbase Columnar:
  • get_historical_features (point-in-time correct join): yes
  • pull_latest_from_table_or_query (retrieve latest feature values): yes
  • pull_all_from_table_or_query (retrieve a saved dataset): yes
  • offline_write_batch (persist dataframes to offline store): no
  • write_logged_features (persist logged features to offline store): no

Below is a matrix indicating which functionality is supported by CouchbaseColumnarRetrievalJob.

Couchbase Columnar:
  • export to dataframe: yes
  • export to arrow table: yes
  • export to arrow batches: no
  • export to SQL: yes
  • export to data lake (S3, GCS, etc.): yes
  • export to data warehouse: yes
  • export as Spark dataframe: no
  • local execution of Python-based on-demand transforms: yes
  • remote execution of Python-based on-demand transforms: no
  • persist results in the offline store: yes
  • preview the query plan before execution: yes
  • read partitioned data: yes
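
For the two most common export paths above, a minimal hypothetical sketch (entity and feature names are placeholders):

from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# get_historical_features returns a retrieval job (here backed by
# CouchbaseColumnarRetrievalJob); names below are placeholders.
job = store.get_historical_features(
    entity_df=pd.DataFrame(
        {"driver_id": [1001], "event_timestamp": [datetime(2024, 1, 1)]}
    ),
    features=["driver_hourly_stats:conv_rate"],
)

training_df = job.to_df()     # export to dataframe
arrow_table = job.to_arrow()  # export to arrow table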

To compare this set of functionality against other offline stores, please see the full functionality matrix.