Table formats
Overview
Table formats are metadata and transaction layers built on top of data storage formats (like Parquet). They provide advanced capabilities for managing large-scale data lakes, including ACID transactions, time travel, schema evolution, and efficient data management.
Feast supports modern table formats to enable data lakehouse architectures with your feature store.
Supported Table Formats
Apache Iceberg
Apache Iceberg is an open table format designed for huge analytic datasets. It provides:
ACID transactions: Atomic commits with snapshot isolation
Time travel: Query data as of any snapshot
Schema evolution: Add, drop, rename, or reorder columns safely
Hidden partitioning: Partitioning is transparent to users
Performance: Advanced pruning and filtering
Basic Configuration
from feast.table_format import IcebergFormat

# Point Feast at an existing Iceberg catalog and namespace
iceberg_format = IcebergFormat(
    catalog="my_catalog",
    namespace="my_database"
)

Configuration Options
catalog (str, optional): Iceberg catalog name
namespace (str, optional): Namespace/schema within the catalog
properties (dict, optional): Additional Iceberg configuration properties
Common Properties
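The properties field accepts Iceberg table properties as key/value strings. A minimal sketch, assuming standard Iceberg property keys are passed through to the table; the specific keys and values shown are illustrative, not required:

```python
from feast.table_format import IcebergFormat

# The keys below are standard Iceberg table properties; which ones matter
# depends on your catalog and query engine configuration.
iceberg_format = IcebergFormat(
    catalog="my_catalog",
    namespace="my_database",
    properties={
        "write.format.default": "parquet",           # file format for new data files
        "write.parquet.compression-codec": "zstd",   # compression codec for Parquet files
        "commit.retry.num-retries": "4",             # retries on commit conflicts
    },
)
```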
Time Travel Example
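A minimal sketch of reading an earlier state of an Iceberg table with PySpark; the table name, snapshot ID, and timestamp are illustrative, and the Spark session is assumed to already be configured with the `my_catalog` Iceberg catalog:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes the Iceberg Spark runtime is configured

# Read the table as of a specific snapshot ID
df_snapshot = (
    spark.read.format("iceberg")
    .option("snapshot-id", 10963874102873)
    .load("my_catalog.my_database.driver_hourly_stats")
)

# Read the table as of a point in time (milliseconds since the epoch)
df_as_of = (
    spark.read.format("iceberg")
    .option("as-of-timestamp", "1704067200000")
    .load("my_catalog.my_database.driver_hourly_stats")
)
```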
Delta Lake
Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark and big data workloads. It provides:
ACID transactions: Serializable isolation for reads and writes
Time travel: Access and revert to earlier versions
Schema enforcement: Prevent bad data from corrupting tables
Unified batch and streaming: Process data incrementally
Audit history: Full history of all changes
Basic Configuration
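A minimal sketch, assuming a DeltaFormat class analogous to IcebergFormat is exposed by feast.table_format (verify the exact class name in your Feast version); the checkpoint location is illustrative:

```python
# DeltaFormat is assumed by analogy with IcebergFormat; check feast.table_format
# for the exact class name before using.
from feast.table_format import DeltaFormat

delta_format = DeltaFormat(
    checkpoint_location="s3://my-bucket/delta/_checkpoints",
)
```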
Configuration Options
checkpoint_location (str, optional): Location for Delta transaction log checkpoints
properties (dict, optional): Additional Delta configuration properties
Common Properties
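The properties field can carry Delta Lake table properties. A sketch using standard Delta property keys, assuming the same DeltaFormat class as in the basic configuration above; keys and values are illustrative:

```python
from feast.table_format import DeltaFormat  # class name assumed, as above

delta_format = DeltaFormat(
    properties={
        "delta.logRetentionDuration": "interval 30 days",         # how long to keep transaction log history
        "delta.deletedFileRetentionDuration": "interval 7 days",  # retention for files removed by VACUUM
        "delta.autoOptimize.optimizeWrite": "true",               # write fewer, larger files where supported
    },
)
```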
Time Travel Example
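A minimal sketch of Delta Lake time travel with PySpark and the delta-spark package; the table path, version number, and timestamp are illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes Delta Lake is configured for this session

# Read an earlier version of the table by version number
df_v3 = (
    spark.read.format("delta")
    .option("versionAsOf", 3)
    .load("s3://my-bucket/delta/driver_hourly_stats")
)

# Read the table as it was at a specific timestamp
df_jan1 = (
    spark.read.format("delta")
    .option("timestampAsOf", "2024-01-01 00:00:00")
    .load("s3://my-bucket/delta/driver_hourly_stats")
)
```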
Apache Hudi
Apache Hudi (Hadoop Upserts Deletes and Incrementals) is a data lake storage framework for simplifying incremental data processing. It provides:
Upserts and deletes: Efficient record-level updates
Incremental queries: Process only changed data
Time travel: Query historical versions
Multiple table types: Optimize for read vs. write workloads
Change data capture: Track data changes over time
Basic Configuration
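A minimal sketch, assuming a HudiFormat class analogous to IcebergFormat in feast.table_format (verify the class name in your Feast version), using the options described in the table below; field names are illustrative:

```python
# HudiFormat is assumed by analogy with IcebergFormat; check feast.table_format
# for the exact class name before using.
from feast.table_format import HudiFormat

hudi_format = HudiFormat(
    table_type="COPY_ON_WRITE",          # or "MERGE_ON_READ" for write-heavy workloads
    record_key="driver_id",              # field that uniquely identifies a record
    precombine_field="event_timestamp",  # latest value wins when records collide
)
```

MERGE_ON_READ is usually the better choice for write-heavy or streaming ingestion, as described under Table Types below.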
Configuration Options
table_type (str, optional): COPY_ON_WRITE or MERGE_ON_READ
record_key (str, optional): Field(s) that uniquely identify a record
precombine_field (str, optional): Field used to determine the latest version of a record
properties (dict, optional): Additional Hudi configuration properties
Table Types
COPY_ON_WRITE (COW)
Stores data in columnar format (Parquet)
Updates create new file versions
Best for read-heavy workloads
Lower query latency
MERGE_ON_READ (MOR)
Uses columnar + row-based formats
Updates written to delta logs
Best for write-heavy workloads
Lower write latency
Common Properties
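The properties field can carry Hudi (hoodie) configuration options. A sketch using standard hoodie keys, assuming the same HudiFormat class as in the basic configuration above; keys and values are illustrative:

```python
from feast.table_format import HudiFormat  # class name assumed, as above

hudi_format = HudiFormat(
    table_type="MERGE_ON_READ",
    record_key="driver_id",
    precombine_field="event_timestamp",
    properties={
        "hoodie.datasource.write.hive_style_partitioning": "true",  # partition dirs as key=value
        "hoodie.compact.inline": "true",                            # compact delta logs during writes
        "hoodie.compact.inline.max.delta.commits": "5",             # compaction frequency
    },
)
```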
Incremental Query Example
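A minimal PySpark sketch of a Hudi incremental query, which returns only records changed after a given commit instant; the base path and begin instant time are illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes the Hudi Spark bundle is on the classpath

# Read only records committed after the given instant time
incremental_df = (
    spark.read.format("hudi")
    .option("hoodie.datasource.query.type", "incremental")
    .option("hoodie.datasource.read.begin.instanttime", "20240101000000")
    .load("s3://my-bucket/hudi/driver_hourly_stats")
)
```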
Table Format vs File Format
It's important to understand the distinction:
What it is: a file format is the physical encoding of the data; a table format is a metadata and transaction layer on top of it.
Examples: file formats include Parquet, Avro, ORC, and CSV; table formats include Iceberg, Delta Lake, and Hudi.
Handles: file formats handle data serialization; table formats handle ACID transactions, versioning, and schema evolution.
Layer: file formats sit at the storage layer; table formats sit at the metadata layer.
The two are complementary and are used together: a table format manages data files that are stored in a file format such as Parquet.
Benefits of Table Formats
Reliability
ACID transactions: Ensure data consistency across concurrent operations
Automatic retries: Handle transient failures gracefully
Schema validation: Prevent incompatible schema changes
Data quality: Constraints and validation rules
Performance
Data skipping: Read only relevant files based on metadata
Partition pruning: Skip entire partitions based on predicates
Compaction: Merge small files for better performance
Columnar pruning: Read only necessary columns
Indexing: Advanced indexing for fast lookups
Flexibility
Schema evolution: Add, remove, or modify columns without rewriting data
Time travel: Access historical data states for auditing or debugging
Incremental processing: Process only changed data efficiently
Multiple readers/writers: Concurrent access without conflicts
Choosing the Right Table Format
Large-scale analytics with frequent schema changes
Iceberg
Best schema evolution, hidden partitioning, mature ecosystem
Streaming + batch workloads
Delta Lake
Unified architecture, strong integration with Spark, good docs
CDC and upsert-heavy workloads
Hudi
Efficient record-level updates, incremental queries
Read-heavy analytics
Iceberg or Delta
Excellent query performance
Write-heavy transactional
Hudi (MOR)
Optimized for fast writes
Multi-engine support
Iceberg
Widest engine support (Spark, Flink, Trino, etc.)
Best Practices
1. Choose Appropriate Partitioning
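For example, partition on a coarse transform of the event timestamp rather than on a high-cardinality key. A sketch using Iceberg's hidden partitioning through Spark SQL; the table schema and names are illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes the Iceberg Spark runtime is configured

# Hidden partitioning: the table is partitioned by day of event_timestamp without
# exposing a separate partition column; filters on event_timestamp prune automatically.
spark.sql("""
    CREATE TABLE my_catalog.my_database.driver_hourly_stats (
        driver_id       BIGINT,
        conv_rate       FLOAT,
        event_timestamp TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(event_timestamp))
""")
```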
2. Enable Optimization Features
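For example, periodic small-file compaction keeps query planning and scans fast. A sketch using Iceberg's rewrite_data_files procedure and Delta Lake's OPTIMIZE command via Spark SQL; table names and paths are illustrative, and scheduling is up to you:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Iceberg: rewrite many small data files into fewer, larger ones
spark.sql(
    "CALL my_catalog.system.rewrite_data_files(table => 'my_database.driver_hourly_stats')"
)

# Delta Lake: compact files and co-locate data on a frequently filtered column
spark.sql(
    "OPTIMIZE delta.`s3://my-bucket/delta/driver_hourly_stats` ZORDER BY (driver_id)"
)
```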
3. Manage Table History
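For example, expiring old snapshots keeps time travel history and table metadata bounded. A sketch for Iceberg and Delta Lake via Spark SQL; the retention values are illustrative and should match your audit requirements:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Iceberg: remove snapshots older than the given timestamp, keeping at least 10
spark.sql("""
    CALL my_catalog.system.expire_snapshots(
        table => 'my_database.driver_hourly_stats',
        older_than => TIMESTAMP '2024-01-01 00:00:00',
        retain_last => 10
    )
""")

# Delta Lake: delete data files no longer referenced by the transaction log
spark.sql("VACUUM delta.`s3://my-bucket/delta/driver_hourly_stats` RETAIN 168 HOURS")
```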
4. Monitor Metadata Size
Table formats maintain metadata for all operations
Monitor metadata size and clean up old versions
Configure retention policies based on your needs
5. Test Schema Evolution
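For example, rehearse column additions and renames against a staging table before applying them in production. A sketch using Iceberg DDL through Spark SQL; table and column names are illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Add a new optional column; existing data files are not rewritten
spark.sql(
    "ALTER TABLE my_catalog.my_database.driver_hourly_stats ADD COLUMNS (acc_rate FLOAT)"
)

# Rename a column; readers see the new name for old and new data alike
spark.sql(
    "ALTER TABLE my_catalog.my_database.driver_hourly_stats RENAME COLUMN conv_rate TO conversion_rate"
)
```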
Data Source Support
Currently, table formats are supported with:
Spark data source - Full support for Iceberg, Delta, and Hudi
Future support planned for:
BigQuery (Iceberg)
Snowflake (Iceberg)
Other data sources
See Also