[Alpha] Static Artifacts Loading

Warning: This is an experimental feature. To the best of our knowledge it is stable, but there are still rough edges in the experience. Contributions are welcome!

Overview

Static Artifacts Loading allows you to load models, lookup tables, and other static resources once during feature server startup instead of loading them on each request. These artifacts are cached in memory and accessible to on-demand feature views for real-time inference.

This feature optimizes the performance of on-demand feature views that require external resources by eliminating the overhead of repeatedly loading the same artifacts during request processing.

Why Use Static Artifacts Loading?

Static artifacts loading enables data scientists and ML engineers to:

  1. Improve performance: Eliminate model loading overhead from each feature request

  2. Enable complex transformations: Use pre-trained models in on-demand feature views without performance penalties

  3. Share resources: Multiple feature views can access the same loaded artifacts

  4. Simplify deployment: Package models and lookup tables with your feature repository

Common use cases include:

  • Sentiment analysis using pre-trained transformers models

  • Text classification with small neural networks

  • Lookup-based transformations using static dictionaries

  • Embedding generation with pre-computed vectors

How It Works

  1. Feature Repository Setup: Create a static_artifacts.py file in your feature repository root

  2. Server Startup: When feast serve starts, it automatically looks for and loads the artifacts

  3. Memory Storage: Artifacts are stored in the FastAPI application state and accessible via global references

  4. Request Processing: On-demand feature views access pre-loaded artifacts for fast transformations

Example 1: Basic Model Loading

Create a static_artifacts.py file in your feature repository:
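For example, here is a minimal sketch that unpickles a small model at startup. The file name sentiment_model.pkl, the module-level ARTIFACTS cache, and the optional model_path parameter are illustrative assumptions, not part of the Feast contract; the only required piece is the load_artifacts(app) function.

```python
# static_artifacts.py -- loaded automatically when `feast serve` starts.
import logging
import pickle

logger = logging.getLogger(__name__)

# Module-level cache so on-demand feature views can reach the artifacts
# with a plain import, alongside the FastAPI application state.
ARTIFACTS: dict = {}


def load_artifacts(app, model_path: str = "sentiment_model.pkl") -> None:
    """Called once by the feature server during startup."""
    try:
        with open(model_path, "rb") as f:
            ARTIFACTS["sentiment_model"] = pickle.load(f)
        logger.info("Loaded sentiment model from %s", model_path)
    except (FileNotFoundError, pickle.UnpicklingError) as exc:
        # Graceful degradation: the server still starts; feature views
        # should fall back to defaults (see Error Handling below).
        logger.warning("Could not load %s: %s", model_path, exc)
    # Mirror the cache into the application state for request handlers.
    app.state.static_artifacts = ARTIFACTS
```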

Use the pre-loaded model in your on-demand feature view:
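In a real repository the view is declared with Feast's @on_demand_feature_view decorator over a pandas transformation; the sketch below shows only the transformation body. ARTIFACTS is inlined here to stand in for the cache populated by static_artifacts.py, and the column names and fallback score are assumptions.

```python
import pandas as pd

# In a real repo this would be `from static_artifacts import ARTIFACTS`;
# it is inlined here so the sketch is self-contained.
ARTIFACTS: dict = {}


# @on_demand_feature_view(sources=[...], schema=[...])  # decorator omitted
def sentiment_features(inputs: pd.DataFrame) -> pd.DataFrame:
    """Score each input text with the pre-loaded model."""
    out = pd.DataFrame()
    model = ARTIFACTS.get("sentiment_model")
    if model is None:
        # Fallback when the artifact was not loaded at startup.
        out["sentiment_score"] = [0.0] * len(inputs)
    else:
        out["sentiment_score"] = [model.predict(t) for t in inputs["text"]]
    return out
```

Because the model was loaded once at startup, the per-request cost is just the predict call.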

Example 2: Multiple Artifacts with Lookup Tables

Load multiple types of artifacts:
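A sketch of a static_artifacts.py that loads a model, a lookup table, and configuration data; the file names and artifact keys are illustrative assumptions. Loading each artifact independently gives the partial-failure behavior described under Error Handling below.

```python
# static_artifacts.py -- loading several artifact types at once.
import json
import logging
import pickle

logger = logging.getLogger(__name__)
ARTIFACTS: dict = {}


def _load_pickle(path):
    with open(path, "rb") as f:
        return pickle.load(f)


def _load_json(path):
    with open(path) as f:
        return json.load(f)


def load_artifacts(app) -> None:
    """Load each artifact independently so that a failure in one
    leaves the successfully loaded artifacts available."""
    specs = {
        "sentiment_model": (_load_pickle, "sentiment_model.pkl"),
        "label_mapping": (_load_json, "label_mapping.json"),
        "model_config": (_load_json, "model_config.json"),
    }
    for name, (loader, path) in specs.items():
        try:
            ARTIFACTS[name] = loader(path)
        except Exception as exc:
            logger.warning("Skipping artifact %r: %s", name, exc)
    app.state.static_artifacts = ARTIFACTS
```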

Use multiple artifacts in feature transformations:
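As above, only the transformation body of the on-demand feature view is sketched here; the column names, artifact keys, and the "unknown" default are assumptions, and ARTIFACTS is inlined in place of an import from static_artifacts.py.

```python
import pandas as pd

# Stand-in for `from static_artifacts import ARTIFACTS`.
ARTIFACTS: dict = {}


# @on_demand_feature_view(...)  # decorator omitted; transformation body only
def enriched_features(inputs: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame()
    # Lookup-based transformation backed by a static dictionary.
    mapping = ARTIFACTS.get("label_mapping", {})
    out["category_label"] = [
        mapping.get(c, "unknown") for c in inputs["category_id"]
    ]
    # Model-based transformation with a fallback score.
    model = ARTIFACTS.get("sentiment_model")
    out["sentiment_score"] = (
        [model.predict(t) for t in inputs["text"]]
        if model is not None
        else [0.0] * len(inputs)
    )
    return out
```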

Container Deployment

Static artifacts work with containerized deployments. Include your artifacts in the container image:
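An illustrative Dockerfile; the base image tag, directory layout, and artifact file names are assumptions about your project, not requirements.

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Feature repository, including static_artifacts.py at its root.
COPY feature_repo/ /app/

# Artifact files referenced by static_artifacts.py.
COPY artifacts/sentiment_model.pkl /app/
COPY artifacts/label_mapping.json /app/

RUN pip install feast

CMD ["feast", "serve", "--host", "0.0.0.0", "--port", "6566"]
```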

The server will automatically load static artifacts during container startup.

Supported Artifact Types

  • Small ML models: Sentiment analysis, text classification, small neural networks

  • Lookup tables: Label mappings, category dictionaries, user segments

  • Configuration data: Model parameters, feature mappings, business rules

  • Pre-computed embeddings: User vectors, item features, static representations

Not Recommended For

  • Large Language Models: Use dedicated serving solutions (vLLM, TensorRT-LLM, TGI)

  • Models requiring specialized hardware: GPU clusters, TPUs

  • Frequently updated models: Consider model registries with versioning

  • Large datasets: Use feature views with proper data sources instead

Error Handling

Static artifacts loading includes graceful error handling:

  • Missing file: Server starts normally without static artifacts

  • Loading errors: Warnings are logged; feature views should implement fallback logic

  • Partial failures: Successfully loaded artifacts remain available

Always implement fallback behavior in your feature transformations:
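For example, model access can be wrapped so that a missing or broken artifact yields a neutral default instead of a request failure. The names and the default value are illustrative, and ARTIFACTS again stands in for the cache from static_artifacts.py.

```python
import logging

logger = logging.getLogger(__name__)
ARTIFACTS: dict = {}  # stand-in for the cache from static_artifacts.py

NEUTRAL_SCORE = 0.0  # assumed default when the model is unavailable


def score_text(text: str) -> float:
    """Return a model score, falling back to a neutral default."""
    model = ARTIFACTS.get("sentiment_model")
    if model is None:
        # Artifact never loaded: serve the default rather than fail.
        return NEUTRAL_SCORE
    try:
        return float(model.predict(text))
    except Exception as exc:
        logger.warning("Scoring failed, using fallback: %s", exc)
        return NEUTRAL_SCORE
```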

Starting the Feature Server

Start the feature server as usual:
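No extra flags are needed for artifact loading; by default the server listens on port 6566.

```shell
# Run from the feature repository root (where static_artifacts.py lives).
feast serve
```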

You'll see log messages indicating whether the artifacts were loaded successfully.

Template Example

The PyTorch NLP template demonstrates static artifacts loading.

This template includes a complete example with sentiment analysis model loading, lookup tables, and integration with on-demand feature views.

Performance Considerations

  • Startup time: Artifacts are loaded during server initialization, which may increase startup time

  • Memory usage: All artifacts remain in memory for the server's lifetime

  • Concurrency: Artifacts are shared across all request threads

  • Container resources: Ensure sufficient memory allocation for your artifacts

Configuration

Currently, static artifacts loading uses convention-based configuration:

  • File name: Must be named static_artifacts.py

  • Location: Must be in the feature repository root directory

  • Function name: Must implement a load_artifacts(app: FastAPI) function

Limitations

  • File name and location are currently fixed (not configurable)

  • Artifacts are loaded synchronously during startup

  • No built-in artifact versioning or hot reloading

  • Limited to Python-based artifacts (no external binaries)

Contributing

This is an alpha feature and we welcome contributions! Areas for improvement:

  • Configurable artifact file locations

  • Asynchronous artifact loading

  • Built-in artifact versioning

  • Performance monitoring and metrics

  • Integration with model registries

Please report issues and contribute improvements via the Feast GitHub repository.
