GenAI
Overview
Feast provides robust support for Generative AI (GenAI) applications, enabling teams to build, deploy, and manage feature infrastructure for Large Language Models (LLMs) and other GenAI workloads. With Feast's vector database integrations and feature management capabilities, teams can implement production-ready Retrieval Augmented Generation (RAG) systems and other GenAI applications with the same reliability and operational rigor as traditional ML systems.
Key Capabilities for GenAI
Vector Database Support
Feast integrates with popular vector databases to store and retrieve embedding vectors efficiently:
Milvus: Full support for vector similarity search with the retrieve_online_documents_v2 method
SQLite: Local vector storage and retrieval for development and testing
Elasticsearch: Scalable vector search capabilities
Postgres with PGVector: SQL-based vector operations
Qdrant: Purpose-built vector database integration
These integrations allow you to:
Store embeddings as features
Perform vector similarity search to find relevant context
Retrieve both vector embeddings and traditional features in a single API call
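For example, an embedding can be declared as a regular field on a feature view and flagged as a vector index so that a vector-capable online store can search it. The sketch below assumes a recent Feast release that supports the vector_index and vector_search_metric arguments on Field; the entity, source path, and feature names are hypothetical.

```python
from datetime import timedelta

from feast import Entity, FeatureView, Field, FileSource
from feast.types import Array, Float32, String

# Hypothetical entity and source, named for illustration only.
chunk = Entity(name="chunk", join_keys=["chunk_id"])

chunks_source = FileSource(
    name="document_chunks_source",
    path="data/chunks.parquet",
    timestamp_field="event_timestamp",
)

document_chunks = FeatureView(
    name="document_chunks",
    entities=[chunk],
    ttl=timedelta(days=365),
    schema=[
        Field(name="chunk_text", dtype=String),
        # Marking the field as a vector index enables similarity search in
        # vector-capable online stores (e.g. Milvus); these arguments are
        # available in recent Feast versions.
        Field(
            name="embedding",
            dtype=Array(Float32),
            vector_index=True,
            vector_search_metric="COSINE",
        ),
    ],
    source=chunks_source,
)
```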
Retrieval Augmented Generation (RAG)
Feast simplifies building RAG applications by providing:
Embedding storage: Store and version embeddings alongside your other features
Vector similarity search: Find the most relevant data/documents for a given query
Feature retrieval: Combine embeddings with structured features for richer context
Versioning and governance: Track changes to your document repository over time
The typical RAG workflow with Feast involves:
Generating embeddings for your documents or other data
Storing those embeddings, along with related structured features, in a vector-enabled online store
Converting the user query into an embedding at request time
Retrieving the most similar records via vector similarity search
Passing the retrieved context to an LLM to ground its response
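A minimal sketch of the retrieval step is shown below. It assumes the document_chunks feature view illustrated above, a vector-capable online store (for example Milvus) configured in feature_store.yaml, and the sentence-transformers package; the model and feature names are illustrative, and the query must be embedded with the same model used for the stored documents.

```python
from feast import FeatureStore
from sentence_transformers import SentenceTransformer

store = FeatureStore(repo_path=".")
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

# Embed the user's question with the same model used to embed the documents.
question = "How do I rotate my API credentials?"
query_embedding = encoder.encode(question).tolist()

# Vector similarity search plus regular feature retrieval in a single call.
context_df = store.retrieve_online_documents_v2(
    features=[
        "document_chunks:embedding",
        "document_chunks:chunk_text",
    ],
    query=query_embedding,
    top_k=3,
).to_df()

# Concatenate the retrieved chunks into context for the LLM prompt.
context = "\n\n".join(context_df["chunk_text"])
```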
Transforming Unstructured Data to Structured Data
Feast provides powerful capabilities for transforming unstructured data (like PDFs, text documents, and images) into structured embeddings that can be used for RAG applications:
Document Processing Pipelines: Integrate with document processing tools like Docling to extract text from PDFs and other document formats
Chunking and Embedding Generation: Process documents into smaller chunks and generate embeddings using models like Sentence Transformers
On-Demand Transformations: Use the @on_demand_feature_view decorator to transform raw documents into embeddings in real time (see the sketch after this list)
Batch Processing with Spark: Scale document processing for large datasets using Spark integration
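As a rough sketch of that on-demand path, the decorator below embeds request-time text. It assumes the sentence-transformers package; the request source, feature names, and model are hypothetical, and the embedding dimension must match whatever the online store is configured for.

```python
from typing import Any, Dict

from feast import Field, RequestSource
from feast.on_demand_feature_view import on_demand_feature_view
from feast.types import Array, Float32, String
from sentence_transformers import SentenceTransformer

# Raw text arrives with the request rather than from a batch source.
document_request = RequestSource(
    name="document_request",
    schema=[Field(name="raw_text", dtype=String)],
)

_encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model

@on_demand_feature_view(
    sources=[document_request],
    schema=[Field(name="text_embedding", dtype=Array(Float32))],
    mode="python",
)
def embed_raw_text(inputs: Dict[str, Any]) -> Dict[str, Any]:
    # In python mode each input is a list of values, one per row.
    embeddings = _encoder.encode(inputs["raw_text"])
    return {"text_embedding": [vector.tolist() for vector in embeddings]}
```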
The transformation workflow typically involves:
Raw Data Ingestion: Load documents or other data from various sources (file systems, databases, etc.)
Text Extraction: Extract text content from unstructured documents
Chunking: Split documents into smaller, semantically meaningful chunks
Embedding Generation: Convert text chunks into vector embeddings
Storage: Store embeddings and metadata in Feast's feature store
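A simplified batch version of this workflow might look like the following sketch. Text extraction is assumed to have already happened (for example via Docling), the chunking is naive fixed-size splitting, and the document_chunks feature view name matches the earlier illustration; a production pipeline would usually also land the data in the offline store rather than writing only to the online store.

```python
from datetime import datetime, timezone

import pandas as pd
from feast import FeatureStore
from sentence_transformers import SentenceTransformer

def chunk_text(text: str, size: int = 500) -> list[str]:
    # Naive fixed-size chunking; real pipelines usually split on sentence
    # or section boundaries instead.
    return [text[i : i + size] for i in range(0, len(text), size)]

def run_ingestion(documents: dict[str, str]) -> None:
    """documents maps a document id to its already-extracted text."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model
    rows = []
    for doc_id, text in documents.items():
        chunks = chunk_text(text)
        embeddings = encoder.encode(chunks)
        for i, (chunk, embedding) in enumerate(zip(chunks, embeddings)):
            rows.append(
                {
                    "chunk_id": f"{doc_id}-{i}",
                    "chunk_text": chunk,
                    "embedding": embedding.tolist(),
                    "event_timestamp": datetime.now(timezone.utc),
                }
            )

    df = pd.DataFrame(rows)
    store = FeatureStore(repo_path=".")
    # Push chunk text and embeddings straight to the online (vector) store.
    store.write_to_online_store("document_chunks", df)
```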
Feature Transformation for LLMs
Feast supports transformations that can be used to:
Process raw text into embeddings
Chunk documents for more effective retrieval
Normalize and preprocess features before serving to LLMs
Apply custom transformations to adapt features for specific LLM requirements
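For instance, a small serving-time transformation can trim retrieved chunks to a fixed character budget before they are placed into a prompt. The sketch below uses the pandas mode of @on_demand_feature_view and assumes the document_chunks feature view from the earlier sketch; the output field name and the 2,000-character budget are arbitrary.

```python
import pandas as pd
from feast import Field
from feast.on_demand_feature_view import on_demand_feature_view
from feast.types import String

@on_demand_feature_view(
    sources=[document_chunks],  # the feature view sketched earlier
    schema=[Field(name="chunk_text_trimmed", dtype=String)],
    mode="pandas",
)
def trimmed_chunks(features: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame()
    # Keep each chunk within a fixed character budget before prompting.
    out["chunk_text_trimmed"] = features["chunk_text"].str.slice(0, 2000)
    return out
```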
Use Cases
Document Question-Answering
Build document Q&A systems by:
Storing document chunks and their embeddings in Feast
Converting user questions to embeddings
Retrieving relevant document chunks
Providing these chunks as context to an LLM
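The last step is handing the retrieved chunks to an LLM. The sketch below assumes the retrieval code shown earlier has already produced a list of chunk strings; it uses the OpenAI client as one possible backend, any other LLM client could be substituted, and the model name is a placeholder.

```python
from openai import OpenAI

def answer_question(question: str, chunks: list[str]) -> str:
    """Build a prompt from retrieved chunks and ask an LLM to answer."""
    context = "\n\n".join(chunks)
    client = OpenAI()  # any LLM client could be used here
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Answer using only the provided context.",
            },
            {
                "role": "user",
                "content": f"Context:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content
```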
Knowledge Base Augmentation
Enhance your LLM's knowledge by:
Storing company-specific information as embeddings
Retrieving relevant information based on user queries
Injecting this information into the LLM's context
Semantic Search
Implement semantic search by:
Storing document embeddings in Feast
Converting search queries to embeddings
Finding semantically similar documents using vector search
Scaling with Spark Integration
Feast integrates with Apache Spark to enable large-scale processing of unstructured data for GenAI applications:
Spark Data Source: Load data from Spark tables, files, or SQL queries for feature generation
Spark Offline Store: Process large document collections and generate embeddings at scale
Spark Batch Materialization: Efficiently materialize features from offline to online stores
Distributed Processing: Handle gigabytes of documents and millions of embeddings
This integration enables:
Processing large document collections in parallel
Generating embeddings for millions of text chunks
Efficiently materializing features to vector databases
Scaling RAG applications to enterprise-level document repositories
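As a rough sketch, a feature view can read chunks and embeddings produced by a Spark job directly from a Spark table, keeping the heavy processing in Spark while Feast handles materialization and retrieval. This assumes the Feast Spark offline store contrib module is installed and configured; the table, column, and entity names are hypothetical.

```python
from datetime import timedelta

from feast import Entity, FeatureView, Field
from feast.infra.offline_stores.contrib.spark_offline_store.spark_source import (
    SparkSource,
)
from feast.types import Array, Float32, String

chunk = Entity(name="chunk", join_keys=["chunk_id"])

# Chunks and embeddings produced by a Spark job, stored in a Spark table.
spark_chunks_source = SparkSource(
    name="spark_document_chunks",
    table="warehouse.document_chunks",
    timestamp_field="event_timestamp",
)

spark_document_chunks = FeatureView(
    name="spark_document_chunks",
    entities=[chunk],
    ttl=timedelta(days=365),
    schema=[
        Field(name="chunk_text", dtype=String),
        Field(name="embedding", dtype=Array(Float32)),
    ],
    source=spark_chunks_source,
)
```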
Learn More
For more detailed information and examples: