The quickstart is the easiest way to learn about Feast. For more detailed tutorials, please check out the tutorials page.
Feature tables from Feast 0.9 have been renamed to feature views in Feast 0.10+. For more details, please see the discussion here.
No. Feature views can be defined without entities.
Feast currently does not support any access control other than the access control required for the Provider's environment (for example, GCP and AWS permissions).
Yes. In earlier versions of Feast, we used Feast Spark to manage ingestion from stream sources. In the current version of Feast, we support push-based ingestion.
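As a minimal sketch of push-based ingestion, the snippet below writes rows directly to the online store, assuming a feature repo that already contains a feature view named `driver_hourly_stats` keyed on `driver_id` (both names are illustrative):

```python
# Hedged sketch: push rows straight into the online store.
# Assumes an existing feature view "driver_hourly_stats" keyed on driver_id.
from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

event_df = pd.DataFrame(
    {
        "driver_id": [1001],
        "conv_rate": [0.85],
        "event_timestamp": [datetime.utcnow()],
        "created": [datetime.utcnow()],
    }
)

# Push-based ingestion: write the rows directly to the online store.
store.write_to_online_store(feature_view_name="driver_hourly_stats", df=event_df)
```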
A feature view can be defined with multiple entities. Since each entity has a unique join_key, using multiple entities will achieve the effect of a composite key.
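For example, the sketch below (entity, field, and source names are illustrative, and exact argument names vary slightly across Feast versions) defines a feature view keyed on two entities, so lookups use the composite key `(driver_id, customer_id)`:

```python
# Hedged sketch: two entities on one feature view act as a composite key.
from datetime import timedelta

from feast import Entity, FeatureView, Field, FileSource
from feast.types import Float32

driver = Entity(name="driver", join_keys=["driver_id"])
customer = Entity(name="customer", join_keys=["customer_id"])

source = FileSource(
    path="data/driver_customer_stats.parquet",  # illustrative path
    timestamp_field="event_timestamp",
)

driver_customer_stats = FeatureView(
    name="driver_customer_stats",
    entities=[driver, customer],  # composite key: (driver_id, customer_id)
    ttl=timedelta(days=1),
    schema=[Field(name="trip_conv_rate", dtype=Float32)],
    source=source,
)
```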
Please see a detailed comparison of Feast vs. Tecton here. For another comparison, please see here.
Feast is designed to work at scale and support low-latency online serving. Benchmarks will be released soon, and active work is underway to support very latency-sensitive use cases.
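For context, online serving goes through the `get_online_features` API. A minimal sketch, with illustrative feature references and entity keys:

```python
# Hedged sketch: a single online feature lookup.
from feast import FeatureStore

store = FeatureStore(repo_path=".")

features = store.get_online_features(
    features=["driver_hourly_stats:conv_rate"],  # illustrative feature reference
    entity_rows=[{"driver_id": 1001}],
).to_dict()

print(features)
```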
Yes. Specifically:
* Simple lists / dense embeddings:
  * BigQuery supports list types natively.
  * Redshift does not support list types, so you'll need to serialize these features into strings (e.g. JSON or protocol buffers).
  * Feast's implementation of online stores serializes features into Feast protocol buffers and supports list types (see reference and the sketch after this list).
* Sparse embeddings (e.g. one-hot encodings):
  * One way to do this efficiently is to use a protobuf or string representation of a sparse tensor (https://www.tensorflow.org/guide/sparse_tensor).
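A minimal sketch of how both cases could be declared in a feature view's schema, assuming a newer Feast version where `Field` and the `Array` type are available (field names are illustrative):

```python
# Hedged sketch: list-typed and string-serialized embedding fields.
from feast import Field
from feast.types import Array, Float32, String

# Dense embedding stored natively as a list of floats.
dense_embedding = Field(name="item_embedding", dtype=Array(Float32))

# Sparse embedding serialized by the user (e.g. JSON or a protobuf string).
sparse_embedding = Field(name="item_sparse_embedding", dtype=String)
```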
The list of supported offline and online stores can be found here and here, respectively. The roadmap indicates the stores for which we are planning to add support. Finally, our Provider abstraction is built to be extensible, so you can plug in your own implementations of offline and online stores. Please see more details about custom providers here.
Please follow the instructions here.
Yes. There are two ways to use S3 in Feast:
* Using Redshift as a data source via Spectrum (AWS tutorial), and then continuing with the Running Feast with GCP/AWS guide. See a presentation we did on this at our apply() meetup.
* Using the s3_endpoint_override in a FileSource data source, as shown in the sketch below. This option is more suitable for quick proofs of concept that won't necessarily scale for production use cases.
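A minimal sketch of the second option, assuming an S3-compatible endpoint such as MinIO (the bucket, path, and endpoint URL are illustrative, and the timestamp argument name varies by Feast version):

```python
# Hedged sketch: reading parquet files from S3-compatible storage
# via the s3_endpoint_override argument of FileSource.
from feast import FileSource

driver_stats_source = FileSource(
    path="s3://my-bucket/driver_stats.parquet",   # illustrative bucket/path
    timestamp_field="event_timestamp",
    s3_endpoint_override="http://localhost:9000",  # illustrative endpoint
)
```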
Feast does not support Spark natively. However, you can create a custom provider that will support Spark, which can help with more scalable materialization and ingestion.
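A minimal sketch of such a provider, following the custom provider pattern of subclassing LocalProvider (the class name, module path, and the commented-out Spark call are illustrative, not a built-in integration):

```python
# Hedged sketch: a custom provider that could hand materialization off to Spark.
# Module paths may differ across Feast versions; type hints are omitted for brevity.
from datetime import datetime

from feast.infra.local import LocalProvider


class MySparkProvider(LocalProvider):
    def materialize_single_feature_view(
        self, config, feature_view, start_date: datetime, end_date: datetime,
        registry, project: str, tqdm_builder,
    ):
        # Illustrative: submit a Spark job here instead of materializing locally,
        # e.g. spark_submit_materialization_job(feature_view, start_date, end_date)
        super().materialize_single_feature_view(
            config, feature_view, start_date, end_date, registry, project, tqdm_builder
        )
```

The custom provider is then referenced by its module path in the provider field of feature_store.yaml; see the custom provider documentation for the full interface.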
Please see the roadmap.
Feast 0.10+ is much lighter weight and more extensible than Feast 0.9. It is designed to be simple to install and use. Please see this document for more details.
Please see this document. If you have any questions or suggestions, feel free to leave a comment on the document!
For more details on contributing to the Feast community, see here and here.
Feast Core and Feast Serving were both part of Feast Java. We plan to support Feast Serving. We will not support Feast Core; instead, we will support our object store-based registry. We will not support Feast Spark. For more details on what we plan on supporting, please see the roadmap.
Don't see your question?
We encourage you to ask questions on Slack or GitHub. Even better, once you get an answer, add it to this FAQ via a pull request!