Feast users can choose to retrieve features from a feature server, as opposed to through the Python SDK.
The Python feature server is an HTTP endpoint that serves features with JSON I/O. This enables users to write and read features from the online store using any programming language that can make HTTP requests.
There is a CLI command that starts the server: feast serve. By default, Feast uses port 6566; the port can be overridden with a --port flag.
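For example, a minimal invocation that serves on a non-default port (assuming the command is run from the root of an applied feature repository):

```bash
# Start the Python feature server on port 6567 instead of the default 6566.
# Run this from the directory that contains feature_store.yaml.
feast serve --port 6567
```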
One can deploy a feature server by building a Docker image that bundles in the project's feature_store.yaml. See this Helm chart for an example of how to run Feast on Kubernetes.
A remote feature server on AWS Lambda is also available.
Here's an example of how to start the Python feature server with a local feature repo:
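A minimal sketch, assuming the demo repository created by feast init is used:

```bash
# Create and apply a demo feature repository, load features into the
# online store, and start the feature server (port 6566 by default).
feast init feature_repo
cd feature_repo
feast apply
feast materialize-incremental $(date +%Y-%m-%d)
feast serve
```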
After the server starts, we can execute cURL commands from another terminal tab:
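For example (a sketch; the feature and entity names below come from the demo repository generated by feast init and should be replaced with your own):

```bash
# Request online features for three driver entities.
curl -X POST \
  "http://localhost:6566/get-online-features" \
  -d '{
    "features": [
      "driver_hourly_stats:conv_rate",
      "driver_hourly_stats:acc_rate",
      "driver_hourly_stats:avg_daily_trips"
    ],
    "entities": {
      "driver_id": [1001, 1002, 1003]
    }
  }'
```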
It's also possible to specify a feature service name instead of the list of features:
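For instance, assuming a feature service named driver_activity has been defined in the repository:

```bash
# Retrieve all features belonging to the "driver_activity" feature service.
curl -X POST \
  "http://localhost:6566/get-online-features" \
  -d '{
    "feature_service": "driver_activity",
    "entities": {
      "driver_id": [1001, 1002, 1003]
    }
  }'
```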
The Python feature server also exposes an endpoint for push sources. This endpoint allows you to push data to the online and/or offline store.
In the request definition, PushMode is a string parameter named to, whose options are ["online", "offline", "online_and_offline"].
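For example (a sketch; driver_stats_push_source and the column names are assumed to match a PushSource defined in your feature repository):

```bash
# Push a single row to both the online and offline stores.
curl -X POST \
  "http://localhost:6566/push" \
  -d '{
    "push_source_name": "driver_stats_push_source",
    "df": {
      "driver_id": [1001],
      "event_timestamp": ["2022-05-13 10:59:42"],
      "created": ["2022-05-13 10:59:42"],
      "conv_rate": [1.0],
      "acc_rate": [1.0],
      "avg_daily_trips": [1000]
    },
    "to": "online_and_offline"
  }'
```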
Note: timestamps need to be strings, and might need to be timezone aware (matching the schema of the offline store)
or equivalently from Python:
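(a sketch mirroring the cURL request above; only the standard requests library is assumed)

```python
import json

import requests

# Same push request as the cURL example above, sent from Python.
# The push source name and column names must match your feature repository.
event_dict = {
    "driver_id": [1001],
    "event_timestamp": ["2022-05-13 10:59:42"],
    "created": ["2022-05-13 10:59:42"],
    "conv_rate": [1.0],
    "acc_rate": [1.0],
    "avg_daily_trips": [1000],
}
push_payload = {
    "push_source_name": "driver_stats_push_source",
    "df": event_dict,
    "to": "online_and_offline",
}
response = requests.post("http://localhost:6566/push", data=json.dumps(push_payload))
response.raise_for_status()
```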
The Go feature server is an HTTP/gRPC endpoint that serves features. It is written in Go, and is therefore significantly faster than the Python feature server. See this blog post for more details on the comparison between Python and Go. In general, we recommend the Go feature server for all production use cases that require extremely low-latency feature serving. Currently only the Redis and SQLite online stores are supported.
By default, the Go feature server is turned off. To turn it on you can add go_feature_serving: True to your feature_store.yaml:
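A sketch of such a configuration (the project name, registry path, and Redis connection string are placeholders for your own values):

```yaml
# feature_store.yaml
project: my_feature_repo
registry: data/registry.db
provider: local
online_store:
  type: redis
  connection_string: "localhost:6379"
go_feature_serving: True
```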
Then the feast serve CLI command will start the Go feature server. As with Python, the Go feature server uses port 6566 by default; the port can be overridden with a --port flag. Moreover, the server uses HTTP by default, but can be set to use gRPC with --type=grpc.
Alternatively, if you wish to experiment with the Go feature server instead of permanently turning it on, you can just run feast serve --go.
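For example (illustrative invocations of the flags described above):

```bash
# Start the Go feature server over gRPC on a non-default port
# (assumes go_feature_serving: True is set in feature_store.yaml).
feast serve --type=grpc --port 6789

# Or try the Go feature server once without changing feature_store.yaml.
feast serve --go
```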
The Go component comes pre-compiled when you install Feast with Python versions 3.8-3.10 on macOS or Linux (on x86). To pull in the additional Python dependencies, you need to install Feast with the optional Go dependencies included.
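As a sketch, assuming the optional dependencies are published under the go extra (check the installation instructions for your Feast version), the install command would be:

```bash
# Install Feast together with the optional Go feature server dependencies.
# The "go" extra name is an assumption; verify it for your Feast version.
pip install "feast[go]"
```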
You must also install the Apache Arrow C++ libraries. This is because the Go feature server uses the cgo memory allocator from the Apache Arrow C++ library for interoperability between Go and Python, to prevent memory from being accidentally garbage collected when executing on-demand feature views. You can read more about the usage of the cgo memory allocator in these docs.
For macOS, run brew install apache-arrow. For Linux users, you have to install libarrow-dev.
For developers, if you want to build from source, run make compile-go-lib to build and compile the Go server. In order to build the Go binaries, you will need to install the apache-arrow C++ libraries.
The Go feature server can log all requested entities and served features to a configured destination inside an offline store. This allows users to create new datasets from features served online. Those datasets could be used for future training runs or for feature validation. To enable feature logging, we need to edit feature_store.yaml:
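A sketch of the relevant excerpt (the feature_server and feature_logging keys are assumed here; verify them against the configuration reference for your Feast version):

```yaml
# Excerpt from feature_store.yaml
go_feature_serving: True
feature_server:
  feature_logging:
    enabled: True
```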
Feature logging configuration in feature_store.yaml also allows you to tweak some low-level parameters to achieve the best performance:
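For illustration only; the parameter names and values below are assumptions that may differ between Feast versions, so treat them as placeholders to check against the configuration reference:

```yaml
# Excerpt from feature_store.yaml (hypothetical tuning knobs)
feature_server:
  feature_logging:
    enabled: True
    flush_interval_secs: 300
    write_to_disk_interval_secs: 30
    emit_timeout_micro_secs: 10000
    queue_capacity: 10000
```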
All these parameters are optional.
The logic for the Go feature server can also be used to retrieve features during a Python get_online_features call. To enable this behavior, you must add go_feature_retrieval: True to your feature_store.yaml. You must also have all the dependencies installed as detailed above.
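For example, alongside the Go serving flag from earlier:

```yaml
# Excerpt from feature_store.yaml
go_feature_serving: True
go_feature_retrieval: True
```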
Warning: This is an experimental feature. It's intended for early testing and feedback, and could change without warning in future releases.
The AWS Lambda feature server is an HTTP endpoint that serves features with JSON I/O, deployed as a Docker image through AWS Lambda and AWS API Gateway. This enables users to get features from Feast using any programming language that can make HTTP requests. A local feature server (described above) is also available. A remote feature server on GCP Cloud Run is currently being developed.
The AWS Lambda feature server is only available to projects using the AwsProvider with registries on S3. It is disabled by default. To enable it, feature_store.yaml must be modified; specifically, the enable flag must be on and an execution_role_name must be specified. For example, after running feast init -t aws, changing the registry to be on S3, and enabling the feature server, the contents of feature_store.yaml should look similar to the following:
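A sketch of what that configuration might look like (bucket names, regions, and role names are placeholders, the offline store section produced by feast init -t aws is omitted for brevity, and the exact keys under feature_server may vary by Feast version):

```yaml
project: my_project
provider: aws
registry: s3://<bucket>/registry.pb
online_store:
  type: dynamodb
  region: us-west-2
feature_server:
  enabled: True
  execution_role_name: arn:aws:iam::<account_id>:role/<lambda_execution_role>
```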
If enabled, the feature server will be deployed during feast apply. After it is deployed, the feast endpoint CLI command will indicate the server's endpoint.
Feast requires a number of permissions in order to deploy and tear down the AWS Lambda feature server; these permissions and the resources they apply to are listed in the table at the end of this section.
The following inline policy can be used to grant Feast the necessary permissions:
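A sketch of such a policy, assembled from the permissions and resources in the table at the end of this section (the execution role name in the iam:PassRole resource is a placeholder for your own role):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:CreateFunction",
        "lambda:GetFunction",
        "lambda:DeleteFunction",
        "lambda:AddPermission",
        "lambda:UpdateFunctionConfiguration"
      ],
      "Resource": "arn:aws:lambda:<region>:<account_id>:function:feast-*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:CreateRepository",
        "ecr:DescribeRepositories",
        "ecr:DeleteRepository",
        "ecr:PutImage",
        "ecr:DescribeImages",
        "ecr:BatchDeleteImage",
        "ecr:CompleteLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:InitiateLayerUpload",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:SetRepositoryPolicy",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::<account_id>:role/<execution_role_name>"
    },
    {
      "Effect": "Allow",
      "Action": "apigateway:*",
      "Resource": [
        "arn:aws:apigateway:*::/apis/*/routes/*/routeresponses",
        "arn:aws:apigateway:*::/apis/*/routes/*/routeresponses/*",
        "arn:aws:apigateway:*::/apis/*/routes/*",
        "arn:aws:apigateway:*::/apis/*/routes",
        "arn:aws:apigateway:*::/apis/*/integrations",
        "arn:aws:apigateway:*::/apis/*/stages/*/routesettings/*",
        "arn:aws:apigateway:*::/apis/*",
        "arn:aws:apigateway:*::/apis"
      ]
    }
  ]
}
```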
After feature_store.yaml has been modified as described in the previous section, it can be deployed as follows:
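For example (output omitted; the exact messages depend on your Feast version):

```bash
# Deploy the feature store, including the AWS Lambda feature server,
# then print the deployed endpoint's URL.
feast apply
feast endpoint
```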
After the feature server starts, we can execute cURL commands against it:
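A sketch of such a request (replace <endpoint> with the URL reported by feast endpoint; the feature and entity names are placeholders from the demo template):

```bash
curl -X POST \
  "<endpoint>/get-online-features" \
  -d '{
    "features": [
      "driver_hourly_stats:conv_rate"
    ],
    "entities": {
      "driver_id": [1001, 1002]
    }
  }'
```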
Permissions required to deploy and tear down the AWS Lambda feature server:

| Permissions | Resources |
| --- | --- |
| lambda:CreateFunction<br>lambda:GetFunction<br>lambda:DeleteFunction<br>lambda:AddPermission<br>lambda:UpdateFunctionConfiguration | arn:aws:lambda:<region>:<account_id>:function:feast-* |
| ecr:CreateRepository<br>ecr:DescribeRepositories<br>ecr:DeleteRepository<br>ecr:PutImage<br>ecr:DescribeImages<br>ecr:BatchDeleteImage<br>ecr:CompleteLayerUpload<br>ecr:UploadLayerPart<br>ecr:InitiateLayerUpload<br>ecr:BatchCheckLayerAvailability<br>ecr:GetDownloadUrlForLayer<br>ecr:GetRepositoryPolicy<br>ecr:SetRepositoryPolicy<br>ecr:GetAuthorizationToken | * |
| iam:PassRole | arn:aws:iam::<account_id>:role/ |
| apigateway:* | arn:aws:apigateway:*::/apis/*/routes/*/routeresponses<br>arn:aws:apigateway:*::/apis/*/routes/*/routeresponses/*<br>arn:aws:apigateway:*::/apis/*/routes/*<br>arn:aws:apigateway:*::/apis/*/routes<br>arn:aws:apigateway:*::/apis/*/integrations<br>arn:aws:apigateway:*::/apis/*/stages/*/routesettings/*<br>arn:aws:apigateway:*::/apis/*<br>arn:aws:apigateway:*::/apis |