S3 API
Alluxio provides a RESTful API compatible with the Amazon S3 API, allowing applications built for S3 to interact with data managed by Alluxio. This enables you to leverage Alluxio's data caching, sharing, and storage abstraction capabilities without modifying your existing S3-based applications.
Getting Started
Prerequisites
First, enable the S3 API on all Alluxio workers by adding the following property to your conf/alluxio-site.properties file:
alluxio.worker.s3.api.enabled=true
Connecting to the S3 Endpoint
The S3 API is exposed on every Alluxio worker. It is highly recommended to set up a load balancer (e.g., Nginx, LVS, or DNS round-robin) to distribute API requests across all workers. The address of your load balancer will serve as the S3 endpoint for your clients.
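As an illustrative sketch of the load-balancer setup (the worker hostnames are placeholders, not part of this guide), an Nginx configuration might round-robin requests across the workers' default HTTP port 29998:

```nginx
# Illustrative sketch: distribute S3 API requests across Alluxio workers.
# Replace the hostnames with your actual worker addresses.
upstream alluxio_s3 {
    server worker1.example.com:29998;
    server worker2.example.com:29998;
}

server {
    listen 80;
    location / {
        proxy_pass http://alluxio_s3;
        proxy_set_header Host $host;
    }
}
```

The address clients use as the S3 endpoint is then the address of this Nginx server.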
HTTP Port: 29998 (default)
HTTPS Port: 29996 (default)
To enable HTTPS, please refer to the TLS configuration guide. You can force HTTPS-only access by setting alluxio.worker.s3.only.https.access=true.
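Putting the properties from this section together, a minimal conf/alluxio-site.properties might look like the following (the HTTPS line is optional):

```properties
# Enable the S3 API on all workers
alluxio.worker.s3.api.enabled=true

# Optional: force HTTPS-only access (requires TLS configuration)
alluxio.worker.s3.only.https.access=true
```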
Configuring Your S3 Client
Configuring your S3 client involves setting the endpoint, authentication credentials, and addressing style.
Authentication Methods
Alluxio's S3 API supports two authentication methods: SIMPLE (default) and token-based OIDC.
SIMPLE Authentication (Default)
By default, Alluxio uses a SIMPLE authentication scheme, not standard AWS credential validation.
How it Works: For compatibility, clients should still generate an Authorization header formatted according to AWS Signature Version 4. Alluxio parses this header to extract the user, but does not validate the cryptographic signature.
Access Key: The Alluxio username you wish to perform operations as. This is the Credential part of the Authorization header. If you do not provide an access key, operations are performed as the user that launched the Alluxio worker process.
Secret Key: Can be any dummy value. It is required by the client to generate the signature, but is ignored by Alluxio.
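The extraction described above can be sketched with the standard library alone. The header value below is illustrative (the date, region, and signature are dummy values), and the regex is an assumption about the parsing, not Alluxio's actual implementation:

```python
# Extract the username from the Credential field of an AWS Signature V4
# Authorization header, mirroring how SIMPLE authentication reads the user
# and ignores the signature.
import re

header = (
    "AWS4-HMAC-SHA256 "
    "Credential=alice/20240101/us-east-1/s3/aws4_request, "
    "SignedHeaders=host;x-amz-date, "
    "Signature=deadbeef"  # dummy value: Alluxio does not validate this
)

# The username is everything between "Credential=" and the first "/".
match = re.search(r"Credential=([^/]+)/", header)
user = match.group(1) if match else None
print(user)  # -> alice
```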
OIDC Token-Based Authentication
For more secure, centralized identity management, you can configure the S3 API to use OIDC (OpenID Connect) tokens. For more details, see the full Authentication guide.
Addressing Style
Clients must use path-style requests (e.g., http://<endpoint>/<bucket>/<object>). Virtual-hosted-style requests (http://<bucket>.<endpoint>/<object>) are not supported.
Advanced Configuration
Performance and HTTP Redirects
By default, Alluxio's S3 API uses HTTP redirects to achieve zero-copy reads. When a client requests an object, it is redirected to the specific worker that holds the data.
However, some S3 clients, like Python's boto3 and the PyTorch S3 connector, do not handle these redirects correctly. If you are using such a client, you must disable redirects by setting the following property:
When redirects are disabled, data is proxied through the worker that initially receives the request, which introduces an extra network hop and may impact performance.
Tagging and Metadata
Enable Tagging: To use S3 object tagging, you must enable extended attribute (xattr) support for your UFS.
Tag Limits: By default, user-defined tags on buckets and objects are limited to 10 and follow the S3 tag restrictions. You can disable this with alluxio.proxy.s3.tagging.restrictions.enabled=false.
Metadata Size: The maximum size for user-defined metadata in PUT requests is 2 KB by default, in accordance with S3 object metadata restrictions. You can change this with alluxio.proxy.s3.header.metadata.max.size.
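A client can check the 2 KB limit before issuing a PUT. S3 measures user-defined metadata as the sum of the UTF-8 bytes of each key and value; whether Alluxio counts it identically is an assumption here, and the metadata below is made up for illustration:

```python
# Estimate the wire size of user-defined metadata (x-amz-meta-* headers)
# and compare it against the 2 KB default limit.
metadata = {
    "x-amz-meta-owner": "data-team",
    "x-amz-meta-description": "training shard 7",
}

# Sum the UTF-8 byte length of every key and value.
size = sum(len(k.encode()) + len(v.encode()) for k, v in metadata.items())
limit = 2 * 1024

print(size, size <= limit)  # -> 63 True
```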
HTTP Persistent Connections (Keep-Alive)
HTTP persistent connection (also called HTTP keep-alive) is the use of a single TCP connection to send and receive multiple HTTP requests and responses, as opposed to opening a new connection for every request/response pair.
The main advantages of persistent connections include:
Reduced Latency: Minimizes delay caused by frequent requests.
Resource Savings: Reduces server and client resource consumption through fewer connections and fewer repeated requests.
Real-time Capability: Enables quick transmission of the latest data.
However, long connections also have some drawbacks, such as:
Increased Server Pressure: Many open connections can increase the memory and CPU burden on the server.
Timeout Issues: Requires handling cases where connections are unresponsive for a long time to ensure the effectiveness of timeout mechanisms.
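The reuse can be demonstrated with Python's standard library alone; this is a self-contained sketch against a throwaway local server, not an Alluxio endpoint:

```python
# Demonstrate HTTP keep-alive: two requests served over one TCP connection.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections open by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A single HTTPConnection reuses the same socket across requests.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
statuses = []
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    statuses.append(resp.status)
conn.close()
server.shutdown()
print(statuses)
```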
To enable HTTP keep-alive for the S3 API, modify the conf/alluxio-site.properties file to include the following content:
Limitations
Buckets: Only top-level directories in the Alluxio namespace are treated as S3 buckets. The root directory (/) is not a bucket, and objects at the root are not accessible via the S3 API. To migrate existing application logic without editing the S3 URIs, be sure to use the bucket name as the mount path.
Object Overwrites: Alluxio does not provide object locking or versioning. If multiple clients write to the same object simultaneously, the last write wins.
Unsupported Characters: Do not use ?, \, ./, or ../ in object keys. Using // in a path may lead to undefined behavior.
Folder Objects: Subdirectories are returned as 0-byte folder objects in ListObjects(V2) responses, matching the behavior of the AWS S3 console.
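The character restrictions above can be enforced client-side before upload. This helper is a sketch, not part of Alluxio; the sample keys are made up:

```python
# Reject object keys containing sequences the Alluxio S3 API does not
# support: '?', '\', './', '../', and the ambiguous '//'.
FORBIDDEN = ("?", "\\", "./", "../", "//")

def is_valid_key(key: str) -> bool:
    """Return True if the key contains none of the unsupported sequences."""
    return not any(seq in key for seq in FORBIDDEN)

print(is_valid_key("data/train/part-0001.parquet"))  # -> True
print(is_valid_key("data/../secret"))                # -> False
```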
Supported S3 Actions
The following table lists the supported S3 API actions. For detailed usage, see the official S3 API documentation.
Usage Examples
boto3 client
Since the boto3 client is unable to process redirect responses, explicitly disable redirects on the Alluxio workers as described in Performance and HTTP Redirects above.
The following example Python script shows how to initialize a boto3 client and test it with a list-buckets request.
This assumes boto3 is installed via pip install -r requirements.txt, with boto3 as the only entry inside requirements.txt.
PyTorch
Since the PyTorch S3 connector is unable to process redirect responses, explicitly disable redirects on the Alluxio workers as described in Performance and HTTP Redirects above.
The following example Python script uses the S3 connector for PyTorch to read data. It assumes a UFS has been mounted along the path /s3-mount.
This assumes PyTorch and related libraries are installed with pip.
Nvidia Triton inference server
The following steps show how to prepare a Triton model repository, server, and client. It assumes the following preparation for Alluxio:
Alluxio is deployed in K8s
The Alluxio S3 endpoint is available at <LOAD_BALANCER_ADDRESS>
An S3 bucket, named <MY_BUCKET>, is mounted in Alluxio at the mount point /s3-mount
Prepare model repository and upload to the mounted S3 bucket.
Create triton-server.yaml and deploy it with kubectl create -f triton-server.yaml.
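A sketch of what triton-server.yaml might contain is shown below. The image tag, resource layout, and model-repository path are assumptions to adapt to your cluster; Triton accepts s3://host:port/path URIs for custom S3 endpoints, and the credentials follow the SIMPLE-authentication convention described earlier:

```yaml
# Illustrative sketch of triton-server.yaml; adapt names, image tag, and
# placeholders to your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: triton-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: triton-server
  template:
    metadata:
      labels:
        app: triton-server
    spec:
      containers:
        - name: triton
          image: nvcr.io/nvidia/tritonserver:24.01-py3
          args:
            - tritonserver
            # Model repository served through the Alluxio S3 endpoint,
            # under the /s3-mount mount point.
            - --model-repository=s3://<LOAD_BALANCER_ADDRESS>:29998/s3-mount/models
          env:
            - name: AWS_ACCESS_KEY_ID
              value: alluxio-user   # Alluxio username (SIMPLE auth)
            - name: AWS_SECRET_ACCESS_KEY
              value: dummy          # ignored by Alluxio
```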
As part of starting the server, the model data will be read and therefore cached in Alluxio.
Create triton-client.yaml and deploy it with kubectl create -f triton-client.yaml.
Send a request from within the client