S3 API

Alluxio provides a RESTful API compatible with the Amazon S3 API, allowing applications built for S3 to interact with data managed by Alluxio. This enables you to leverage Alluxio's data caching, sharing, and storage abstraction capabilities without modifying your existing S3-based applications.

Getting Started

Prerequisites

First, enable the S3 API on all Alluxio workers by adding the following property to your conf/alluxio-site.properties file:

alluxio.worker.s3.api.enabled=true

Connecting to the S3 Endpoint

The S3 API is exposed on every Alluxio worker. It is highly recommended to set up a load balancer (e.g., Nginx, LVS, or DNS round-robin) to distribute API requests across all workers; a sample Nginx configuration is sketched below. The address of your load balancer will serve as the S3 endpoint for your clients.

  • HTTP Port: 29998 (default)

  • HTTPS Port: 29996 (default)

To enable HTTPS, please refer to the TLS configuration guide. You can force HTTPS-only access by setting alluxio.worker.s3.only.https.access=true.
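As an illustrative sketch only, a minimal Nginx configuration that balances S3 API traffic across two workers might look like the following. The worker hostnames are hypothetical placeholders; adjust the addresses and listen port to your environment.

upstream alluxio_s3_workers {
    # Hypothetical worker addresses; replace with your own workers
    server worker-1.example.com:29998;
    server worker-2.example.com:29998;
}

server {
    listen 80;
    location / {
        proxy_pass http://alluxio_s3_workers;
        # Preserve the original Host header for path-style requests
        proxy_set_header Host $host;
    }
}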

Configuring Your S3 Client

Configuring your S3 client involves setting the endpoint, authentication credentials, and addressing style.

Authentication Methods

Alluxio's S3 API supports two authentication methods: SIMPLE (default) and token-based OIDC.

SIMPLE Authentication (Default)

By default, Alluxio uses a SIMPLE authentication scheme, not standard AWS credential validation.

  • How it Works: For compatibility, clients should still generate an Authorization header formatted according to AWS Signature Version 4. Alluxio parses this header to extract the user, but does not validate the cryptographic signature. An example header is shown after this list.

  • Access Key: The Alluxio username you wish to perform operations as. This is the Credential part of the Authorization header. If you do not provide an access key, operations will be performed as the user that launched the Alluxio worker process.

  • Secret Key: Can be any dummy value. It is required by the client to generate the signature, but it is ignored by Alluxio.
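For illustration, an AWS Signature Version 4 Authorization header has the following shape. Alluxio reads only the username from the Credential component (here, the hypothetical user alice); the Signature value is ignored.

Authorization: AWS4-HMAC-SHA256 Credential=alice/20240101/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-date, Signature=<ignored-by-alluxio>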

OIDC Token-Based Authentication

For more secure, centralized identity management, you can configure the S3 API to use OIDC (OpenID Connect) tokens. For more details, see the full Authentication guide.

Addressing Style

  • Clients must use path-style requests (e.g., http://<endpoint>/<bucket>/<object>); a boto3 sketch follows this list.

  • Virtual-hosted style requests (http://<bucket>.<endpoint>/<object>) are not supported.
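As a minimal sketch, the following boto3 configuration sets the endpoint, dummy credentials, and explicit path-style addressing. The endpoint and username are placeholders; the secret key can be any dummy value, as described above.

import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://<LOAD_BALANCER_ADDRESS>",
    aws_access_key_id="alice",            # interpreted as the Alluxio username
    aws_secret_access_key="placeholder",  # ignored by Alluxio
    config=Config(s3={"addressing_style": "path"}),  # force path-style requests
)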

Advanced Configuration

Performance and HTTP Redirects

By default, Alluxio's S3 API uses HTTP redirects to achieve zero-copy reads. When a client requests an object, the request is redirected to the specific worker that holds the data.

However, some S3 clients, like Python's boto3 and the PyTorch S3 connector, do not handle these redirects correctly. If you are using such a client, you must disable redirects by setting the following property:

alluxio.worker.s3.redirect.enabled=false

When redirects are disabled, data is proxied through the worker that initially receives the request, which introduces an extra network hop and may impact performance.
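When redirects are enabled, you can observe them directly. The following sketch uses Python's requests library (endpoint, bucket, and object names are placeholders) to fetch an object without following redirects and print the redirect target:

import requests

# Request an object but do not follow redirects automatically
resp = requests.get(
    "http://<LOAD_BALANCER_ADDRESS>/<bucket>/<object>",
    allow_redirects=False,
)
# Expect a 3xx status with a Location header pointing at the worker holding
# the data; a 200 means the receiving worker served the object directly
print(resp.status_code, resp.headers.get("Location"))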

Tagging and Metadata

  • Enable Tagging: To use S3 object tagging, you must enable extended attribute (xattr) support for your UFS.

    alluxio.underfs.xattr.change.enabled=true
  • Tag Limits: By default, user-defined tags on buckets and objects are limited to 10 and must conform to the S3 tag restrictions. You can disable this check with alluxio.proxy.s3.tagging.restrictions.enabled=false. A short tagging example follows this list.

  • Metadata Size: The maximum size of user-defined metadata in PUT requests is 2KB by default, in accordance with the S3 object metadata restrictions. You can change this limit with alluxio.proxy.s3.header.metadata.max.size.
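Following the settings above, a brief boto3 sketch (bucket, key, and tag values are hypothetical) attaches tags at write time via the x-amz-tagging header, which boto3 exposes as the Tagging parameter:

# assumes an s3 client configured against the Alluxio endpoint, as shown earlier
s3.put_object(
    Bucket="s3-mount",
    Key="images/cat.jpg",
    Body=b"example bytes",
    Tagging="owner=alice&team=ml",  # sent as the x-amz-tagging header
)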

HTTP Persistent Connections (Keep-Alive)

HTTP persistent connection (also called HTTP keep-alive) is the practice of using a single TCP connection to send and receive multiple HTTP request/response pairs, as opposed to opening a new connection for every single pair.

The main advantages of persistent connections include:

  • Reduced Latency: Minimizes delay caused by frequent requests.

  • Resource Savings: Reduces server and client resource consumption through fewer connections and less repeated connection setup.

  • Real-time Capability: Enables quick transmission of the latest data.

However, persistent connections also have some drawbacks, such as:

  • Increased Server Pressure: Many open connections can increase the memory and CPU burden on the server.

  • Timeout Issues: Connections that stay idle or unresponsive for a long time must be detected and closed, so effective timeout handling is required.

To enable HTTP keep-alive for the S3 API, add the following to the conf/alluxio-site.properties file:

# Enable keep-alive
alluxio.worker.s3.connection.keep.alive.enabled=true

# Set an idle timeout. The connection will be closed if idle for this duration.
# A value of 0 disables the idle timeout, so idle connections are never closed.
alluxio.worker.s3.connection.idle.max.time=0sec
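With keep-alive enabled, the following sketch (endpoint placeholder assumed) issues several requests through a single requests.Session, which reuses one underlying TCP connection when the server allows it. A GET on the service root is an S3 ListBuckets request.

import requests

session = requests.Session()  # pools and reuses TCP connections
for _ in range(3):
    resp = session.get("http://<LOAD_BALANCER_ADDRESS>/")  # ListBuckets
    print(resp.status_code)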

Limitations

  • Buckets: Only top-level directories in the Alluxio namespace are treated as S3 buckets. The root directory (/) is not a bucket, and objects at the root are not accessible via the S3 API. For example, the Alluxio file /s3-mount/images/cat.jpg is addressed as key images/cat.jpg in bucket s3-mount.

  • Object Overwrites: Alluxio does not provide object locking or versioning. If multiple clients write to the same object simultaneously, the last write will win.

  • Unsupported Characters: Do not use ?, \, ./, or ../ in object keys. Using // in a path may lead to undefined behavior.

  • Folder Objects: Subdirectories are returned as 0-byte folder objects in ListObjects(V2) responses, matching the behavior of the AWS S3 console.

Supported S3 Actions

The following table lists the supported S3 API actions. For detailed usage, see the official S3 API documentation.

S3 API Action | Supported Headers | Supported Query Parameters
CopyObject | Content-Type, x-amz-copy-source, x-amz-metadata-directive, x-amz-tagging-directive, x-amz-tagging | N/A
GetObject | Range | N/A
ListObjects | N/A | delimiter, encoding-type, marker, max-keys, prefix
ListObjectsV2 | N/A | continuation-token, delimiter, encoding-type, max-keys, prefix, start-after
HeadObject | N/A | N/A
PutObject | Content-Length, Content-MD5, Content-Type, x-amz-tagging | N/A
UploadPart | Content-Length, Content-MD5 | N/A

Usage Examples

boto3 client

Since the boto3 client is unable to process redirect responses, explicitly disable redirects by setting the following in conf/alluxio-site.properties:

alluxio.worker.s3.redirect.enabled=false

The following example Python script shows how to initialize a boto3 client and test it with a list-buckets request.

import boto3
from botocore.exceptions import ClientError

ALLUXIO_S3_ENDPOINT = "http://<LOAD_BALANCER_ADDRESS>"  # Alluxio's S3 API endpoint when using a load balancer to distribute requests to all workers
# ALLUXIO_S3_ENDPOINT = "http://<ALLUXIO_WORKER>:29998"  # an alternative to a load balancer is to directly connect to a worker
ACCESS_KEY = "placeholder"  # Alluxio does not validate credentials
SECRET_KEY = "placeholder"
REGION = "us-east-1"

def main():
    try:
        s3 = boto3.client(
            "s3",
            aws_access_key_id=ACCESS_KEY,
            aws_secret_access_key=SECRET_KEY,
            region_name=REGION,
            endpoint_url=ALLUXIO_S3_ENDPOINT
        )
        print("Client initialized successfully.")

        # Example: list all buckets (top-level directories in the Alluxio namespace)
        response = s3.list_buckets()
        print("Buckets (Alluxio mount points):")
        for bucket in response.get("Buckets", []):
            print(f" - {bucket['Name']}")
    except ClientError as e:
        print(f"S3 API error: {e}")
    except Exception as e:
        print(f"Error: {e}")

if __name__ == "__main__":
    main()

This assumes boto3 is installed, for example with pip install boto3.
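Building on the client above, a short follow-up sketch (bucket and key names are hypothetical) lists objects under a prefix and reads one object back:

# assumes the s3 client initialized in the script above
resp = s3.list_objects_v2(Bucket="s3-mount", Prefix="images/", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

body = s3.get_object(Bucket="s3-mount", Key="images/cat.jpg")["Body"].read()
print(f"read {len(body)} bytes")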

PyTorch

Since the PyTorch S3 connector is unable to process redirect responses, explicitly disable redirects by setting the following in conf/alluxio-site.properties:

alluxio.worker.s3.redirect.enabled=false

The following example Python script uses the S3 connector for PyTorch to read data. It assumes a UFS has been mounted at the Alluxio path /s3-mount.

# ref https://github.com/awslabs/s3-connector-for-pytorch/tree/main?tab=readme-ov-file#sample-examples

from s3torchconnector import S3MapDataset, S3IterableDataset, S3ClientConfig
import random

S3_ENDPOINT_URL = "http://<LOAD_BALANCER_ADDRESS>"  # Alluxio's S3 API endpoint when using a load balancer to distribute requests to all workers
# S3_ENDPOINT_URL = "http://<ALLUXIO_WORKER>:29998"  # an alternative to a load balancer is to directly connect to a worker
DATASET_URI="s3://s3-mount"
REGION = "us-east-1"

s3_client_config = S3ClientConfig(
  force_path_style=True,
)

iterable_dataset = S3IterableDataset.from_prefix(DATASET_URI,
  region=REGION,
  endpoint=S3_ENDPOINT_URL,
  s3client_config=s3_client_config,
)

for item in iterable_dataset:
  content = item.read()
  print(f"{item.key}:{len(content)}")

map_dataset = S3MapDataset.from_prefix(DATASET_URI,
  region=REGION,
  endpoint=S3_ENDPOINT_URL,
  s3client_config=s3_client_config,
)

# Randomly access an item in map_dataset.
item = random.choice(map_dataset)
# Inspect the bucket, key, and content of the object.
bucket = item.bucket
key = item.key
content = item.read()
print(f"{bucket} {key} {len(content)}")

This assumes PyTorch and related libraries are installed with pip.

$ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
$ pip install --upgrade pip
$ pip install s3torchconnector

NVIDIA Triton Inference Server

The following steps show how to prepare a Triton model repository, server, and client. They assume the following preparation for Alluxio:

  • Alluxio is deployed in K8s

  • The Alluxio S3 endpoint is available at <LOAD_BALANCER_ADDRESS>

  • An S3 bucket named <MY_BUCKET> is mounted in Alluxio at the mount point /s3-mount

Prepare the model repository and upload it to the mounted S3 bucket.

$ kubectl run -it --rm debug-shell --image=ubuntu:22.04 --restart=Never -- sh
$ apt update -y
$ apt install -y awscli git python3 python3.10-venv wget
$ git clone -b r25.06 https://github.com/triton-inference-server/server.git
$ cd server/docs/examples
$ ./fetch_models.sh

# upload to S3; note that the "triton_model_repo" path will be referenced by the Triton server's model repository argument
$ aws s3 sync model_repository s3://<MY_BUCKET>/triton_model_repo

Create triton-server.yaml and deploy it with kubectl create -f triton-server.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: triton-inference-server-s3
  labels:
    app: triton-s3
spec:
  hostNetwork: true
  containers:
    - name: triton-s3-server
      image: nvcr.io/nvidia/tritonserver:24.05-py3
      imagePullPolicy: IfNotPresent
      ports:
        - name: http
          containerPort: 8000
          protocol: TCP
        - name: grpc
          containerPort: 8001
          protocol: TCP
        - name: metrics
          containerPort: 8002
          protocol: TCP
      command: ["/opt/tritonserver/bin/tritonserver"]
      args:
        - "--model-repository=s3://<LOAD_BALANCER_ADDRESS>/s3-mount/triton_model_repo"
        - "--log-verbose=1"
        - "--log-info=true"
      readinessProbe:
        httpGet:
          path: /v2/health/ready
          port: 8000
        initialDelaySeconds: 30
        periodSeconds: 10
        timeoutSeconds: 5
        failureThreshold: 3
      livenessProbe:
        httpGet:
          path: /v2/health/live
          port: 8000
        initialDelaySeconds: 60
        periodSeconds: 30
        timeoutSeconds: 5
        failureThreshold: 3

As part of starting the server, the model data will be read and therefore cached in Alluxio.

Create triton-client.yaml and deploy it with kubectl create -f triton-client.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: triton-client
  labels:
    app: triton-s3
spec:
  hostNetwork: true
  containers:
    - image: nvcr.io/nvidia/tritonserver:24.05-py3-sdk
      imagePullPolicy: IfNotPresent
      name: tritonserver-client-test
      command: ["sleep", "infinity"]

Send a request from within the client:

$ kubectl exec -it triton-client -- /workspace/install/bin/image_client -u $(kubectl get pod triton-inference-server-s3 -o jsonpath='{.status.podIP}'):8000 -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg
Request 0, batch size 1
Image '/workspace/images/mug.jpg':
    15.349564 (504) = COFFEE MUG
    13.227464 (968) = CUP
    10.424892 (505) = COFFEEPOT
