Kubernetes Installation
This documentation shows how to install Alluxio on Kubernetes using the Alluxio Operator.
Overview
Artifacts
You will receive download links for three artifacts:
Helm chart
alluxio-operator-3.5.2-helmchart.tgz
Deploys the Operator onto Kubernetes
Operator image
alluxio-operator-3.5.2-linux-amd64-docker.tar
Container image for the Operator pod
Alluxio image
alluxio-enterprise-AI-3.8-15.1.2-linux-amd64-docker.tar
Container image for Alluxio worker and coordinator pods
License
Required to activate the cluster
Platform: Use -linux-amd64-docker.tar for x86 nodes or -linux-arm64-docker.tar for ARM nodes.
The Docker images are never pulled from a public registry — they are loaded from the .tar files and pushed to your private registry before deployment.
Helm chart (.tgz)
└─► deploys ─► Operator pod (alluxio-operator image)
└─► watches AlluxioCluster CRD
└─► creates ─► Alluxio pods (alluxio-enterprise image)
Kubernetes Components
A deployed Alluxio cluster consists of:
Operator — Manages the lifecycle of Alluxio clusters. Installed once per Kubernetes cluster.
Coordinator — Handles background operations (data loading, freeing). 1 replica.
Workers — Cache data and serve reads via S3 API or FUSE. Scale horizontally for more cache capacity.
ETCD — Service discovery and mount table storage. 3 replicas recommended for quorum.
Monitoring (optional) — Prometheus and Grafana. Enabled by default.
Before You Start
Run these checks before starting (~2 minutes). Skipping this step is the most common cause of deployment failures.
Installation Steps
0. Push Alluxio Images to Your Private Registry
Skip this step if the Alluxio images are already present in your private registry.
Alluxio images are delivered as .tar files and must be loaded and pushed to your private registry before the Helm chart can deploy them.
Load the images into your local Docker:
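With Docker available on a machine that can reach your registry, the two tar files from the artifact list above can be loaded like this:

```shell
# Load both delivered images into the local Docker image store
docker load -i alluxio-operator-3.5.2-linux-amd64-docker.tar
docker load -i alluxio-enterprise-AI-3.8-15.1.2-linux-amd64-docker.tar
```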
✅ Success: docker images shows both images:
Retag and push to your private registry:
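A sketch of the retag-and-push step. The source repository names here are assumptions; use whatever names `docker images` reported after the load:

```shell
# <PRIVATE_REGISTRY> is your registry host, e.g. registry.example.com
docker tag alluxio/operator:3.5.2 <PRIVATE_REGISTRY>/alluxio-operator:3.5.2
docker push <PRIVATE_REGISTRY>/alluxio-operator:3.5.2
docker tag alluxio/alluxio-enterprise:AI-3.8-15.1.2 <PRIVATE_REGISTRY>/alluxio-enterprise:AI-3.8-15.1.2
docker push <PRIVATE_REGISTRY>/alluxio-enterprise:AI-3.8-15.1.2
```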
✅ Success: Both images are now in your private registry. Note down the full image paths — you will use them in Steps 1 and 4.
If your Kubernetes cluster also cannot reach public registries (air-gapped), third-party images (etcd, CSI) also need to be relocated. See Appendix A: Air-Gapped Deployment.
1. Prepare Helm Chart
Extract the Helm chart:
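For example:

```shell
tar zxvf alluxio-operator-3.5.2-helmchart.tgz
```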
This creates the alluxio-operator directory containing the Helm chart.
Create alluxio-operator.yaml (outside the chart directory) to specify the operator image from your private registry:
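A minimal sketch of the override file. The key names are assumptions; verify them against the values.yaml shipped inside the extracted alluxio-operator directory:

```yaml
# alluxio-operator.yaml (key names are assumptions; check the chart's values.yaml)
image: <PRIVATE_REGISTRY>/alluxio-operator
imageTag: 3.5.2
```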
2. Create Namespace
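For example, to create the namespace used throughout this guide:

```shell
kubectl create namespace alx-ns
```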
3. Deploy Operator
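A sketch of the install command, assuming the chart was extracted to ./alluxio-operator; the release name "operator" is an assumption:

```shell
helm install operator -f alluxio-operator.yaml ./alluxio-operator
```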
✅ Success: Helm prints STATUS: deployed immediately after the command completes:
Then verify all pods are running:
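For example:

```shell
kubectl -n alluxio-operator get pod
```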
✅ Success: All operator pods show READY 1/1 or 2/2, STATUS = Running, and RESTARTS = 0.
Example output:
If pods fail with image pull errors on etcd or CSI images, see Appendix A: Air-Gapped Deployment.
4. Deploy Cluster
Create a minimal alluxio-cluster.yaml:
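A minimal sketch of the AlluxioCluster resource. The apiVersion and field names are assumptions based on typical operator CRDs; compare against the examples shipped with your operator version:

```yaml
apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
metadata:
  name: alluxio
  namespace: alx-ns
spec:
  image: <PRIVATE_REGISTRY>/alluxio-enterprise
  imageTag: AI-3.8-15.1.2
  properties:
    alluxio.license: <LICENSE_STRING>
  worker:
    count: 2
```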
Deploy:
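For example:

```shell
kubectl create -f alluxio-cluster.yaml
```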
✅ Success: Startup typically takes 2–3 minutes. The first deployment may take longer if the Alluxio image (~1.8 GB) needs to be pulled from the registry. To watch progress in real time:
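For example:

```shell
kubectl -n alx-ns get pod -w
```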
Once all pods are running, kubectl -n alx-ns get alluxiocluster shows CLUSTERPHASE = Ready,
and kubectl -n alx-ns get pod shows all pods Ready with STATUS = Running.
If any component fails to start, see Appendix F: Troubleshooting.
5. Mount Storage
Create ufs.yaml (S3 example; for other storage systems, see Underlying Storage):
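A sketch of the UnderFileSystem resource for an S3 mount. The apiVersion, field names, and credential keys are assumptions; compare against your operator's reference examples:

```yaml
apiVersion: k8s-operator.alluxio.com/v1
kind: UnderFileSystem
metadata:
  name: alluxio-s3
  namespace: alx-ns
spec:
  alluxioCluster: alluxio
  path: s3://my-bucket/path
  mountPath: /s3
  mountOptions:
    s3a.accessKeyId: <S3_ACCESS_KEY_ID>
    s3a.secretKey: <S3_SECRET_KEY>
```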
Apply:
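For example:

```shell
kubectl create -f ufs.yaml
```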
✅ Success: kubectl -n alx-ns get ufs shows PHASE = Ready.
Example output:
6. Verify Cluster
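One plausible check is to list the mount table from the coordinator pod; the pod name and the CLI subcommand are both assumptions here:

```shell
kubectl -n alx-ns exec -it alluxio-coordinator-0 -- alluxio mount list
```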
✅ Success: Output displays your mount point (e.g., s3://my-bucket/... on /s3/).
Example output:
7. Verify Data Access
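A hedged example using the Alluxio CLI from inside the cluster; the coordinator pod name is an assumption:

```shell
kubectl -n alx-ns exec -it alluxio-coordinator-0 -- alluxio fs ls /s3/
```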
✅ Success: Returns a directory listing without errors.
Alluxio provides several APIs for applications to access data:
POSIX API via FUSE — Mount Alluxio as a local filesystem. See FUSE Guide.
S3 API — S3-compatible endpoint. See S3 API Guide.
Python API via FSSpec — Native Python interface. See FSSpec Guide.
Uninstall
To remove the Alluxio deployment from your cluster, run the following commands in order:
1. Delete the UFS mount and cluster:
2. Uninstall the operator:
3. Delete the namespaces:
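The three steps above can be sketched as follows; the Helm release name "operator" is an assumption:

```shell
# 1. Delete the UFS mount and the cluster
kubectl -n alx-ns delete -f ufs.yaml
kubectl -n alx-ns delete -f alluxio-cluster.yaml
# 2. Uninstall the operator
helm uninstall operator
# 3. Delete the namespaces
kubectl delete namespace alx-ns alluxio-operator
```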
✅ Success: kubectl get namespace no longer shows alx-ns or alluxio-operator, and kubectl get alluxiocluster -A returns No resources found.
Recommended Production Configuration
The basic configuration in Step 4 is suitable for evaluation. For production deployments, apply the following additional settings for HA, resource tuning, and persistent metadata.
Label Nodes
A common practice is to assign dedicated nodes to each Alluxio component. This prevents resource contention between components (for example, etcd I/O interfering with worker cache I/O) and gives you predictable placement for capacity planning.
Worker pods have an anti-affinity rule by default — multiple worker pods will not be scheduled on the same node.
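A sketch of the labeling commands; the label key and values are hypothetical, so use whatever convention you prefer:

```shell
kubectl label node <WORKER_NODE> alluxio-role=worker
kubectl label node <COORDINATOR_NODE> alluxio-role=coordinator
kubectl label node <ETCD_NODE> alluxio-role=etcd
```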
Production alluxio-cluster.yaml
alluxio-cluster.yaml
Key differences from the basic configuration:
Node selectors: Pin each component to dedicated nodes to prevent resource contention and ensure predictable placement. See the label commands above.
Worker count: Choose the number of workers based on the target cache volume and target throughput.
ETCD replicas: 3 for quorum-based HA. Deploy on dedicated, stable nodes.
Resource limits and JVM options: Explicitly set to prevent OOM. The container memory limit must exceed the sum of -Xmx and -XX:MaxDirectMemorySize.
Persistent metastore: Coordinator metadata survives pod restarts.
Other important settings for production deployment:
License Management: A cluster license is the simplest way to get started. For production environments, a deployment license is recommended. See Appendix E: License Management for details on both options.
Hash Ring Configuration: It is critical to configure the hash ring before deployment, as changes can be destructive. For detailed guidance, see Appendix B: Handling Hash Ring.
Heterogeneous Clusters: If your cluster includes workers with different capacities, you must define a specific data distribution strategy. See Appendix C: Handling Heterogeneous Workers for configuration steps.
Worker Page Store Sizing: Properly configure the pagestore on your workers. The size parameter sets the cache capacity, while reservedSize allocates space for internal operations, including temporary page writes and file metadata caching. We recommend setting reservedSize to ~10% of size (10–100 GiB) and ensuring the total (size + reservedSize) fits within the worker's storage.
Advanced Configuration: For other settings, such as resource and JVM tuning or using an external etcd, refer to Appendix D: Advanced Configuration.
Appendix
Use the list below to find the relevant appendix section for your scenario:
Air-gapped (cluster cannot reach public registries) — Appendix A
Heterogeneous cluster (mixed worker disks) — Appendix C
External or custom ETCD — Appendix D.5, D.6
Performance tuning — Appendix D.2
Multi-cluster on shared nodes — Appendix D.7
Production licensing — Appendix E
Something went wrong — Appendix F
A. Air-Gapped Deployment
Symptom: After deploying the operator or cluster, some pods are stuck in ImagePullBackOff. Your Kubernetes cluster cannot reach public registries to pull third-party component images (CSI, etcd, monitoring).
The Alluxio images are already in your private registry from Step 0: Push Alluxio Images to Your Private Registry. The remaining images to relocate depend on your specific operator version. Identify them by inspecting the stuck pods:
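For example:

```shell
# Find pods stuck in ImagePullBackOff, then inspect which image they reference
kubectl -n alluxio-operator get pod
kubectl -n alluxio-operator describe pod <STUCK_POD> | grep -i image
```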
For each image that cannot be pulled: pull it from a machine with public internet access, retag it for your private registry, and push it. Then update alluxio-operator.yaml or alluxio-cluster.yaml to point to your private registry for that component.
The CSI images (part of the operator) can be overridden in alluxio-operator.yaml:
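A sketch of the override; the nesting and key names are assumptions, so verify them against the chart's values.yaml:

```yaml
# alluxio-operator.yaml (key names are assumptions; check values.yaml)
alluxio-csi:
  provisioner:
    image: <PRIVATE_REGISTRY>/csi-provisioner:<TAG>
  driverRegistrar:
    image: <PRIVATE_REGISTRY>/csi-node-driver-registrar:<TAG>
```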
The etcd image (part of the cluster) can be overridden in alluxio-cluster.yaml:
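Since the etcd fields follow the Bitnami etcd chart (see D.6), the image override plausibly uses the Bitnami registry/repository/tag layout:

```yaml
# alluxio-cluster.yaml
spec:
  etcd:
    image:
      registry: <PRIVATE_REGISTRY>
      repository: bitnami/etcd
      tag: <TAG>
```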
B. Handling Hash Ring
The consistent hash ring determines how data is mapped to workers. It is critical to define your hash ring strategy before deploying the cluster, as changing these settings later is a destructive operation that will cause all cached data to be lost.
Key properties to consider, which should be set in alluxio-cluster.yaml under .spec.properties:
Hash Ring Mode (alluxio.user.dynamic.consistent.hash.ring.enabled):
true (default): Dynamic mode. Includes only online workers. Best for most environments.
false: Static mode. Includes all registered workers, online or offline. Use if you need a stable ring view despite temporary worker unavailability.
Virtual Nodes (alluxio.user.worker.selection.policy.consistent.hash.virtual.node.count.per.worker):
Default: 2000. Controls load-balancing granularity.
Worker Capacity (alluxio.user.worker.selection.policy.consistent.hash.provider.impl):
DEFAULT (default): Assumes all workers have equal capacity.
CAPACITY: Allocates virtual nodes based on worker storage capacity. Use this for heterogeneous clusters.
For more details, see Hash Ring Management.
C. Handling Heterogeneous Workers
The Alluxio operator allows you to manage heterogeneous worker configurations, which is particularly useful for clusters where nodes have different disk specifications. This feature enables you to define distinct worker groups, each with its own storage settings.
Note: While this provides flexibility, it is crucial to ensure consistency within each worker group. Misconfigurations can lead to unexpected errors. This guide covers the supported use case of configuring workers with different disk setups.
To set up heterogeneous workers, follow these steps:
Group Nodes by Specification: First, identify and group your Kubernetes nodes based on their disk configurations. For example, you might have one group of 10 nodes with a single 1TB disk and another group of 12 nodes with two 800GB disks.
Label the Nodes: Assign unique labels to each group of nodes. This allows you to target specific configurations to the correct machines.
Define Worker Groups and Enable Capacity-Based Hashing: In your alluxio-cluster.yaml, use the .spec.workerGroups field to define each group, with a nodeSelector to apply each configuration to the nodes carrying the corresponding label. For heterogeneous clusters, it is also recommended to make the hash ring capacity-aware, so that workers with more storage capacity are allocated a proportionally larger share of data; you can do this by setting alluxio.user.worker.selection.policy.consistent.hash.provider.impl to CAPACITY. The example below shows a complete configuration for a heterogeneous cluster:
D. Advanced Configuration
This section describes common configurations to adapt to different scenarios.
D.1. Configuring Alluxio Properties
To modify Alluxio's configuration, edit the .spec.properties field in the alluxio-cluster.yaml file. These properties are appended to the alluxio-site.properties file inside the Alluxio pods.
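For example, to pin the hash ring mode described in Appendix B:

```yaml
spec:
  properties:
    alluxio.user.dynamic.consistent.hash.ring.enabled: "true"
```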
D.2. Resource and JVM Tuning
You can configure resource limits and JVM options for each component.
Memory limit formula: the container memory limit should be at least -Xmx + -XX:MaxDirectMemorySize + ~2 GiB of JVM and OS overhead.
For the worker config above (-Xmx22g, -XX:MaxDirectMemorySize=10g): the minimum limit is 22 + 10 + 2 = 34 GiB, set to 36 GiB in the example.
If -XX:MaxDirectMemorySize is omitted, the JVM defaults it to the same value as -Xmx, so the container limit typically needs to be 2.5× -Xmx or more.
Diagnosing OOM
If a worker pod is killed due to OOM (exit code 137), use these commands to confirm the cause:
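For example (the worker pod name is a placeholder):

```shell
# Check the last state and exit code of the terminated container
kubectl -n alx-ns describe pod <WORKER_POD> | grep -A 5 "Last State"
# Check the previous container's logs for Java OOM errors
kubectl -n alx-ns logs <WORKER_POD> --previous | grep -i "OutOfMemoryError"
```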
Exit code 137, no Java error — container limit exceeded; killed by the Linux OOM killer. Fix: increase resources.limits.memory.
java.lang.OutOfMemoryError: Java heap space — -Xmx too small. Fix: increase -Xmx and raise the container limit accordingly.
java.lang.OutOfMemoryError: Direct buffer memory — -XX:MaxDirectMemorySize too small. Fix: increase -XX:MaxDirectMemorySize and raise the container limit accordingly.
D.3. Use PVC for Page Store
To persist worker cache data, specify a PersistentVolumeClaim (PVC) for the page store.
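A sketch of the configuration; the pagestore field names are assumptions, so compare against your operator's reference examples:

```yaml
spec:
  worker:
    pagestore:
      type: persistentVolumeClaim
      storageClass: <YOUR_STORAGE_CLASS>
      size: 100Gi
      reservedSize: 10Gi
```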
D.4. Mount Custom ConfigMaps or Secrets
You can mount custom ConfigMap or Secret files into your Alluxio pods. This is useful for providing configuration files like core-site.xml or credentials.
Example: Mount a Secret
Create the secret from a local file:
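For example, assuming the secret is named my-secret:

```shell
kubectl -n alx-ns create secret generic my-secret --from-file=my-file
```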
Specify the secret to load and the mount path in alluxio-cluster.yaml. The file my-file will then be available at /opt/alluxio/secret/my-file on the pods.
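A sketch of the corresponding cluster configuration; the secrets field layout is an assumption:

```yaml
spec:
  secrets:
    coordinator:
      my-secret: /opt/alluxio/secret
    worker:
      my-secret: /opt/alluxio/secret
```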
D.5. Use External ETCD
If you have an external ETCD cluster, you can configure Alluxio to use it instead of the one deployed by the operator.
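A sketch, assuming the operator exposes a flag to disable the bundled etcd and that the endpoints are passed as a property (both field names are assumptions):

```yaml
spec:
  etcd:
    enabled: false
  properties:
    alluxio.etcd.endpoints: http://external-etcd-0:2379,http://external-etcd-1:2379,http://external-etcd-2:2379
```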
D.6. Customize ETCD configuration
The fields under spec.etcd follow the Bitnami ETCD helm chart. For example, to set node affinity for etcd pods, the affinity field can be used as described in the Kubernetes documentation.
D.7. nodeSelector
The nodeSelector field allows you to control which nodes Kubernetes schedules pods to. For instructions on labeling nodes and applying node selectors in production, see Recommended Production Configuration.
Additional scenario: Multiple clusters
If multiple Alluxio clusters are deployed and different clusters belong to different namespaces, services from different clusters may be scheduled by Kubernetes to the same node, causing deployment failures. You can label different nodes to indicate which cluster the node belongs to:
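For example, with a hypothetical cluster-name label:

```shell
kubectl label node <NODE_FOR_CLUSTER_A> cluster-name=cluster-a
kubectl label node <NODE_FOR_CLUSTER_B> cluster-name=cluster-b
```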
And specify the nodeSelector at the cluster level in your cluster.yaml:
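A sketch, assuming a cluster-level nodeSelector field and the hypothetical cluster-name label:

```yaml
spec:
  nodeSelector:
    cluster-name: cluster-a
```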
D.8. Prepare Namespace
If you want to install Alluxio in a custom namespace (e.g., alluxio-test), creating the namespace is required before installation.
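For example:

```shell
kubectl create namespace alluxio-test
```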
D.9. Configure Image Pull Secrets
If your container images are stored in a private registry that requires authentication, you need to create a Kubernetes Secret to store your registry credentials.
This secret must be created in the namespace where you plan to install Alluxio.
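The standard kubectl command for this is:

```shell
kubectl create secret docker-registry <SECRET_NAME> \
  --docker-server=<REGISTRY_SERVER> \
  --docker-username=<USERNAME> \
  --docker-password=<PASSWORD> \
  --namespace=<NAMESPACE>
```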
<SECRET_NAME>: Name of the secret (e.g., alluxio-image-pull-secret).
<REGISTRY_SERVER>: Your private registry server address (e.g., https://index.docker.io/v1/ for Docker Hub).
<USERNAME>: Your registry username.
<PASSWORD>: Your registry password.
<NAMESPACE>: The namespace where Alluxio will be installed.
Once created, you can verify the secret exists:
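For example:

```shell
kubectl get secret <SECRET_NAME> --namespace=<NAMESPACE>
```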
E. License Management
Alluxio requires a license provided by your sales representative. There are two types: a cluster license (for single test clusters) and a deployment license (recommended for production).
E.1. Cluster License
A cluster license is set directly in the alluxio-cluster.yaml file. This method is not recommended for production.
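For example, using the alluxio.license property referenced in E.2:

```yaml
spec:
  properties:
    alluxio.license: <LICENSE_STRING>
```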
E.2. Deployment License
A deployment license is the recommended method for production and can cover multiple clusters. It is applied by creating a separate License resource after the cluster has been created.
Step 1: Create the Cluster without a License Deploy the Alluxio cluster as described in Step 4 of the main guide, but do not include the alluxio.license property in alluxio-cluster.yaml. The pods will start but remain in an Init state, waiting for the license.
Step 2: Apply the License Create an alluxio-license.yaml file. The name and namespace in this file must match the metadata of your AlluxioCluster.
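A sketch of the License resource; the apiVersion is an assumption, while licenseString and clusters are the fields this guide refers to:

```yaml
apiVersion: k8s-operator.alluxio.com/v1
kind: License
metadata:
  name: alluxio-license
  namespace: alx-ns
spec:
  licenseString: <LICENSE_STRING>
  clusters:
    - name: alluxio
      namespace: alx-ns
```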
Apply this file with kubectl create -f alluxio-license.yaml. The Alluxio pods will detect the license and transition to Running.
Warning: Only specify running clusters in the clusters list. If the operator cannot find a listed cluster, the license operation will fail for all clusters.
E.3. Updating a Deployment License
To update an existing deployment license, update the licenseString in your alluxio-license.yaml and re-apply it:
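For example:

```shell
kubectl apply -f alluxio-license.yaml
```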
E.4. Checking License Status
You can check the license details and utilization from within the Alluxio coordinator pod.
F. Troubleshooting
F.1. etcd pod stuck in pending status
If etcd pods are Pending, it is often due to storage issues. Use kubectl describe pod <etcd-pod-name> to check events.
Symptom: Event message shows pod has unbound immediate PersistentVolumeClaims.
Cause: No storageClass is set for the PVC, or no PV is available.
Solution: Specify a storageClass in alluxio-cluster.yaml:
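A sketch, assuming the etcd persistence fields follow the Bitnami values layout noted in D.6:

```yaml
spec:
  etcd:
    persistence:
      storageClass: <YOUR_STORAGE_CLASS>
      size: 8Gi
```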
Then, delete the old cluster and PVCs before recreating the cluster.
Symptom: Event message shows waiting for first consumer.
Cause: The storageClass does not support dynamic provisioning, and a volume must be manually created by an administrator.
Solution: Either use a dynamic provisioner or manually create a PersistentVolume that satisfies the claim.
F.2. etcd pods stuck in Pending due to anti-affinity (fewer than 3 nodes)
Symptom: etcd pods are Pending with event message 0/N nodes are available: N node(s) didn't match pod anti-affinity rules.
Cause: The operator deploys etcd with requiredDuringSchedulingIgnoredDuringExecution anti-affinity by hostname. With etcd.replicaCount: 3 (the default), Kubernetes requires 3 distinct nodes. If your cluster has fewer than 3 nodes, etcd pods cannot be scheduled.
Solution: For dev/test clusters with fewer than 3 nodes, reduce the replica count:
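For example:

```yaml
spec:
  etcd:
    replicaCount: 1
```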
Do not use replicaCount: 1 in production — a single etcd instance has no quorum and is not fault-tolerant.
F.3. alluxio-cluster-fuse PVC in pending status
The alluxio-cluster-fuse PVC remaining in a Pending state is normal. It will automatically bind to a volume and become Bound once a client application pod starts using it.
F.4. Worker pod stuck in CrashLoopBackOff
Symptom: Worker pod repeatedly crashes and restarts.
Start by checking the worker logs:
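For example (the worker pod name is a placeholder):

```shell
kubectl -n alx-ns logs <WORKER_POD> --previous
```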
Common causes include:
Pagestore quota exceeds disk space — log shows quota (NNN) exceeds the total disk space. This commonly occurs because cloud providers advertise disk size in GB (base-10), while Kubernetes interprets Gi as GiB (base-2). Fix: reduce pagestore.size to ~90% of the actual available space (df -h /mnt/alluxio) and reservedSize to ~10% of size.
License expired or invalid — log shows a license error. Fix: apply a new license. See Appendix E: License Management.
OOM killed — log shows Exit Code 137 or OutOfMemoryError. Fix: increase container memory limits and adjust -Xmx / -XX:MaxDirectMemorySize. See D.2. Resource and JVM Tuning.
G. Platform-Specific Notes
The main installation steps are platform-agnostic. This section documents known differences for specific Kubernetes environments.
Amazon EKS
EBS CSI driver required for EKS 1.23+: The in-tree EBS volume driver was removed in Kubernetes 1.23. On EKS 1.23+, install the AWS EBS CSI driver add-on, or PVC provisioning will silently fail even if a StorageClass is listed.
Verify:
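For example:

```shell
kubectl get pods -n kube-system | grep ebs-csi
```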
If no output, the driver is not installed.
Google GKE
Read-only /mnt path: GKE nodes have a read-only root filesystem. Multiple Alluxio components default to hostPaths under /mnt/alluxio/, causing worker pods to fail with:
Workaround: Redirect all hostPaths to a writable base directory (e.g., /home/alluxio/) in alluxio-cluster.yaml:
Additionally, configure worker identity persistence to prevent workers from registering as new instances after each restart (leaving stale OFFLINE workers behind):
kind (Local Development)
Image loading: kind load docker-image can fail for multi-platform images with digest errors. Use the following workaround:
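One known workaround is to bypass kind load and import the image archive directly into the kind node's containerd store; the image and container names below are placeholders:

```shell
# Export the image from Docker, copy it into the kind node container,
# and import it into containerd's k8s.io namespace
docker save <IMAGE>:<TAG> -o image.tar
docker cp image.tar <KIND_CONTAINER>:/root/image.tar
docker exec <KIND_CONTAINER> ctr --namespace=k8s.io images import /root/image.tar
```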
To find the kind container name: docker ps | grep kindest.