Install on Kubernetes

This documentation shows how to install Alluxio on Kubernetes using the Alluxio operator, a Kubernetes extension for managing applications.

Prerequisites

  • Kubernetes

    • A Kubernetes cluster with version at least 1.19, with feature gates enabled

    • Ensure the cluster’s Kubernetes Network Policy allows for connectivity between applications (Alluxio clients) and the Alluxio Pods on the defined ports

    • The Kubernetes cluster must have Helm 3 installed, with a version of 3.6.0 or higher

    • Image registry for storing and managing container images

  • Hardware

    • Each node should have at least 8 CPUs and 32GB of memory

    • Each node should have at least 100GB of storage space

  • Permissions. Reference: Using RBAC Authorization

    • Permission to create CRD (Custom Resource Definition)

    • Permission to create ServiceAccount, ClusterRole, and ClusterRoleBinding for the operator pod

    • Permission to create the namespace that the operator will run in
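
As a quick sanity check, you can verify the Kubernetes and Helm versions from your client machine (assuming kubectl and helm are already configured against the target cluster):

# check the Kubernetes server version (should be 1.19 or higher)
$ kubectl version
# check the Helm version (should be 3.6.0 or higher)
$ helm version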

Preparation

Download the files for Alluxio operator and Alluxio cluster

Please visit the Alluxio download page and select Alluxio Enterprise AI to download. After the download is complete, extract the files using the following command:

$ tar zxf alluxio-enterprise-k8s-ai-trial.tar.gz

After extraction, you will obtain the following files:

  • alluxio-operator-{{site.ALLUXIO_OPERATOR_VERSION_STRING}}-helmchart.tgz is the helm chart for deploying Alluxio operator

  • alluxio-operator-{{site.ALLUXIO_OPERATOR_VERSION_STRING}}-docker.tar is the docker image for all Alluxio operator components

  • alluxio-enterprise-{{site.ALLUXIO_VERSION_STRING}}-docker.tar is the docker image for Alluxio

Upload the images to an image registry

An image registry is a centralized location for storing and sharing your container images. It can be either public or private. Cloud providers typically offer a container registry as a service, such as Amazon Elastic Container Registry (ECR), Azure Container Registry (ACR), and Google Container Registry (GCR). Private registries can also be hosted on your local system or within your organization's private network.

This example shows how to upload the Alluxio operator images.

# load the images locally
$ docker load -i alluxio-operator-{{site.ALLUXIO_OPERATOR_VERSION_STRING}}-docker.tar
$ docker load -i alluxio-enterprise-{{site.ALLUXIO_VERSION_STRING}}-docker.tar

# retag the images with your private registry
$ docker tag alluxio/operator:{{site.ALLUXIO_OPERATOR_VERSION_STRING}} <PRIVATE_REGISTRY>/alluxio-operator:{{site.ALLUXIO_OPERATOR_VERSION_STRING}}
$ docker tag alluxio/alluxio-enterprise:{{site.ALLUXIO_VERSION_STRING}} <PRIVATE_REGISTRY>/alluxio-enterprise:{{site.ALLUXIO_VERSION_STRING}}

# push to the remote registry
$ docker push <PRIVATE_REGISTRY>/alluxio-operator:{{site.ALLUXIO_OPERATOR_VERSION_STRING}}
$ docker push <PRIVATE_REGISTRY>/alluxio-enterprise:{{site.ALLUXIO_VERSION_STRING}}
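
You can optionally confirm that the retagged images exist in your local Docker cache before and after pushing, for example:

# list the retagged images locally
$ docker images <PRIVATE_REGISTRY>/alluxio-operator
$ docker images <PRIVATE_REGISTRY>/alluxio-enterprise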

Extract the helm chart for operator

# the command will extract the files to the directory alluxio-operator/
$ tar zxf alluxio-operator-{{site.ALLUXIO_OPERATOR_VERSION_STRING}}-helmchart.tgz

The extracted alluxio-operator directory contains the Helm chart files responsible for deploying the operator.

Deployment

Deploy Alluxio operator

Create the alluxio-operator/alluxio-operator.yaml file to specify the image and version used for deploying the operator. The following example shows how to specify the operator image and version:

global:
  image: <PRIVATE_REGISTRY>/alluxio-operator
  imageTag: {{site.ALLUXIO_OPERATOR_VERSION_STRING}}

Move to the alluxio-operator directory and execute the following command to deploy the operator:

$ cd alluxio-operator
# the last parameter is the directory to the helm chart, "." means the current directory
$ helm install operator -f alluxio-operator.yaml .
NAME: operator
LAST DEPLOYED: Wed May 15 17:32:34 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

# verify if the operator is running as expected
$ kubectl get pod -n alluxio-operator
NAME                                              READY   STATUS    RESTARTS   AGE
alluxio-cluster-controller-5647cc664d-lrx84       1/1     Running   0          14s
alluxio-collectinfo-controller-667b746fd6-hfzqk   1/1     Running   0          14s
alluxio-csi-controller-7bd66df6cf-7kh6k           2/2     Running   0          14s
alluxio-csi-nodeplugin-9cc9v                      2/2     Running   0          14s
alluxio-csi-nodeplugin-fgs5z                      2/2     Running   0          14s
alluxio-csi-nodeplugin-v22q6                      2/2     Running   0          14s
alluxio-ufs-controller-5f6d7c4d66-drjgm           1/1     Running   0          14s

Deploying the Alluxio operator requires pulling dependent images from the public image registry. If deploying alluxio-operator fails because your network environment cannot access the public image registry, please refer to Configuring alluxio-operator image.
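
You can also confirm that the operator registered its custom resource definitions. The exact CRD names may vary by operator version, but the following command should list Alluxio-related resources:

# list the CRDs installed by the operator
$ kubectl get crd | grep alluxio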

Deploy Alluxio

Create the alluxio-operator/alluxio-cluster.yaml file to deploy the Alluxio cluster. The file below shows the minimal configuration.

Please see Resource Prerequisites and Compatibility for resource planning recommendations.

The operator has already set the recommended configuration by default, which can start an Alluxio cluster. If you need to modify the configuration, you can edit the .spec.properties field in the alluxio-cluster.yaml file. The section for common use cases describes some general scenarios to modify these properties.

The properties specified under the .spec.properties field will be appended to the alluxio-site.properties configuration file, and the Alluxio processes will read that file. You can find your configuration in the Alluxio coordinator and worker pods by looking at /opt/alluxio/conf/alluxio-site.properties.
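
For example, once the cluster is running, you can print the generated configuration file inside the coordinator pod (the pod name matches the cluster deployed below):

# view the generated Alluxio configuration in the coordinator pod
$ kubectl exec -it alluxio-coordinator-0 -- cat /opt/alluxio/conf/alluxio-site.properties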

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <PRIVATE_REGISTRY>/alluxio-enterprise
  imageTag: {{site.ALLUXIO_VERSION_STRING}}
  properties:

  worker:
    count: 2
    pagestore:
      size: 100Gi
      reservedSize: 10Gi

Note that by default the page store is located at the host path /mnt/alluxio/pagestore. The size of the cache can be set by the .spec.worker.pagestore.size property. The size of the reserved space can be set by the .spec.worker.pagestore.reservedSize property, and is recommended to be 5% - 10% of the cache size. Please adjust the size of the cache and the reserved space according to the actual capacity of the storage device of the host.

The minimal configuration provided above can help you quickly deploy the Alluxio cluster for testing and validation. In a production environment, we recommend deploying the Alluxio cluster using labels and selectors. Labeling the nodes will ensure any persisted information will be available in case the pods are restarted. Examples of persisted information include job metadata on the coordinator and cached data on the workers.

First, select a group of Kubernetes nodes to run the Alluxio cluster, and label the nodes accordingly:

kubectl label nodes <node-name> alluxio-role=coordinator
kubectl label nodes <node-name> alluxio-role=worker
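
You can verify that the labels were applied, for example:

# list the nodes labeled for each Alluxio role
$ kubectl get nodes -l alluxio-role=coordinator
$ kubectl get nodes -l alluxio-role=worker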

Then, specify the nodeSelector in the alluxio-cluster.yaml file. Below is an example:

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <PRIVATE_REGISTRY>/alluxio-enterprise
  imageTag: {{site.ALLUXIO_VERSION_STRING}}
  properties:

  coordinator:
    nodeSelector:
      alluxio-role: coordinator
    
  worker:
    nodeSelector:
      alluxio-role: worker
    count: 2
    pagestore:
      size: 100Gi

We provide another option for the coordinator to persist the metastore by using a PVC. Below is an example:

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <PRIVATE_REGISTRY>/alluxio-enterprise
  imageTag: {{site.ALLUXIO_VERSION_STRING}}
  properties:

  coordinator:
    metastore:
      type: persistentVolumeClaim
      storageClass: "gp2"
      size: 4Gi
    
  worker:
    nodeSelector:
      alluxio-role: worker
    count: 2
    pagestore:
      size: 100Gi

  • If your training data is stored in S3, OSS, or other object storage, and the training task can access the data via s3:// or oss://, you can accelerate training by creating UFS resources to mount the under file system into Alluxio after starting the Alluxio cluster.

  • If your training task accesses the training data through PVC or NAS, you need to mount the training data's PVC or NAS to the Alluxio pods when creating the Alluxio cluster. Please refer to Mount storage to Alluxio for details on mounting PVC or NAS/hostPath, and modify the alluxio-operator/alluxio-cluster.yaml accordingly.

Move to the alluxio-operator directory and execute the following command to deploy the Alluxio cluster:

$ cd alluxio-operator
$ kubectl create -f alluxio-cluster.yaml
alluxiocluster.k8s-operator.alluxio.com/alluxio created

# the cluster will be starting
$ kubectl get pod
NAME                                  READY   STATUS              RESTARTS   AGE
alluxio-coordinator-0                 0/1     Init:0/1            0          7s
alluxio-etcd-0                        0/1     ContainerCreating   0          7s
alluxio-etcd-1                        0/1     ContainerCreating   0          7s
alluxio-etcd-2                        0/1     ContainerCreating   0          7s
alluxio-grafana-847fd46f4b-84wgg      0/1     Running             0          7s
alluxio-prometheus-778547fd75-rh6r6   1/1     Running             0          7s
alluxio-worker-76c846bfb6-2jkmr       0/1     Init:0/2            0          7s
alluxio-worker-76c846bfb6-nqldm       0/1     Init:0/2            0          7s

# check the status of the cluster
$ kubectl get alluxiocluster
NAME      CLUSTERPHASE   AGE
alluxio   Ready          2m18s

# and check the running pods after the cluster is ready
$ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
alluxio-coordinator-0                 1/1     Running   0          2m3s
alluxio-etcd-0                        1/1     Running   0          2m3s
alluxio-etcd-1                        1/1     Running   0          2m3s
alluxio-etcd-2                        1/1     Running   0          2m3s
alluxio-grafana-7b9477d66-mmcc5       1/1     Running   0          2m3s
alluxio-prometheus-78dbb89994-xxr4c   1/1     Running   0          2m3s
alluxio-worker-85fd45db46-c7n9p       1/1     Running   0          2m3s
alluxio-worker-85fd45db46-sqv2c       1/1     Running   0          2m3s

In Alluxio 3.x, the coordinator is a stateless control component that serves jobs like distributed load and acts as the API gateway for the operator.

If some components in the cluster do not reach the Running state, you can use kubectl describe pod to view detailed information and identify the issue. For specific issues encountered during deployment, refer to the FAQ section.

The Alluxio cluster also includes etcd and monitoring components. If the images cannot be pulled from the public image registry, causing etcd and monitoring to fail to start, please refer to Configuring Alluxio cluster image.
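
For a higher-level view than individual pods, you can also inspect the cluster resource itself, for example:

# show detailed status and events for the Alluxio cluster resource
$ kubectl describe alluxiocluster alluxio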

Mount storage to Alluxio

Alluxio supports integration with various underlying storage systems, including S3, HDFS, OSS, COS, and TOS.

With the operator, you can mount underlying storage by creating UnderFileSystem resources. An UnderFileSystem corresponds to a mount point for Alluxio. Regarding the Alluxio and the underlying storage namespace, please refer to Alluxio Namespace and Under File System Namespaces.

Below, we provide several examples of commonly used underlying storage mounts.

S3

Create the alluxio-operator/ufs.yaml file to specify the UFS configuration. The following example shows how to mount an S3 bucket to Alluxio:

apiVersion: k8s-operator.alluxio.com/v1
kind: UnderFileSystem
metadata:
  name: alluxio-s3
spec:
  alluxioCluster: alluxio
  path: s3://<S3_BUCKET>/<S3_DIRECTORY>
  mountPath: /s3
  mountOptions:
    s3a.accessKeyId: <S3_ACCESS_KEY_ID>
    s3a.secretKey: <S3_SECRET_KEY>
    alluxio.underfs.s3.region: <S3_REGION>

Find more details about mounting S3 to Alluxio in Amazon AWS S3.

OSS

Create the alluxio-operator/ufs.yaml file to specify the UFS configuration. The following example shows how to mount an OSS bucket to Alluxio:

apiVersion: k8s-operator.alluxio.com/v1
kind: UnderFileSystem
metadata:
  name: alluxio-oss
spec:
  alluxioCluster: alluxio
  path: oss://<OSS_BUCKET>/<OSS_DIRECTORY>
  mountPath: /oss
  mountOptions:
    fs.oss.accessKeyId: <OSS_ACCESS_KEY>
    fs.oss.accessKeySecret: <OSS_ACCESS_KEY_SECRET>
    fs.oss.endpoint: <OSS_ENDPOINT>

Find more details about mounting OSS to Alluxio in Aliyun Object Storage Service.

COS

Create the alluxio-operator/ufs.yaml file to specify the UFS configuration. The following example shows how to mount a COS bucket to Alluxio:

apiVersion: k8s-operator.alluxio.com/v1
kind: UnderFileSystem
metadata:
  name: alluxio-cos
spec:
  alluxioCluster: alluxio
  path: cos://<COS_BUCKET>/<COS_DIRECTORY>
  mountPath: /cos
  mountOptions:
    fs.cos.access.key: <COS_ACCESS_KEY>
    fs.cos.secret.key: <COS_ACCESS_KEY_SECRET>
    fs.cos.region: <COS_REGION>
    fs.cos.app.id: <COS_APP_ID>

Note: The full name of a COS bucket is <COS_BUCKET>-<COS_APP_ID>. The value of the path should only include the <COS_BUCKET> portion with the <COS_APP_ID> part omitted, resulting in cos://<COS_BUCKET>/<COS_DIRECTORY>. Also, make sure to set the fs.cos.app.id to <COS_APP_ID>.

Find more details about mounting COS to Alluxio in Tencent Cloud Object Storage.

NAS

To ensure that the Alluxio pods have access to NAS storage, you need to first mount the NAS to a path on the node. Alluxio operator supports mounting node local paths to Alluxio pods. Before creating the Alluxio cluster, you need to add the mount paths in alluxio-operator/alluxio-cluster.yaml. When the Alluxio cluster is started, these paths will be mounted into the Alluxio components.

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <PRIVATE_REGISTRY>/alluxio-enterprise
  imageTag: {{site.ALLUXIO_VERSION_STRING}}

  hostPaths:
    coordinator:
      /mnt/nas: /ufs/data
    worker:
      /mnt/nas: /ufs/data
    fuse:
      /mnt/nas: /ufs/data

  properties:

  worker:
    count: 2
    pagestore:
      size: 100Gi

  • The key is the host path on the node, and the value is the mounted path in the container

  • If using a NAS as the UFS, the same path needs to be mounted to the coordinator, workers and FUSE processes so that the FUSE can fall back if any error occurs
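
As noted above, the NAS must first be mounted to a path on every node that will run Alluxio pods. As a reference, that might look like the following; the server address and export path are placeholders and depend on your NAS:

# mount the NAS export to /mnt/nas on each Kubernetes node
$ sudo mkdir -p /mnt/nas
$ sudo mount -t nfs <NAS_SERVER>:<NAS_EXPORT_PATH> /mnt/nas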

Create the alluxio-operator/ufs.yaml file to specify the UFS configuration. The following example shows how to mount a NAS or host path to Alluxio:

apiVersion: k8s-operator.alluxio.com/v1
kind: UnderFileSystem
metadata:
  name: alluxio-nfs
spec:
  alluxioCluster: alluxio
  path: file:///ufs/data
  mountPath: /nfs

PVC

Originally, the training task accessed data by mounting a PVC to a specific path inside its container, and the task read data via this mounted path.

With Alluxio, this process changes. When deploying the Alluxio cluster, mount the PVC holding the training data to a path inside the Alluxio component containers, then mount that path into the Alluxio namespace as a UFS. In this way, the training data's PVC is mounted into the Alluxio cluster, and the data can be accessed and processed through Alluxio's interface.

Add the mount path in alluxio-cluster.yaml before creating the Alluxio cluster, mounting the existing training data PVC to a path in the Alluxio components:

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <PRIVATE_REGISTRY>/alluxio-enterprise
  imageTag: {{site.ALLUXIO_VERSION_STRING}}

  pvcMounts:
    coordinator:
      training-data-ufs-pvc: /ufs/data
    worker:
      training-data-ufs-pvc: /ufs/data
    fuse:
      training-data-ufs-pvc: /ufs/data

  properties:

  worker:
    count: 2
    pagestore:
      size: 100Gi

  • The key is the name of the PVC, and the value is the mounted path in the container

  • If using a PVC as the UFS, the same path needs to be mounted to the coordinator, workers and FUSE processes so that the FUSE can fall back if any error occurs
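
Before creating the cluster, you can confirm that the training data PVC exists and is bound in the same namespace as the Alluxio cluster, for example:

# check the status of the training data PVC
$ kubectl get pvc training-data-ufs-pvc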

Create the alluxio-operator/ufs.yaml file to specify the UFS configuration. The following example shows how to mount a PVC to Alluxio:

apiVersion: k8s-operator.alluxio.com/v1
kind: UnderFileSystem
metadata:
  name: alluxio-pvc
spec:
  alluxioCluster: alluxio
  path: file:///ufs/data
  mountPath: /pvc

Executing the mount

First, ensure that the Alluxio cluster is up and running with a Ready status. (A cluster in the WaitingForReady status can also mount a UFS.)

# check the status of the cluster
$ kubectl get alluxiocluster
NAME      CLUSTERPHASE   AGE
alluxio   Ready          2m18s

Execute the following command to create the UnderFileSystem resource and mount that to Alluxio namespace:

$ cd alluxio-operator
$ kubectl create -f ufs.yaml
underfilesystem.k8s-operator.alluxio.com/alluxio-s3 created

# verify the status of the storage
$ kubectl get ufs
NAME         PHASE   AGE
alluxio-s3   Ready   46s

# also check the mount table via Alluxio command line
$ kubectl exec -it alluxio-coordinator-0 -- alluxio mount list 2>/dev/null
Listing all mount points
s3://my-bucket/path/to/mount  on  /s3/ properties={s3a.secretKey=xxx, alluxio.underfs.s3.region=us-east-1, s3a.accessKeyId=xxx}
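
After the mount is ready, you can optionally list the mounted path through the Alluxio CLI to confirm the data is visible; this assumes the alluxio fs ls subcommand is available in your distribution:

# list files under the mounted path in the Alluxio namespace
$ kubectl exec -it alluxio-coordinator-0 -- alluxio fs ls /s3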

Monitoring

The Alluxio cluster enables monitoring by default. You can view various Alluxio metrics visually through Grafana. Please refer to the Monitoring and Metrics section on Kubernetes Operator.

Data Access Acceleration

In the steps above, you deployed the Alluxio cluster and mounted the under file system to Alluxio. Training tasks that read data through Alluxio can achieve higher training speed and GPU utilization. Alluxio provides three main ways for applications to access data:

Common use cases

Change the resource limitations

For each component, such as the worker, coordinator, and FUSE, you can change the resource allocation with the following configuration:

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
spec:
  worker:
    count: 2
    resources:
      limits:
        cpu: "12"
        memory: "36Gi"
      requests:
        cpu: "1"
        memory: "32Gi"
    jvmOptions:
      - "-Xmx22g"
      - "-Xms22g"
      - "-XX:MaxDirectMemorySize=10g"
  coordinator:
    resources:
      limits:
        cpu: "12"
        memory: "36Gi"
      requests:
        cpu: "1"
        memory: "32Gi"
    jvmOptions:
      - "-Xmx4g"
      - "-Xms1g"
  • The container will never be able to access the resource over the limits, and the requests are used during scheduling. For more information, please refer to Resource Management for Pods and Containers

  • The memory limit should be slightly higher than the sum of the heap size (-Xmx) and the direct memory size (-XX:MaxDirectMemorySize) to avoid out-of-memory problems.

Use PVC for page store

The page store here refers to the cache that Alluxio uses.

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
spec:
  worker:
    pagestore:
      type: persistentVolumeClaim
      storageClass: ""
      size: 100Gi
      reservedSize: 10Gi

  • The PVC will be created by the operator

  • The storageClass defaults to standard, but can be set to an empty string for static binding

  • The size property specifies the size of the cache space. The reservedSize property specifies the amount of additional space used as an internal buffer for temporary data. The total size of the underlying storage will be the sum of the size of the cache and the reserved size. We recommend allocating a reserved size that's 5% - 10% of the size of the cache.

Mount customized config maps

A custom config map can be used to provide configuration files on pods. Although it can be used for other purposes such as environment variables, the following example will focus on files.

Create a new config map from a local file:

kubectl create configmap my-configmap --from-file=/path/to/my-configmap

Declare the config map with its mount point.

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
spec:
  configMaps:
    worker:
      my-configmap: /path/to/mount
    coordinator:
      my-configmap: /path/to/mount

  • The key is the name of the ConfigMap, and the value is the mounted path in the container

  • The /opt/alluxio/conf directory is already mounted by default. This means other files cannot be mounted directly within the conf/ directory; custom config maps need to mount to other paths.

    • Using the cache filter json file as an example, mount it to /opt/alluxio/conf/cachefilter/cache_filter.json and set this path as the value of alluxio.user.client.cache.filter.config.file for Alluxio to read it.
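
For example, a config map holding the cache filter file could be created like this; the config map name and local file path are hypothetical:

# create a config map containing the cache filter file
$ kubectl create configmap cache-filter --from-file=cache_filter.json=/path/to/cache_filter.json
# then mount it via .spec.configMaps as shown above and point
# alluxio.user.client.cache.filter.config.file at the mounted path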

Add a file onto pods as a secret

This mechanism can be used to provide credentials files on pods.

Create a new secret from a local file:

kubectl create secret generic my-file --from-file=/path/to/my-file

Specify which secrets to load and the file path on the pods.

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
spec:
  secrets:
    worker:
      my-file: /home/alluxio/my-file
    coordinator:
      my-file: /home/alluxio/my-file

Use the root user

The FUSE pod will always use the root user. The other processes use the user with uid 1000 by default. In the container, the user is named alluxio. To change it to the root user, use this configuration:

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
spec:
  user: 0
  group: 0
  fsGroup: 0

  • Sometimes it is enough to specify only .spec.fsGroup = 0, when the files can be accessed by the root group

  • The ownership of the mounted host path, such as the page store path and log path, will be transferred to root if changing to the root user.

Use external ETCD

If you have an external ETCD cluster, you can specify the endpoints for Alluxio to use.

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
spec:
  etcd:
    enabled: false
  properties:
    alluxio.etcd.endpoints: http://external-etcd:2379

When client-to-server transport security with HTTPS is enabled, certificates are used for SSL/TLS connections to etcd. For this, have a signed key pair (client.crt, pkcs8_key_encrypted.pem) and a CA file (ca.crt) ready.

A PKCS8 key is required. You can use the following command to convert the key:

$ openssl pkcs8 -topk8 -v2 aes256 -in server.key -out pkcs8_key_encrypted.pem

Note: If you use openssl pkcs8 -topk8 -nocrypt -in server.key -out pkcs8_key.pem to generate an unencrypted key file, you do not need to set alluxio.etcd.tls.client.key.password in alluxio-site.properties.

Create secrets in Kubernetes with the created ca.crt, client.crt and pkcs8_key_encrypted.pem. For example,

$ kubectl create secret generic etcd-certs --from-file=/path/to/ca.crt  --from-file=/path/to/client.crt --from-file=/path/to/pkcs8_key_encrypted.pem

Configure the etcd properties in the alluxio-cluster.yaml file and specify the secrets for the coordinator, worker and fuse:

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
spec:
  etcd:
    enabled: false
  properties:
    alluxio.etcd.endpoints: https://external-etcd:2379
    alluxio.etcd.tls.enabled: "true"
    alluxio.etcd.tls.ca.cert: /secrets/etcd-certs/ca.crt
    alluxio.etcd.tls.client.cert: /secrets/etcd-certs/client.crt
    alluxio.etcd.tls.client.key: /secrets/etcd-certs/pkcs8_key_encrypted.pem
    alluxio.etcd.tls.client.key.password: <your key password>
  secrets:
    coordinator:
      etcd-certs: /secrets/etcd-certs
    worker:
      etcd-certs: /secrets/etcd-certs
    fuse:
      etcd-certs: /secrets/etcd-certs

Deploy workers on nodes with different disk specs

The operator supports heterogeneous configurations for workers, specifically to configure different disk specs. Generally, inconsistencies within the worker configurations may lead to serious unexpected errors, so we do not support scenarios other than the following use case.

  1. Classify the nodes by their disk specs. For example: 10 nodes with one 1TB disk and 12 nodes with two 800GB disks.

  2. Label the nodes to uniquely identify different groups of workers, where each group shares the same configuration.

# label nodes with one disk
kubectl label nodes <node name> apps.alluxio.com/disks=1
# label nodes with two disks
kubectl label nodes <node name> apps.alluxio.com/disks=2

  3. Use .workerGroups to list the worker configurations, defining the labels to filter on with nodeSelector and its corresponding configuration.

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
spec:
  # you can still specify common configurations with .worker
  worker:
    # the resources and the jvmOptions will affect all worker groups
    resources:
      limits:
        memory: 40Gi
      requests:
        memory: 36Gi
    jvmOptions: ["-Xmx20g", "-Xms20g", "-XX:MaxDirectMemorySize=16g"]
  # configuration here will override the one in worker
  workerGroups:
  - worker:
      count: 10
      nodeSelector:
        apps.alluxio.com/disks: 1
      pagestore:
        hostPath: /mnt/disk1/alluxio/pagestore
        size: 1Ti
  - worker:
      count: 12
      nodeSelector:
        apps.alluxio.com/disks: 2
      pagestore:
        hostPath: /mnt/disk1/alluxio/pagestore,/mnt/disk2/alluxio/pagestore
        size: 800Gi,800Gi

Dynamically Update Alluxio Configuration in a Running Cluster

  1. Get configmap

$ kubectl get configmap
NAME                      DATA   AGE
alluxio-alluxio-conf      4      7m48s

  2. Edit configmap to update Alluxio configuration

$ kubectl edit configmap alluxio-alluxio-conf

There should be 4 files inside: alluxio-env.sh, alluxio-site.properties, log4j2.xml, and metrics.properties. Edit as you need, then save the configmap.

configmap/alluxio-alluxio-conf edited

  3. Restart Alluxio components as needed, assuming the cluster name is alluxio as specified in alluxio-cluster.yaml:

  • coordinator: kubectl rollout restart statefulset alluxio-coordinator

  • worker: kubectl rollout restart deployment alluxio-worker

  • daemonset fuse (fuse.type = daemonSet): kubectl rollout restart daemonset alluxio-fuse

  • csi fuse (fuse.type = csi): CSI FUSE pods do not support rollout restart. You can wait for the user's pod and the current CSI FUSE pod to exit, and the new CSI FUSE pod will use the latest configuration. Alternatively, you can manually delete the CSI FUSE pod with kubectl delete pod alluxio-fuse-xxx, and the restarted pod will pick up the latest configuration.
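
After triggering a restart, you can watch the rollout progress, assuming the default resource names shown above:

# wait for the coordinator and worker rollouts to complete
$ kubectl rollout status statefulset alluxio-coordinator
$ kubectl rollout status deployment alluxio-worker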

FAQ

etcd pod stuck in pending status

For example, if three etcd pods remain in the Pending state, you can use kubectl describe pod to view detailed information:

# Check the status of the pods
kubectl get pod

NAME                                  READY   STATUS     RESTARTS   AGE
alluxio-coordinator-0                 0/1     Init:1/2   0          73s
alluxio-etcd-0                        0/1     Pending    0          73s
alluxio-etcd-1                        0/1     Pending    0          73s
alluxio-etcd-2                        0/1     Pending    0          73s
alluxio-grafana-79db8c7dd9-lsq2l      1/1     Running    0          73s
alluxio-prometheus-7c6cbc4b4c-9nk25   1/1     Running    0          73s
alluxio-worker-8c79d5fd4-2c994        0/1     Init:1/2   0          73s
alluxio-worker-8c79d5fd4-jrchj        0/1     Init:1/2   0          73s

# Check detailed information about the etcd pod
kubectl describe pod alluxio-etcd-0

Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m57s  default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling., .

# Check the PVC Status in the Cluster
# If you find that the etcd PVCs are stuck in the Pending state (note that the alluxio-fuse being in Pending state is normal), you can investigate further.
kubectl get pvc

NAME                  STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS           AGE
alluxio-fuse          Pending                                      default-alluxio-fuse   5m31s
data-alluxio-etcd-0   Pending                                                             3h41m
data-alluxio-etcd-1   Pending                                                             3h41m
data-alluxio-etcd-2   Pending                                                             3h41m

# Check the PVC description
kubectl describe pvc data-alluxio-etcd-0

Events:
  Type    Reason         Age                      From                         Message
  ----    ------         ----                     ----                         -------
  Normal  FailedBinding  4m16s (x889 over 3h44m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

Based on the error message, the etcd pods are stuck in the Pending state because no storage class is set. You can resolve this issue by specifying the storage class for etcd in the alluxio-operator/alluxio-cluster.yaml file:

  etcd:
    persistence:
      storageClass: <STORAGE_CLASS>
      size: 
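
To find a storage class available in your cluster, you can list them; the output depends on your environment:

# list the storage classes available in the cluster
$ kubectl get storageclass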

First, delete the Alluxio cluster and the etcd PVC, then recreate the Alluxio cluster:

# Delete the Alluxio cluster
$ kubectl delete -f alluxio-operator/alluxio-cluster.yaml

# Delete the etcd PVC
$ kubectl delete pvc data-alluxio-etcd-0
$ kubectl delete pvc data-alluxio-etcd-1
$ kubectl delete pvc data-alluxio-etcd-2

# Recreate the Alluxio cluster
$ kubectl create -f alluxio-operator/alluxio-cluster.yaml

Another issue is that the etcd PVC specifies a storage class, but both the etcd pod and the PVC remain in the Pending state. For example, as shown in the PVC details below, the storage class specified for the etcd PVC does not support dynamic provisioning, and the storage volume must be manually created by the cluster administrator.

# Check the PVC description
kubectl describe pvc data-alluxio-etcd-0

Events:
  Type    Reason                Age               From                         Message
  ----    ------                ----              ----                         -------
  Normal  WaitForFirstConsumer  25s               persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  ExternalProvisioning  8s (x3 over 25s)  persistentvolume-controller  Waiting for a volume to be created either by the external provisioner 'none' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

For similar issues where etcd pods remain in the Pending state, you can use the above method for troubleshooting.

alluxio-fuse PVC in pending status

After creating the cluster, you might notice that alluxio-fuse is in the Pending status. This is normal. The PVC will automatically bind to a PV and its status will change to Bound when it is used by a client pod.

Unable to access public image registry

Configuring alluxio-operator image

Deploying the Alluxio operator requires pulling dependent images from an accessible image registry. If your network environment cannot access the public image registry, you will encounter a timeout error when pulling the images:

# Check if the operator is running properly
$ kubectl get pod -n alluxio-operator
NAME                                              READY   STATUS              RESTARTS   AGE
alluxio-cluster-controller-65b59f65b4-5d667       1/1     Running             0          22s
alluxio-collectinfo-controller-667b746fd6-hfzqk   1/1     Running             0          22s
alluxio-csi-controller-c85f8f759-sqc56            0/2     ContainerCreating   0          22s
alluxio-csi-nodeplugin-5pgmg                      0/2     ContainerCreating   0          22s
alluxio-csi-nodeplugin-fpkcq                      0/2     ContainerCreating   0          22s
alluxio-csi-nodeplugin-j9wll                      0/2     ContainerCreating   0          22s
alluxio-ufs-controller-5f69bbb878-7km58           1/1     Running             0          22s

You may notice that the cluster controller, ufs controller and collectinfo controller have started successfully, but the csi controller and csi nodeplugin remain in the ContainerCreating state. This is due to a timeout while pulling the dependent images. By using kubectl describe pod to view detailed information, you will see error messages similar to the following:

$ kubectl -n alluxio-operator describe pod -l app.kubernetes.io/component=csi-controller

Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       10m                    default-scheduler  Successfully assigned alluxio-operator/alluxio-csi-controller-c85f8f759-sqc56 to cn-beijing.10.0.0.252
  Normal   AllocIPSucceed  10m                    terway-daemon      Alloc IP 10.0.0.27/24 took 28.443992ms
  Normal   Pulling         10m                    kubelet            Pulling image "registry.xxx.com/alluxio/operator:2.0.0"
  Normal   Pulled          10m                    kubelet            Successfully pulled image "registry.xxx.com/alluxio/operator:2.0.0" in 5.55s (5.55s including waiting)
  Normal   Created         10m                    kubelet            Created container csi-controller
  Normal   Started         10m                    kubelet            Started container csi-controller
  Warning  Failed          8m20s (x2 over 10m)    kubelet            Failed to pull image "registry.k8s.io/sig-storage/csi-provisioner:v2.0.5": failed to pull and unpack image "registry.k8s.io/sig-storage/csi-provisioner:v2.0.5": failed to resolve reference "registry.k8s.io/sig-storage/csi-provisioner:v2.0.5": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/sig-storage/csi-provisioner/manifests/v2.0.5": dial tcp 142.251.8.82:443: i/o timeout
  Warning  Failed          8m20s (x3 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed          7m40s (x5 over 10m)    kubelet            Error: ImagePullBackOff
  Warning  Failed          6m56s (x2 over 9m19s)  kubelet            Failed to pull image "registry.k8s.io/sig-storage/csi-provisioner:v2.0.5": rpc error: code = DeadlineExceeded desc = failed to pull and unpack image "registry.k8s.io/sig-storage/csi-provisioner:v2.0.5": failed to resolve reference "registry.k8s.io/sig-storage/csi-provisioner:v2.0.5": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/sig-storage/csi-provisioner/manifests/v2.0.5": dial tcp 64.233.187.82:443: i/o timeout
  Normal   Pulling         5m29s (x5 over 10m)    kubelet            Pulling image "registry.k8s.io/sig-storage/csi-provisioner:v2.0.5"
  Normal   BackOff         30s (x28 over 10m)     kubelet            Back-off pulling image "registry.k8s.io/sig-storage/csi-provisioner:v2.0.5"

You can download the dependent images locally, upload them to your private image registry, and then modify the image addresses in the alluxio-operator.yaml file before redeploying the operator.

Component        Image Name                                               Version               Purpose
operator CSI     registry.k8s.io/sig-storage/csi-node-driver-registrar    v2.0.0                csi driver registrar dependency
operator CSI     registry.k8s.io/sig-storage/csi-provisioner              v2.0.5                csi provisioner dependency
cluster ETCD     docker.io/bitnami/etcd                                   3.5.9-debian-11-r24   etcd dependency
cluster ETCD     docker.io/bitnami/os-shell                               11-debian-11-r2       os-shell dependency
cluster monitor  grafana/grafana                                          10.4.5                Monitoring dashboard
cluster monitor  prom/prometheus                                          v2.52.0               Metrics collection

The commands to pull the Docker images and upload them to your private image registry are as follows:

# Pull the Docker images
$ docker pull registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.0.0
$ docker pull registry.k8s.io/sig-storage/csi-provisioner:v2.0.5
$ docker pull docker.io/bitnami/etcd:3.5.9-debian-11-r24
$ docker pull docker.io/bitnami/os-shell:11-debian-11-r2
$ docker pull grafana/grafana:10.4.5
$ docker pull prom/prometheus:v2.52.0

# Tag the images with your private registry
$ docker tag registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.0.0 <PRIVATE_REGISTRY>/csi-node-driver-registrar:v2.0.0
$ docker tag registry.k8s.io/sig-storage/csi-provisioner:v2.0.5 <PRIVATE_REGISTRY>/csi-provisioner:v2.0.5
$ docker tag docker.io/bitnami/etcd:3.5.9-debian-11-r24 <PRIVATE_REGISTRY>/etcd:3.5.9-debian-11-r24
$ docker tag docker.io/bitnami/os-shell:11-debian-11-r2 <PRIVATE_REGISTRY>/os-shell:11-debian-11-r2
$ docker tag grafana/grafana:10.4.5 <PRIVATE_REGISTRY>/grafana:10.4.5
$ docker tag prom/prometheus:v2.52.0 <PRIVATE_REGISTRY>/prometheus:v2.52.0

# Push the images to your private registry
$ docker push <PRIVATE_REGISTRY>/csi-node-driver-registrar:v2.0.0
$ docker push <PRIVATE_REGISTRY>/csi-provisioner:v2.0.5
$ docker push <PRIVATE_REGISTRY>/etcd:3.5.9-debian-11-r24
$ docker push <PRIVATE_REGISTRY>/os-shell:11-debian-11-r2
$ docker push <PRIVATE_REGISTRY>/grafana:10.4.5
$ docker push <PRIVATE_REGISTRY>/prometheus:v2.52.0

Update the image addresses in the alluxio-operator/alluxio-operator.yaml file, adding the provisioner and driverRegistrar image addresses:

global:
  image: <PRIVATE_REGISTRY>/alluxio-operator
  imageTag: {{site.ALLUXIO_OPERATOR_VERSION_STRING}}

alluxio-csi:
  controllerPlugin: 
    provisioner: 
      image: <PRIVATE_REGISTRY>/csi-provisioner:v2.0.5
  nodePlugin: 
    driverRegistrar: 
        image: <PRIVATE_REGISTRY>/csi-node-driver-registrar:v2.0.0

Move to the alluxio-operator directory and execute the following command to deploy the operator:

$ cd alluxio-operator
# the last parameter is the directory to the helm chart, "." means the current directory
$ helm install operator -f alluxio-operator.yaml .
NAME: operator
LAST DEPLOYED: Wed May 15 17:32:34 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

# verify if the operator is running as expected
$ kubectl get pod -n alluxio-operator
NAME                                              READY   STATUS    RESTARTS   AGE
alluxio-cluster-controller-5647cc664d-lrx84       1/1     Running   0          14s
alluxio-collectinfo-controller-667b746fd6-hfzqk   1/1     Running   0          14s
alluxio-csi-controller-7bd66df6cf-7kh6k           2/2     Running   0          14s
alluxio-csi-nodeplugin-9cc9v                      2/2     Running   0          14s
alluxio-csi-nodeplugin-fgs5z                      2/2     Running   0          14s
alluxio-csi-nodeplugin-v22q6                      2/2     Running   0          14s
alluxio-ufs-controller-5f6d7c4d66-drjgm           1/1     Running   0          14s

Configuring Alluxio cluster image

Starting the Alluxio cluster also involves etcd and monitoring components. If you cannot access the public image registry, you need to replace the image addresses for etcd and monitoring components with those from your private image registry. Modify the image addresses in the alluxio-operator/alluxio-cluster.yaml file accordingly.

apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <PRIVATE_REGISTRY>/alluxio-enterprise
  imageTag: {{site.ALLUXIO_VERSION_STRING}}
  properties:
  worker:
    count: 2
    pagestore:
      size: 100Gi
  etcd:
    image:
      registry: <PRIVATE_REGISTRY>
      repository: <PRIVATE_REPOSITORY>/etcd
      tag: 3.5.9-debian-11-r24
    volumePermissions:
      image:
        registry: <PRIVATE_REGISTRY>
        repository: <PRIVATE_REPOSITORY>/os-shell
        tag: 11-debian-11-r2
  prometheus:
    image: <PRIVATE_REGISTRY>/prometheus
    imageTag: v2.52.0
  grafana:
    image: <PRIVATE_REGISTRY>/grafana
    imageTag: 10.4.5
