
Cluster Administration

Last updated 1 month ago

This document describes administrative operations on a running Alluxio cluster on Kubernetes, such as upgrading to a new version and adding new workers.

Upgrading to a newer Alluxio version

Upgrade the Operator

  1. Upload the new docker images corresponding to the new Alluxio operator version to your image registry and unpack the helm chart of the operator. Refer to the installation doc for details.

  2. Run the following command to apply the new changes to the cluster.

# uninstall the operator. the operator is independent and the status of the operator won't affect the existing Alluxio cluster
$ helm uninstall operator
release "operator" uninstalled

# check if all the resources are removed. the namespace will be the last resource to remove
$ kubectl get ns alluxio-operator
Error from server (NotFound): namespaces "alluxio-operator" not found

# run the command in the new helm chart directory to upgrade the CRDs first
$ kubectl apply -f alluxio-operator/crds 2>/dev/null
customresourcedefinition.apiextensions.k8s.io/alluxioclusters.k8s-operator.alluxio.com configured
customresourcedefinition.apiextensions.k8s.io/underfilesystems.k8s-operator.alluxio.com configured

# use the same operator-config.yaml with only the tag of the image changed to restart the operator
$ helm install operator -f operator-config.yaml alluxio-operator
NAME: operator
LAST DEPLOYED: Thu Jun 27 15:47:44 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Upgrade the Alluxio cluster

Before starting the operation, note the following:

  • When the upgrade operation starts, the coordinator, workers, and DaemonSet FUSE pods perform a rolling upgrade to the new image. Existing CSI FUSE pods are not restarted or upgraded; only newly created pods use the new image.

  • While the cluster is being upgraded, the cache hit rate may drop slightly, but it will fully recover once the cluster is running normally again.

Follow these steps to upgrade the cluster:

  1. Update the imageTag fields in alluxio-cluster.yaml to reflect the new Alluxio version. In the following example the new imageTag will be AI-3.5-10.2.0.

  2. Run the following command to apply the new changes to the cluster.
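The step 1 edit touches a single field in alluxio-cluster.yaml. A minimal sketch of the relevant part of the spec (the apiVersion and image repository shown here are illustrative; keep the rest of your existing spec unchanged):

```yaml
apiVersion: k8s-operator.alluxio.com/v1
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <your-registry>/alluxio-enterprise
  imageTag: AI-3.5-10.2.0   # changed from the previous version tag
```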

# apply the changes to Kubernetes
$ kubectl apply -f alluxio-cluster.yaml
alluxiocluster.k8s-operator.alluxio.com/alluxio configured

# verify the upgrade. you can see the new pods spawning
$ kubectl get pod
NAME                                  READY   STATUS     RESTARTS   AGE
alluxio-coordinator-0                 0/1     Init:0/2   0          7s
alluxio-etcd-0                        1/1     Running    0          10m
alluxio-grafana-b89bf9dbb-77pb6       1/1     Running    0          10m
alluxio-prometheus-59b7b8bd64-b95jh   1/1     Running    0          10m
alluxio-worker-58999f8ddd-cd6r2       0/1     Init:0/2   0          7s
alluxio-worker-5d6786f5bf-cxv5j       1/1     Running    0          10m

# check the status of the cluster
$ kubectl get alluxiocluster
NAME      CLUSTERPHASE   AGE
alluxio   Updating       10m

# wait until the cluster is ready again
$ kubectl get alluxiocluster
NAME      CLUSTERPHASE   AGE
alluxio   Ready          12m

# check the pods of the cluster. the ages of the alluxio pods have changed
$ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
alluxio-coordinator-0                 1/1     Running   0          93s
alluxio-etcd-0                        1/1     Running   0          12m
alluxio-grafana-b89bf9dbb-77pb6       1/1     Running   0          12m
alluxio-prometheus-59b7b8bd64-b95jh   1/1     Running   0          12m
alluxio-worker-58999f8ddd-cd6r2       1/1     Running   0          93s
alluxio-worker-58999f8ddd-rtftk       1/1     Running   0          33s

# double check the version string
$ kubectl exec -it alluxio-coordinator-0 -- alluxio info version 2>/dev/null
AI-3.5-10.2.0
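The transcript above polls `kubectl get alluxiocluster` by hand until the phase reads Ready. A script can do the same; the helper below (hypothetical, not part of Alluxio) extracts the CLUSTERPHASE column so a loop can wait for the cluster to come back:

```shell
# cluster_phase: print the CLUSTERPHASE column from the output of
# `kubectl get alluxiocluster` (a header line followed by one row)
cluster_phase() {
  printf '%s\n' "$1" | awk 'NR == 2 { print $2 }'
}

# usage against a live cluster:
#   until [ "$(cluster_phase "$(kubectl get alluxiocluster)")" = "Ready" ]; do
#     echo "waiting for the cluster to become Ready..."
#     sleep 10
#   done
```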

Scaling the size of the cluster

Scale Up the Workers

Before starting the operation, note the following:

  • While workers are being added, the cache hit rate may drop slightly as cached data is redistributed across workers, but it will fully recover once the cluster is running normally again.

Follow these steps to scale up the workers:

  1. Edit alluxio-cluster.yaml to increase the count under worker. In the following example we scale from 2 workers to 3 workers.

  2. Run the following command to apply the new changes to the cluster.
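The step 1 edit for scaling is similarly small. A minimal sketch of the relevant fragment of alluxio-cluster.yaml (field path assumed from the cluster spec; other fields omitted):

```yaml
spec:
  worker:
    count: 3   # increased from 2
```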

# apply the changes to Kubernetes
$ kubectl apply -f alluxio-cluster.yaml
alluxiocluster.k8s-operator.alluxio.com/alluxio configured

# verify the cluster is scaling out. you should see the new worker pod spawning
$ kubectl get pod
NAME                                  READY   STATUS            RESTARTS   AGE
alluxio-coordinator-0                 1/1     Running           0          4m51s
alluxio-etcd-0                        1/1     Running           0          15m
alluxio-grafana-b89bf9dbb-77pb6       1/1     Running           0          15m
alluxio-prometheus-59b7b8bd64-b95jh   1/1     Running           0          15m
alluxio-worker-58999f8ddd-cd6r2       1/1     Running           0          4m51s
alluxio-worker-58999f8ddd-rtftk       1/1     Running           0          3m51s
alluxio-worker-58999f8ddd-p6n59       0/1     PodInitializing   0          4s

# check if the new instances are ready
$ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
alluxio-coordinator-0                 1/1     Running   0          5m21s
alluxio-etcd-0                        1/1     Running   0          16m
alluxio-grafana-b89bf9dbb-77pb6       1/1     Running   0          16m
alluxio-prometheus-59b7b8bd64-b95jh   1/1     Running   0          16m
alluxio-worker-58999f8ddd-cd6r2       1/1     Running   0          5m21s
alluxio-worker-58999f8ddd-rtftk       1/1     Running   0          4m21s
alluxio-worker-58999f8ddd-p6n59       1/1     Running   0          34s
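To confirm the scale-up without counting rows by eye, a small helper (hypothetical, not part of Alluxio) can count the fully ready worker pods in the `kubectl get pod` output:

```shell
# worker_count: count worker pods that are fully ready
# (READY 1/1, STATUS Running) in the output of `kubectl get pod`
worker_count() {
  printf '%s\n' "$1" | \
    awk '$1 ~ /^alluxio-worker-/ && $2 == "1/1" && $3 == "Running" { n++ } END { print n+0 }'
}

# usage against a live cluster:
#   worker_count "$(kubectl get pod)"   # should print 3 after the scale-up
```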

As with the operator upgrade, upload the new docker images corresponding to the new Alluxio version to your image registry before applying the changes. Refer to the installation doc for details.