This page describes how to deploy Alluxio on Kubernetes and run FIO as validation.
Prerequisites
Kubernetes
A Kubernetes cluster running version 1.19 or later, with any required feature gates enabled.
Ensure the cluster's Kubernetes Network Policy allows connectivity between applications (Alluxio clients) and the Alluxio pods on the defined ports.
Helm 3, version 3.6.0 or later, installed on the machine used to deploy the charts (see the version check below).
An image registry for storing and managing container images.
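For example, the version prerequisites can be confirmed with standard kubectl and helm commands:
# check the Kubernetes server version (should report 1.19 or newer)
$ kubectl version
# check the helm client version (should report 3.6.0 or newer)
$ helm version --short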
Alluxio Operator
Permission to create CRDs (Custom Resource Definitions)
Permission to create the ServiceAccount, ClusterRole, and ClusterRoleBinding for the operator pod
Permission to create the namespace that the operator will run in
Create the following configuration files within the extracted directory of the alluxio operator helm chart.
Create the operator configuration in alluxio-operator/alluxio-operator.yaml
nameOverride: alluxio-operator
image: alluxio/operator # set this value to be an accessible registry containing this image
imageTag: 1.1.2
imagePullPolicy: Always
alluxio-csi: # disable CSI
  enabled: false
Create the dataset configuration in alluxio-operator/dataset.yaml
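A minimal sketch of the dataset resource is shown below. The API group matches the CRD names created by the operator, but the apiVersion and the spec layout are assumptions; use the dataset.yaml template shipped with the chart as the authoritative reference.
apiVersion: k8s-operator.alluxio.com/v1alpha1  # group from the operator CRDs; version is an assumption
kind: Dataset
metadata:
  name: null-dataset
spec:
  # The dataset spec (UFS path, credentials, etc.) is operator-version specific.
  # In this walkthrough the UFS is mounted manually later with `alluxio mount add`,
  # so a placeholder dataset without a UFS path is sufficient.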
Modify the image, imageTag, and dataset values in alluxio-operator/alluxio-cluster.yaml. Adjust the cpu, memory, and count values for the master, worker, etcd, and fuse configurations as needed (see the skeleton below), and use nodeSelector to control which nodes the pods are scheduled on.
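A skeleton of where those values live might look like the following. The field names and nesting here are illustrative assumptions; keep the structure of the alluxio-cluster.yaml that ships with the chart and only change the values it already exposes.
apiVersion: k8s-operator.alluxio.com/v1alpha1  # assumed; matches the AlluxioCluster CRD group
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <registry>/alluxio        # set to an accessible registry containing the Alluxio image
  imageTag: <tag>
  dataset: null-dataset            # must match the Dataset created above
  master:
    count: 1
    resources:
      requests:
        cpu: "4"
        memory: 16Gi
    nodeSelector: {}               # optional; pin master pods to specific nodes
  worker:
    count: 2
    nodeSelector: {}               # optional; pin worker pods to specific nodes
  etcd:
    replicaCount: 3
  fuse:
    resources:
      requests:
        cpu: "2"
        memory: 8Gi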
Bind SSD paths. The configuration above defines two hostPath mounts:
Alluxio pods use the hostPath mount at /mnt/alluxio/meta to store Alluxio's metadata. An SSD is recommended for this directory.
Alluxio workers use the hostPath mount at /mnt/alluxio/page to store Alluxio's cached data. An SSD is also recommended for this directory.
The path for FUSE's local_data_cache is /mnt/alluxio/fuse-local-cache.
Other hostPath configuration:
To mount a NAS, first add the corresponding mount path to the hostPaths section of the workers, as sketched below.
To use a different path for the FUSE local data cache, also add the corresponding mount path to the hostPaths section of FUSE.
By default, FUSE is mounted at /mnt/alluxio/fuse. The file listing of the mounted UFS storage can be viewed under the host's directory /mnt/alluxio/fuse.
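For example, exposing an additional NAS path to the workers and a custom FUSE local cache path might look roughly like the snippet below. The exact layout of the hostPaths entries is an assumption; mirror the hostPaths sections already present in alluxio-cluster.yaml.
worker:
  hostPaths:
    - /mnt/nas                          # hypothetical NAS path to expose inside worker pods
fuse:
  hostPaths:
    - /mnt/alluxio/fuse-local-cache     # custom FUSE local data cache location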
S3 ECR configuration:
The configuration values for the Docker images should be replaced with your AWS ECR registry address so that the images served from that ECR can be pulled.
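For example, if the operator image has been pushed to a private ECR repository (the account ID, region, and repository name below are placeholders), the image values would look like:
image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/alluxio/operator
imageTag: 1.1.2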
Deploy cluster
Deploy Alluxio Operator
# deploy alluxio operator
$ helm install operator ./alluxio-operator \
-f ./alluxio-operator/alluxio-operator.yaml
NAME: operator
LAST DEPLOYED: Wed Feb 28 02:10:08 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
# check alluxio operator status
$ kubectl get pod -n alluxio-operator
NAME READY STATUS RESTARTS AGE
alluxio-controller-669699b5d7-zlv7h 1/1 Running 0 48s
dataset-controller-5649f66b5f-f7hx9 1/1 Running 0 48s
Deploy Alluxio dataset
# create the alluxio namespace (or skip this step and use the default namespace)
$ kubectl create namespace alluxio-test
namespace/alluxio-test created
# check namespace status
$ kubectl get namespaces | grep alluxio-test
alluxio-test Active 91m
# create alluxio dataset
$ kubectl create -f ./alluxio-operator/dataset.yaml -n alluxio-test
dataset.k8s-operator.alluxio.com/null-dataset created
# check alluxio dataset status
$ kubectl get dataset -n alluxio-test
NAME DATASETPHASE BOUNDEDALLUXIOCLUSTER
null-dataset Pending
Deploy and start Alluxio cluster
# deploy alluxio cluster
$ kubectl create -f ./alluxio-operator/alluxio-cluster.yaml -n alluxio-test
alluxiocluster.k8s-operator.alluxio.com/alluxio created
# check alluxio cluster status
$ kubectl get alluxiocluster -n alluxio-test
NAME CLUSTERPHASE AGE
alluxio Creating/Updating 98s
# check the status of the alluxio cluster pods
$ watch kubectl get pod -n alluxio-test
NAME READY STATUS RESTARTS AGE
alluxio-etcd-0 1/1 Running 0 101s
alluxio-fuse-cd8mm 1/1 Running 0 101s
alluxio-fuse-vqk7j 1/1 Running 0 102s
alluxio-master-0 1/1 Running 3 (48s ago) 101s
alluxio-monitor-grafana-56b97c5689-554c8 1/1 Running 0 102s
alluxio-monitor-prometheus-749fc5f96-cksv6 1/1 Running 0 102s
alluxio-worker-5d46cf9ddf-6c992 1/1 Running 0 101s
alluxio-worker-5d46cf9ddf-gwh8w 1/1 Running 0 101s
Mount storage
In this example, an existing S3 bucket is mounted to Alluxio.
# go into alluxio worker pod
$ pod_worker=$(kubectl get pods -l name=alluxio-worker -o jsonpath='{.items[0].metadata.name}' -n alluxio-test)
$ kubectl exec -it $pod_worker -n alluxio-test -- bash
# mount ufs
$ alluxio mount add \
--option aws.accessKeyId=xxx \
--option aws.secretKey=xxx \
--option alluxio.underfs.s3.region=us-east-1 \
--path /bucket \
--ufs-uri s3://test/
Mounted ufsPath=s3://test/ to alluxioPath=/bucket with 3 options
# check mount point status
$ alluxio mount list
s3://test/ on /bucket/ properties={aws.secretKey=xxx, alluxio.underfs.s3.region=us-east-1, aws.accessKeyId=xxx}
# go into the alluxio fuse pod and check the data in the mount point
$ pod_fuse=$(kubectl get pods -l role=alluxio-fuse -o jsonpath='{.items[0].metadata.name}' -n alluxio-test)
$ kubectl exec -it $pod_fuse -n alluxio-test -- bash
$ ls -l /mnt/alluxio/fuse/bucket/
drwx------ 1 root root 0 Jan 1 1970 2023-10-17/
drwx------ 1 root root 0 Jan 1 1970 alluxio/
drwx------ 1 root root 0 Jan 1 1970 alluxio_ufs/
-rwx------ 1 root root 173279 Oct 17 08:26 log.tar.gz*
drwx------ 1 root root 0 Jan 1 1970 pach_alluxio/
# unmount (as needed)
$ alluxio mount remove --path /bucket
Unmounted /bucket from Alluxio.
Quick Verification - FIO
Install the FIO tool on the FUSE pod by following FIO's installation instructions.
Execute tests via Alluxio FUSE with FIO, as in the example below.
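A simple sequential read test against the FUSE mount point could look like the following. The job name, directory, file size, and job count are illustrative; point --directory at a path under the FUSE mount and size the test to fit your data and disks.
# run inside the FUSE pod; fio lays out the test files first, then reads them back through Alluxio
$ fio --name=seq_read \
    --directory=/mnt/alluxio/fuse/bucket \
    --rw=read \
    --bs=1M \
    --size=1G \
    --numjobs=4 \
    --group_reporting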
A Grafana dashboard is deployed in the same namespace as the Alluxio cluster and exposed through port 8080 on its host machine. This port must be open and not blocked by a firewall on the host.
If using EKS
Run kubectl get pods -owide -n <alluxio namespace> | grep grafana to get the hostname of the node. It should be in the form of ip-10-0-6-132.ec2.internal.
If the machine used to access Grafana is in the same private network as the host machine, access the Grafana UI directly through http://<hostname>:8080. Otherwise, identify the external IP of the host machine and use it as the hostname in the URL. Run kubectl get nodes -owide to find the corresponding external IP.
$ kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-6-132.ec2.internal Ready <none> 210d v1.22.17-eks-0a21954 10.0.6.132 35.173.122.123 Amazon Linux 2 5.4.247-162.350.amzn2.x86_64 docker://20.10.23
In this example, the machine has an external IP of 35.173.122.123, so the Grafana UI should be accessible through http://35.173.122.123:8080
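If neither the internal hostname nor an external IP is reachable from your machine, forwarding a local port from a machine with kubectl access is a fallback. The deployment name below is taken from the pod listing above, and the container-side port is assumed to be 8080; replace it with Grafana's actual container port (often 3000) if the chart differs.
$ kubectl port-forward -n alluxio-test deployment/alluxio-monitor-grafana 8080:8080
# then open http://localhost:8080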
Appendix: Access Alluxio via Kubernetes CSI
Applications can use Alluxio FUSE as a Persistent Volume Claim (PVC) via CSI.
CSI yaml configuration file
Default configuration file at alluxio-operator/charts/alluxio-csi/values.yaml
If you are not able to access the internet, you will need to download the two dependent CSI images and upload them to the local image registry, then modify the values for provisioner.image and driverRegistrar.image to point to the corresponding local image addresses.
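For an air-gapped environment, the overrides in alluxio-operator/charts/alluxio-csi/values.yaml would look roughly like the following; the registry address and tags are placeholders, and the image names should match the images you mirrored.
provisioner:
  image: <local-registry>/csi-provisioner:<tag>              # mirrored csi-provisioner image
driverRegistrar:
  image: <local-registry>/csi-node-driver-registrar:<tag>    # mirrored node-driver-registrar image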
Update the Alluxio operator configuration at alluxio-operator/alluxio-operator.yaml
nameOverride: alluxio-operator
image: alluxio/operator # set to the accessible registry with the images
imageTag: 1.1.2
imagePullPolicy: Always
alluxio-csi: # enable CSI
  enabled: true
  image: alluxio/csi # set to the accessible registry with the images
  imageTag: 1.1.2
To disable the FUSE daemonset, update the following section of Alluxio cluster configuration at alluxio-operator/alluxio-cluster.yaml
spec:
  fuse:
    enabled: false
Check Alluxio configuration
The following steps mount a CSI-backed Alluxio FUSE volume into an application pod.
Define a sample application pod in alluxio-operator/app.yaml (the file applied below).
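A minimal sketch of such a pod is shown below. The container image, command, and the PVC claim name are assumptions; check kubectl get pvc -n alluxio-test for the claim created by the CSI chart and substitute its name.
apiVersion: v1
kind: Pod
metadata:
  name: fuse-test
spec:
  containers:
    - name: fuse-test
      image: alpine            # any image with a shell works for this check
      command: ["tail", "-f", "/dev/null"]
      volumeMounts:
        - name: alluxio-fuse-volume
          mountPath: /data     # the mounted UFS appears under /data/bucket, as shown below
  volumes:
    - name: alluxio-fuse-volume
      persistentVolumeClaim:
        claimName: <alluxio-csi-pvc>  # placeholder; use the PVC created by the Alluxio CSI chart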
# Run the pod
$ kubectl apply -f alluxio-operator/app.yaml -n alluxio-test
pod/fuse-test created
# Enter the pod and check
$ kubectl exec -it fuse-test -n alluxio-test -- sh
$ ls -l /data/bucket/
drwx------ 1 root root 0 Jan 1 1970 2023-10-17
drwx------ 1 root root 0 Jan 1 1970 alluxio
drwx------ 1 root root 0 Jan 1 1970 alluxio_ufs
-rwx------ 1 root root 173279 Oct 17 08:26 log.tar.gz
Troubleshooting
Inspect and remove the Alluxio operator CRDs
$ kubectl get crd
# delete all crds ending with k8s-operator.alluxio.com
$ kubectl delete crd datasets.k8s-operator.alluxio.com loads.k8s-operator.alluxio.com updates.k8s-operator.alluxio.com unloads.k8s-operator.alluxio.com alluxioclusters.k8s-operator.alluxio.com
Wipe the mount table stored in etcd
$ kubectl get pvc -n alluxio-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-alluxio-etcd-0 Bound pvc-7f066e82-6e56-4386-bcb4-da4bdbcf80f1 8Gi RWO gp2 58m
$ kubectl delete pvc data-alluxio-etcd-0 -n alluxio-test
persistentvolumeclaim "data-alluxio-etcd-0" deleted