Get Started
This page describes how to deploy Alluxio on Kubernetes and run FIO to validate the deployment.
Prerequisites
Kubernetes
A Kubernetes cluster running version 1.19 or later, with the required feature gates enabled.
Ensure the cluster's Kubernetes Network Policy allows for connectivity between applications (Alluxio clients) and the Alluxio Pods on the defined ports.
Helm 3, version 3.6.0 or later, installed on the Kubernetes cluster.
An image registry for storing and managing container images.
Alluxio Operator
Permission to create CRD (Custom Resource Definition);
Permission to create ServiceAccount, ClusterRole, and ClusterRoleBinding for the operator pod;
Permission to create the namespace that the operator will run in.
Reference: Using RBAC Authorization
Alluxio CSI dependency images, if using CSI FUSE
FIO tool for cluster verification
Refer to https://fio.readthedocs.io/en/latest/fio_doc.html
Preparation
Download files
Extract Operator helm chart
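For example, assuming the operator chart was downloaded as a tarball (the file name below is a placeholder for whatever your release delivers), a minimal sketch:

```shell
# Extract the Alluxio operator helm chart into the current directory;
# the tarball name is illustrative -- substitute the file you downloaded.
tar -xzf alluxio-operator-*.tgz
cd alluxio-operator
```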
Upload images
This example shows how to upload the Alluxio operator image. Repeat these steps for the Alluxio CSI and Alluxio Enterprise images.
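A minimal sketch, assuming the images were delivered as Docker image tarballs and that `<your-registry>` stands in for your private registry address (both the tarball and image names below are placeholders):

```shell
# Load the operator image from the delivered tarball (file name is illustrative)
docker load -i alluxio-operator-<version>.tar

# Re-tag the image for your private registry and push it
docker tag alluxio/operator:<version> <your-registry>/alluxio/operator:<version>
docker push <your-registry>/alluxio/operator:<version>

# Repeat the same load/tag/push steps for the Alluxio CSI and Alluxio Enterprise images
```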
Prepare configuration files
Create the following configuration files within the extracted directory of the alluxio operator helm chart.
Create the operator configuration in alluxio-operator/alluxio-operator.yaml
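As a rough, illustrative sketch only: the file points the operator chart at the image uploaded above. The key names here are assumptions; check the chart's values.yaml for the authoritative schema.

```shell
# Illustrative only: image address and key names are placeholders/assumptions
cat > alluxio-operator/alluxio-operator.yaml <<'EOF'
image: <your-registry>/alluxio/operator
imageTag: <version>
EOF
```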
Create the dataset configuration in alluxio-operator/dataset.yaml
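A hedged sketch of what this file might look like. The CRD group/version, kind, and field layout are assumptions about the operator's schema; consult the CRDs and sample files shipped with your operator release. The name and path stay as placeholders here, as noted below.

```shell
# Illustrative only: keep the dataset name and path as placeholders at this stage;
# they are filled in when mounting storage later.
cat > alluxio-operator/dataset.yaml <<'EOF'
apiVersion: k8s-operator.alluxio.com/v1   # assumption: verify against your operator's CRDs
kind: Dataset
metadata:
  name: <dataset-name>
spec:
  dataset:
    path: <ufs-path>
EOF
```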
Note that placeholder values must be provided for the dataset name and path; otherwise, the configuration will not work with the mount table feature.
Create the cluster configuration in alluxio-operator/alluxio-cluster.yaml
Verify configurations
Modify the image, imageTag, and dataset values in alluxio-operator/alluxio-cluster.yaml. Modify the cpu, memory, and count values for the master, worker, etcd, and fuse configurations as needed. Specify the startup location of pods using nodeSelector.
Bind SSD paths. The cluster configuration has two hostPath mounts:
Alluxio pods will use the hostPath mount at /mnt/alluxio/meta to store Alluxio's metadata. An SSD is recommended for this directory.
Alluxio workers will use the hostPath mount at /mnt/alluxio/page to store Alluxio's cached data. An SSD is also recommended for this directory.
The path for FUSE's local_data_cache is /mnt/alluxio/fuse-local-cache.
Other hostPath configuration:
For mounting NAS, first add the corresponding mount path in the hostPaths section of the workers.
To use a different path for the FUSE local data cache, also add the corresponding mount path in the hostPaths section of FUSE.
By default, FUSE is mounted at /mnt/alluxio/fuse. You can view the mounted UFS storage file list in the host's directory /mnt/alluxio/fuse.
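To tie these knobs together, here is a rough, illustrative sketch of alluxio-cluster.yaml. The CRD group/version and the exact field layout are assumptions; only the knob names mentioned above (image, imageTag, dataset, cpu, memory, count, nodeSelector, hostPaths) come from this guide, so verify the rest against the CRDs and sample files shipped with your operator release.

```shell
# Illustrative only -- field names below are assumptions, not an authoritative schema
cat > alluxio-operator/alluxio-cluster.yaml <<'EOF'
apiVersion: k8s-operator.alluxio.com/v1      # assumption
kind: AlluxioCluster
metadata:
  name: alluxio
spec:
  image: <your-registry>/alluxio/alluxio-enterprise   # placeholder image address
  imageTag: <version>
  dataset: <dataset-name>          # must match the Dataset created earlier
  nodeSelector: {}                 # pin pods to specific nodes if required
  master:
    count: 1                       # adjust cpu/memory/count per component as needed
    hostPaths:
      - /mnt/alluxio/meta          # metadata; SSD recommended
  worker:
    count: 2
    hostPaths:
      - /mnt/alluxio/page          # cached data; SSD recommended
  etcd:
    replicaCount: 3
  fuse:
    hostPaths:
      - /mnt/alluxio/fuse-local-cache   # FUSE local_data_cache path
EOF
```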
S3 ECR configuration:
Replace the Docker image values in the configuration with your AWS ECR registry address so that the images can be pulled from the corresponding ECR repositories.
Deploy cluster
Deploy Alluxio Operator
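A minimal sketch, assuming the chart was extracted into the alluxio-operator directory and the values file from the preparation step is used; the release name and namespace below are illustrative:

```shell
# Install the operator chart with the prepared configuration
helm install operator -f alluxio-operator/alluxio-operator.yaml alluxio-operator

# Confirm that the operator pods reach Running (namespace name is illustrative)
kubectl get pods -n alluxio-operator
```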
Deploy Alluxio dataset
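For example, assuming the dataset.yaml prepared earlier and that the operator registers a `datasets` resource kind (an assumption to verify with kubectl api-resources):

```shell
kubectl create -f alluxio-operator/dataset.yaml

# Confirm the dataset resource exists; resource name is an assumption
kubectl get datasets -A
```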
Deploy and start Alluxio cluster
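A minimal sketch; `<alluxio-namespace>` is a placeholder for the namespace your cluster is deployed into:

```shell
kubectl create -f alluxio-operator/alluxio-cluster.yaml

# Watch the master, worker, etcd, and fuse pods until they reach Running
kubectl get pods -n <alluxio-namespace> -w
```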
Mount storage
In this example, an existing S3 bucket is mounted to Alluxio.
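One hedged way to express this, reusing the dataset.yaml sketch from the preparation step: add the bucket path and credentials to the dataset definition and apply it. The spec layout is an assumption; s3a.accessKeyId and s3a.secretKey are standard Alluxio S3 property keys, but confirm how your operator version expects credentials to be supplied.

```shell
# Illustrative only: S3 path and static credentials in the Dataset definition
cat > alluxio-operator/dataset.yaml <<'EOF'
apiVersion: k8s-operator.alluxio.com/v1   # assumption
kind: Dataset
metadata:
  name: <dataset-name>
spec:
  dataset:
    path: s3://<bucket>/<prefix>
    credentials:
      - s3a.accessKeyId=<ACCESS_KEY>
      - s3a.secretKey=<SECRET_KEY>
EOF
kubectl apply -f alluxio-operator/dataset.yaml

# Once the cluster is running, the bucket contents should be visible
# through the FUSE mount on the host
ls /mnt/alluxio/fuse
```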
Quick Verification - FIO
Install the FIO tool on the FUSE pod, referring to the FIO documentation linked in the prerequisites.
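A minimal sketch, assuming the FUSE container image is Debian/Ubuntu-based; the pod name and namespace are placeholders:

```shell
# Find the FUSE pod, then install fio inside it
kubectl get pods -n <alluxio-namespace> | grep fuse
kubectl exec -it <alluxio-fuse-pod> -n <alluxio-namespace> -- \
  bash -c "apt-get update && apt-get install -y fio"
```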
Execute the following tests via Alluxio FUSE with FIO
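For example, sequential write and read tests against the FUSE mount point (/mnt/alluxio/fuse per the default above, assuming the path is the same inside the pod); file size and block size are illustrative:

```shell
# Sequential write of a 1 GiB test file through the FUSE mount
kubectl exec -it <alluxio-fuse-pod> -n <alluxio-namespace> -- \
  fio --name=seq_write --directory=/mnt/alluxio/fuse --rw=write --bs=1M --size=1G --numjobs=1

# Sequential read of the same file
kubectl exec -it <alluxio-fuse-pod> -n <alluxio-namespace> -- \
  fio --name=seq_read --directory=/mnt/alluxio/fuse --rw=read --bs=1M --size=1G --numjobs=1
```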
Monitoring dashboard
A Grafana dashboard is deployed in the same namespace as the Alluxio cluster and exposed through port 8080 on its host machine. Ensure this port is open and not blocked by a firewall on the host.
If using EKS
Run kubectl get pods -owide -n <alluxio namespace> | grep grafana to get the hostname of the node. It should be in the form of ip-10-0-6-132.ec2.internal.
If the machine used to access Grafana is in the same private network as the host machine, access the Grafana UI directly through http://<hostname>:8080. Otherwise, identify the external IP of the host machine and use it as the hostname in the URL. Run kubectl get nodes -owide to find the corresponding external IP.
In this example, the machine has an external IP of 35.173.122.123, so the Grafana UI is accessible at http://35.173.122.123:8080.
Appendix: Access Alluxio via Kubernetes CSI
Applications can use Alluxio FUSE as a Persistent Volume Claim (PVC) via CSI.
CSI yaml configuration file
Default configuration file at alluxio-operator/charts/alluxio-csi/values.yaml
If the cluster cannot access the internet, download the two dependent CSI images, upload them to the local image registry, and then modify the values for provisioner.image and driverRegistrar.image to point to the corresponding local image addresses.
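A sketch of mirroring the two CSI sidecar images. The upstream names below are the standard Kubernetes CSI sidecars (csi-provisioner and csi-node-driver-registrar); the registry host and tags required by your chart version should be taken from alluxio-operator/charts/alluxio-csi/values.yaml rather than from this example.

```shell
# Pull the CSI sidecar images on a machine with internet access (tags are illustrative)
docker pull registry.k8s.io/sig-storage/csi-provisioner:<tag>
docker pull registry.k8s.io/sig-storage/csi-node-driver-registrar:<tag>

# Re-tag and push them to the local registry, then point
# provisioner.image and driverRegistrar.image at these addresses
docker tag registry.k8s.io/sig-storage/csi-provisioner:<tag> <your-registry>/csi-provisioner:<tag>
docker tag registry.k8s.io/sig-storage/csi-node-driver-registrar:<tag> <your-registry>/csi-node-driver-registrar:<tag>
docker push <your-registry>/csi-provisioner:<tag>
docker push <your-registry>/csi-node-driver-registrar:<tag>
```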
Update the Alluxio operator configuration at alluxio-operator/alluxio-operator.yaml
To disable the FUSE daemonset, update the following section of Alluxio cluster configuration at alluxio-operator/alluxio-cluster.yaml
Check Alluxio configuration
The following steps will add a CSI FUSE volume in the application pod.
Add a sample pod in alluxio-operator/alluxio-cluster.yaml
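A hedged sketch of what such a sample pod might look like: a pod that mounts an Alluxio CSI FUSE volume through a PVC. The PVC name is a placeholder for whatever claim the operator provisions in your cluster; the pod spec itself is plain Kubernetes.

```shell
# Illustrative only: sample pod consuming an Alluxio CSI FUSE volume via a PVC
cat > alluxio-operator/sample-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fuse-test
spec:
  containers:
    - name: fuse-test
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: alluxio-fuse
          mountPath: /data
  volumes:
    - name: alluxio-fuse
      persistentVolumeClaim:
        claimName: <alluxio-csi-fuse-pvc>   # placeholder PVC name
EOF
```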
Run the sample pod and check
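Continuing the sketch above, apply the pod and check that the UFS file listing is visible at its mount path:

```shell
kubectl apply -f alluxio-operator/sample-pod.yaml -n <alluxio-namespace>

# The mounted UFS storage file list should appear under /data inside the pod
kubectl exec -it fuse-test -n <alluxio-namespace> -- ls /data
```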
Troubleshooting
Inspect and manipulate dataset credentials
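A hedged sketch, assuming the dataset credentials are stored in a Kubernetes Secret in the cluster's namespace; the secret name below is a placeholder:

```shell
# List secrets in the Alluxio namespace and inspect the one holding dataset credentials
kubectl get secrets -n <alluxio-namespace>
kubectl get secret <dataset-credentials-secret> -n <alluxio-namespace> -o yaml

# Secret values are base64-encoded; decode an individual value for inspection
echo '<base64-value>' | base64 -d
```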
Wipe the mount table on ETCD
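A hedged sketch only: the etcd key prefix under which Alluxio stores its mount table is an assumption here, and the etcd pod name is a placeholder. Verify the actual prefix with a read-only listing before deleting anything.

```shell
# List keys first to confirm the prefix ("/alluxio" is an assumption)
kubectl exec -it <etcd-pod> -n <alluxio-namespace> -- \
  etcdctl get --prefix /alluxio --keys-only

# DANGER: this permanently deletes the keys under that prefix
kubectl exec -it <etcd-pod> -n <alluxio-namespace> -- \
  etcdctl del --prefix /alluxio
```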