Spark on K8s
This guide describes how to configure Apache Spark to access Alluxio.
Applications using Spark 1.1 or later can access Alluxio through its HDFS-compatible interface. Using Alluxio as the data access layer, Spark applications can transparently access data in many different types of persistent storage services. Data can be actively fetched or transparently cached into Alluxio to speed up I/O performance, especially when the Spark deployment is remote from the data. In addition, Alluxio can help simplify the architecture by decoupling compute and physical storage. When the data path in the persistent under storage is hidden from Spark, changes to the under storage can be made independently of application logic; meanwhile, as a near-compute cache, Alluxio can still provide data locality to compute frameworks.
This guide describes how to integrate Apache Spark with Alluxio in a Kubernetes environment.
Prerequisites
This guide assumes that the Alluxio cluster is deployed on Kubernetes.
docker is also required to build the custom Spark image.
Prepare image
To integrate with Spark, Alluxio jars and configuration files must be added to the Spark image. Spark containers need to be launched with this modified image in order to connect to the Alluxio cluster.
Among the files listed in the Alluxio installation instructions to download, locate the tarball named alluxio-enterprise-DA-3.2-8.0.0-release.tar.gz. Extract the following Alluxio jars from the tarball:
client/alluxio-DA-3.2-8.0.0-client.jar
client/ufs/alluxio-underfs-s3a-shaded-DA-3.2-8.0.0.jar, if using an S3 bucket as a UFS
Prepare an empty directory as the working directory to build an image from. Within this directory, create the directory files/alluxio/ and copy the aforementioned jar files into it.
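For illustration, assuming the tarball was extracted to a directory referenced here as <EXTRACTED_TARBALL_DIR> (a placeholder), the working directory could be prepared as follows:

```bash
# Create the working directory layout and copy the Alluxio jars into it
mkdir -p files/alluxio
cp <EXTRACTED_TARBALL_DIR>/client/alluxio-DA-3.2-8.0.0-client.jar files/alluxio/
cp <EXTRACTED_TARBALL_DIR>/client/ufs/alluxio-underfs-s3a-shaded-DA-3.2-8.0.0.jar files/alluxio/
```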
Create a Dockerfile with the operations to modify the base Spark image. The following example defines arguments for:
SPARK_VERSION=3.5.2 as the Spark version
UFS_JAR=files/alluxio/alluxio-underfs-s3a-shaded-DA-3.2-8.0.0.jar as the path to the UFS jar copied into files/alluxio/
CLIENT_JAR=files/alluxio/alluxio-DA-3.2-8.0.0-client.jar as the path to the Alluxio client jar copied into files/alluxio/
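A minimal Dockerfile sketch along these lines is shown below; the base image (apache/spark) and the jar destination inside the image (/opt/spark/jars/) are assumptions and may differ from your base Spark image:

```dockerfile
# Sketch only: base image and jar destination path are assumptions
ARG SPARK_VERSION=3.5.2
FROM apache/spark:${SPARK_VERSION}

ARG UFS_JAR=files/alluxio/alluxio-underfs-s3a-shaded-DA-3.2-8.0.0.jar
ARG CLIENT_JAR=files/alluxio/alluxio-DA-3.2-8.0.0-client.jar

# Place the Alluxio client and UFS jars on the Spark classpath
COPY ${UFS_JAR} /opt/spark/jars/
COPY ${CLIENT_JAR} /opt/spark/jars/
```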
Build the image by running the build command, replacing <PRIVATE_REGISTRY> with the URL of your private container registry and <SPARK_VERSION> with the corresponding Spark version. In the following examples, we will continue to use 3.5.2 as the Spark version, as indicated by <SPARK_VERSION>.
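For example (the image name spark-alluxio is an arbitrary choice):

```bash
docker build -t <PRIVATE_REGISTRY>/spark-alluxio:<SPARK_VERSION> \
  --build-arg SPARK_VERSION=<SPARK_VERSION> .
```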
Push the image by running:
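For example, assuming the image tag used above:

```bash
docker push <PRIVATE_REGISTRY>/spark-alluxio:<SPARK_VERSION>
```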
Deploy Spark
There are a few things we need to do before submitting a Spark job:
Install Spark operator
Set Alluxio config map
Create a service account for Alluxio (if you are using IAM)
Add additional parameters in the Spark job
Install Spark Operator
If you are using aws-samples/emr-on-eks-benchmark to create the EKS cluster, the spark-operator is installed by its scripts, so you do not need to install it again.
The following instructions are derived from the spark-operator getting started guide.
Add the spark-operator repo to Helm.
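For example (the chart repository URL is assumed to be the kubeflow spark-operator repo; verify it against the getting started guide):

```bash
helm repo add spark-operator https://kubeflow.github.io/spark-operator
helm repo update
```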
To add custom configurations, you can create a spark-operator.yaml file. For example, the following sets the namespace to spark (not required, but we will use this as an example):
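A minimal sketch of such a values file, assuming the kubeflow chart's value names (the exact keys may vary between chart versions):

```yaml
# Sketch only: key names may differ between spark-operator chart versions
spark:
  jobNamespaces:
    - spark
webhook:
  enable: true
```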
Install the spark operator with those configurations by running the command:
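For example (the release name and namespace are assumptions):

```bash
helm install spark-operator spark-operator/spark-operator \
  --namespace spark --create-namespace \
  -f spark-operator.yaml
```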
The webhook.enable setting is needed to mount configmaps for Alluxio.
Check the status of the spark operator. If the status is Running, it is ready for jobs to be submitted.
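For example, assuming the spark namespace used above:

```bash
kubectl get pods -n spark
```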
When complete with Spark, uninstall the Spark operator and its related components with the command:
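For example:

```bash
helm uninstall spark-operator -n spark
```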
Create a ConfigMap for Alluxio
This configmap provides the Alluxio configuration to the Spark jobs, which act as Alluxio clients.
The configmap can be created from alluxio-site.properties of the existing Alluxio cluster config map built by the Alluxio operator. To show alluxio-site.properties from the Alluxio cluster config map, run:
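One possible form of this command, using kubectl's jsonpath output (replace the placeholders with your Alluxio namespace and cluster name):

```bash
kubectl get configmap <ALLUXIO_NAMESPACE>-<ALLUXIO_CLUSTER_NAME>-conf \
  -n <ALLUXIO_NAMESPACE> -o jsonpath='{.data.alluxio-site\.properties}'
```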
If following the Install Alluxio on Kubernetes instructions, the value of <ALLUXIO_NAMESPACE>-<ALLUXIO_CLUSTER_NAME>-conf would be default-alluxio-conf.
The following command can be used to generate an alluxio-config.yaml file:
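A possible sketch using kubectl and jq; the new configmap name (alluxio-config) and its target namespace (spark) are assumptions and should match whatever your Spark application references:

```bash
alluxioNamespace=default
alluxioClusterName=alluxio

# Extract alluxio-site.properties from the existing Alluxio config map and
# wrap it in a new ConfigMap manifest for Spark jobs to consume
kubectl get configmap ${alluxioNamespace}-${alluxioClusterName}-conf \
    -n ${alluxioNamespace} -o json \
  | jq -r '.data["alluxio-site.properties"]' \
  | kubectl create configmap alluxio-config \
      --namespace spark \
      --from-file=alluxio-site.properties=/dev/stdin \
      --dry-run=client -o yaml > alluxio-config.yaml
```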
Note:
${alluxioNamespace} and ${alluxioClusterName} should match the values for the existing Alluxio cluster. If you followed the Install Alluxio on Kubernetes instructions, they would be default and alluxio respectively. The jq command is used to parse JSON.
Create the configmap by running the command:
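For example:

```bash
kubectl apply -f alluxio-config.yaml
```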
Create a Service Account for Alluxio
An Alluxio service account is used if you are using IAM for authentication/authorization.
Create a spark-s3-access-sa.yaml file, with the following contents:
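A sketch of what such a file might look like, assuming IAM roles for service accounts (IRSA) on EKS; the service account name, namespace, and IAM role name are assumptions:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark-s3-access-sa
  namespace: spark
  annotations:
    # IAM role that grants access to the S3 bucket; the role name is a placeholder
    eks.amazonaws.com/role-arn: arn:aws:iam::<YOUR_AWS_ACCOUNT_ID>:role/<YOUR_IAM_ROLE_NAME>
```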
where <YOUR_AWS_ACCOUNT_ID> should be replaced with your AWS account ID.
Create the service account with the command:
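For example:

```bash
kubectl apply -f spark-s3-access-sa.yaml
```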
Provide Alluxio Properties as Spark Configuration Values for Job Submission
For the Spark cluster to properly communicate with the Alluxio cluster, certain properties must be aligned between the Alluxio client and Alluxio server.
In particular, the values for hadoopConf should be set to match the values of your Alluxio deployment. Take note of these properties under alluxio-site.properties of the previously created alluxio-config.yaml file:
alluxio.etcd.endpoints
alluxio.cluster.name
alluxio.k8s.env.deployment
alluxio.mount.table.source
alluxio.worker.membership.manager.type
Add the following to your Spark application yaml file for job submission; see the next section for a full example of alluxio-sparkApplication.yaml.
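A sketch of the relevant snippet is shown below; the values are placeholders and must be copied from the alluxio-site.properties in the alluxio-config.yaml generated earlier:

```yaml
hadoopConf:
  # Copy these values from alluxio-site.properties in alluxio-config.yaml
  alluxio.etcd.endpoints: <ETCD_ENDPOINTS>
  alluxio.cluster.name: <ALLUXIO_CLUSTER_NAME>
  alluxio.k8s.env.deployment: <K8S_ENV_DEPLOYMENT>
  alluxio.mount.table.source: <MOUNT_TABLE_SOURCE>
  alluxio.worker.membership.manager.type: <MEMBERSHIP_MANAGER_TYPE>
```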
The above example assumes Alluxio was deployed following the Install Alluxio on Kubernetes instructions.
Examples
Using Spark to Read and Write a File
This section provides an example of how to use Spark to read and write a file. In this simple example, we will count words in an input file. To do that, we will need to do the following:
Create an input file containing any text. We will count how many times each word appears in that text file.
Create a Scala program that contains the code to do the word count. It will take the input file above and output the result to the specified location.
Package the Scala program so that Spark can launch and execute it.
Submit the Spark job and validate the result.
Create an input file
Create an input file input.txt. The input file can have any text content; this Scala example will count how many times each word occurs in the file. Paste any sample text of your choosing into the input file.
Create a Scala program
We need to write a Scala file, generate a JAR from it, and upload the JAR to S3 for the Spark job to use.
Create a Scala file spark-scala-demo.scala with the following example content:
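A minimal word-count sketch is shown below, for illustration; the object name (SparkScalaDemo) and the S3 paths are assumptions to adapt to your setup:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: object name and S3 paths are placeholders
object SparkScalaDemo {
  def main(args: Array[String]): Unit = {
    // Update these to the S3 locations of your input file and desired output directory
    val inputPath = "s3://<BUCKET_NAME>/<S3_PATH>/input.txt"
    val outputPath = "s3://<BUCKET_NAME>/<S3_PATH>/output"

    val spark = SparkSession.builder.appName("AlluxioWordCount").getOrCreate()
    import spark.implicits._

    // Count how many times each word appears in the input file
    val counts = spark.read.textFile(inputPath)
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .groupBy("value")
      .count()

    // Write the word, count pairs as CSV to the output path
    counts.write.option("header", "true").csv(outputPath)

    spark.stop()
  }
}
```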
Update inputPath to the S3 path where you put your input file, and outputPath to the S3 path where you want the output to be written. Both should be accessible with the provided credentials.
Package the Scala application
First, create a file build.sbt with the following contents:
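A sketch consistent with the jar name used later in this example (alluxioread_2.12-0.1.jar) and Spark 3.5.2; the exact Scala patch version is an assumption:

```scala
name := "alluxioread"
version := "0.1"
scalaVersion := "2.12.18"

// Spark is provided by the cluster at runtime
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.5.2" % "provided"
```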
Use the sbt tool to build the JAR from the folder containing the Scala file. If sbt is not already installed, run $ brew install sbt
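Then, from the directory containing the Scala file and build.sbt:

```bash
sbt package
```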
Find the file in the ./target/scala-2.12/ directory and note its name (i.e. <SPARK_JOB_JAR_FILE>.jar). Upload it to S3, renaming it if you want. We will call it alluxioread_2.12-0.1.jar in this example:
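For example, using the AWS CLI:

```bash
aws s3 cp ./target/scala-2.12/<SPARK_JOB_JAR_FILE>.jar \
  s3://<BUCKET_NAME>/<S3_PATH>/alluxioread_2.12-0.1.jar
```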
replacing <BUCKET_NAME>/<S3_PATH> with an accessible S3 location.
Create Spark Application
Create an alluxio-sparkApplication.yaml file with the following example content:
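The sketch below illustrates what such a SparkApplication might look like; the main class, namespace, service account, resource sizes, mount path, and all bracketed placeholders are assumptions to adapt to your environment:

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: alluxio-spark-demo
  namespace: spark
spec:
  type: Scala
  mode: cluster
  # Custom Spark image built earlier and pushed to your private registry
  image: <PRIVATE_REGISTRY>/spark-alluxio:3.5.2
  imagePullPolicy: Always
  mainClass: SparkScalaDemo
  # S3 path of the uploaded application jar
  mainApplicationFile: s3://<BUCKET_NAME>/<S3_PATH>/alluxioread_2.12-0.1.jar
  sparkVersion: "3.5.2"
  restartPolicy:
    type: Never
  sparkConf:
    # S3 credentials so Spark can fetch the application jar and access S3 directly
    spark.hadoop.fs.s3a.access.key: <AWS_ACCESS_KEY_ID>
    spark.hadoop.fs.s3a.secret.key: <AWS_SECRET_ACCESS_KEY>
  hadoopConf:
    # Alluxio client properties; copy the values from alluxio-site.properties
    alluxio.etcd.endpoints: <ETCD_ENDPOINTS>
    alluxio.cluster.name: <ALLUXIO_CLUSTER_NAME>
    alluxio.k8s.env.deployment: <K8S_ENV_DEPLOYMENT>
    alluxio.mount.table.source: <MOUNT_TABLE_SOURCE>
    alluxio.worker.membership.manager.type: <MEMBERSHIP_MANAGER_TYPE>
  driver:
    cores: 1
    memory: 2g
    serviceAccount: spark-s3-access-sa
    javaOptions: "-Daws.accessKeyId=<AWS_ACCESS_KEY_ID> -Daws.secretKey=<AWS_SECRET_ACCESS_KEY>"
    configMaps:
      # Mount the Alluxio configuration created earlier (requires webhook.enable=true);
      # the mount path is an assumption
      - name: alluxio-config
        path: /opt/alluxio/conf
  executor:
    cores: 1
    instances: 2
    memory: 2g
    serviceAccount: spark-s3-access-sa
    javaOptions: "-Daws.accessKeyId=<AWS_ACCESS_KEY_ID> -Daws.secretKey=<AWS_SECRET_ACCESS_KEY>"
    configMaps:
      - name: alluxio-config
        path: /opt/alluxio/conf
```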
Note the following customizations:
Under spec.image, specify the location of the custom Spark image
Set the S3 path to the uploaded jar in spec.mainApplicationFile in place of s3://<BUCKET_NAME>/<S3_PATH>/alluxioread_2.12-0.1.jar
Set the access credentials to S3 in the following locations:
javaOptions for both the driver and executor
As spark.hadoop.fs.s3a.* properties in sparkConf
Alluxio specific configurations for sparkConf and hadoopConf, as previously described in Provide Alluxio Properties as Spark Configuration Values for Job Submission
Submit Spark job and see results
Deploy the spark application with the command:
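For example:

```bash
kubectl apply -f alluxio-sparkApplication.yaml
```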
Once it finishes, you will see the result in the output path; in our example, this is the outputPath location set in the Scala program.
Copy the output to local and inspect the .csv file. You should see word, count pairs in the .csv file.
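For example, using the AWS CLI and assuming the outputPath used in the Scala sketch above:

```bash
aws s3 cp s3://<BUCKET_NAME>/<S3_PATH>/output/ ./output/ --recursive
```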
Note
If you want to rerun the job, you may need to remove the output directory. This needs to be done in the Alluxio layer using alluxio fs rm. Here is an example:
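A hedged example, run from inside an Alluxio pod; the pod name, namespace, recursive flag, and the exact path form depend on your deployment and mount configuration:

```bash
# Pod name and path are assumptions; the recursive flag may differ by Alluxio version
kubectl exec -it alluxio-coordinator-0 -n default -- \
  alluxio fs rm -R s3://<BUCKET_NAME>/<S3_PATH>/output
```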
If you rebuild your Scala application or change the input file, you need to invalidate the cache so that the new version is pulled into Alluxio.