MLPerf Storage Benchmark Overview
MLPerf Storage is a benchmark suite designed to characterize the performance of storage systems supporting machine learning workloads. This document describes how to conduct end-to-end testing of Alluxio using MLPerf Storage.
Results Summary
| Model | Accelerators (GPUs) | Dataset | AU (%) | Throughput (MB/sec) | Throughput (samples/sec) |
| ----- | ------------------- | ------- | ------ | ------------------- | ------------------------ |
| unet3d | 20 | 28125 files (1 sample/file) | 98.75 | 7911.43 | 56.59 |
The test results are based on an Alluxio cluster configured as follows, with all server instances running on AWS:
- Alluxio cluster: one Alluxio Fuse node and two Alluxio Worker nodes.
- Alluxio Worker instance: i3en.metal (96 vCPUs, 768 GB memory, 100 Gbps network, 8 NVMe drives)
- Alluxio Fuse instance: c6in.metal (128 vCPUs, 256 GB memory, 200 Gbps network)
Preparing the Test Environment
Operating System Image: Ubuntu 22.04
Preparing MLPerf Storage Test Tools
```
sudo apt-get install mpich
git clone -b v0.5 --recurse-submodules https://github.com/mlcommons/storage.git
cd storage
pip3 install -r dlio_benchmark/requirements.txt
```
Generating the Dataset
We recommend generating the dataset locally and then uploading it to remote storage. Determine the data size to generate:
```
# Don't forget to replace the parameters with your own.
./benchmark.sh datasize --workload unet3d --num-accelerators 4 --host-memory-in-gb 32
```
- workload: The workload to simulate; options are unet3d and bert.
- num-accelerators: The number of simulated GPUs. A larger number runs more benchmark processes on a single machine, so training on a dataset of a given size finishes sooner, but it also places heavier demands on storage I/O.
- host-memory-in-gb: The simulated host memory size, which can be set freely and may even exceed the machine's physical memory. Larger memory sizes require larger generated datasets (so the dataset cannot be served entirely from the page cache) and therefore longer training times.
Running this command produces output like the following:
```
./benchmark.sh datasize --workload unet3d --num-accelerators 4 --host-memory-in-gb 32
The benchmark will run for approx 11 minutes (best case)
Minimum 1600 files are required, which will consume 218 GB of storage
----------------------------------------------
Set --param dataset.num_files_train=1600 with ./benchmark.sh datagen/run commands
```
Next, generate the corresponding dataset with the following command, where num-parallel sets the number of parallel data-generation processes and dataset.data_folder sets the directory the dataset is written to:
```
./benchmark.sh datagen --workload unet3d --num-parallel ${num-parallel} --param dataset.num_files_train=1600 --param dataset.data_folder=${dataset.data_folder}
```
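For example, a filled-in invocation might look like the following; the process count and staging directory are hypothetical values to replace with your own:
```
# Hypothetical values: 8 generator processes writing the 1600-file
# unet3d dataset into a local staging directory.
./benchmark.sh datagen --workload unet3d \
    --num-parallel 8 \
    --param dataset.num_files_train=1600 \
    --param dataset.data_folder=/mnt/mlperf-data/unet3d
```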
After generating the dataset locally, upload it to the UFS (the under file system backing Alluxio).
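For example, if the UFS is an S3 bucket (the bucket name and local path below are hypothetical), the upload can be done with the AWS CLI:
```
# Replace the local path and bucket with your own UFS location.
aws s3 cp /mnt/mlperf-data/unet3d s3://my-mlperf-bucket/unet3d/ --recursive
```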
Configuring Alluxio
We recommend using Alluxio version 3.1 or above for MLPerf testing. Additionally, we recommend setting the following properties in alluxio-site.properties for optimal read performance; they enable and tune asynchronous prefetching for streaming positional reads:
```
alluxio.user.position.reader.streaming.async.prefetch.enable=true
alluxio.user.position.reader.streaming.async.prefetch.thread=256
alluxio.user.position.reader.streaming.async.prefetch.part.length=4MB
alluxio.user.position.reader.streaming.async.prefetch.max.part.number=4
```
For other Alluxio-related configurations, refer to the Fio Tests section.
You can configure one or more Alluxio Workers as a cache cluster. Additionally, each MLPerf test node needs to run an Alluxio Fuse process to read data. Before starting the benchmark, ensure that the dataset has been completely loaded into the Alluxio cache from the UFS; a sketch of both steps follows.
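A minimal sketch of these two steps, assuming an S3 UFS and the Alluxio 3.x CLI (the bucket, paths, and mount point are hypothetical, and the exact command syntax may vary between Alluxio versions, so consult the documentation for your release):
```
# Load the dataset from the UFS into the Alluxio cache,
# then poll until the load reports completion.
bin/alluxio job load --path s3://my-mlperf-bucket/unet3d --submit
bin/alluxio job load --path s3://my-mlperf-bucket/unet3d --progress

# On each MLPerf test node, expose the dataset through Alluxio Fuse.
bin/alluxio-fuse mount s3://my-mlperf-bucket/unet3d /mnt/alluxio-fuse
```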
Running the Test
```
./benchmark.sh run --workload ${workload} --num-accelerators ${num-accelerators} --results-dir ${results-dir} --param dataset.data_folder=${dataset.data_folder} --param dataset.num_files_train=${dataset.num_files_train}
```
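For example, a run matching the configuration reported above might look like this; the paths are hypothetical, and dataset.data_folder should point at the dataset under the Alluxio Fuse mount so that reads go through the Alluxio cache:
```
# 20 simulated accelerators reading the 28125-file unet3d dataset
# through the (hypothetical) Alluxio Fuse mount point.
./benchmark.sh run --workload unet3d \
    --num-accelerators 20 \
    --results-dir ./results \
    --param dataset.data_folder=/mnt/alluxio-fuse/unet3d \
    --param dataset.num_files_train=28125
```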
After completing the test, you can find the summary.json file in the results-dir, similar to:
```
{
  "model": "unet3d",
  "start": "2024-05-27T14:46:24.458325",
  "num_accelerators": 20,
  "hostname": "ip-172-31-24-47",
  "metric": {
    "train_au_percentage": [
      99.18125818824699,
      99.01649117920554,
      98.95473494676878,
      98.31108303926722,
      98.2658474647346
    ],
    "train_au_mean_percentage": 98.74588296364462,
    "train_au_meet_expectation": "success",
    "train_au_stdev_percentage": 0.38102089124716115,
    "train_throughput_samples_per_second": [
      57.07382805038776,
      57.1334916113455,
      56.93601336110315,
      56.72469392071424,
      56.64526420320678
    ],
    "train_throughput_mean_samples_per_second": 56.90265822935148,
    "train_throughput_stdev_samples_per_second": 0.19058788132211907,
    "train_io_mean_MB_per_second": 7955.518180172248,
    "train_io_stdev_MB_per_second": 26.64594945050442
  },
  "num_files_train": 28125,
  "num_files_eval": 0,
  "num_samples_per_file": 1,
  "epochs": 5,
  "end": "2024-05-27T15:27:39.203932"
}
```
The train_au_percentage attribute reports the accelerator utilization (AU) for each epoch; the run passes when the mean AU meets the workload's minimum requirement, as indicated by train_au_meet_expectation.
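To spot-check the reported metrics, you can extract them from summary.json with jq (assuming jq is installed; the results path is hypothetical):
```
# Per-epoch accelerator utilization and the mean training throughput.
jq '.metric.train_au_percentage' ./results/summary.json
jq '.metric.train_throughput_mean_samples_per_second' ./results/summary.json
```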
Additionally, you can run the test multiple times and save the results in the following directory layout, with one summary.json per host per run (a collection sketch follows the tree):
```
sample-results
|---run-1
|     |---host-1
|     |     |---summary.json
|     |---host-2
|     |     |---summary.json
|     ....
|     |---host-n
|           |---summary.json
|---run-2
|     |---host-1
|     |     |---summary.json
|     |---host-2
|     |     |---summary.json
|     ....
|     |---host-n
|           |---summary.json
.....
|---run-5
      |---host-1
      |     |---summary.json
      |---host-2
      |     |---summary.json
      ....
      |---host-n
            |---summary.json
```
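A minimal shell sketch for collecting results into this layout (the run number, hostname, and results directory are illustrative):
```
# Run once per host after each benchmark run, incrementing RUN each time.
RUN=1                     # which repetition this is
HOST=$(hostname)          # this test node's name
RESULTS_DIR=./results     # the --results-dir passed to benchmark.sh run
mkdir -p sample-results/run-${RUN}/${HOST}
cp ${RESULTS_DIR}/summary.json sample-results/run-${RUN}/${HOST}/
```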
Then, use the following command to aggregate the results of multiple tests:
```
./benchmark.sh reportgen --results-dir sample-results
```
The final aggregated result will look like this:
```
{
  "overall": {
    "model": "unet3d",
    "num_client_hosts": 1,
    "num_benchmark_runs": 5,
    "train_num_accelerators": "20",
    "num_files_train": 28125,
    "num_samples_per_file": 1,
    "train_throughput_mean_samples_per_second": 56.587322998616344,
    "train_throughput_stdev_samples_per_second": 0.3842685544298719,
    "train_throughput_mean_MB_per_second": 7911.431396900177,
    "train_throughput_stdev_MB_per_second": 53.72429981238494
  },
  "runs": {
    "run-5": {
      "train_throughput_samples_per_second": 57.06105089062497,
      "train_throughput_MB_per_second": 7977.662939935283,
      "train_num_accelerators": "20",
      "model": "unet3d",
      "num_files_train": 28125,
      "num_samples_per_file": 1
    },
    "run-2": {
      "train_throughput_samples_per_second": 56.18386238258097,
      "train_throughput_MB_per_second": 7855.023869277903,
      "train_num_accelerators": "20",
      "model": "unet3d",
      "num_files_train": 28125,
      "num_samples_per_file": 1
    },
    "run-1": {
      "train_throughput_samples_per_second": 56.90265822935148,
      "train_throughput_MB_per_second": 7955.518180172248,
      "train_num_accelerators": "20",
      "model": "unet3d",
      "num_files_train": 28125,
      "num_samples_per_file": 1
    },
    "run-3": {
      "train_throughput_samples_per_second": 56.69229017116294,
      "train_throughput_MB_per_second": 7926.10677895614,
      "train_num_accelerators": "20",
      "model": "unet3d",
      "num_files_train": 28125,
      "num_samples_per_file": 1
    },
    "run-4": {
      "train_throughput_samples_per_second": 56.09675331936137,
      "train_throughput_MB_per_second": 7842.845216159307,
      "train_num_accelerators": "20",
      "model": "unet3d",
      "num_files_train": 28125,
      "num_samples_per_file": 1
    }
  }
}
```