Performance Optimization
This document introduces features to improve Alluxio performance for specific scenarios.
Client Async Prefetch
If the current file is being read sequentially, the Alluxio client prefetches a range of data beyond the current read position and caches it on the client. When a later read falls within the cached range, the client returns the cached data instead of sending an RPC to the worker.
The prefetch window is self-adjusting. If each read starts where the previous read ended, the prefetch window grows. If the reads are not contiguous, the prefetch window shrinks. For completely random reads, the prefetch window is eventually reduced to 0.
Async prefetch caches data in the client's direct memory. Performance can be improved by increasing the direct memory assigned to the JVM process (for example, via the standard -XX:MaxDirectMemorySize JVM option).
Client async prefetch is always enabled. The following parameters allow users to tune the feature:
Configuration item | Recommended value | Description |
---|---|---|
alluxio.user.position.reader.streaming.async.prefetch.thread | 64 | The overall async prefetch concurrency |
alluxio.user.position.reader.streaming.async.prefetch.part.length | 4MB | The size of the prefetch unit |
alluxio.user.position.reader.streaming.async.prefetch.max.part.number | 8 | The maximum number of units a single opened file can have. For example, if the prefetch unit size is 4MB and the maximum number of units is 8, Alluxio will prefetch at most 32MB of data ahead for an opened file. |
alluxio.user.position.reader.streaming.async.prefetch.file.length.threshold | 4MB | If the file size is less than the specified threshold, Alluxio will max out the prefetch window immediately instead of starting with a small window. This configuration is used to improve small file read performance. |
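As a reference, below is a minimal sketch of these tuning parameters as they might appear in a client-side `alluxio-site.properties` file (or the compute framework's Alluxio client configuration). The values shown are the recommended ones from the table; adjust them to your workload.

```properties
# Overall async prefetch concurrency
alluxio.user.position.reader.streaming.async.prefetch.thread=64
# Size of each prefetch unit
alluxio.user.position.reader.streaming.async.prefetch.part.length=4MB
# At most 8 units (32MB with 4MB units) prefetched ahead per opened file
alluxio.user.position.reader.streaming.async.prefetch.max.part.number=8
# Files smaller than this threshold max out the prefetch window immediately
alluxio.user.position.reader.streaming.async.prefetch.file.length.threshold=4MB
```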
Enabling the Slow Async Prefetch Pool
Different situations may call for different async prefetch parameters, such as cold reads versus cache filter reads. Cold reads usually require more concurrency to saturate the network bandwidth and achieve the best performance. Alluxio provides a secondary async prefetch pool, labeled the slow thread pool, dedicated to such alternative configurations. To enable and configure this secondary pool, set the following configuration:
Configuration item | Recommended value | Description |
---|---|---|
alluxio.user.position.reader.streaming.async.prefetch.use.slow.thread.pool | true | Set to true to enable the slow pool |
alluxio.user.position.reader.streaming.async.prefetch.use.slow.thread.pool.for.cold.read | true | If set to true, the slow pool is used for cold reads as well. Otherwise, the slow pool is only used for cache filter reads. |
alluxio.user.position.reader.streaming.slow.async.prefetch.thread | 256 | The overall async prefetch concurrency for the slow pool |
alluxio.user.position.reader.streaming.slow.async.prefetch.part.length | 1MB | The size of the prefetch unit used by the slow pool |
alluxio.user.position.reader.streaming.slow.async.prefetch.max.part.number | 64 | The maximum number of units a single opened file can have for the slow pool |
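A minimal sketch of enabling the slow pool in `alluxio-site.properties`, using the recommended values from the table. Whether the slow pool should also serve cold reads depends on your workload.

```properties
# Enable the secondary (slow) async prefetch pool
alluxio.user.position.reader.streaming.async.prefetch.use.slow.thread.pool=true
# Also route cold reads to the slow pool
# (set to false to use it only for cache filter reads)
alluxio.user.position.reader.streaming.async.prefetch.use.slow.thread.pool.for.cold.read=true
# Concurrency and prefetch unit sizing for the slow pool
alluxio.user.position.reader.streaming.slow.async.prefetch.thread=256
alluxio.user.position.reader.streaming.slow.async.prefetch.part.length=1MB
alluxio.user.position.reader.streaming.slow.async.prefetch.max.part.number=64
```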
Client Large File Preload
Large file preload is an optimization for cold reads of large files. If the feature is enabled, Alluxio loads the whole file concurrently into Alluxio workers when the client first reads the file. When running the FIO benchmark against a single 100GB file stored on S3, Alluxio's cold read performance with this feature is comparable to that of a fully cached hot read.
Deduplication is handled on both the client and worker side to avoid excessive RPC calls and redundant traffic to the UFS. Note that since Alluxio always fully loads the file, this feature can cause read amplification if the application does not need to read the whole file.
Enabling the Feature
Configuration item | Recommended value | Description |
---|---|---|
alluxio.user.position.reader.preload.data.enabled | true | Set to true to enable large file preloading |
alluxio.user.position.reader.preload.data.file.size.threshold.min | 1GB | The minimum file size to trigger the async preload |
alluxio.user.position.reader.preload.data.file.size.threshold.max | 200GB | The maximum file size to trigger the async preload. This is useful to avoid loading extremely large files that would completely fill up the page store capacity and trigger cache eviction. |
alluxio.worker.preload.data.thread.pool.size | 64 | The number of concurrent jobs on the worker that load the file's data from the UFS in parallel. Each job loads one page into Alluxio. For example, if the page size is 4MB and this config is set to 64, the worker will load 256MB per iteration. |
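A minimal sketch of enabling large file preload, assuming the client-side properties are set in the client's `alluxio-site.properties` (or the compute framework's Alluxio configuration) and the worker-side property is set in the workers' configuration. Values follow the recommendations above.

```properties
# Client side: enable preloading and bound the file sizes that trigger it
alluxio.user.position.reader.preload.data.enabled=true
alluxio.user.position.reader.preload.data.file.size.threshold.min=1GB
alluxio.user.position.reader.preload.data.file.size.threshold.max=200GB

# Worker side: number of concurrent preload jobs per worker
alluxio.worker.preload.data.thread.pool.size=64
```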