S3 API
Alluxio supports a RESTful API that is compatible with the basic operations of the Amazon S3 API.
The Alluxio S3 API should be used by applications that are designed to communicate with S3-like storage and that would benefit from the other features provided by Alluxio, such as data caching, data sharing with file system based applications, and storage system abstraction (e.g., using Ceph instead of S3 as the backing store). For example, a simple application that downloads reports generated by analytic tasks can use the S3 API instead of the more complex file system API.
Limitations and Disclaimers
Alluxio Filesystem Limitations
Only top-level Alluxio directories are treated as buckets by the S3 API.
Hence, the root directory of the Alluxio file system is not treated as an S3 bucket. Any root-level objects (e.g., alluxio://file) will be inaccessible through the Alluxio S3 API. To treat sub-directories as a bucket, the separator : must be used in the bucket name (e.g., s3://sub:directory:bucket/file). Note that this is purely a convenience feature, and such bucket names are not returned by API actions such as ListBuckets.
Alluxio uses / as a reserved separator. Therefore, any S3 paths with objects or folders named / (e.g., s3://example-bucket//) will cause undefined behavior. For additional limitations on object key names please check this page: Alluxio limitations
No Bucket Virtual Hosting
Virtual hosting of buckets is not supported in the Alluxio S3 API. Therefore, S3 clients must use path-style requests (i.e., http://s3.amazonaws.com/{bucket}/{object} and NOT http://{bucket}.s3.amazonaws.com/{object}).
S3 Writes Implicitly Overwrite
As described in the AWS S3 docs for PutObject:
Amazon S3 is a distributed system. If it receives multiple write requests for the same object simultaneously, it overwrites all but the last object written. Amazon S3 does not provide object locking; if you need this, make sure to build it into your application layer or use versioning instead.
Note that the Alluxio S3 API does not currently support object versioning.
Alluxio S3 will overwrite an existing key, as well as the temporary directory for a multipart upload.
Folders in ListObjects(V2)
All sub-directories in Alluxio will be returned in ListObjects(V2) as 0-byte folders. This matches the behavior of the AWS S3 console when it creates all parent folders for each object.
Tagging & Metadata Limits
User-defined tags on buckets and objects are limited to 10 and obey the S3 tag restrictions. Set the property key alluxio.proxy.s3.tagging.restrictions.enabled=false to disable this behavior.
The maximum size for user-defined metadata in PUT requests is 2KB by default, in accordance with S3 object metadata restrictions. Set the property key alluxio.proxy.s3.header.metadata.max.size to change this behavior.
Performance Implications
The S3 API leverages the Alluxio REST proxy, introducing an additional network hop for Alluxio clients. For optimal performance, it is recommended to run the proxy server and an Alluxio worker on each compute node. It is also recommended to put all the proxy servers behind a load balancer.
Global request headers
Authorization: AWS4-HMAC-SHA256 Credential={user}/..., SignedHeaders=..., Signature=...
There is currently no support for access & secret keys in the Alluxio S3 API. The only supported authentication scheme is the SIMPLE authentication type. By default, the user that is used to perform any operations is the user that was used to launch the Alluxio proxy process. Therefore this header is used exclusively to specify an Alluxio ACL username to perform an operation with. In order to remain compatible with other S3 clients, the header is still expected to follow the AWS Signature Version 4 format. When supplying an access key to an S3 client, put the intended Alluxio ACL username. The secret key is unused so you may use any dummy value.
Supported S3 API Actions
The following table describes the support status for current S3 API Actions:
Property Keys
The following table contains the configurable Alluxio property keys which pertain to the Alluxio S3 API.
alluxio.proxy.s3.bucket.naming.restrictions.enabled (default: false)
Toggles whether the Alluxio S3 API will enforce AWS S3 bucket naming restrictions. See https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html.

alluxio.proxy.s3.bucketpathcache.timeout (default: 0min)
Expire cached bucket path statistics after this time period. Set to 0min to disable the cache. If the cache is enabled, be aware that the Alluxio S3 API will behave differently from the AWS S3 API if bucket path cache entries become stale.

alluxio.proxy.s3.complete.multipart.upload.keepalive.enabled (default: false)
Whether to send whitespace characters as a keepalive message during CompleteMultipartUpload. Enabling this causes any errors to be silently ignored, although they will still appear in the proxy logs.

alluxio.proxy.s3.complete.multipart.upload.keepalive.time.interval (default: 30sec)
The maximum keepalive time for CompleteMultipartUpload. The keepalive whitespace characters are sent after 1 second, with the interval increasing exponentially up to the configured value.

alluxio.proxy.s3.complete.multipart.upload.min.part.size (default: 5MB)
The minimum required size of parts for multipart uploads. Parts smaller than this limit, aside from the final part, result in an EntityTooSmall error code. Set to 0 to disable the size requirement.

alluxio.proxy.s3.complete.multipart.upload.pool.size (default: 20)
The CompleteMultipartUpload thread pool size.

alluxio.proxy.s3.deletetype (default: ALLUXIO_AND_UFS)
The delete type used when deleting buckets and objects through the S3 API. Valid options are ALLUXIO_AND_UFS (delete in both Alluxio and the UFS) and ALLUXIO_ONLY (delete only in the Alluxio namespace).

alluxio.proxy.s3.global.read.rate.limit.mb (default: 0)
Limits the maximum read speed across all connections. Set to a value less than or equal to 0 to disable rate limiting.

alluxio.proxy.s3.header.metadata.max.size (default: 2KB)
The maximum size allowed for user-defined metadata in S3 PUT request headers. Set to 0 to disable the size limit.

alluxio.proxy.s3.multipart.upload.cleaner.enabled (default: false)
Enables automatic cleanup of long-running multipart uploads.

alluxio.proxy.s3.multipart.upload.cleaner.pool.size (default: 1)
The thread pool size of the multipart upload cleaner.

alluxio.proxy.s3.multipart.upload.cleaner.retry.count (default: 3)
The number of retries when aborting a multipart upload fails.

alluxio.proxy.s3.multipart.upload.cleaner.retry.delay (default: 10sec)
The retry delay when aborting a multipart upload fails.

alluxio.proxy.s3.multipart.upload.cleaner.timeout (default: 10min)
The timeout for automatically aborting a multipart upload on the proxy.

alluxio.proxy.s3.single.connection.read.rate.limit.mb (default: 0)
Limits the maximum read speed for each connection. Set to a value less than or equal to 0 to disable rate limiting.

alluxio.proxy.s3.tagging.restrictions.enabled (default: true)
Toggles whether the Alluxio S3 API will enforce AWS S3 tagging restrictions (10 tags, 128-character keys, 256-character values). See https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-managing.html.

alluxio.proxy.s3.v2.async.heavy.pool.core.thread.number (default: 8)
The core thread count for the async heavy thread pool.

alluxio.proxy.s3.v2.async.heavy.pool.maximum.thread.number (default: 64)
The maximum thread count for the async heavy thread pool.

alluxio.proxy.s3.v2.async.heavy.pool.queue.size (default: 65536)
The queue size for the async heavy thread pool.

alluxio.proxy.s3.v2.async.light.pool.core.thread.number (default: 8)
The core thread count for the async light thread pool.

alluxio.proxy.s3.v2.async.light.pool.maximum.thread.number (default: 64)
The maximum thread count for the async light thread pool.

alluxio.proxy.s3.v2.async.light.pool.queue.size (default: 65536)
The queue size for the async light thread pool.

alluxio.proxy.s3.v2.async.processing.enabled (default: false)
(Experimental) If enabled, S3 requests are handled asynchronously when the v2 Alluxio S3 proxy service is enabled.

alluxio.proxy.s3.v2.version.enabled (default: true)
(Experimental) Enables v2, an optimized version of the Alluxio S3 proxy service.

alluxio.proxy.s3.writetype (default: CACHE_THROUGH)
The write type used when creating buckets and objects through the S3 API. Valid options are MUST_CACHE (write only to Alluxio; must be stored in Alluxio), CACHE_THROUGH (try to cache, write to the UFS synchronously), ASYNC_THROUGH (try to cache, write to the UFS asynchronously), and THROUGH (no caching, write to the UFS synchronously).
Example Usage
S3 API Actions
Python S3 Client
Tested for Python 2.7.
Create a connection:
Note that the boto package must be installed first.
Authenticating as a user:
By default, authenticating with no access_key_id performs the file system actions as the user that launched the proxy.
Set aws_access_key_id to a different username to perform the actions as that user.
Create a bucket
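A hedged boto3 sketch (a running proxy at the assumed localhost:39999 address is required; the bucket name "bucket" is illustrative):

```python
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:39999/api/v1/s3",  # assumed proxy address
    aws_access_key_id="testuser",
    aws_secret_access_key="dummy",
    config=Config(s3={"addressing_style": "path"}),
)

# Creating a bucket creates the top-level Alluxio directory /bucket.
s3.create_bucket(Bucket="bucket")
```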
List all buckets owned by the user
Authenticating as a user is necessary to have buckets returned by this operation.
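Sketched with boto3 against an assumed localhost proxy (a running proxy is required):

```python
import boto3
from botocore.client import Config

# Authenticate as a user so that the user's buckets are returned.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:39999/api/v1/s3",  # assumed proxy address
    aws_access_key_id="testuser",  # Alluxio username
    aws_secret_access_key="dummy",
    config=Config(s3={"addressing_style": "path"}),
)

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```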
PUT a small object
Get the small object
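Both steps sketched with boto3 (assumed endpoint, bucket, and key names; a running proxy is required):

```python
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:39999/api/v1/s3",  # assumed proxy address
    aws_access_key_id="testuser",
    aws_secret_access_key="dummy",
    config=Config(s3={"addressing_style": "path"}),
)

# PUT a small object, then read it back.
s3.put_object(Bucket="bucket", Key="small.txt", Body=b"Hello, Alluxio!")
body = s3.get_object(Bucket="bucket", Key="small.txt")["Body"].read()
```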
Upload a large object
Create an 8MB file on the local file system.
Then use the Python S3 client to upload it as an object.
Get the large object
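The large-object round trip can be sketched with boto3 as follows (the endpoint and the file names 8mb.data / large.data are assumptions; a running proxy is required):

```python
import os

import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:39999/api/v1/s3",  # assumed proxy address
    aws_access_key_id="testuser",
    aws_secret_access_key="dummy",
    config=Config(s3={"addressing_style": "path"}),
)

# Create an 8MB file on the local file system.
with open("8mb.data", "wb") as f:
    f.write(os.urandom(8 * 1024 * 1024))

# Upload it as an object, then download it back.
s3.upload_file("8mb.data", "bucket", "large.data")
s3.download_file("bucket", "large.data", "8mb.copy")
```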
Delete the objects
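For example, with boto3 (assumed endpoint and key names from the earlier steps; a running proxy is required):

```python
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:39999/api/v1/s3",  # assumed proxy address
    aws_access_key_id="testuser",
    aws_secret_access_key="dummy",
    config=Config(s3={"addressing_style": "path"}),
)

s3.delete_object(Bucket="bucket", Key="small.txt")
s3.delete_object(Bucket="bucket", Key="large.data")
```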
Initiate a multipart upload
Upload parts
Complete the multipart upload
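The three multipart steps can be sketched together with boto3 (assumed endpoint and key name; a running proxy is required; 5MB parts match the default alluxio.proxy.s3.complete.multipart.upload.min.part.size):

```python
import os

import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:39999/api/v1/s3",  # assumed proxy address
    aws_access_key_id="testuser",
    aws_secret_access_key="dummy",
    config=Config(s3={"addressing_style": "path"}),
)

# Initiate the multipart upload.
mpu = s3.create_multipart_upload(Bucket="bucket", Key="mpu.data")
upload_id = mpu["UploadId"]

# Upload two 5MB parts, recording each part's ETag.
parts = []
for part_number in (1, 2):
    resp = s3.upload_part(
        Bucket="bucket",
        Key="mpu.data",
        PartNumber=part_number,
        UploadId=upload_id,
        Body=os.urandom(5 * 1024 * 1024),
    )
    parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})

# Complete the multipart upload.
s3.complete_multipart_upload(
    Bucket="bucket",
    Key="mpu.data",
    UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)
```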
Abort the multipart upload
Incomplete uploads can be aborted.
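Sketched with boto3 (assumed endpoint and key name; a running proxy is required):

```python
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:39999/api/v1/s3",  # assumed proxy address
    aws_access_key_id="testuser",
    aws_secret_access_key="dummy",
    config=Config(s3={"addressing_style": "path"}),
)

# Initiate an upload, then abort it before completion.
mpu = s3.create_multipart_upload(Bucket="bucket", Key="aborted.data")
s3.abort_multipart_upload(
    Bucket="bucket", Key="aborted.data", UploadId=mpu["UploadId"]
)
```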
Delete the bucket
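For example, with boto3 (assumed endpoint and bucket name; a running proxy is required):

```python
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:39999/api/v1/s3",  # assumed proxy address
    aws_access_key_id="testuser",
    aws_secret_access_key="dummy",
    config=Config(s3={"addressing_style": "path"}),
)

# As in AWS S3, the bucket is expected to be empty before deletion.
s3.delete_bucket(Bucket="bucket")
```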