Storage Integrations Overview

This guide covers the general prerequisites for connecting Alluxio to your desired under storage system and for running Alluxio locally against it. To learn how to configure Alluxio with each individual storage system, please see the respective page for that system. Refer to configuring mount points to learn about the commands for mounting these storages into Alluxio's namespace.
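
Before walking through the prerequisites, the following is a minimal sketch of what mounting an S3 path into the Alluxio namespace can look like. The command shape and flag names follow the long-standing alluxio fs mount form and should be treated as assumptions for this release; the exact syntax is documented on the configuring mount points page.

    # Hedged sketch: mount an S3 prefix at /s3 in the Alluxio namespace and pass
    # credentials as mount options (command form and flag names are assumptions;
    # see configuring mount points for the exact DA-3.2 syntax).
    bin/alluxio fs mount \
      --option s3a.accessKeyId=<S3_ACCESS_KEY_ID> \
      --option s3a.secretKey=<S3_SECRET_KEY> \
      /s3 s3://<S3_BUCKET>/<S3_DIRECTORY>

    # After the mount, objects under s3://<S3_BUCKET>/<S3_DIRECTORY>/ are visible
    # under the Alluxio path /s3/.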

Prerequisites

In preparation for using your chosen storage system with Alluxio, make sure you have the required location, credentials, and any additional properties on hand before you begin configuring Alluxio to use it as under storage.

For the purposes of this guide, the following values are placeholders.

Storage System | Location                  | Credentials                      | Additional Properties
Amazon AWS S3  | S3_BUCKET, S3_DIRECTORY   | S3_ACCESS_KEY_ID, S3_SECRET_KEY  |
HDFS           | HDFS_NAMENODE, HDFS_PORT  |                                  | Specify Hadoop version: HADOOP_VERSION
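
To illustrate where these placeholders end up, below is a hedged sketch of the corresponding alluxio-site.properties entries. The keys s3a.accessKeyId, s3a.secretKey, and alluxio.underfs.version are the property names Alluxio conventionally uses for these settings, but treat them as assumptions for this release; on Kubernetes these values are typically supplied through the cluster configuration, and the Amazon AWS S3 and HDFS pages give the authoritative details.

    # Hedged sketch of alluxio-site.properties entries (keys shown are the
    # conventional Alluxio property names; confirm against the S3 and HDFS pages).

    # Amazon AWS S3 credentials for s3://<S3_BUCKET>/<S3_DIRECTORY>
    s3a.accessKeyId=<S3_ACCESS_KEY_ID>
    s3a.secretKey=<S3_SECRET_KEY>

    # HDFS under storage at hdfs://<HDFS_NAMENODE>:<HDFS_PORT>/ and the
    # Hadoop client version to use when talking to it
    alluxio.underfs.version=<HADOOP_VERSION>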