Alluxio
DA-3.2

Java HDFS-compatible API


For Java applications that consume an HDFS-compatible API, such as Spark and Trino, Alluxio can be configured as the file system implementation for specific URI schemes. By setting the appropriate configuration properties and placing the required classes on the application classpath, the application can transparently interface with Alluxio without modifying existing workflow code.
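As a minimal sketch, this registration is typically done in the Hadoop configuration (for example, `core-site.xml`). The property name `fs.alluxio.impl` and the class `alluxio.hadoop.FileSystem` below follow the open-source Alluxio convention and are assumptions here; DA-3.2 may use different property or class names, so verify the exact values against the installation documentation for your deployment.

```xml
<!-- core-site.xml: map the alluxio:// scheme to Alluxio's
     org.apache.hadoop.fs.FileSystem implementation.
     The property name and class below are assumptions based on the
     open-source Alluxio convention; confirm them against the DA-3.2
     integration docs before use. -->
<property>
  <name>fs.alluxio.impl</name>
  <value>alluxio.hadoop.FileSystem</value>
</property>
```

With the Alluxio client JAR on the application classpath, a path under the registered scheme can then be opened through the standard `org.apache.hadoop.fs.FileSystem` API, with no changes to application code.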

For specific instructions to integrate with a compatible application, see the corresponding installation documentation.

The Alluxio client library provides the implementation of org.apache.hadoop.fs.FileSystem; see the Trino and Spark integration pages for setup details.