Alluxio
DA-3.5 (stable)

Java HDFS-compatible API

Last updated 6 months ago

For Java applications that expose an HDFS-compatible API, such as Spark and Trino, Alluxio can be configured as the file system implementation for specific URI schemes. By setting the relevant configuration properties and providing Alluxio's implementation of org.apache.hadoop.fs.FileSystem on the application classpath, the application can transparently interface with Alluxio without modifying existing workflow code.
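As an illustrative sketch, the scheme-to-implementation mapping is typically declared in the Hadoop configuration. The property name pattern (`fs.<scheme>.impl`) is standard Hadoop; the `alluxio://` scheme and the `alluxio.hadoop.FileSystem` class name are assumptions taken from the open-source Alluxio client and may differ in your distribution, so verify them against the installation documentation for your deployment:

```xml
<!-- core-site.xml: map the alluxio:// scheme to the Alluxio client.
     Class name assumed from the open-source Alluxio client; verify
     against your distribution. -->
<configuration>
  <property>
    <name>fs.alluxio.impl</name>
    <value>alluxio.hadoop.FileSystem</value>
  </property>
</configuration>
```

With a mapping like this in place and the Alluxio client jar on the application classpath, existing Hadoop FileSystem calls against paths using the mapped scheme are routed through Alluxio without code changes.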

For specific instructions on integrating with a compatible application, see the corresponding installation documentation for Trino and Spark.