# dev
a
@Jonathan Rosenberg is right (following some F2F): S3AFileSystem reads (at least some) configuration from `fs.s3a.*` no matter which `fs.<scheme>.impl` loaded it. E.g. have a look at hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/InternalConstants.java in the hadoop repo. Even at the tip of trunk it says:
```java
  /**
   * AccessPoint ARN for the bucket. When set as a bucket override the requests for that bucket
   * will go through the AccessPoint.
   */
  public static final String ARN_BUCKET_OPTION = "fs.s3a.bucket.%s.accesspoint.arn";
```
(and I don't see any code that would rewrite the "s3a" part of the configuration option name).
🤢 2
😕 2
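A minimal sketch of what this implies, assuming hadoop-common on the classpath; the custom scheme, bucket name, and ARN are made up for illustration:

```java
import org.apache.hadoop.conf.Configuration;

public class S3aPrefixDemo {
  // Same template as InternalConstants.ARN_BUCKET_OPTION: the "s3a" part is
  // baked into the string, independent of the scheme the filesystem was
  // registered under.
  static final String ARN_BUCKET_OPTION = "fs.s3a.bucket.%s.accesspoint.arn";

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Register S3AFileSystem under a hypothetical custom scheme...
    conf.set("fs.mys3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
    // ...but the per-bucket access-point option is still keyed under fs.s3a,
    // not fs.mys3 (bucket and ARN below are placeholders):
    conf.set(String.format(ARN_BUCKET_OPTION, "my-bucket"),
        "arn:aws:s3:eu-west-1:123456789012:accesspoint/my-ap");
    System.out.println(conf.get("fs.s3a.bucket.my-bucket.accesspoint.arn"));
  }
}
```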
o
time to contribute to Hadoop then? 🙂
🔥 2
a
I'm still trying to figure out why it is that way. I don't understand it, so instead I worry. On Hadoop 3.x you can at least (and instead) configure buckets separately. I'm trying to figure out how to access e.g. 2 S3 endpoints in a single Spark job, other than by using S3A for one and S3N for the other.
👍 1
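A minimal sketch of the per-bucket route on Hadoop 3.x; the bucket names and endpoints are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;

public class TwoEndpointsDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hadoop 3.x per-bucket overrides (fs.s3a.bucket.<bucket>.<option>)
    // let each bucket resolve against a different endpoint.
    conf.set("fs.s3a.bucket.bucket-on-aws.endpoint", "s3.eu-west-1.amazonaws.com");
    conf.set("fs.s3a.bucket.bucket-on-minio.endpoint", "http://minio.internal:9000");
    conf.set("fs.s3a.bucket.bucket-on-minio.path.style.access", "true");
    // Both s3a://bucket-on-aws/... and s3a://bucket-on-minio/... can now be
    // used in the same job, without falling back to S3N for one of them.
  }
}
```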
j
@Ariel Shaqed (Scolnicov) it's worth mentioning that it is possible to configure fs.<scheme> under the S3N filesystem… I wonder why they didn't copy this capability 🤔
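For contrast, a sketch of the S3N behavior described above, assuming a Hadoop 2.x classpath where NativeS3FileSystem still exists; the custom scheme and credential values are placeholders:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3nSchemeDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Register S3N under a second, hypothetical scheme.
    conf.set("fs.mys3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem");
    // S3N derives its option names from the URI scheme ("fs.<scheme>.*"),
    // so these keys are picked up for mys3n:// URIs (placeholder values):
    conf.set("fs.mys3n.awsAccessKeyId", "AKIA...");
    conf.set("fs.mys3n.awsSecretAccessKey", "...");
    // Obtains an S3N instance bound to the custom scheme.
    FileSystem fs = FileSystem.get(URI.create("mys3n://some-bucket/"), conf);
  }
}
```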