Michael Gaebel
09/29/2023, 1:59 PM
I've set the spark.hadoop.fs.s3a.endpoint property to be the lakeFS endpoint. This works great for data already in a lakeFS repo, but is there any way to still access a "normal" S3 path within the same session? I'd like to create a DataFrame from data at a "normal" S3 path and use it alongside the lakeFS-managed data.
If not, is the only reasonable path to first import the data at the normal S3 path into the lakeFS repo? Thanks for any insight you have.
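For context on the question: Hadoop's S3A connector supports per-bucket configuration (fs.s3a.bucket.BUCKET.option), so a single Spark session can route one "bucket" (the lakeFS repository name) through the lakeFS S3 gateway while other buckets keep the default AWS endpoint. Below is a minimal PySpark sketch of that approach; the repository name (my-repo), bucket name (plain-bucket), endpoint URL, paths, and credentials are placeholders, and it assumes the hadoop-aws dependency is already on the classpath.
```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakefs-and-plain-s3")
    # Per-bucket overrides: applied only to paths under s3a://my-repo/.
    # With the lakeFS S3 gateway, the "bucket" is the repository name.
    .config("spark.hadoop.fs.s3a.bucket.my-repo.endpoint", "https://lakefs.example.com")
    .config("spark.hadoop.fs.s3a.bucket.my-repo.access.key", "<lakeFS access key>")
    .config("spark.hadoop.fs.s3a.bucket.my-repo.secret.key", "<lakeFS secret key>")
    .config("spark.hadoop.fs.s3a.bucket.my-repo.path.style.access", "true")
    # Defaults (no per-bucket override) still point at regular AWS S3.
    .config("spark.hadoop.fs.s3a.access.key", "<AWS access key>")
    .config("spark.hadoop.fs.s3a.secret.key", "<AWS secret key>")
    .getOrCreate()
)

# lakeFS-managed data, addressed as s3a://<repo>/<branch>/<path>:
lakefs_df = spark.read.parquet("s3a://my-repo/main/tables/events/")

# Ordinary S3 data in the same session:
plain_df = spark.read.parquet("s3a://plain-bucket/raw/events/")

lakefs_df.join(plain_df, "event_id").show()
```
Because the per-bucket keys take precedence over the bare fs.s3a.* keys only for the named bucket, no global fs.s3a.endpoint needs to be set to the lakeFS address, which is what makes the two sources coexist.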
Niro
09/29/2023, 2:47 PM

Ariel Shaqed (Scolnicov)
09/29/2023, 2:54 PM

Michael Gaebel
09/29/2023, 2:56 PM

Ariel Shaqed (Scolnicov)
09/29/2023, 2:57 PM

Michael Gaebel
09/29/2023, 2:58 PM

Ariel Shaqed (Scolnicov)
09/29/2023, 2:59 PM

Michael Gaebel
09/29/2023, 3:09 PM

Niro
09/29/2023, 3:14 PM