#help

Oliver Dain

08/15/2023, 7:40 PM
I'm trying to get `rclone` to work with the sample repo one gets by clicking "try without installing". I've set up my rclone config file like:
[lakefs]
type = s3
provider = AWS
env_auth = false
access_key_id = <redacted>
secret_access_key = <redacted>
endpoint = https://big-zebra.us-east-1.lakefscloud.io
no_check_bucket = true
where `big-zebra.us-east-1...` is, I think, the correct endpoint for that sample repo (it's what shows in my address bar). But I get certificate errors:
$ rclone -vv ls lakefs:sample-repo/testing/  
2023/08/15 12:35:09 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "-vv" "ls" "lakefs:sample-repo/testing/"]
2023/08/15 12:35:09 DEBUG : Creating backend with remote "lakefs:sample-repo/testing/"
2023/08/15 12:35:09 DEBUG : Using config file from "/home/oliver/.config/rclone/rclone.conf"
2023/08/15 12:35:09 DEBUG : fs cache: renaming cache item "lakefs:sample-repo/testing/" to be canonical "lakefs:sample-repo/testing"
2023/08/15 12:35:49 DEBUG : 2 go routines active
2023/08/15 12:35:49 Failed to ls: RequestError: send request failed
caused by: Get "https://sample-repo.big-zebra.us-east-1.lakefscloud.io/?delimiter=&encoding-type=url&max-keys=1000&prefix=testing%2F": x509: certificate is valid for *.us-east-1.lakefscloud.io, us-east-1.lakefscloud.io, not sample-repo.big-zebra.us-east-1.lakefscloud.io
I see it's a wildcard cert for `*.us-east-1.lakefscloud.io`, which seems like it should work, but it also looks like `rclone` is adding the repo name (`sample-repo`) to the hostname, which seems incorrect. What am I doing wrong?
Note: I tried a similar setup with `gsutil` and a `.boto` file and got the same certificate error.

Oz Katz

08/15/2023, 7:55 PM
Hey @Oliver Dain! Can you try setting
provider = Other
instead of `AWS`?
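[Editor's note: for reference, the full config with that one change would look like the following; endpoint as in the original message, keys still redacted.]
[lakefs]
type = s3
provider = Other
env_auth = false
access_key_id = <redacted>
secret_access_key = <redacted>
endpoint = https://big-zebra.us-east-1.lakefscloud.io
no_check_bucket = true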

Oliver Dain

08/15/2023, 7:56 PM
Huh. That worked! But why?? How does rclone know which protocol to use to talk to it? Note: this is also contrary to the instructions here: https://docs.lakefs.io/howto/copying.html#using-rclone
Also: not sure, but I think that may have messed up the semantics. The `ls` gave me a recursive listing. With `provider = Other`, does it flatten the directory hierarchy?
scratch the 2nd:
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.

Oz Katz

08/15/2023, 7:59 PM
Indeed that seems like a documentation bug in lakeFS. From the rclone docs:
> s3-force-path-style
>
> If true use path style access if false use virtual hosted style.
> If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See the AWS S3 docs for more info.
> Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this set to false - rclone will do this automatically based on the provider setting.
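[Editor's note: to make the failure concrete, with virtual-hosted style the bucket (here, the repo) name becomes an extra DNS label in front of the endpoint, and a single-level wildcard certificate no longer matches. The sketch below is not rclone's actual code; `wildcard_matches` is a simplified stand-in for the certificate hostname check, which only lets `*` cover one leftmost label.]

```python
def wildcard_matches(pattern: str, host: str) -> bool:
    """Simplified hostname check: '*' covers exactly one leftmost DNS label."""
    p_labels = pattern.split(".")
    h_labels = host.split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

endpoint_host = "big-zebra.us-east-1.lakefscloud.io"
bucket = "sample-repo"
cert_pattern = "*.us-east-1.lakefscloud.io"

# Path style keeps the endpoint hostname: https://<endpoint>/<bucket>/...
path_style_host = endpoint_host
# Virtual-hosted style prepends the bucket:  https://<bucket>.<endpoint>/...
virtual_host = f"{bucket}.{endpoint_host}"

print(wildcard_matches(cert_pattern, path_style_host))  # True: one label under the wildcard
print(wildcard_matches(cert_pattern, virtual_host))     # False: two labels under the wildcard
```

This is why `provider = Other` (which leaves rclone's default path-style access in place) works, while `provider = AWS` (which switches rclone to virtual-hosted style) trips the certificate error from the original message.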
Would you like to open an issue on GitHub? I'd be happy to assist 🙂 (no worries if not, I can get it done)

Oliver Dain

08/15/2023, 8:03 PM
I've got to run to a meeting so if you could open the bug that'd be great. Thanks!
šŸ‘ 1

Oz Katz

08/15/2023, 8:15 PM
Issue opened.