
Monde Sinxi

03/02/2023, 3:25 PM
Hi all, having some trouble getting rclone to work nicely with lakeFS. I've managed to spin up lakeFS in a container and joined it to a network I had created earlier. I have rclone in another container on that same network, so service names are resolved internally in Docker.
[lakefs]
env_auth = false
type = s3
provider = Other
endpoint = http://lakefs:8000
region = use-east-1
secret_access_key = $SECRET_ACCESS_KEY
access_key_id = $ACCESS_KEY_ID
force_path_style = true
no_check_bucket = true
I have to use `Other` as the provider because rclone will override `force_path_style` with `false` if I use the `AWS` provider as suggested in the docs. If I then run:
rclone ls -vv --dump=bodies lakefs:repo/branch/path
I get the following error
2023/03/02 17:11:13 Failed to lsd with 2 errors: last error was: MissingFields: Missing fields in request.
        status code: 400, request id: , host id:
Anyone have any idea how I go about resolving this?
👀 1

Itai Admi

03/02/2023, 3:40 PM
Hi @Monde Sinxi, welcome to the lake :lakefs: Off the top of my head it seems like the lakeFS reference should be `lakefs://repo/branch/path`, isn't it so?

Monde Sinxi

03/02/2023, 3:49 PM
Hi @Itai Admi. Thank you! Even if I change the reference to `lakefs://repo/branch/path` I get the same error.

Itai Admi

03/02/2023, 3:49 PM
Cool, trying to reproduce this locally..
👍 1
I see `region = use-east-1` in your config, maybe that typo makes the difference?
Here’s a local config file that works for me:
[lakefs]
type = s3
provider = s3
access_key_id = <ACCESS_KEY_ID>
secret_access_key = <SECRET_ACCESS_KEY>
region = us-east-1
endpoint = http://127.0.0.1:8000

Monde Sinxi

03/02/2023, 4:03 PM
Hmmm. I still get the same error. I wonder if it's my `docker-compose.yml` file; here are the relevant bits... am I missing anything?
environment:
          LAKEFS_AUTH_ENCRYPT_SECRET_KEY: "ENCRYPT_KEY"
          LAKEFS_DATABASE_TYPE: "postgres"
          LAKEFS_DATABASE_POSTGRES_CONNECTION_STRING: "postgres://postgres:password@postgres:5432/lakefs"
          LAKEFS_BLOCKSTORE_TYPE: "s3"
          LAKEFS_BLOCKSTORE_S3_ENDPOINT: "http://minio-prod:9000"
          LAKEFS_BLOCKSTORE_S3_FORCE_PATH_STYLE: "true"
          LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID: "minioadmin"
          LAKEFS_BLOCKSTORE_S3_CREDENTIALS_SECRET_ACCESS_KEY: "minioadmin"
          LAKEFS_BLOCKSTORE_S3_DISCOVER_BUCKET_REGION: "false"
          LAKECTL_SERVER_ENDPOINT_URL: "http://localhost:8000"
          BLOCKSTORE_S3_SKIP_VERIFY_CERTIFICATE_TEST_ONLY: "false"
          LAKEFS_LOGGING_LEVEL: "INFO"
          LAKEFS_STATS_ENABLED: "true"
I also get "provider s3 unknown"

Itai Admi

03/02/2023, 4:19 PM
Can you switch the logging level to TRACE and share the lakeFS logs after running the rclone command?

Monde Sinxi

03/02/2023, 4:56 PM
Here's the error message

Itai Admi

03/02/2023, 4:59 PM
Cool - are you using the default v4 signature for the s3 rclone source?

Monde Sinxi

03/02/2023, 5:07 PM
I think I cut something out...
So the "log header does not match V2 structure" part suggests I'm using a V2 signature?

Itai Admi

03/02/2023, 5:10 PM
I think so.

Monde Sinxi

03/02/2023, 5:34 PM
I'm actually not too sure what to do. I tried using the `--s3-v2-auth` flag and that did not work... I still get the same error.
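For reference, rclone's S3 backend exposes that same switch as the `v2_auth` option inside the remote's config section; leaving it unset (or `false`) keeps rclone on its default v4 signing. A sketch of the relevant line, assuming the remote is named `lakefs`:

```ini
[lakefs]
# v4 signing is rclone's default; only set v2_auth = true if the
# server genuinely requires legacy v2 signatures
v2_auth = false
```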

Itai Admi

03/02/2023, 5:36 PM
Can you use my rclone configuration, change the values to fit your env, and share the result? I'll try again later today.

Monde Sinxi

03/02/2023, 5:58 PM
ARGH! Fixed, I had my access_key_id swapped with my secret_access_key. The error message didn't exactly help pinpoint it, though. Thanks for the help!
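For anyone skimming the thread: the root cause was simply the two credentials being swapped in the rclone remote. A sketch of the corrected config, with placeholder credential values (and the earlier `use-east-1` region typo also fixed):

```ini
[lakefs]
type = s3
provider = Other
env_auth = false
endpoint = http://lakefs:8000
region = us-east-1
force_path_style = true
no_check_bucket = true
# each credential under its matching key
access_key_id = <ACCESS_KEY_ID>
secret_access_key = <SECRET_ACCESS_KEY>
```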

Itai Admi

03/02/2023, 5:59 PM
It's always the simple things 😄