Houssam:
Hi! I'm trying to get started with lakeFS by connecting a local deployment to a remote s3 bucket. I'm running
lakectl ingest --from s3://secret-bucket/prefix/ --to lakefs://my-repo/main/
but I get the following error:
```
physical address is not valid for block adapter: local
400 Bad Request
```
Does anyone know what I'm doing wrong? I confirmed I have read access to the bucket (aws s3 ls works)
Or:
Hi Houssam and welcome! Let me check
can you please run that with the -v / --verbose flag?
Houssam:
Thank you for your help!
yes one sec
I still get the same error message
the flag didn't give me more info
Or:
thank you... let me try to run it myself to see if I can reproduce
Houssam:
I'm using the latest release btw
oh
the latest release notes say I'm supposed to run a migration, does that also have to be the case if I'm running a fresh docker-compose locally?
Or:
you can try running migrate up; if you're already on the latest version it'll say so
I managed to run the same command, so it's not a syntax issue. I'm using the playground though
```
$ lakectl ingest --from s3://my-bucket/path/ --to lakefs://my-repo/main/
Staged 1 objects so far...

Staged 3 external objects (total of 20.3 kB)
```
did you already run migrate up to see if your database is on the latest version?
Houssam:
running migrate up ... looks like my config is not properly set up
Or:
are you able to list your repositories using lakectl repo list?
Houssam:
yes
```
❯ lakectl repo list
+------------+-------------------------------+------------------+-------------------+
| REPOSITORY | CREATION DATE                 | DEFAULT REF NAME | STORAGE NAMESPACE |
+------------+-------------------------------+------------------+-------------------+
| my-repo    | 2022-05-04 16:27:02 -0400 EDT | main             | local://my-repo/  |
+------------+-------------------------------+------------------+-------------------+
```
Itai:
[shared a link to a related thread]
Houssam:
I looked into that thread, the user is getting data from Azure
and solved the issue once they switched to an s3 bucket ... which is where I'm having an issue
Itai:
Got it
Houssam:
dumb question, but
```
❯ lakefs migrate up
INFO   [2022-05-04T16:51:52-04:00]lakeFS/cmd/lakefs/cmd/root.go:103 cmd/lakefs/cmd.initConfig Config loaded                                 fields.file=/Users/houssamkherraz/.lakefs.yaml file=/Users/houssamkherraz/.lakefs.yaml phase=startup
FATAL  [2022-05-04T16:51:52-04:00]lakeFS/cmd/lakefs/cmd/root.go:108 cmd/lakefs/cmd.initConfig Invalid config                                error="bad configuration: missing required keys: [auth.encrypt.secret_key blockstore.type]" fields.file=/Users/houssamkherraz/.lakefs.yaml file=/Users/houssamkherraz/.lakefs.yaml phase=startup
```
Or:
let me try running lakefs locally. mind sharing your configuration? if so, please hide any credentials/secrets
Houssam:
I do have the secret_access_key field in there, what's blockstore.type? Do I need some other config there?
Or:
you do need blockstore.type, that's where lakeFS stores the objects; in your case it should be s3
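something like this, as a minimal sketch of the server config (these are the two keys the migrate error above complained about; the secret value is a placeholder, not from this thread):
```yaml
# minimal sketch of a lakefs server config, assuming an s3 blockstore;
# both keys below are the ones named in the "missing required keys"
# error above, and the secret value is a placeholder
auth:
  encrypt:
    secret_key: "some-random-string"
blockstore:
  type: s3
```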
Houssam:
my config looks like:
```
❯ vim .lakectl.yaml

# lakectl command line configuration - save under the filename $HOME/.lakectl.yaml
credentials:
  access_key_id: X
  secret_access_key: Y
server:
  endpoint_url: http://127.0.0.1:8000/api/v1
```
where does that go in the config file?
Or:
this is the lakectl config, can you share the lakefs configuration? (the server's configuration)
Houssam:
oh how do I check that?
Or:
how did you run lakefs?
Houssam:
via docker-compose
curl https://compose.lakefs.io | docker-compose -f - up
as mentioned in the docs
Or:
I guess you're following our docs, let me try and reproduce it
Houssam:
thanks for your help Or
Or:
happy to assist
ok, I managed to reproduce it. I'm checking
Houssam:
would it be easier for me to just use an older release to unblock myself?
Or:
did it work for you in older versions using the local storage?
Houssam:
No this is the first time I'm trying lakefs ^^
Or:
ingest only works when the source and destination have the same block adapter (in this case s3 to s3)
thanks @Itai Admi
Houssam:
oh so it just never works from s3 to local?
Or:
no
you can run lakefs using an s3 storage adapter and import from s3 into the lakefs repo
you can see the requirements and steps here: https://docs.lakefs.io/setup/storage/s3.html
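as a rough sketch of what that can look like (not the exact setup from this thread; region, secret and credentials below are all placeholders): lakeFS reads config keys from LAKEFS_-prefixed environment variables, so blockstore.type becomes LAKEFS_BLOCKSTORE_TYPE:
```sh
# rough sketch: running the lakefs server with an s3 blockstore by setting
# config keys through environment variables; every value is a placeholder,
# and the postgres settings from the docker-compose file are still needed
docker run --rm -p 8000:8000 \
  -e LAKEFS_BLOCKSTORE_TYPE=s3 \
  -e LAKEFS_BLOCKSTORE_S3_REGION=us-east-1 \
  -e LAKEFS_AUTH_ENCRYPT_SECRET_KEY=some-random-string \
  -e AWS_ACCESS_KEY_ID=X \
  -e AWS_SECRET_ACCESS_KEY=Y \
  treeverse/lakefs run
```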
Houssam:
Got it, thanks!
Or:
or you can use our playground to play with lakeFS online (and also import from other s3 buckets or use our pre-loaded data to see value fast)
a:
Hi Houssam, you might look at this page for our various data import options. That page also notes that "ingest" is a zero-copy option; lakeFS supports only a single block adapter, so importing from a different block adapter will require a copy. To copy, use any tool - options include aws s3 cp, Rclone, or distcp.
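As a rough sketch of the copy route (repo, bucket and endpoint are placeholders, not from this thread): lakeFS exposes an S3-compatible gateway, so the AWS CLI can write straight into a repo/branch path:
```sh
# rough sketch of a copy-based import through the lakeFS S3 gateway;
# bucket, repo, branch and endpoint are placeholders, and the AWS CLI
# must be given the lakeFS access key/secret as credentials for this call
aws s3 cp --recursive \
  --endpoint-url http://127.0.0.1:8000 \
  s3://secret-bucket/prefix/ \
  s3://my-repo/main/prefix/
```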