#help

Steve Willard

07/14/2023, 8:14 PM
Feeling a bit confused. I’m running lakeFS locally in a container, and I can access it on port 8000. I have it configured to use my AWS credentials, backing it with S3. If I upload files via the UI, I can see things are working correctly, and data appears in the bucket. But when I do this:
```
aws --endpoint-url http://localhost:8000 s3 cp ~/Desktop/test.csv s3://764af7c3-d093-463b-9d57-74eda636a4cc/main/test.csv
```
I see in the logs:
```
time="2023-07-14T20:07:35Z" level=warning msg="could not find access key" func=pkg/gateway.AuthenticationHandler.func1 file="build/pkg/gateway/middleware.go:53" error="credentials not found" key=xxxxxxxxx
```
How can the UI find keys, but uploading via the AWS CLI cannot?

Amit Kesarwani

07/14/2023, 8:18 PM
Did you configure the AWS CLI with a lakefs profile? See https://docs.lakefs.io/integrations/aws_cli.html
How are you passing the lakeFS credentials?

Steve Willard

07/14/2023, 8:21 PM
I didn’t set up a specific AWS profile. I just have my keys set under `[default]` in `~/.aws/credentials`. On the container I tried setting the `LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID` and `LAKEFS_BLOCKSTORE_S3_CREDENTIALS_SECRET_ACCESS_KEY` env vars. I also tried to create an `~/.aws/credentials` file on the container with a `lakefs/config.yaml` file.

Amit Kesarwani

07/14/2023, 8:24 PM
Have you set lakeFS credentials or AWS credentials under `[default]` in `~/.aws/credentials`? It should be lakeFS credentials.
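[Editor’s note: a minimal sketch of the setup described above, keeping the real AWS keys under `[default]` and the lakeFS keys under a dedicated `lakefs` profile, as in the lakeFS AWS CLI integration docs. All key values below are placeholders, not real credentials:]

```ini
# ~/.aws/credentials
[default]
# Real AWS credentials (used by other AWS tooling, NOT by the lakeFS gateway)
aws_access_key_id = <your-aws-access-key-id>
aws_secret_access_key = <your-aws-secret-access-key>

[lakefs]
# lakeFS access key pair generated in the lakeFS UI (placeholders shown)
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEXAMPLEKEY
```

With this in place, pass `--profile lakefs` together with `--endpoint-url` so the CLI sends the lakeFS keys to the gateway instead of the AWS keys.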

Steve Willard

07/14/2023, 8:25 PM
Oh! That does make more sense, let me try that.
Ok, that must have been it. I’m now getting a NoSuchBucket error, but that seems like just a me problem. Thank you! 🙏

Amit Kesarwani

07/14/2023, 8:45 PM
👍
It should be the repo name instead of `764af7c3-d093-463b-9d57-74eda636a4cc`, unless that actually is your repo name: s3://764af7c3-d093-463b-9d57-74eda636a4cc/main/test.csv
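[Editor’s note: a sketch of the corrected upload, assuming a hypothetical repository named `my-repo` (substitute your own repo name) and the `lakefs` AWS CLI profile from the integration docs. In the lakeFS S3 gateway, the “bucket” portion of the URI is the repository name and the first path segment is the branch:]

```shell
# Upload to branch "main" of repository "my-repo" through the local lakeFS gateway
aws --profile lakefs --endpoint-url http://localhost:8000 \
    s3 cp ~/Desktop/test.csv s3://my-repo/main/test.csv
```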