
Walter Johnson

09/07/2022, 2:35 PM
import { ListObjectsCommand, S3Client } from "@aws-sdk/client-s3";
import { fromIni } from "@aws-sdk/credential-providers";

const client = new S3Client({
  credentials: fromIni({ profile: 'local' }),
  endpoint: "http://localhost:8000",
});
const input = {
  Bucket: "call-center3"
};
const command = new ListObjectsCommand(input);
const response = await client.send(command);
console.log(response);

Itai Admi

09/07/2022, 2:38 PM
I see several potential issues:
1. Make sure that you pass lakeFS credentials to the SDK. I can’t tell from the example whether the local profile in your credentials file is pointing to lakeFS.
2. Unlike S3, lakeFS has branches. When you list objects in a repository, you must pass the branch name as a prefix, optionally followed by a path under the branch.
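For example, a minimal sketch of the request parameters (assuming the repository is call-center3 and the branch is main; adjust the names to your setup):

  // Bucket is the lakeFS repository; Prefix starts with the branch name plus "/",
  // optionally followed by a path under the branch, e.g. "main/data/".
  const input = {
    Bucket: "call-center3",
    Prefix: "main/"
  };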

Walter Johnson

09/07/2022, 2:51 PM
@Itai Admi My credentials are fine as I am able to use the ListBucketsCommand just fine. Would my prefix be the same thing as my branch name?

Itai Admi

09/07/2022, 2:52 PM
Yes, followed by a /

Walter Johnson

09/07/2022, 3:01 PM
@Itai Admi I have to apologize, I misstated what I was getting as a response. I am getting a 200 response with no data objects. Just for reference, I will post the response from ListBuckets as well.
import { ListObjectsCommand, ListObjectsV2Command, ListBucketsCommand, S3Client } from "@aws-sdk/client-s3";
import { fromIni } from "@aws-sdk/credential-providers";

const client = new S3Client({
  credentials: fromIni({ profile: 'local' }),
  endpoint: "http://localhost:8000",
  bucketEndpoint: false,
});
const input = {
  Bucket: "call-center3",
  Prefix: "main/"
};
const command = new ListBucketsCommand({});
const response = await client.send(command);
console.log(response);
That code produces this response:
{
  '$metadata': {
    httpStatusCode: 200,
    requestId: undefined,
    extendedRequestId: undefined,
    cfId: undefined,
    attempts: 1,
    totalRetryDelay: 0
  },
  Buckets: [
    { Name: 'call-center', CreationDate: 2022-09-06T14:45:37.661Z },
    { Name: 'call-center2', CreationDate: 2022-09-06T14:51:10.171Z },
    { Name: 'call-center3', CreationDate: 2022-09-06T15:06:38.831Z }
  ],
  Owner: { DisplayName: '', ID: '' }
}

Itai Admi

09/07/2022, 3:02 PM
Just to make sure I understand - both ListBuckets and ListObjects work as expected?

Walter Johnson

09/07/2022, 3:02 PM
This code:
import { ListObjectsCommand, ListObjectsV2Command, ListBucketsCommand, S3Client } from "@aws-sdk/client-s3";
import { fromIni } from "@aws-sdk/credential-providers";

const client = new S3Client({
  credentials: fromIni({ profile: 'local' }),
  endpoint: "http://localhost:8000",
  bucketEndpoint: false,
});
const input = {
  Bucket: "call-center3",
  Prefix: "main/"
};
const command = new ListObjectsV2Command(input);
const response = await client.send(command);
console.log(response);
Produces this response:
{
  '$metadata': {
    httpStatusCode: 200,
    requestId: undefined,
    extendedRequestId: undefined,
    cfId: undefined,
    attempts: 1,
    totalRetryDelay: 0
  }
}
ListObjects returns no data.
ListBuckets works perfectly.

Itai Admi

09/07/2022, 3:03 PM
Silly question: do you have objects under the main branch?

Walter Johnson

09/07/2022, 3:03 PM
The second set of code is ListObjects.
Yes sir. I have a package.json file in there

Itai Admi

09/07/2022, 3:08 PM
That’s weird. Can you share the lakeFS logs? You may need to change the log level to TRACE for something useful to surface.

Walter Johnson

09/07/2022, 3:09 PM
Can you direct me to the logs?

Itai Admi

09/07/2022, 3:10 PM
How are you running lakeFS?

Walter Johnson

09/07/2022, 3:10 PM
I have it running in a docker container using the install instructions from the website.
I am in the container now.

Itai Admi

09/07/2022, 3:12 PM
you can get it from the outside too.
docker logs <container_id>

Walter Johnson

09/07/2022, 3:16 PM
I am not seeing logs that indicate I am making contact either way. How do I turn up the level to TRACE?
in my config.yaml? Do I need to restart my container once I make that change?

Itai Admi

09/07/2022, 3:19 PM
In the config yaml (https://docs.lakefs.io/reference/configuration.html):
logging:
  level: trace
Yes, the container needs to be restarted, but not deleted.

Walter Johnson

09/07/2022, 3:51 PM
Inside of the docker container I have added a config.yaml in /home/lakefs with the following:
logging:
  format: json
  level: TRACE
  output: "-"
I then restarted my container and added a file through the UI. I don't see an increased level of logging. Do you have any idea what I have done wrong?

Itai Admi

09/07/2022, 3:52 PM
That’s unnecessary, you can add it as an env var to the container:
LAKEFS_LOGGING_LEVEL=TRACE
Are you using docker-compose?

Walter Johnson

09/07/2022, 3:56 PM
Yeah. I started the container with docker-compose. Do I have to make a copy of this container and start that one with the log level set, or can I restart my container and set the ENV variable at that time? Sorry for making you give me a mini course in docker.

Itai Admi

09/07/2022, 3:57 PM
No worries. No need to copy the container, you can use the same one with the env var.
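For reference, one way to do that with docker-compose is to add the variable to the service's environment in the compose file and restart the service so the new environment takes effect; a minimal sketch, assuming the service is called lakefs (the service and image names here are assumptions, not taken from the thread):

  # docker-compose.yaml (hypothetical excerpt)
  services:
    lakefs:
      image: treeverse/lakefs:latest
      ports:
        - "8000:8000"
      environment:
        - LAKEFS_LOGGING_LEVEL=TRACE   # raise lakeFS log verbosity to TRACE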

Walter Johnson

09/07/2022, 4:59 PM
I have gotten some log data but it doesn't indicate any errors. I ran the two commands simultaneously and here are the logs from the two requests:
time="2022-09-07T16:56:23Z" level=debug msg="performing S3 action" func=pkg/gateway.EnrichWithOperation.func1.1 file="build/pkg/gateway/middleware.go:116" action=list_repos message_type=action
time="2022-09-07T16:56:23Z" level=debug msg="HTTP call ended" func=net/http.HandlerFunc.ServeHTTP file="usr/local/go/src/net/http/server.go:2047" host=localhost log_audit=API method=GET path=/ request_id=62baec00-fc16-444a-9042-1b8a0027e927 sent_bytes=529 service_name=s3_gateway status_code=200 took=25.8603ms user=admin
time="2022-09-07T16:56:23Z" level=debug msg="performing S3 action" func=pkg/gateway.EnrichWithOperation.func1.1 file="build/pkg/gateway/middleware.go:116" action=list_repos message_type=action
time="2022-09-07T16:56:23Z" level=debug msg="HTTP call ended" func=net/http.HandlerFunc.ServeHTTP file="usr/local/go/src/net/http/server.go:2047" host=call-center2.localhost log_audit=API method=GET path="/?prefix=main%2F" request_id=a98659cc-ae3a-44de-a5ef-ce01f248c4f4 sent_bytes=529 service_name=s3_gateway status_code=200 took=2.7925ms user=admin

Itai Admi

09/07/2022, 5:09 PM
It seems like both requests are ListBuckets requests (or list_repos, as lakeFS logs them).
The second request should be a list-objects request.
Notice the prefix there.

Walter Johnson

09/07/2022, 5:14 PM
I am using the S3 client from AWS, so where do you think the breakdown is occurring? I am calling the ListObjects function, but it is running the list_repos action on the lakeFS side.

Itai Admi

09/07/2022, 5:18 PM
Can you try using path-style access for the S3 client? Something like:
AmazonS3 s3client = AmazonS3Client.builder()
            .withCredentials((new AWSStaticCredentialsProvider(credentials)))
            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("host", "region"))
            .withPathStyleAccessEnabled(true)
            .build();
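That Java snippet is for the v1 AWS SDK; in the JavaScript v3 SDK the same idea is the forcePathStyle client option. A minimal sketch, assuming the same local endpoint, profile, repository and branch as above:

  import { ListObjectsV2Command, S3Client } from "@aws-sdk/client-s3";
  import { fromIni } from "@aws-sdk/credential-providers";

  // forcePathStyle keeps the bucket in the request path (http://host/bucket/key)
  // rather than in the hostname (http://bucket.host/key), so the lakeFS gateway
  // can resolve the repository from the path.
  const client = new S3Client({
    credentials: fromIni({ profile: 'local' }),
    endpoint: "http://localhost:8000",
    forcePathStyle: true,
  });
  const response = await client.send(new ListObjectsV2Command({
    Bucket: "call-center3",
    Prefix: "main/",
  }));
  console.log(response.Contents);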

Walter Johnson

09/07/2022, 5:25 PM
FORCEPATHSTYLE!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! I just had to add that option when building my s3 client.
const client = new S3Client({
  credentials: fromIni({ profile: 'local' }),
  endpoint: "http://localhost:8000",
  bucketEndpoint: false,
  forcePathStyle: true,
});
I only spent 10 hours on that one. 😁
😿 1

Itai Admi

09/07/2022, 5:26 PM
I'm glad it's working, let us know if there's anything else we can do to help

Walter Johnson

09/07/2022, 5:27 PM
now the action is list objects
🙌 1