# help
y
Hi all, I was trying to use the lakeFS free version paired with an existing S3 bucket. I had it set up in the following way:
blockstore:
      type: s3
      s3:
        endpoint: "https://na-s3.somehost.com/bucket"
        discover_bucket_region: false
        credentials:
          access_key_id: "######"
          secret_access_key: "########"
The S3 bucket interface I am using is the following (https://docs.netapp.com/us-en/ontap/s3-config/ontap-s3-supported-actions-reference.html#bucket-operations). I was wondering if anyone here is using NetApp S3 buckets as a storage backend for lakeFS? I was getting errors like this when using it:
error="operation error S3: GetObject, https response error StatusCode: 501, RequestID: , HostID: , api error NotImplemented: A header or query you provided implies functionality that is not implemented."
which I believe is coming from my NetApp S3 instance, but I was wondering what headers are being sent by lakeFS?
i
Hey @Yaphet Kebede, I’m not sure if lakeFS supports NetApp, but I can try to help. At which point do you get errors? Is the log provided from lakeFS? Can you provide the full lakeFS logs from that time? Overall, it doesn’t seem like lakeFS sends any advanced headers as part of the GetObject request.
y
It's when I try to create a new repo, specifically the example repo.
πŸ‘ 1
Let me grab the full log:
time="2024-01-09T22:05:17Z" level=info msg="initialize blockstore adapter" func=pkg/block/factory.BuildBlockAdapter file="build/pkg/block/factory/build.go:32" type=s3
time="2024-01-09T22:05:17Z" level=info msg="initialized blockstore adapter" func=pkg/block/factory.buildS3Adapter file="build/pkg/block/factory/build.go:111" type=s3
time="2024-01-09T22:05:18Z" level=info msg="initialize blockstore adapter" func=pkg/block/factory.BuildBlockAdapter file="build/pkg/block/factory/build.go:32" type=s3
time="2024-01-09T22:05:18Z" level=info msg="initialized blockstore adapter" func=pkg/block/factory.buildS3Adapter file="build/pkg/block/factory/build.go:111" type=s3
time="2024-01-09T22:05:18Z" level=info msg="initialize OpenAPI server" func=pkg/api.Serve file="build/pkg/api/serve.go:38" service=api_gateway
time="2024-01-09T22:05:18Z" level=info msg="initialized S3 Gateway handler" func=pkg/gateway.NewHandler file="build/pkg/gateway/handler.go:124" s3_bare_domain="[s3.local.lakefs.io]" s3_region=us-east-1
time="2024-01-09T22:05:18Z" level=info msg="starting HTTP server" func=cmd/lakefs/cmd.glob..func8 file="cmd/run.go:307" listen_address="0.0.0.0:8000"

lakeFS 1.1.0 - Up and running (^C to shutdown)...


     ██╗      █████╗ ██╗  ██╗███████╗███████╗███████╗
     ██║     ██╔══██╗██║ ██╔╝██╔════╝██╔════╝██╔════╝
     ██║     ███████║█████╔╝ █████╗  █████╗  ███████╗
     ██║     ██╔══██║██╔═██╗ ██╔══╝  ██╔══╝  ╚════██║
     ███████╗██║  ██║██║  ██╗███████╗██║     ███████║
     ╚══════╝╚═╝  ╚═╝╚═╝  ╚═╝╚══════╝╚═╝     ╚══════╝

│
│ If you're running lakeFS locally for the first time,
│     complete the setup process at http://127.0.0.1:8000/setup
│
│
│ For more information on how to use lakeFS,
│     check out the docs at https://docs.lakefs.io/quickstart/
│

│
│ For support or any other question,                            >(.＿.)<
│     join our Slack channel https://docs.lakefs.io/slack         (  )_
│

Version 1.1.0

time="2024-01-09T22:06:34Z" level=error msg="failed to get S3 object bucket *******/example key dummy" func="pkg/logging.(*logrusEntryWrapper).Errorf" file="build/pkg/logging/logger.go:284" error="operation error S3: GetObject, https response error StatusCode: 501, RequestID: , HostID: , api error NotImplemented: A header or query you provided implies functionality that is not implemented." host=lakefs.local method=POST operation=GetObject operation_id=CreateRepository path=/api/v1/repositories request_id=4dd67131-c2ca-4773-ac99-ba25f0587dfa service_name=rest_api user=admin
time="2024-01-09T22:06:34Z" level=warning msg="Could not access storage namespace" func="pkg/api.(*Controller).CreateRepository" file="build/pkg/api/controller.go:1601" error="operation error S3: GetObject, https response error StatusCode: 501, RequestID: , HostID: , api error NotImplemented: A header or query you provided implies functionality that is not implemented." reason=unknown service=api_gateway storage_namespace="s3://*******/example"
time="2024-01-09T22:06:39Z" level=error msg="failed to get S3 object bucket *******/example key dummy" func="pkg/logging.(*logrusEntryWrapper).Errorf" file="build/pkg/logging/logger.go:284" error="operation error S3: GetObject, https response error StatusCode: 501, RequestID: , HostID: , api error NotImplemented: A header or query you provided implies functionality that is not implemented." host=lakefs.local method=POST operation=GetObject operation_id=CreateRepository path=/api/v1/repositories request_id=dd6419a6-71d8-4176-adaf-ba251ae2e07d service_name=rest_api user=admin
time="2024-01-09T22:06:39Z" level=warning msg="Could not access storage namespace" func="pkg/api.(*Controller).CreateRepository" file="build/pkg/api/controller.go:1601" error="operation error S3: GetObject, https response error StatusCode: 501, RequestID: , HostID: , api error NotImplemented: A header or query you provided implies functionality that is not implemented." reason=unknown service=api_gateway storage_namespace="s3://*******/example"
o
Shooting in the dark, but for other S3-compatible storages (namely MinIO) it is typically required to also set `force_path_style` to `true` to avoid DNS and certificate mismatches. In your example, the config would look something like this:
blockstore:
  type: s3
  s3:
    endpoint: "https://na-s3.somehost.com/bucket"
    discover_bucket_region: false
    force_path_style: true  # <-- Add this
    credentials:
      access_key_id: "######"
      secret_access_key: "########"
gratitude thank you 1
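To illustrate why `force_path_style` helps, here is a rough sketch in plain Python (made-up helper and hostnames, not lakeFS's actual code). Virtual-hosted-style addressing prepends the bucket name to the hostname, so the endpoint's DNS record and TLS certificate no longer match; path-style keeps the bucket in the URL path instead.

```python
from urllib.parse import urlparse

def object_url(endpoint: str, bucket: str, key: str, force_path_style: bool) -> str:
    """Build the request URL for an S3 object under either addressing style."""
    parsed = urlparse(endpoint)
    if force_path_style:
        # Path-style: bucket stays in the path, the hostname is unchanged,
        # so the endpoint's certificate and DNS entry still match.
        return f"{parsed.scheme}://{parsed.netloc}/{bucket}/{key}"
    # Virtual-hosted-style: bucket becomes a subdomain, which requires
    # wildcard DNS and a wildcard certificate on the storage side.
    return f"{parsed.scheme}://{bucket}.{parsed.netloc}/{key}"

print(object_url("https://na-s3.somehost.com", "bucket", "dummy", True))
# path-style          -> https://na-s3.somehost.com/bucket/dummy
print(object_url("https://na-s3.somehost.com", "bucket", "dummy", False))
# virtual-hosted-style -> https://bucket.na-s3.somehost.com/dummy
```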
y
hmm let me try that thanks!
I am wondering if it's the way I am specifying the Storage Namespace in the UI? I did
Storage Namespace = "s3://na-s3.somehost.com/bucket/test-repo"
Got the aforementioned error, so I tried
Storage Namespace = "s3://test-repo"
and I got the same error:
time="2024-01-10T16:48:37Z" level=warning msg="Could not access storage namespace" func="pkg/api.(*Controller).CreateRepository" file="build/pkg/api/controller.go:1601" error="operation error S3: GetObject, https response error StatusCode: 501, RequestID: , HostID: , api error NotImplemented: A header or query you provided implies functionality that is not implemented." reason=unknown service=api_gateway storage_namespace="s3://test-repo/"
Wondering if the real issue is that I don't understand what this value should be?
o
The value should be in the form of `s3://<bucket name>/<path in the bucket>`.
In your second example, it is expected that `test-repo` is a bucket that exists in your S3 (or S3-compatible) storage.
Also, looking at the configuration file you sent, `/bucket` should be removed - the endpoint should not contain a bucket name.
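As a rough illustration of that form (a hypothetical helper, not lakeFS's actual parsing code), the namespace splits into a bucket name and a path inside that bucket:

```python
from urllib.parse import urlparse

def split_namespace(namespace: str) -> tuple[str, str]:
    """Split an s3:// storage namespace into (bucket, path-in-bucket)."""
    parsed = urlparse(namespace)
    if parsed.scheme != "s3" or not parsed.netloc:
        raise ValueError(f"expected s3://<bucket>/<path>, got {namespace!r}")
    # netloc is the bucket; everything after the first slash is the path.
    return parsed.netloc, parsed.path.lstrip("/")

print(split_namespace("s3://bucket/test-repo"))  # ('bucket', 'test-repo')
print(split_namespace("s3://test-repo/"))        # ('test-repo', '') - whole bucket
```

This also shows why `s3://test-repo` made lakeFS look for a bucket literally named `test-repo`: whatever follows `s3://` up to the first slash is treated as the bucket name, never as the endpoint hostname.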
y
> In your second example, it is expected that `test-repo` is a bucket that exists in your S3 (or S3-compatible) storage.
No, this is the path of the new repo that I wanted to create, and my S3 has a bucket called `bucket`.
So you are saying `s3://bucket/test-repo` should work?
And lakeFS would know to contact the S3 host (na-s3.somehost.com)?
i
Yes to both questions.
gratitude thank you 1
a
FYI: I had tested lakeFS with NetApp’s StorageGRID with the following configuration and it worked fine out of the box:
docker run -d --pull always -p 8000:8000 \
--name lakefs-netapp \
-e LAKEFS_BLOCKSTORE_TYPE='s3' \
-e LAKEFS_BLOCKSTORE_S3_FORCE_PATH_STYLE="true" \
-e LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID="AAAAAAAAA" \
-e LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID="AAAAAAAAA" \
-e LAKEFS_BLOCKSTORE_S3_CREDENTIALS_SECRET_ACCESS_KEY="bbbbb" \
treeverse/lakefs run --local-settings
gratitude thank you 1
πŸ™ 1