w

Walter Johnson

11/29/2022, 7:44 PM
I have a lakeFS instance running at https://lakefs.quecall.biz/. I have created a repo but I can't figure out how to make files public. Can anyone help with that? It's my test instance, so I have no issue providing credentials for it.
b

Barak Amar

11/29/2022, 7:45 PM
Hi Walter, are you unable to access files when you use credentials, or are you asking whether there is a way to access a lakeFS repository without creds?
w

Walter Johnson

11/29/2022, 7:46 PM
I would like to allow access without creds.
like a public s3 bucket
b

Barak Amar

11/29/2022, 7:50 PM
Currently all operations require credentials, and the request is associated with a user based on the key/secret. Would it be ok for you to open an issue describing the feature / use-case? Or I can help with that.
w

Walter Johnson

11/29/2022, 7:56 PM
Sure. I would love to assist in any way possible with the growth of this product. A public S3 bucket is a very useful feature, though I could work around it. Does lakeFS allow integration with other authentication methods?
I want to use the repo as a public image and audio file repository.
b

Barak Amar

11/29/2022, 7:57 PM
It will accept a JWT token based on the login. The token can be part of the header or a cookie.
But it will not enable public access as you describe it.
I can suggest a workaround - not tested, but it may work.
w

Walter Johnson

11/29/2022, 7:59 PM
I am using Keycloak for authentication and authorization. It provides each authenticated user with a token, but I don't see a clear path for giving lakeFS knowledge of that token.
b

Barak Amar

11/29/2022, 7:59 PM
The token will expire.
But another question first - the public access S3 provides is over the S3 protocol.
And there is an option to enable HTTP (non-secure) too - for HTTPS you need an LB with a certificate, as far as I remember.
Do you want to serve the content through the S3 protocol, or access the lakeFS UI?
One workaround to give read-only access to lakeFS without passing credentials (it works with any service that supports HTTP user auth) is an nginx reverse proxy that passes the credentials for you. First create a lakeFS user with read-only access. Then encode the Authorization header:
curl -v -u key:secret www.google.com 2>&1 | grep Authorization
This will dump the value you need. In the nginx root location you can proxy the traffic to lakeFS and pass the above creds:
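If you'd rather compute the header value locally instead of pulling it out of curl's output, note that HTTP Basic auth is just base64("key:secret"). A minimal sketch (the key/secret below are placeholders - substitute the read-only lakeFS user's actual values):

```shell
# Placeholder credentials for illustration only.
KEY="AKIAEXAMPLE"
SECRET="examplesecret"

# Basic auth value is base64 of "key:secret"; printf avoids a trailing newline.
AUTH="Basic $(printf '%s:%s' "$KEY" "$SECRET" | base64)"
echo "$AUTH"
```

The resulting string is what goes into the proxy_set_header Authorization directive below.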
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Authorization "Basic ...";
This is a sample configuration assuming both are running on the same machine.
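Putting the pieces together, a full server block might look like the sketch below. This is untested; it assumes lakeFS is listening on 127.0.0.1:8000, and the hostname and the "Basic ..." value are placeholders you would fill in yourself:

```nginx
server {
    listen 80;
    server_name lakefs.example.com;  # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Inject the read-only user's credentials on every request.
        proxy_set_header Authorization "Basic ...";
    }
}
```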
Note that this is a generic workaround and not something designed to work with lakeFS. It is a general way to open access with specific credentials.
And the S3 gateway will not work with this workaround.
w

Walter Johnson

11/29/2022, 10:07 PM
Thank you. I will try implementing this.