# help
r
Hi Team, I am seeing the warning below. Is there any limitation with the "local" block adapter? Why is it not suited for production?
WARNING!

Using the "local" block adapter.  This is suitable only for testing, but not
for production.
b
The local filesystem doesn't give the same guarantees as an object store - the local adapter was implemented as a quick way to show what lakeFS can do. But lakeFS is designed to work on top of your data lake.
r
ok, thanks
@Barak Amar I am trying to connect to lakeFS from Dremio. lakeFS is using the local block adapter. What value should I set for "fs.s3a.endpoint" (the lakeFS S3 endpoint)?
b
You can set it to your lakeFS host:port
r
ok
b
You can set a DNS record that looks like an S3 endpoint and configure lakeFS with it. This is relevant for cases where the client can't work with path-style access and sends the bucket as a subdomain in the request to lakeFS. Dremio can work with path-style access - so any address that leads to lakeFS should work.
👍 1
^ the extended version.
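For reference, here is a minimal PySpark sketch of the equivalent fs.s3a.* settings (Dremio takes the same fs.s3a.* properties as connection properties on its S3 source). The host, port, keys, repository and path below are placeholders, not values from this thread:

# Point the s3a filesystem at the lakeFS S3 gateway (placeholder host and credentials)
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.hadoop.fs.s3a.endpoint", "lakefs.example.com:8000")   # host:port, no scheme prefix
    .config("spark.hadoop.fs.s3a.connection.ssl.enabled", "false")       # assuming lakeFS is served over plain HTTP
    .config("spark.hadoop.fs.s3a.path.style.access", "true")             # path-style access, as discussed above
    .config("spark.hadoop.fs.s3a.access.key", "<lakefs-access-key-id>")
    .config("spark.hadoop.fs.s3a.secret.key", "<lakefs-secret-access-key>")
    .getOrCreate()
)

# Read from branch "main" of repository "example-repo" through the gateway
df = spark.read.parquet("s3a://example-repo/main/path/to/table/")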
r
I can make a connection from Dremio to lakeFS. I can see the lakeFS repo from the Dremio UI. However, I can't browse/see the contents of the repo. I'm getting the error below:
org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist on demorepo: com.amazonaws.SdkClientException: Unable to execute HTTP request: http: Unable to execute HTTP request: http
Caused by: java.net.UnknownHostException: http
        at java.net.InetAddress.getAllByName0(Unknown Source)
        at java.net.InetAddress.getAllByName(Unknown Source)
        at java.net.InetAddress.getAllByName(Unknown Source)
        at com.amazonaws.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:27)
        at com.amazonaws.http.DelegatingDnsResolver.resolve(DelegatingDnsResolver.java:38)
        at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112)
        at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
        at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
        at com.amazonaws.http.conn.$Proxy99.connect(Unknown Source)
        at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
        at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1323)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1139)
👀 1
Any idea if this error is related to lakeFS or Dremio?
b
From the log it looks like it is trying to resolve "http" as a hostname. Can you verify that you didn't include http:// in the lakeFS endpoint?
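If it helps, here is a quick way to sanity-check the lakeFS S3 gateway outside of Dremio with boto3 (a sketch only - the host and credentials are placeholders; "demorepo" is the repository from the error above). Note that boto3's endpoint_url does expect the full URL including the scheme, unlike fs.s3a.endpoint:

# Sanity-check the lakeFS S3 gateway directly (placeholder host and credentials)
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://lakefs.example.com:8000",  # boto3 takes the scheme here; fs.s3a.endpoint should not
    aws_access_key_id="<lakefs-access-key-id>",
    aws_secret_access_key="<lakefs-secret-access-key>",
)

# List the top level of branch "main" in the "demorepo" repository
resp = s3.list_objects_v2(Bucket="demorepo", Prefix="main/", Delimiter="/")
for prefix in resp.get("CommonPrefixes", []):
    print(prefix["Prefix"])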
r
My bad! I just removed "http://" and it is working now :)
lakefs 1
b
Evaluating working with the OCI S3 interface (the bug above, https://github.com/treeverse/lakeFS/issues/2531), it is something we currently can't address. The Oracle S3 client can work without chunked encoding, but lakeFS, as a gateway serving concurrent requests, would have a hard time scaling to work that way. Because OCI also has its own specific API, I've opened a feature request: https://github.com/treeverse/lakeFS/issues/2532
It will enable the community to contribute and will be a better alternative for supporting this use case.
If you can tell me about your use case and how you are using lakeFS, it may also help us evaluate whether this feature can go into our roadmap.
r
@Barak Amar Thanks for your input. I will discuss with the team and get back to you.
@Barak Amar The data lake implementation is on hold for now. I will get back to you once we resume the work.
b
Thanks for the update @Raghavendra Hegde