Can lakeFS expose an S3 path, so I can use `juicefs` or `s3fs` to mount a local path onto a lakeFS S3 path?
# help
б
Can lakeFS expose an S3 path, so I can use `juicefs` or `s3fs` to mount a local path onto a lakeFS S3 path? Like this:
```
s3fs <s3bucket> <localpath> -o url=http://127.0.0.1:8391
juicefs format --storage s3 --bucket http://127.0.0.1:8391/test --access-key admin --secret-key seaweedfs redis://127.0.0.1:6379/1 data
```
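To be concrete, a filled-in (and untested) version of the s3fs idea would look something like the sketch below; the repository name, mount point, and keys are placeholders, and it assumes the lakeFS S3 gateway is the service listening at http://127.0.0.1:8391:

```
# Untested sketch: mounting a lakeFS repository with s3fs, treating the
# repository as the bucket. Repository name, mount point, and keys are placeholders.
echo "<lakefs-access-key>:<lakefs-secret-key>" > ~/.passwd-lakefs
chmod 600 ~/.passwd-lakefs

s3fs example-repo /mnt/lakefs \
  -o url=http://127.0.0.1:8391 \
  -o use_path_request_style \
  -o passwd_file=${HOME}/.passwd-lakefs
```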
t
@Борис Антипов that’s very interesting and we will be happy to look into it. Can you please open a GitHub issue with your feature request description?
б
Don't you have a way to mount it? How else can we read data from lakeFS?
t
> How else can we read data from lakeFS?
I recommend that you explore our supported integrations; lakeFS integrates with many applications that read and write from an object store.
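For example, a rough sketch of reading through lakeFS’s S3-compatible endpoint with the AWS CLI would look like this (the repository, branch, and keys below are placeholders, and the endpoint is the one from your commands above):

```
# Sketch: reading from lakeFS through its S3-compatible endpoint with the AWS CLI.
# "example-repo", "main", and the credentials are placeholders.
export AWS_ACCESS_KEY_ID=<lakefs-access-key>
export AWS_SECRET_ACCESS_KEY=<lakefs-secret-key>

# List objects on a branch and download one of them.
aws s3 ls s3://example-repo/main/ --endpoint-url http://127.0.0.1:8391
aws s3 cp s3://example-repo/main/path/to/file.csv . --endpoint-url http://127.0.0.1:8391
```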
> Don't you have a way to mount it?
Can you please elaborate on your use case? What are you trying to achieve? The most productive path would be to open a GitHub issue with your feature request; we will review it and do the needed research. Will that work for you?
б
We want to do data version management through lakeFS, store the data with SeaweedFS, and read it through a mount, with a data flow like this:
SeaweedFS -> lakeFS -> s3fs
I want to ask: does lakeFS support reading data directly? It seems to be just a data version management tool, like Git.
That way, we could switch between data versions at will and read exactly the data we want.
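For example, something along these lines is what we have in mind (repository, branch names, and paths are placeholders, assuming the ref is the first path element after the repository in the S3-style path):

```
# Sketch: switching versions by changing the ref in the path. Reading the same
# object from another branch is just reading under a different prefix.
aws s3 cp s3://example-repo/main/dataset/part-0001.parquet . \
  --endpoint-url http://127.0.0.1:8391
aws s3 cp s3://example-repo/experiment-1/dataset/part-0001.parquet . \
  --endpoint-url http://127.0.0.1:8391
```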
t
Great question! Versioning is one of the capabilities lakeFS provides, but what lakeFS enables is data lifecycle management. You may want to read our recent blog post on that topic. To be more specific, lakeFS integrates with many tools that read data from the object store (SeaweedFS in your case). As for s3fs, I’m not familiar with it, and we will be happy to test it out and help you get what you need lakeFS to do for you.
I opened https://github.com/treeverse/lakeFS/issues/3034 so that we can look at it offline and get back to you with more details
It would be very helpful if you added more details to the issue and told us as much as you’re willing to share about your use case.
In addition to that, I’m happy to schedule a meeting and help plan how to integrate lakeFS into your architecture. Let me know if you are up for it.
б
There is no problem integrating SeaweedFS now, but that only gives us data version management. We still need a fast way to read data through lakeFS.
Don't you expose an S3 path like this: `s3fs <s3bucket> <localpath> -o url=http://127.0.0.1:8391`? If that path is exposed, we can easily mount on it.
t
We haven’t tried it yet, and as I mentioned, we are happy to try it out and get back to you 🙂
We will use that issue to track your ask. I will be able to answer your question early next week, and will make sure to update you 🙂