Hello there,
I've been exploring lakeFS for the past few days and I like many of its features. I'm using it with MinIO as the S3 backend. When I commit a new file, let's call it "data.csv", lakeFS stores it as an object in MinIO with a size of 100 MiB. But when I update the CSV by adding, say, 30 MiB of data and commit again, it creates another object of 130 MiB while keeping the old 100 MiB object in MinIO. I'm wondering whether lakeFS has a mechanism to store only the delta in the second object and then read from both objects together (a kind of incremental restore)?
Or is there any other mechanism to optimize storage usage in the backend?
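
For reference, this is roughly how I'm producing the two objects, going through the lakeFS S3 gateway with boto3 (repo name, branch, endpoint, and credentials below are just placeholders for my setup):

```python
import boto3

# lakeFS S3 gateway endpoint (placeholder), not MinIO directly
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8000",
    aws_access_key_id="<lakefs-access-key>",
    aws_secret_access_key="<lakefs-secret-key>",
)

# First upload: the ~100 MiB CSV, followed by
#   lakectl commit lakefs://my-repo/main -m "add data.csv"
s3.upload_file("data.csv", "my-repo", "main/data.csv")

# Later I append ~30 MiB locally, upload the full ~130 MiB file again,
# and commit a second time. After that I see two separate objects in the
# MinIO bucket (the original ~100 MiB one plus a new ~130 MiB one).
s3.upload_file("data.csv", "my-repo", "main/data.csv")
```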
Thank you in advance.