# help
y
i had an instance where an aws client would finish "uploading" without errors, while the lakefs storage (data storage) was full and the client was supposed to upload much more data than the available storage. The client finishes, the files show up on the webpage, but most of them are 0 bytes or partial. When trying to download them, e.g. the python lakefs sdk reports that it expected some amount of data but received 0 bytes (or however much was actually written to disk)... and the metadata values differ from the actual disk usage for some of the files. I was wondering if this was all connected (?)
o
that sounds like a bug! could you please open an issue? Testing for these kinds of edge cases is pretty hard, but it seems like the local storage adapter might be missing a few important checks to make sure files are properly persisted: 1. it doesn't fsync after it's done writing 2. it doesn't handle errors raised when closing the output file. Both of these should be done in any write path.
y
i
Thanks for creating the issue!