I looked at the capabilities of LakeFS, and it is very impressive for data storage, addressing exactly what we need in an MLOps pipeline. However, out of curiosity, and knowing that LakeFS is not specifically built for this purpose: will it still be performant if we store and version ML and LLM models through LakeFS (e.g. safetensors, pickle files, or plain binaries, each around 4–5GB)? For example, a modern LLM checkpoint can reach about 160GB, meaning the repository would consist of roughly 40 files of ~4GB each.
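For concreteness, this is roughly the workflow we have in mind, sketched with the high-level lakeFS Python SDK (`pip install lakefs`). The repository name, branch, and local paths are placeholders I made up, and credentials are assumed to be configured separately (e.g. via `~/.lakectl.yaml`), so treat this as an illustration rather than production code:

```python
# Sketch: versioning one sharded LLM checkpoint as a single lakeFS commit.
# "model-registry", "llm-160b", and the local shard paths are hypothetical.
import shutil
from pathlib import Path

import lakefs

repo = lakefs.repository("model-registry")  # hypothetical repository name
branch = repo.branch("main")

# Stream each ~4GB shard into lakeFS; the bytes land in the backing object
# store (e.g. S3), while lakeFS itself only versions the metadata.
for shard in sorted(Path("checkpoints/llm-160b").glob("*.safetensors")):
    with open(shard, "rb") as src, \
            branch.object(f"llm-160b/{shard.name}").writer(mode="wb") as dst:
        shutil.copyfileobj(src, dst)

# A single commit captures all ~40 shards as one immutable model version.
branch.commit(message="Add llm-160b checkpoint (40 x ~4GB safetensors shards)")
```

My performance question is essentially about this loop: whether uploading and later checking out dozens of multi-gigabyte objects like these stays fast, given that lakeFS is usually described in terms of tabular data lakes rather than model binaries.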