Hi @Carlton Ramsey!
lakeFS itself doesn't really care about the data - only the metadata. But... the underlying storage and network layers usually do!
• S3 really wants you to upload complete objects that are <5GiB. Anything larger than that requires a multipart upload (there's a short boto3 sketch after this list).
• Many data processing frameworks work best with a large number of objects. I usually find that splitting data into pieces of around 10MiB each works well (anything between 2MiB and 50MiB will probably give similar results). You want enough pieces to distribute work across, but you also don't want very small pieces.
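For the multipart case, something like this is what I have in mind - just a rough sketch, not tested against your setup. boto3 handles the multipart mechanics once a file crosses `multipart_threshold`; the endpoint URL, repo name (`my-repo`) and branch (`main`) are made-up placeholders, and the bucket-equals-repository / key-starts-with-branch layout is how the lakeFS S3 gateway addresses objects:
```python
import boto3
from boto3.s3.transfer import TransferConfig

# Assumed lakeFS S3 gateway endpoint and credentials - replace with your own
s3 = boto3.client(
    "s3",
    endpoint_url="https://lakefs.example.com",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
)

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64MiB
    multipart_chunksize=64 * 1024 * 1024,  # size of each uploaded part
)

# Bucket = repository, key prefix = branch (lakeFS S3 gateway layout)
s3.upload_file("big_file.bin", "my-repo", "main/datasets/big_file.bin", Config=config)
```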
So I would probably go with multiple objects (rough splitting sketch below). Of course this will also depend on which software framework you use and on the specifics of your architecture.
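And this is roughly what I mean by splitting into multiple objects - again only a sketch with the same assumed endpoint, repo and branch names, chunking a local file into ~10MiB objects so downstream frameworks can parallelise over them:
```python
import os
import boto3

s3 = boto3.client("s3", endpoint_url="https://lakefs.example.com")  # assumed gateway URL

CHUNK_SIZE = 10 * 1024 * 1024  # ~10MiB per object

def upload_in_pieces(local_path, repo, branch, prefix):
    """Split local_path into CHUNK_SIZE pieces, one object per piece."""
    base = os.path.basename(local_path)
    with open(local_path, "rb") as f:
        part = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            # Bucket = repository, key starts with the branch (S3 gateway layout)
            key = f"{branch}/{prefix}/{base}.part-{part:05d}"
            s3.put_object(Bucket=repo, Key=key, Body=chunk)
            part += 1

upload_in_pieces("big_file.bin", "my-repo", "main", "datasets/split")
```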