The preferred option is to use presigned URLs via the lakeFS API, which avoids the operational overhead of sending the data through lakeFS. If you're using the lakeFS Hadoop FileSystem, data does not flow through lakeFS at all, only the metadata. If you're using the S3 gateway, the data does flow through lakeFS.
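As a minimal sketch of the presigned-URL path, assuming the `lakefs_sdk` Python package and placeholder endpoint/credentials: calling `stat_object` with `presign=True` returns object metadata whose `physical_address` is a short-lived presigned URL, so clients can download directly from the backing store.

```python
# Sketch: obtain a presigned URL from the lakeFS API so object bytes
# are served directly by S3 instead of being proxied through lakeFS.
# Endpoint, credentials, repo, and path below are placeholders.
import lakefs_sdk
from lakefs_sdk.client import LakeFSClient

configuration = lakefs_sdk.Configuration(
    host="http://localhost:8000/api/v1",  # placeholder lakeFS endpoint
    username="AKIA...",                   # placeholder access key id
    password="...",                       # placeholder secret access key
)
client = LakeFSClient(configuration)

# With presign=True, physical_address holds a presigned URL
# pointing straight at the underlying object store.
stats = client.objects_api.stat_object(
    repository="my-repo",
    ref="main",
    path="data/file.parquet",
    presign=True,
)
print(stats.physical_address)  # fetch the object directly with this URL
```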
Per your question: lakeFS does not load the entire file into memory; it streams it to S3. So in the S3 gateway case the data does flow through lakeFS, but memory requirements do not grow linearly with object size.
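To illustrate the S3 gateway path, here is a sketch using boto3 pointed at a placeholder lakeFS gateway endpoint (with the repository as the bucket and `ref/path` as the key). boto3 uploads the file in multipart chunks, and lakeFS streams them onward to the backing store, so neither side needs the whole object in memory.

```python
# Sketch: uploading through the lakeFS S3 gateway with boto3.
# Endpoint and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8000",  # placeholder lakeFS S3 gateway
    aws_access_key_id="AKIA...",           # placeholder lakeFS access key
    aws_secret_access_key="...",
)

# upload_file streams from disk in chunks; the whole object never
# needs to fit in memory on the client or in lakeFS.
s3.upload_file("big-file.parquet", "my-repo", "main/data/big-file.parquet")
```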