It seems like `lakectl fs upload` is causing an OO...
# help
**Andrij David:**
It seems like `lakectl fs upload` is causing an OOM error and is being killed by the kernel, or sometimes it freezes the OS when uploading large files from the local system. I currently have 16 GB of memory. I can increase my memory if needed, but I am curious why the upload requires so much memory.
**lakeFS team:**
Hi @Andrij David. Please refer to https://lakefs.io/blog/3-ways-to-add-data-to-lakefs/ for the recommended ways to add data to lakeFS.
**Andrij David:**
I already went through this, and it doesn't answer our use case. We have many folders, each containing multiple small files, and each folder is about 200 GB. The data is currently hosted locally rather than in a bucket. That motivated the choice of `lakectl fs upload`.
I am not sure, however, how the first solution translates to Google Cloud Storage.
**lakeFS team:**
Are you using `--pre-sign`, as described here?
> I am not sure however how the first solution translates to google cloud storage.

Please elaborate on this.
**Andrij David:**
Yes, I am using `--pre-sign`. I don't really understand what causes the issue.
The first solution in https://lakefs.io/blog/3-ways-to-add-data-to-lakefs/ copies only one file and uses AWS. Can I do the same thing with folders using Google Cloud Storage?
For context:

```
22352 Killed      lakectl fs upload --source . --recursive "lakefs://${LAKEFS_REPO_NAME}/${DEFAULT_BRANCH}/" --pre-sign -p 8
```
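One workaround worth trying while the root cause is investigated: instead of a single recursive upload over the whole tree, invoke `lakectl fs upload` once per top-level folder, so each invocation only has to track one folder's worth of state. The sketch below is a hypothetical helper (the `upload_per_dir` name and the reduced `-p 2` parallelism are assumptions, not anything from the thread); it prints the commands it would run so you can inspect them first.

```shell
# Hypothetical batching sketch: one lakectl invocation per top-level folder,
# on the assumption that smaller per-invocation scope bounds memory use.
upload_per_dir() {
  src="$1"; repo="$2"; branch="$3"
  for dir in "$src"/*/; do
    [ -d "$dir" ] || continue
    name=$(basename "$dir")
    # Prints the command instead of running it; remove 'echo' to execute.
    echo lakectl fs upload --source "$dir" --recursive \
      "lakefs://${repo}/${branch}/${name}/" --pre-sign -p 2
  done
}
```

Usage: `upload_per_dir . "$LAKEFS_REPO_NAME" "$DEFAULT_BRANCH"` prints one upload command per folder; drop the `echo` to actually run them, or pipe the output through `xargs` if you want to control how many run at once.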
**lakeFS team:**
@Andrij David Just for context: is your repository located on GCS?
**Andrij David:**
Yes
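Since the repository is on GCS, the blog's first approach (copy to the object store, then import) might translate roughly as follows. This is a hedged sketch, not a confirmed answer from the thread: the staging bucket name `gs://my-staging-bucket` and the `build_import_cmd` helper are hypothetical, and `lakectl import --from/--to` assumes a reasonably recent lakectl version.

```shell
# Hedged sketch of "copy to a bucket, then import" for GCS.
# Step 1 (run manually, assumes gsutil and a bucket you control):
#   gsutil -m rsync -r ./data gs://my-staging-bucket/data
# Step 2: zero-copy import into lakeFS; this helper just builds the command.
build_import_cmd() {
  bucket_path="$1"; repo="$2"; branch="$3"; dest="$4"
  echo lakectl import --from "gs://${bucket_path}" \
    --to "lakefs://${repo}/${branch}/${dest}"
}
```

For example, `build_import_cmd my-staging-bucket/data "$LAKEFS_REPO_NAME" "$DEFAULT_BRANCH" data/` prints the import command to review before running. Because the import references objects in place rather than re-uploading them, it should sidestep the client-side memory pressure entirely.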
**lakeFS team:**
Thanks, we'll look into this.
@Andrij David Please open an issue in the lakeFS GitHub project.