# help
y
Hi, I was uploading a very large (182 GB) file to my lakeFS instance via S3. The upload looks fine, but at the very last step, after all the parts are assembled, I get the following error:
```
time="2024-02-01T18:02:35Z" level=error msg="could not update metadata" func="pkg/gateway/operations.(*PathOperation).finishUpload" file="build/pkg/gateway/operations/operation_utils.go:51" error="postgres get: context canceled" host=**** matched_host=false method=POST operation_id=post_object path=wikidata-all.hdt physical_address=data/gi9jfn6d5a9ko68osg30/cmtsm96d5a9ko68osg3g ref=main repository=wikidata request_id=ddd64e7a-93e2-4a0f-aba1-36a84986da3e service_name=s3_gateway upload_id=cc4834cf2223468ebd5c0e6e97cacf11 user=admin
```
I already updated the Postgres max connection lifetime to 3 hours:
```yaml
database:
  type: postgres
  postgres:
    connection_max_lifetime: 3h
```
but I am not sure how to make sure the request context stays intact.
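For context, the Postgres section of the lakeFS config also exposes pool-sizing knobs alongside `connection_max_lifetime`. A minimal sketch, assuming a recent lakeFS configuration layout (check the configuration reference for your version; the values below are illustrative, and pool tuning may not help if the cancellation originates on the client side):

```yaml
database:
  type: postgres
  postgres:
    # Pool tuning; values are illustrative, not recommendations.
    max_open_connections: 25
    max_idle_connections: 25
    connection_max_lifetime: 3h
```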
b
Usually, context cancellation at this point is caused by a client disconnection. Any chance you have more logs from the request context on the server side, or captured an error on the client side? The message about the failed database write mentions 'context canceled', but this doesn't mean the database itself had an issue. Instead, the request was aborted, which caused the write operation to be interrupted.
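To illustrate the failure mode, here is a minimal, self-contained Go sketch (not lakeFS code; the handler and `updateMetadata` function are made up for illustration) showing how a client disconnect cancels the request context and aborts an in-flight write with the same "context canceled" error:

```go
// Minimal sketch: a client disconnect cancels the request context,
// which aborts a slow "metadata write" mid-flight.
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"
)

// updateMetadata stands in for the Postgres write; any *Context
// database call (e.g. db.ExecContext) fails the same way once the
// request context is canceled.
func updateMetadata(ctx context.Context) error {
	select {
	case <-time.After(5 * time.Second): // pretend the write takes a while
		return nil
	case <-ctx.Done(): // client went away: "context canceled"
		return ctx.Err()
	}
}

func main() {
	http.HandleFunc("/finish", func(w http.ResponseWriter, r *http.Request) {
		// r.Context() is canceled as soon as the client disconnects.
		if err := updateMetadata(r.Context()); err != nil {
			log.Printf("could not update metadata: %v", err)
			return
		}
		fmt.Fprintln(w, "ok")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Run it, `curl localhost:8080/finish`, and interrupt curl before the five seconds elapse: the server logs `could not update metadata: context canceled`, mirroring the gateway log above.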
Also, if you are using the AWS S3 CLI on the client side to upload, running the same command with --debug will give us more information, in parallel with the lakeFS logs, to identify the source of the problem.
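For example, assuming an upload command like the one below (the repo/ref/path are taken from the log above; the endpoint and local file name are placeholders):

```sh
# --debug writes a very verbose trace to stderr, so capture it to a file.
aws s3 cp wikidata-all.hdt s3://wikidata/main/wikidata-all.hdt \
    --endpoint-url https://<your-lakefs-host> \
    --debug 2> upload-debug.log
```

If the client is timing out while lakeFS assembles the parts, the AWS CLI's `--cli-read-timeout 0` option (which disables the read timeout) is also worth trying.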