# help
c
Hi guys, I'm facing a problem where I cannot upload any objects through the web UI. The error which is logged is:
time="2024-09-02T080437Z" level=error msg="Audit check failed" func="pkg/version.(*AuditChecker).CheckAndLog" file="build/pkg/version/audit.go:113" check_url="https://audit.lakefs.io/audit" error="Get \"https://audit.lakefs.io/audit?installation_id=xxxxxxxx&version=1.28.1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" version=1.28.1
Actually, I don't want lakeFS to send audit logs externally. My configuration is:
environment = [
  { name  = "LAKEFS_DATABASE_TYPE", value = "dynamodb" },  # Specify the internal database to be DynamoDB
  { name  = "LAKEFS_BLOCKSTORE_TYPE", value = "s3" },  # Specify the internal storage type to be S3
  { name  = "LAKEFS_DATABASE_DYNAMODB_TABLE_NAME", value = "${var.project_name}-${var.lakefs_dynamodb_table_name}" },
  { name  = "LAKEFS_BLOCKSTORE_S3_ENDPOINT", value = "http://s3.${var.aws_region}.amazonaws.com/${aws_s3_bucket.lakefs_bucket.bucket}" },
  { name  = "AWS_REGION", value = var.aws_region },
  { name  = "LAKEFS_BLOCKSTORE_S3_REGION", value = var.aws_region },
  { name  = "LAKEFS_GATEWAYS_S3_REGION", value = var.aws_region },
  { name  = "LAKEFS_AUTH_ENCRYPT_SECRET_KEY", value = random_password.lakefs_encryption_secret.result },
  { name  = "LAKEFS_LISTEN_ADDRESS", value = ":8000" },
  { name  = "LAKEFS_STATS_ENABLED", value = "false" },  # Disable sending statistics to treeverse
  { name  = "LAKEFS_LOGGING_AUDIT_LOG_LEVEL", value = "NONE" },
  { name  = "LAKEFS_EMAIL_SUBSCRIPTION_ENABLED", value = "false" }
]
Even with the audit logs disabled, the error persists.

Any ideas?

EDIT:
The upload failure might not come from this exact error. But I'd still like to prevent the audit log from being sent externally.
👀 1
o
Hi @Christoph Jud, sorry for your experience. Can you please share the full log extract? Also, was this working before, or is it a new setup?
c
I'm still setting it up; I hadn't actually tested file uploading before. What's strange is that I can create the test repo including the test data, so I can indeed write data to S3, but uploading fails. The attempt to send audits externally might not be causing this problem, but I'd still like to either disable it or store the audit log internally.
2024-09-02T085431.726Z time="2024-09-02T085431Z" level=info msg="Config loaded" func=cmd/lakefs/cmd.initConfig file="cmd/root.go:151" fields.file=/home/lakefs/.lakefs.yaml file="cmd/root.go:151" phase=startup
2024-09-02T085431.726Z time="2024-09-02T085431Z" level=info msg=Config func=cmd/lakefs/cmd.initConfig file="cmd/root.go:159" actions.enabled=true actions.env.enabled=true actions.env.prefix=LAKEFSACTION_ actions.lua.net_http_enabled=false auth.api.endpoint="" auth.api.health_check_timeout=20s auth.api.skip_health_check=false auth.api.supports_invites=false auth.api.token=------ auth.authentication_api.endpoint="" auth.authentication_api.external_principals_enabled=false auth.cache.enabled=true auth.cache.jitter=3s auth.cache.size=1024 auth.cache.ttl=20s auth.cookie_auth_verification.auth_source="" auth.cookie_auth_verification.default_initial_groups="[]" auth.cookie_auth_verification.external_user_id_claim_name="" auth.cookie_auth_verification.friendly_name_claim_name="" auth.cookie_auth_verification.initial_groups_claim_name="" auth.cookie_auth_verification.persist_friendly_name=false auth.cookie_auth_verification.validate_id_token_claims="map[]" auth.encrypt.secret_key="******" auth.login_duration=168h0m0s auth.login_max_duration=336h0m0s auth.logout_redirect_url=/auth/login auth.oidc.default_initial_groups="[]" auth.oidc.friendly_name_claim_name="" auth.oidc.initial_groups_claim_name="" auth.oidc.persist_friendly_name=false auth.oidc.validate_id_token_claims="map[]" auth.remote_authenticator.default_user_group=Viewers auth.remote_authenticator.enabled=false auth.remote_authenticator.endpoint="" auth.remote_authenticator.request_timeout=10s auth.ui_config.login_cookie_names="[internal_auth_session]" auth.ui_config.login_failed_message="The credentials don't match."
auth.ui_config.login_url="" auth.ui_config.logout_url="" auth.ui_config.rbac=simplified blockstore.azure.auth_method="" blockstore.azure.china_cloud=false blockstore.azure.disable_pre_signed=false blockstore.azure.disable_pre_signed_ui=true blockstore.azure.domain="" blockstore.azure.pre_signed_expiry=15m0s blockstore.azure.storage_access_key="" blockstore.azure.storage_account="" blockstore.azure.test_endpoint_url="" blockstore.azure.try_timeout=10m0s blockstore.gs.credentials_file="" blockstore.gs.credentials_json="" blockstore.gs.disable_pre_signed=false blockstore.gs.disable_pre_signed_ui=true blockstore.gs.pre_signed_expiry=15m0s blockstore.gs.s3_endpoint="https://storage.googleapis.com" blockstore.gs.server_side_encryption_customer_supplied="" blockstore.gs.server_side_encryption_kms_key_id="" blockstore.local.allowed_external_prefixes="[]" blockstore.local.import_enabled=false blockstore.local.import_hidden=false blockstore.local.path="~/lakefs/data/block" blockstore.s3.client_log_request=false blockstore.s3.client_log_retries=false blockstore.s3.credentials_file="" blockstore.s3.disable_pre_signed=false blockstore.s3.disable_pre_signed_multipart=false blockstore.s3.disable_pre_signed_ui=true blockstore.s3.discover_bucket_region=true blockstore.s3.endpoint="https://s3.eu-central-1.amazonaws.com" blockstore.s3.force_path_style=false blockstore.s3.max_retries=5 blockstore.s3.pre_signed_expiry=15m0s blockstore.s3.profile="" blockstore.s3.region=eu-central-1 blockstore.s3.server_side_encryption="" blockstore.s3.server_side_encryption_kms_key_id="" blockstore.s3.skip_verify_certificate_test_only=false blockstore.s3.web_identity.session_duration=0s blockstore.s3.web_identity.session_expiry_window=5m0s blockstore.signing.secret_key="******" blockstore.type=s3 committed.block_storage_prefix=_lakefs committed.local_cache.dir="~/lakefs/data/cache" committed.local_cache.max_uploaders_per_writer=10 committed.local_cache.metarange_proportion=0.1 committed.local_cache.range_proportion=0.9 committed.local_cache.size_bytes=1073741824 committed.permanent.max_range_size_bytes=20971520 committed.permanent.min_range_size_bytes=0 committed.permanent.range_raggedness_entries=50000 committed.sstable.memory.cache_size_bytes=400000000 database.drop_tables=false database.dynamodb.aws_access_key_id=------ database.dynamodb.aws_profile="" database.dynamodb.aws_region="" database.dynamodb.aws_secret_access_key=------ database.dynamodb.endpoint="" database.dynamodb.health_check_interval=0s database.dynamodb.max_attempts=10 database.dynamodb.max_connections=0 database.dynamodb.scan_limit=1024 database.dynamodb.table_name=specto-ai-lakefs-kvstore database.local.enable_logging=false database.local.path="~/lakefs/metadata" database.local.prefetch_size=256 database.local.sync_writes=true database.postgres.connection_max_lifetime=5m0s database.postgres.connection_string=------ database.postgres.max_idle_connections=25 database.postgres.max_open_connections=25 database.postgres.metrics=false database.postgres.scan_page_size=0 database.type=dynamodb email_subscription.enabled=false fields.file=/home/lakefs/.lakefs.yaml file="cmd/root.go:159" gateways.s3.domain_name="[s3.local.lakefs.io]" gateways.s3.fallback_url="" gateways.s3.region=eu-central-1 gateways.s3.verify_unsupported=true graveler.background.rate_limit=0 graveler.batch_dbio_transaction_markers=false graveler.commit_cache.expiry=10m0s graveler.commit_cache.jitter=2s graveler.commit_cache.size=50000 graveler.compaction_sensor_threshold=0 
graveler.ensure_readable_root_namespace=true graveler.max_batch_delay=3ms graveler.repository_cache.expiry=5s graveler.repository_cache.jitter=2s graveler.repository_cache.size=1000 installation.access_key_id=------ installation.allow_inter_region_storage=true installation.fixed_id="" installation.secret_access_key=------ installation.user_name="" listen_address=":8000" logging.audit_log_level=NONE logging.file_max_size_mb=102400 logging.files_keep=100 logging.format=text logging.level=INFO logging.output="[-]" logging.trace_request_headers=false phase=startup security.audit_check_interval=24h0m0s security.audit_check_url="https://audit.lakefs.io/audit" security.check_latest_version=true security.check_latest_version_cache=1h0m0s stats.address="https://stats.lakefs.io" stats.enabled=false stats.extended=false stats.flush_interval=30s stats.flush_size=100 tls.cert_file="" tls.enabled=false tls.key_file="" ugc.prepare_interval=1m0s ugc.prepare_max_file_size=20971520 ui.enabled=true ui.snippets="[]" usage_report.enabled=false usage_report.flush_interval=5m0s
2024-09-02T085431.728Z time="2024-09-02T085431Z" level=info msg="lakeFS run" func=cmd/lakefs/cmd.glob..func9 file="cmd/run.go:123" version=1.28.1
2024-09-02T085431.778Z time="2024-09-02T085431Z" level=info msg="KV valid" func=pkg/kv.ValidateSchemaVersion file="build/pkg/kv/migration.go:68" version=4
2024-09-02T085431.778Z time="2024-09-02T085431Z" level=info msg="initialized Auth service" func=pkg/auth.NewAuthService file="build/pkg/auth/service.go:217" service=auth_service
2024-09-02T085449.039Z time="2024-09-02T085449Z" level=warning msg="Tried to to get AWS account ID for BI" func="pkg/cloud/aws.(*MetadataProvider).GetMetadata.func1" file="build/pkg/cloud/aws/metadata.go:81" error="operation error STS: GetCallerIdentity, exceeded maximum number of attempts, 3, https response error StatusCode: 0, RequestID: , request send failed, Post \"https://sts.eu-central-1.amazonaws.com/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
2024-09-02T085449.039Z time="2024-09-02T085449Z" level=warning msg="Failed to to get AWS account ID for BI" func="pkg/cloud/aws.(*MetadataProvider).GetMetadata" file="build/pkg/cloud/aws/metadata.go:87" error="operation error STS: GetCallerIdentity, exceeded maximum number of attempts, 3, https response error StatusCode: 0, RequestID: , request send failed, Post \"https://sts.eu-central-1.amazonaws.com/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
2024-09-02T085449.042Z time="2024-09-02T085449Z" level=info msg="initialize blockstore adapter" func=pkg/block/factory.buildBlockAdapter file="build/pkg/block/factory/build.go:41" type=s3
2024-09-02T085449.042Z time="2024-09-02T085449Z" level=info msg="initialized blockstore adapter" func=pkg/block/factory.buildS3Adapter file="build/pkg/block/factory/build.go:121" type=s3
2024-09-02T085449.042Z time="2024-09-02T085449Z" level=info msg="initialize blockstore adapter" func=pkg/block/factory.buildBlockAdapter file="build/pkg/block/factory/build.go:41" type=s3
2024-09-02T085449.042Z time="2024-09-02T085449Z" level=info msg="initialized blockstore adapter" func=pkg/block/factory.buildS3Adapter file="build/pkg/block/factory/build.go:121" type=s3
2024-09-02T085449.083Z time="2024-09-02T085449Z" level=info msg="initialize OpenAPI server" func=pkg/api.Serve file="build/pkg/api/serve.go:37" service=api_gateway
2024-09-02T085449.256Z time="2024-09-02T085449Z" level=info msg="initialized S3 Gateway handler" func=pkg/gateway.NewHandler file="build/pkg/gateway/handler.go:128" s3_bare_domain="[s3.local.lakefs.io]" s3_region=eu-central-1
2024-09-02T085449.256Z time="2024-09-02T085449Z" level=info msg="starting HTTP server" func=cmd/lakefs/cmd.glob..func9 file="cmd/run.go:329" listen_address=":8000"
2024-09-02T085449.256Z lakeFS 1.28.1 - Up and running (^C to shutdown)...
2024-09-02T085449.256Z [lakeFS ASCII-art logo]
2024-09-02T085449.256Z │ If you're running lakeFS locally for the first time,
2024-09-02T085449.256Z │ complete the setup process at http://127.0.0.1:8000/setup
2024-09-02T085449.256Z │ For more information on how to use lakeFS,
2024-09-02T085449.256Z │ check out the docs at https://docs.lakefs.io/quickstart/
2024-09-02T085449.256Z │ For support or any other question, join our Slack channel https://docs.lakefs.io/slack
2024-09-02T085449.256Z Version 1.28.1
2024-09-02T085519.052Z time="2024-09-02T085519Z" level=error msg="Audit check failed" func="pkg/version.(*AuditChecker).CheckAndLog" file="build/pkg/version/audit.go:113" check_url="https://audit.lakefs.io/audit" error="Get \
What is this setting?
security • security.audit_check_interval (duration: 24h) - Duration in which we check for security audit
Can I disable this, or set it to infinity?
And, as recommended in the documentation, the S3 access policy is:
resource "aws_iam_policy" "lakefs_s3_policy" {
  name        = "lakefs-s3-policy"
  description = "Policy for LakeFS to access S3"
  policy      = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Sid = "lakeFSObjects",
        Action = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:AbortMultipartUpload",
          "s3:ListMultipartUploadParts"
        ],
        Effect = "Allow",
        Resource = "${aws_s3_bucket.lakefs_bucket.arn}/*"
      },
      {
        Sid = "lakeFSBucket",
        Action = [
          "s3:ListBucket",
          "s3:GetBucketLocation",
          "s3:ListBucketMultipartUploads"
        ],
        Effect = "Allow",
        Resource = aws_s3_bucket.lakefs_bucket.arn
      }
    ]
  })
}
I've also tried granting full access to S3, but that doesn't seem to be the issue.
Ah, and by the way, this is the error message in the web UI:
<html> <head><title>403 Forbidden</title></head> <body> <center><h1>403 Forbidden</h1></center> </body> </html>
<!-- a padding to disable MSIE and Chrome friendly error page --> (repeated six times)
o
Can you share the UI screenshot of the forbidden message and when it appears?
Also, when you say you can write data to S3, is that through lakeFS?
c
image.png
I can indeed create repositories with the example data included
image.png
o
Thanks. Looking into this
In order to disable the audit messages, please set security.check_latest_version to false (or use the LAKEFS_SECURITY_CHECK_LATEST_VERSION env var)
However, by setting this you won't get any alerts on new lakeFS versions or security updates
πŸ‘ 1
c
What about the other error? Any ideas where it might come from?
I still get the audit log error:
2024-09-02T101645.745Z time="2024-09-02T101645Z" level=error msg="Audit check failed" func="pkg/version.(*AuditChecker).CheckAndLog" file="build/pkg/version/audit.go:113" check_url="https://audit.lakefs.io/audit" error="Get \"https://audit.lakefs.io/audit?installation_id=xxxxxxxxx&version=1.28.1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" version=1.28.1
o
regarding the upload, can you please try uploading through lakectl or any other client other than the UI and send the output?
c
just uploaded some files over lakectl:
lakectl fs upload --recursive --source /mnt/c/Users/ChristophJud/Data/BraTS2023/BraTS-MEN-Train/BraTS-MEN-00034-000 lakefs://test-repository2/main/test_dataset
diff 'local:///mnt/c/Users/ChristophJud/Data/BraTS2023/BraTS-MEN-Train/BraTS-MEN-00034-000/' <--> 'lakefs://test-repository2/main/test_dataset'...
upload BraTS-MEN-00034-000-seg.nii.gz ... done! [95.74KB in 111ms]
upload BraTS-MEN-00034-000-t1n.nii.gz ... done! [4.67MB in 238ms]
upload BraTS-MEN-00034-000-t2w.nii.gz ... done! [4.77MB in 277ms]
upload BraTS-MEN-00034-000-t1c.nii.gz ... done! [4.67MB in 305ms]
upload BraTS-MEN-00034-000-t2f.nii.gz ... done! [4.70MB in 867ms]
Upload Summary:
Downloaded: 0
Uploaded: 5
Removed: 0
So it seems to be an issue with the web UI?
o
Which OS do you use?
Can you please generate a HAR file from your browser while trying to upload and getting the 403?
Also, please try a different browser
c
I'm using Windows 11 Pro and tried the Brave and Edge browsers.
👀 1
o
I'm checking this with the team
j
Hi @Christoph Jud, I can see that you've configured your LAKEFS_BLOCKSTORE_S3_ENDPOINT variable. You use a path-style URL, yet that's not specified in the configuration (the default style is virtual-hosted style). Can you try adding LAKEFS_BLOCKSTORE_S3_FORCE_PATH_STYLE with the value true and try again? In addition, I can see that you didn't configure any credentials for S3. That means your default ~/.aws/credentials with the default profile will be used, unless you've configured the env vars with different values. That's perfectly fine, but you need to make sure they allow you to access your bucket. It actually doesn't explain how you've succeeded with the lakectl calls, though. I'll have another look.
c
Hi Jon, thanks for your response! I'll try that with the path style. I re-deployed the server and now also get the forbidden error from the console (lakectl). I'll troubleshoot a bit and come back if I've got any news.
What I see in the logs as well is:
time="2024-09-02T133101Z" level=debug msg="failed to check latest version in releases" func="pkg/api.(*Controller).getVersionConfig" file="build/pkg/api/controller.go:5073" error="Get \"https://api.github.com/repos/treeverse/lakeFS/releases/latest\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" service=api_gateway
time="2024-09-02T132915Z" level=error msg="Audit check failed" func="pkg/version.(*AuditChecker).CheckAndLog" file="build/pkg/version/audit.go:113" check_url="https://audit.lakefs.io/audit" error="Get \"https://audit.lakefs.io/audit?installation_id=6a6e55e5-c2a3-4d45-8d89-eb3e43cb4944&version=1.25.0\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" version=1.25.0
time="2024-09-02T133421Z" level=error msg="Failed to get region for bucket, falling back to default region" func="pkg/block/s3.(*ClientCache).refreshBucketRegion" file="build/pkg/block/s3/client_cache.go:151" default_region=eu-central-1 error="bucket not found" host=lakefs.spectomedical.com method=GET operation_id=ListObjects path="/api/v1/repositories/test-repository2/refs/main/objects/ls?prefix=&amount=100&after=&delimiter=%2F&presign=false" request_id=eb475810-9b06-4a96-92b7-a0b37b92b830 service_name=rest_api user=admin
The first two might not be related to my problem, but I'd still be happy to resolve them. The last one, I think, is the relevant one. I will double-check that S3 is reachable from the lakeFS server.
j
regarding the last one, did you try to add the suggested env var?
c
Yes, this error occurred when the path style was enabled.
I've again created a test repo, and I see on AWS that the repo has been created. I can also look at a random image item in the test repo data. But accessing the README.md of the test data repo gives an error:
my current config:
{ name  = "LAKEFS_DATABASE_TYPE", value = "dynamodb" },  # Specify the internal database to be DynamoDB
{ name  = "LAKEFS_DATABASE_DYNAMODB_TABLE_NAME", value="${var.project_name}-${var.lakefs_dynamodb_table_name}" },
{ name  = "LAKEFS_BLOCKSTORE_TYPE", value = "s3" },  # Specify the internal storage type to be S3
{ name  = "LAKEFS_BLOCKSTORE_S3_REGION", value = var.aws_region },
{ name  = "LAKEFS_BLOCKSTORE_S3_ENDPOINT", value = "<https://s3>.${var.aws_region}.<http://amazonaws.com/|amazonaws.com/>" },
{ name  = "LAKEFS_BLOCKSTORE_S3_FORCE_PATH_STYLE", value = "true"},
{ name  = "LAKEFS_BLOCKSTORE_s3_DISCOVER_BUCKET_REGION", value = "false"},
{ name  = "AWS_REGION", value = var.aws_region },
{ name  = "LAKEFS_GATEWAYS_S3_REGION", value = var.aws_region },
{ name  = "LAKEFS_AUTH_ENCRYPT_SECRET_KEY", value = random_password.lakefs_encryption_secret.result },
{ name  = "LAKEFS_LISTEN_ADDRESS", value = ":8000" },
{ name  = "LAKEFS_STATS_ENABLED", value = "false" },  # Disable sending statistics to treeverse
{ name = "LAKEFS_LOGGING_LEVEL", value = "TRACE"},
{ name  = "LAKEFS_EMAIL_SUBSCRIPTION_ENABLED", value = "false" }
Interestingly, after re-deployment, the upload over lakectl does not work anymore:
lakectl fs upload --recursive --source /mnt/c/Users/ChristophJud/Data/BraTS2023/BraTS-MEN-Train/BraTS-MEN-00034-000 lakefs://test-repository4/main/test_dataset --verbose
diff 'local:///mnt/c/Users/ChristophJud/Data/BraTS2023/BraTS-MEN-Train/BraTS-MEN-00034-000/' <--> 'lakefs://test-repository4/main/test_dataset'...
upload BraTS-MEN-00034-000-seg.nii.gz ... done! [95.74KB in 61ms]
upload BraTS-MEN-00034-000-t2f.nii.gz ... fail! [3.64MB in 188ms]
upload BraTS-MEN-00034-000-t2w.nii.gz ... fail! [3.70MB in 188ms]
upload BraTS-MEN-00034-000-t1c.nii.gz ... fail! [4.46MB in 188ms]
upload BraTS-MEN-00034-000-t1n.nii.gz ... done! [4.67MB in 164ms]
upload BraTS-MEN-00034-000-seg.nii.gz failed: link object to backing store: request failed (403 Forbidden)
Accessing the README.md file from the console with lakectl fs download lakefs://<repository>/<branch>/<file-path> <destination-path> works.
i
Hey @Christoph Jud, would you mind trying again after removing the following env vars?
1. LAKEFS_BLOCKSTORE_s3_DISCOVER_BUCKET_REGION
2. LAKEFS_BLOCKSTORE_S3_ENDPOINT
3. LAKEFS_BLOCKSTORE_S3_REGION
4. AWS_REGION
5. LAKEFS_GATEWAYS_S3_REGION
c
OK, without the endpoint, S3 is not reachable anymore:
• repository creation does not work
• downloading a file does not work either: lakectl fs download lakefs://test-repository4/main/README.md /tmp
  ◦ download failed: request failed: 504 Gateway Timeout
i
Got it, can you try adding just the LAKEFS_BLOCKSTORE_S3_ENDPOINT then? Sorry for all the back and forth; I suspect some of these values collide.
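For reference, a sketch of an endpoint entry that uses the plain regional S3 endpoint, without the bucket name baked in (the exact value to use here is an assumption):

  { name  = "LAKEFS_BLOCKSTORE_S3_ENDPOINT", value = "https://s3.${var.aws_region}.amazonaws.com" },  # regional endpoint only; the bucket is not part of the endpoint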
c
no problem, hold on
OK, now I can create a repository again, I can download a file, and I can look at a random image in the test repo data. But again, when I browse the README.md (which I can download with lakectl), I get an error:
i
Can you open the network tools and share the HTTP request/response please?
c
image.png,image.png,image.png
Ah, and the upload does not work either:
lakectl fs upload --recursive --source /mnt/c/Users/ChristophJud/Data/BraTS2023/BraTS-MEN-Train/BraTS-MEN-00034-000 lakefs://test-repository5/main/test_dataset --verbose
diff 'local:///mnt/c/Users/ChristophJud/Data/BraTS2023/BraTS-MEN-Train/BraTS-MEN-00034-000/' <--> 'lakefs://test-repository5/main/test_dataset'...
upload BraTS-MEN-00034-000-seg.nii.gz ... done! [95.74KB in 66ms]
upload BraTS-MEN-00034-000-t1c.nii.gz ... done! [4.67MB in 168ms]
upload BraTS-MEN-00034-000-t2w.nii.gz ... done! [4.77MB in 168ms]
upload BraTS-MEN-00034-000-t1n.nii.gz ... fail! [819.20KB in 209ms]
upload BraTS-MEN-00034-000-t2f.nii.gz ... fail! [3.38MB in 209ms]
upload BraTS-MEN-00034-000-seg.nii.gz failed: link object to backing store: request failed (403 Forbidden)
Error executing command.
I wonder if it's a permission issue. However, as creating the repo works, and downloading (with lakectl) as well, I don't see where the problem comes from.
i
Does the README.md 403 response have a body?
c
image.png
i
That should be in the "Response" tab
c
this is empty
i
Can you share the lakeFS logs of the request?
c
image.png
i
Please share as text/file. It would be much easier for me to assist
c
is there any sensitive data written to the log?
i
No
c
output.txt
i
Did you turn on branch protection for main? I think that's why your writes are being blocked with 403s. I'm not sure if the sample repo does that by default.
c
Yep, thanks! Now the upload works again.
I forgot that it is enabled by default
but still, the README.md does not show up
👀 1
This works, but the README.md does not. I tried it with Brave and Edge.
This works:
lakectl fs download lakefs://test-repository5/main/README.md /tmp
download: lakefs://test-repository5/main/README.md to /tmp/README.md
i
Cool. Can you try it again with
lakectl fs download lakefs://test-repository5/main/README.md /tmp --pre-sign=false
c
Works. Ah, let me check if it's the firewall.
i
Do you run lakectl and the browser from different networks?
c
no, from my local laptop
i
Do you have access to the underlying S3 bucket from your local laptop?
c
yes
I figured it out!!
I disabled the whole firewall
I mean the application firewall
i
Great find! Do you know what exactly in the firewall rules blocked lakeFS requests?
c
I had:
• AWSManagedRulesCommonRuleSet
• AWSManagedRulesKnownBadInputsRuleSet
• AWSManagedRulesSQLiRuleSet
• AWSManagedRulesLinuxRuleSet
I have to investigate
i
Bear in mind that this README request returns a 302 with a presigned URL that redirects you to S3
c
what does this mean?
i
Maybe it doesn't like the cross-site redirect
c
As soon as the firewall is enabled, the preview of the README is also not shown when going to the repository
i
Yep, that makes sense
c
image.png
πŸ‘ 2
It is the LFI_QUERYSTRING rule.
Thanks anyway for your help!
lakefs 1
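For anyone hitting the same issue: rather than disabling the whole firewall, one could override just that rule. A minimal Terraform sketch, assuming LFI_QUERYSTRING comes from AWSManagedRulesLinuxRuleSet and that the web ACL fronts lakeFS through a regional ALB (resource and metric names are illustrative):

resource "aws_wafv2_web_acl" "lakefs" {
  name  = "lakefs-waf"  # hypothetical name
  scope = "REGIONAL"    # assumption: attached to a regional ALB in front of lakeFS

  default_action {
    allow {}
  }

  rule {
    name     = "linux-rule-set"
    priority = 1

    # Required for managed rule groups; "none" keeps the group's own actions.
    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesLinuxRuleSet"
        vendor_name = "AWS"

        # Downgrade only the rule that blocked the presigned-URL requests:
        # "count" logs matches instead of returning 403.
        rule_action_override {
          name = "LFI_QUERYSTRING"
          action_to_use {
            count {}
          }
        }
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "linux-rule-set"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "lakefs-waf"
    sampled_requests_enabled   = true
  }
}

With the override set to count, matching requests are only logged, so the presigned-URL redirects should go through while the rest of the rule group stays active.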