# help
k
👋 Hello, team! I am trying to set up lakefs on-premises locally with postgres, minio and ACL. However, lakefs fails with the following logs and keeps restarting:
Copy code
{"file":"_build/pkg/auth/basic_service.go:33","func":"pkg/auth.NewBasicAuthService","level":"info","msg":"initialized Auth service","service":"auth_service","time":"2025-07-22T08:49:39Z"}
{"error":"no users configured: auth migration not possible","file":"_build/pkg/auth/factory/build.go:50","func":"pkg/auth/factory.NewAuthService","level":"fatal","msg":"\ncannot migrate existing user to basic auth mode!\nPlease run \"lakefs superuser -h\" and follow the instructions on how to migrate an existing user\n","time":"2025-07-22T08:49:39Z"}
How do I fix it? Here is my docker-compose.yml
Copy code
services:
  postgres:
    container_name: pg-lakefs
    image: postgres:13
    ports:
      - "5432:5432"
    secrets:
      - postgres_user
      - postgres_password
    environment:
      POSTGRES_DB: lakefs_db
      POSTGRES_USER_FILE: /run/secrets/postgres_user
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password
    volumes:
      - pg_lakefs_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$(cat /run/secrets/postgres_user)"]
      interval: 1s
      timeout: 5s
      retries: 5
    restart: always

  minio:
    container_name: minio
    image: quay.io/minio/minio:RELEASE.2025-06-13T11-33-47Z
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes: 
      - minio_data:/data
    secrets:
      - minio_root_user
      - minio_root_password
    restart: always
    environment:
      MINIO_ROOT_USER_FILE: /run/secrets/minio_root_user
      MINIO_ROOT_PASSWORD_FILE: /run/secrets/minio_root_password
    command: ["server", "/data", "--console-address", ":9001"]

  lakefs:
    container_name: lakefs
    build:
      context: .
      dockerfile: Dockerfile.lakefs
    ports:
      - "8000:8000"
    volumes:
      - lakefs_data:/data
    secrets:
      - lakefs_config
    depends_on:
      postgres:
        condition: service_healthy
      minio:
        condition: service_started
      acl:
        condition: service_started
    restart: always
    command: sh -c "cp /run/secrets/lakefs_config /app/lakefs_config.yaml && /app/lakefs run --config /app/lakefs_config.yaml"

  acl:
    container_name: acl
    build:
      context: .
      dockerfile: Dockerfile.acl
    ports:
      - "8001:8001"
    secrets:
      - acl_config
    depends_on:
      postgres:
        condition: service_healthy
    restart: always
    command: sh -c "cp /run/secrets/acl_config /app/acl_config.yaml && /app/acl run --config /app/acl_config.yaml"

volumes:
  pg_lakefs_data:
  minio_data:
  lakefs_data:

secrets:
  postgres_user:
    file: .secrets/postgres_user.txt
  postgres_password:
    file: .secrets/postgres_password.txt
  minio_root_user:
    file: .secrets/minio_root_user.txt
  minio_root_password:
    file: .secrets/minio_root_password.txt
  lakefs_config:
    file: .secrets/.lakefs.yaml
  acl_config:
    file: .secrets/.aclserver.yaml
.aclserver.yaml
Copy code
listen_address: ":8001"

database:
  type: "postgres"
  postgres:
      connection_string: "<postgres://user:pass@postgres:5432/db?sslmode=disable>"

encrypt:
  secret_key: "secret"
.lakefs.yaml
Copy code
logging:
  format: json
  level: INFO
  output: "-"

auth:
  encrypt:
    secret_key: "secret"

blockstore:
  type: s3
  s3:
    force_path_style: true
    endpoint: http://minio:9000
    discover_bucket_region: false
    credentials:
      access_key_id: key_id
      secret_access_key: secret

listen_address: "0.0.0.0:8000"

database:
  type: "postgres"
  postgres:
    connection_string: "<postgres://user:pass@postgres:5432/db?sslmode=disable>"
Please help 🙂
i
Hi @Kungim. Is this or this helpful?
k
@Iddo Avneri it’s the same problem, yes; however, there’s no solution there apart from going enterprise :) I am doing this on my own for academic purposes, so I don’t need enterprise features at the moment. I already have the contrib ACL server up and running, and I see no errors on its side. The lakefs container fails due to that migration error.
b
Hi @Kungim, exec into the lakefs container and add a superuser. Example:
Copy code
lakefs superuser --user-name admin --access-key-id=AKIAIOSFODNN7EXAMPLE --secret-access-key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
You should be able to login with it.
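If you prefer the CLI over the UI, a rough equivalent with lakectl (the endpoint below assumes the port mapping from your compose file):
Copy code
# interactive one-time setup; prompts for the access key id, secret
# access key, and server endpoint (e.g. http://localhost:8000/api/v1)
lakectl config
# quick sanity check that auth works
lakectl repo list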
k
@Barak Amar Hello! Thank you for your answer. I have tried that, but unfortunately it fails with the exact same error. Perhaps it got into some bad state? How do I completely purge the local lakefs instance with all its components? It logs the following:
Copy code
/app $ ./lakefs superuser --config ./lakefs_config.yaml --user-name admin --access-key-id=AKIAIOSFODNN7EXAMPLE --secret-access-key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
INFO[0000]/tmp/lakefs_build/cmd/lakefs/cmd/root.go:130 github.com/treeverse/lakefs/cmd/lakefs/cmd.initConfig() Configuration file fields.file=./lakefs_config.yaml file=./lakefs_config.yaml phase=startup
{"file":"_build/cmd/lakefs/cmd/root.go:116","func":"cmd/lakefs/cmd.LoadConfig","level":"info","msg":"Config loaded","phase":"startup","time":"2025-07-23T16:15:41Z"}
{"actions.enabled":{},"actions.env.enabled":{},"actions.env.prefix":{},"actions.lua.net_http_enabled":{},"auth.api.endpoint":{},"auth.api.health_check_timeout":{},"auth.api.skip_health_check":{},"auth.api.supports_invites":{},"auth.api.token":"------","auth.authentication_api.endpoint":{},"auth.authentication_api.external_principals_enabled":{},"auth.cache.enabled":{},"auth.cache.jitter":{},"auth.cache.size":{},"auth.cache.ttl":{},"auth.cookie_auth_verification.auth_source":{},"auth.cookie_auth_verification.default_initial_groups":{},"auth.cookie_auth_verification.external_user_id_claim_name":{},"auth.cookie_auth_verification.friendly_name_claim_name":{},"auth.cookie_auth_verification.initial_groups_claim_name":{},"auth.cookie_auth_verification.persist_friendly_name":{},"auth.cookie_auth_verification.validate_id_token_claims":{},"auth.encrypt.secret_key":"******","auth.login_duration":{},"auth.login_max_duration":{},"auth.logout_redirect_url":{},"auth.oidc.default_initial_groups":{},"auth.oidc.friendly_name_claim_name":{},"auth.oidc.initial_groups_claim_name":{},"auth.oidc.persist_friendly_name":{},"auth.oidc.validate_id_token_claims":{},"auth.remote_authenticator.default_user_group":{},"auth.remote_authenticator.enabled":{},"auth.remote_authenticator.endpoint":{},"auth.remote_authenticator.request_timeout":{},"auth.ui_config.login_cookie_names":{},"auth.ui_config.login_failed_message":{},"auth.ui_config.login_url":{},"auth.ui_config.logout_url":{},"auth.ui_config.rbac":{},"auth.ui_config.use_login_placeholders":{},"blockstore.azure.auth_method":{},"blockstore.azure.china_cloud":{},"blockstore.azure.disable_pre_signed":{},"blockstore.azure.disable_pre_signed_ui":{},"blockstore.azure.domain":{},"blockstore.azure.pre_signed_expiry":{},"blockstore.azure.storage_access_key":{},"blockstore.azure.storage_account":{},"blockstore.azure.test_endpoint_url":{},"blockstore.azure.try_timeout":{},"blockstore.gs.credentials_file":{},"blockstore.gs.credentials_json":{},"blockstore.gs.disable_pre_signed":{},"blockstore.gs.disable_pre_signed_ui":{},"blockstore.gs.pre_signed_expiry":{},"blockstore.gs.s3_endpoint":{},"blockstore.gs.server_side_encryption_customer_supplied":{},"blockstore.gs.server_side_encryption_kms_key_id":{},"blockstore.local.allowed_external_prefixes":{},"blockstore.local.import_enabled":{},"blockstore.local.import_hidden":{},"blockstore.local.path":{},"blockstore.s3.client_log_request":{},"blockstore.s3.client_log_retries":{},"blockstore.s3.credentials.access_key_id":"******","blockstore.s3.credentials.secret_access_key":"******","blockstore.s3.credentials.session_token":"------","blockstore.s3.credentials_file":{},"blockstore.s3.disable_pre_signed":{},"blockstore.s3.disable_pre_signed_multipart":{},"blockstore.s3.disable_pre_signed_ui":{},"blockstore.s3.discover_bucket_region":{},"blockstore.s3.endpoint":{},"blockstore.s3.force_path_style":{},"blockstore.s3.max_retries":{},"blockstore.s3.pre_signed_endpoint":{},"blockstore.s3.pre_signed_expiry":{},"blockstore.s3.profile":{},"blockstore.s3.region":{},"blockstore.s3.server_side_encryption":{},"blockstore.s3.server_side_encryption_kms_key_id":{},"blockstore.s3.skip_verify_certificate_test_only":{},"blockstore.s3.web_identity.session_duration":{},"blockstore.s3.web_identity.session_expiry_window":{},"blockstore.signing.secret_key":"******","blockstore.type":{},"committed.block_storage_prefix":{},"committed.local_cache.dir":{},"committed.local_cache.max_uploaders_per_writer":{},"committed.local_cache.metarange_proportion":{},"committed.local_
cache.range_proportion":{},"committed.local_cache.size_bytes":{},"committed.permanent.max_range_size_bytes":{},"committed.permanent.min_range_size_bytes":{},"committed.permanent.range_raggedness_entries":{},"committed.sstable.memory.cache_size_bytes":{},"database.drop_tables":{},"database.dynamodb.aws_access_key_id":"------","database.dynamodb.aws_profile":{},"database.dynamodb.aws_region":{},"database.dynamodb.aws_secret_access_key":"------","database.dynamodb.endpoint":{},"database.dynamodb.health_check_interval":{},"database.dynamodb.max_attempts":{},"database.dynamodb.max_connections":{},"database.dynamodb.scan_limit":{},"database.dynamodb.table_name":{},"database.local.enable_logging":{},"database.local.path":{},"database.local.prefetch_size":{},"database.local.sync_writes":{},"database.postgres.connection_max_lifetime":{},"database.postgres.connection_string":"******","database.postgres.max_idle_connections":{},"database.postgres.max_open_connections":{},"database.postgres.metrics":{},"database.postgres.scan_page_size":{},"database.type":{},"email_subscription.enabled":{},"file":"_build/cmd/lakefs/cmd/root.go:119","func":"cmd/lakefs/cmd.LoadConfig","gateways.s3.domain_name":{},"gateways.s3.fallback_url":{},"gateways.s3.region":{},"gateways.s3.verify_unsupported":{},"graveler.background.rate_limit":{},"graveler.batch_dbio_transaction_markers":{},"graveler.branch_ownership.acquire":{},"graveler.branch_ownership.enabled":{},"graveler.branch_ownership.refresh":{},"graveler.commit_cache.expiry":{},"graveler.commit_cache.jitter":{},"graveler.commit_cache.size":{},"graveler.compaction_sensor_threshold":{},"graveler.ensure_readable_root_namespace":{},"graveler.max_batch_delay":{},"graveler.repository_cache.expiry":{},"graveler.repository_cache.jitter":{},"graveler.repository_cache.size":{},"installation.access_key_id":"------","installation.allow_inter_region_storage":{},"installation.fixed_id":{},"installation.secret_access_key":"------","installation.user_name":{},"level":"info","listen_address":{},"logging.audit_log_level":{},"logging.file_max_size_mb":{},"logging.files_keep":{},"logging.format":{},"logging.level":{},"logging.output":{},"logging.trace_request_headers":{},"msg":"Config","phase":"startup","security.audit_check_interval":{},"security.audit_check_url":{},"security.check_latest_version":{},"security.check_latest_version_cache":{},"stats.address":{},"stats.enabled":{},"stats.extended":{},"stats.flush_interval":{},"stats.flush_size":{},"time":"2025-07-23T16:15:41Z","tls.cert_file":{},"tls.enabled":{},"tls.key_file":{},"ugc.prepare_interval":{},"ugc.prepare_max_file_size":{},"ui.enabled":{},"ui.snippets":{},"usage_report.enabled":{},"usage_report.flush_interval":{}}
{"file":"_build/pkg/auth/basic_service.go:33","func":"pkg/auth.NewBasicAuthService","level":"info","msg":"initialized Auth service","service":"auth_service","time":"2025-07-23T16:15:48Z"}
{"error":"no users configured: auth migration not possible","file":"_build/pkg/auth/factory/build.go:50","func":"pkg/auth/factory.NewAuthService","level":"fatal","msg":"\ncannot migrate existing user to basic auth mode!\nPlease run \"lakefs superuser -h\" and follow the instructions on how to migrate an existing user\n","time":"2025-07-23T16:15:48Z"}
b
In my test I used your docker compose with a fresh database, and the acl server initialized on startup.
Are you looking to migrate?
k
No, I don't have any data there, so I can start from scratch. It seems to me, though, that some state is left over from my previous attempts and now it doesn't work at all. If there's an option to discard all data and volumes, that would be great. But deleting the volumes and all containers doesn't help for some reason.
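For reference, this is how I tried to wipe everything (run from the directory with the compose file):
Copy code
# stop and remove containers, networks, and the named volumes
docker compose down --volumes --remove-orphans
# confirm the pg_lakefs_data / minio_data / lakefs_data volumes are gone
docker volume ls
# rebuild images and start from scratch
docker compose up --build -d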
b
@Kungim here is my version you can extract and start fresh. Note that I used my own images of lakefs and acl, so you will probably need to update the compose.yaml to get it working.
k
@Barak Amar Thank you so much! Your version works! I think the reason was not adding
Copy code
  ui_config:
    rbac: "simplified"
  api:
    endpoint: http://acl:8001/api/v1
to lakefs.yaml. Now I am trying to change access_key_id and secret_access_key for lakefs, and it appears that it doesn't accept just any random sequence. What's the proper way to generate them? It also seems like the length is limited - I tried adding characters to your values and it failed. I get errors like:
Copy code
{"file":"_build/pkg/auth/service.go:998","func":"pkg/auth.(*APIAuthService).CheckHealth","level":"info","msg":"auth API server version: dev","time":"2025-07-25T07:39:35Z"}
{"error":"unexpected status code - got 500 expected 201","file":"_build/pkg/auth/setup/setup.go:181","func":"pkg/auth/setup.AddAdminUser.func1","level":"warning","msg":"Failed to create admin user, deleting user","time":"2025-07-25T07:39:35Z"}
Failed to setup admin user: add credentials for oigfds: unexpected status code - got 500 expected 201
and
Copy code
{"file":"_build/pkg/auth/service.go:971","func":"pkg/auth.(*APIAuthService).CheckHealth","level":"info","msg":"Performing health check, this can take up to 20s","time":"2025-07-25T07:34:25Z"}
{"file":"_build/pkg/auth/service.go:998","func":"pkg/auth.(*APIAuthService).CheckHealth","level":"info","msg":"auth API server version: dev","time":"2025-07-25T07:34:25Z"}
Failed to setup admin user: create user - invalid request
I got it - the length is limited to 20 characters for access_key_id and 40 for secret_access_key. It accepts any safe random sequence.
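For anyone who hits this later, a quick way to generate a pair of the accepted shape (the AKIA prefix is just my own convention, not required):
Copy code
# 20-character access key id: "AKIA" + 16 random uppercase alphanumerics
ACCESS_KEY_ID="AKIA$(LC_ALL=C tr -dc 'A-Z0-9' </dev/urandom | head -c 16)"
# 40-character secret access key
SECRET_ACCESS_KEY="$(LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c 40)"
echo "$ACCESS_KEY_ID $SECRET_ACCESS_KEY"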
👍 1
Thank you very much for your help. And thanks to the whole team for a beautiful project
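For completeness, the auth section of my working lakefs.yaml now looks like this (values are from my local setup above):
Copy code
auth:
  encrypt:
    secret_key: "secret"
  ui_config:
    rbac: "simplified"
  api:
    endpoint: http://acl:8001/api/v1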
b
Also, for the above command you can leave the key and secret flags out - as far as I remember, it will generate a key/secret pair for you.
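Something like this (from memory, untested):
Copy code
# with no --access-key-id/--secret-access-key flags, lakefs should
# generate and print a key/secret pair for the new user
./lakefs superuser --config ./lakefs_config.yaml --user-name admin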