• Sid Senthilnathan

    4 months ago
    How do we configure auth so that only one user can write to a branch? In AWS, we would set a bucket resource policy to deny everyone except one user, but I don't think there are resource policies in lakeFS.
    3 replies
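    lakeFS policies are attached to users and groups rather than to resources, so the closest equivalent I know of to a bucket resource policy is an identity-based deny: scope a deny statement to the branch ARN and attach it to a group containing everyone except the one allowed writer (branch protection rules are related, but they block writes for all users). A minimal sketch of such a policy; the repository and branch names are placeholders, and the exact action names and the ARN each action is checked against should be verified in the Actions and Permissions table of the authorization docs:
    {
      "id": "DenyWritesToMain",
      "statement": [
        {
          "action": [
            "fs:WriteObject",
            "fs:DeleteObject",
            "fs:CreateCommit",
            "fs:RevertBranch"
          ],
          "effect": "deny",
          "resource": "arn:lakefs:fs:::repository/my-repo/branch/main"
        }
      ]
    }
    With this attached to an "everyone else" group, only the remaining user (who keeps a normal allow policy) can write to main.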
  • Sid Senthilnathan

    4 months ago
    Is there a way to allow only one specified branch to be merged into master?
    1 reply
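    There is no built-in "allowed source branches" setting that I am aware of, but a pre-merge hook on master can enforce this: the hook's event payload includes the source reference, so a small webhook can fail any merge whose source is not the one permitted branch. A sketch of the actions file, assuming a webhook you host yourself (the URL and branch names are placeholders):
    name: Restrict merge sources
    description: only allow release-candidate to be merged into master
    on:
      pre-merge:
        branches:
          - master
    hooks:
      - id: allowed_source_only
        type: webhook
        description: fail the merge unless the source ref is release-candidate
        properties:
          url: "https://your.domain.io/webhook/check-merge-source"
    The webhook would read the source reference from the request body and return a non-2xx status to block the merge.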
  • Prabhat Singh

    4 months ago
    Hi everyone. Use case: we need to merge the main branch into another branch that has some uncommitted changes. Right now the merge fails with the error "uncommitted changes (dirty branch)", even though we are not working on the same files. Is there any way to make this merge succeed when the merged files and the uncommitted files are different? Thank you in advance!
    6 replies
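    As far as I know, lakeFS refuses to merge into a branch that has any uncommitted changes, even when the dirty paths don't overlap with the incoming ones, so the usual workaround is to commit (or reset) the destination branch first and only then merge. A lakectl sketch with placeholder repository and branch names:
    lakectl commit lakefs://repo1/feature-branch -m "checkpoint before pulling in main"
    lakectl merge lakefs://repo1/main lakefs://repo1/feature-branch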
  • Raman Kharche

    4 months ago
    Hello folks, if we want to commit only a specific file out of the uncommitted changes, how can we do this? For example, committing only 1.png but not 3.html (please refer to the screenshot).
    9 replies
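    As far as I know, a lakeFS commit always includes every uncommitted change on the branch; there is no per-file staging area. One workaround is to reset the changes you don't want before committing (the --object flag below is from memory, so it is worth double-checking against your lakectl version); repository and branch names are placeholders:
    lakectl branch reset lakefs://repo1/main --object 3.html
    lakectl commit lakefs://repo1/main -m "add 1.png only"
    Note that the reset discards the uncommitted change to 3.html, so this only helps if that change can be re-applied later; otherwise keeping unrelated work on separate branches avoids the problem entirely.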
  • Raman Kharche

    4 months ago
    Hello folks, I am able to ingest data using lakectl:
    lakectl ingest --from s3://bucket-name/template --to lakefs://repo1/main/
    That works. When I try to do the same via the Java API I get an
    Internal Server Error
    Here is the sample code:
    StageRangeCreation stageRangeCreation = new StageRangeCreation();
    stageRangeCreation.setFromSourceURI("s3://bucket-name/template");
    stageRangeCreation.setAfter("main/");
    stageRangeCreation.setPrepend("main/");
    importApi.ingestRange("repo1", stageRangeCreation);
    Is ingestRange the correct API for ingesting? If yes, what am I missing here?
    39 replies
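    Not the resolution from this long thread, but for reference, here is a minimal sketch of how I'd wire up the Java client around ingestRange, assuming the io.lakefs api-client package layout and that HTTP basic auth is registered as "basic_auth"; the endpoint, credentials, repository name, and prefix are placeholders:
    import io.lakefs.clients.api.ApiClient;
    import io.lakefs.clients.api.ApiException;
    import io.lakefs.clients.api.Configuration;
    import io.lakefs.clients.api.ImportApi;
    import io.lakefs.clients.api.auth.HttpBasicAuth;
    import io.lakefs.clients.api.model.StageRangeCreation;

    public class IngestExample {
        public static void main(String[] args) throws ApiException {
            // Point the shared client at the lakeFS API and authenticate with an access key pair.
            ApiClient client = Configuration.getDefaultApiClient();
            client.setBasePath("http://localhost:8000/api/v1");
            HttpBasicAuth basicAuth = (HttpBasicAuth) client.getAuthentication("basic_auth");
            basicAuth.setUsername("AKIA...");            // access key id
            basicAuth.setPassword("secret-access-key");  // secret access key

            // Describe the range to ingest: the source prefix and the destination
            // prefix the ingested keys should get inside the repository.
            StageRangeCreation creation = new StageRangeCreation()
                    .fromSourceURI("s3://bucket-name/template")
                    .after("")              // continuation marker; empty starts from the beginning
                    .prepend("template/");  // destination prefix for the ingested keys

            new ImportApi(client).ingestRange("repo1", creation);
        }
    }
    If I remember the flow correctly, ingestRange only stages a range of objects and is paired with further calls (creating a meta-range and committing), so it is worth checking the import API docs for the complete sequence.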
  • Gal Bachar

    4 months ago
    Hey all, the example below is taken from https://docs.lakefs.io/setup/hooks.html. How can I address all available branches (instead of just "main")?
    name: Good files check
    description: set of checks to verify that branch is good
    on:
      pre-commit:
      pre-merge:
        branches:
          - main
    hooks:
      - id: no_temp
        type: webhook
        description: checking no temporary files found
        properties:
          url: "<https://your.domain.io/webhook?notmp=true?t=1za2PbkZK1bd4prMuTDr6BeEQwWYcX2R>"
      - id: no_freeze
        type: webhook
        description: check production is not in dev freeze
        properties:
          url: "<https://your.domain.io/webhook?nofreeze=true?t=1za2PbkZK1bd4prMuTDr6BeEQwWYcX2R>"
    6 replies
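    For what it's worth, the branches list takes glob patterns (and, if I read the docs correctly, omitting it makes the event apply to every branch), so a wildcard entry should cover all branches. The relevant part of the action file would look something like:
    on:
      pre-commit:
      pre-merge:
        branches:
          - "*"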
  • Marvellous

    3 months ago
    Hi everyone. Please, I need help. I suddenly started getting this error (images below) whenever I try to access a repo, create a new repo, or upload an object. It looks to me like a credential error, but I didn't change the credentials I was using before. What could be the possible problem and solution?
    5 replies
  • Gal Bachar

    3 months ago
    Hey all, Given the next lakeFS action:
    name: Post merge
    description: My Description
    on:
      post-merge:
        branches:
          - master
    hooks:
      - id: increment_docgen_version
        type: airflow
        description: Increment repo version
        properties:
           url: "<http://airflow-webserver:8080>"
           dag_id: "increment_tag"
           username: "LakeFSService"
           password: "{{ ENV.AIRFLOW_PASSWORD }}"
      - id: sync_to_s3
        type: airflow
        description: Sync changes to S3
        properties:
           url: "<http://airflow-webserver:8080>"
           dag_id: "s3_backup_sync"
           username: "LakeFSService"
           password: "{{ ENV.AIRFLOW_PASSWORD }}"
    How can I run the hooks synchronously ("increment_docgen_version" and then "sync_to_s3")? I want "sync_to_s3" to run only after "increment_docgen_version" finishes (and the "increment_tag" DAG result returns). In this example I'm asking about the two hooks within this post-merge action, but if it matters, please assume I would like to run 5, 6, etc.
    4 replies
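    As far as I know, hooks within a single action do run one after another in the order they are listed, but the airflow hook type only triggers the DAG run and does not wait for the DAG itself to finish. One way to get the ordering you want is to let Airflow own it: have the action trigger a single wrapper DAG that runs increment_tag and then s3_backup_sync (for example with TriggerDagRunOperator and wait_for_completion). A sketch of the collapsed action, where docgen_release_pipeline is a hypothetical wrapper DAG:
    name: Post merge
    description: My Description
    on:
      post-merge:
        branches:
          - master
    hooks:
      - id: docgen_release
        type: airflow
        description: run increment_tag, then s3_backup_sync, via one wrapper DAG
        properties:
           url: "http://airflow-webserver:8080"
           dag_id: "docgen_release_pipeline"
           username: "LakeFSService"
           password: "{{ ENV.AIRFLOW_PASSWORD }}"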
  • Raman Kharche

    3 months ago
    Hello guys, I was testing the Java client upload API via JMeter. With the Number of Threads (Users) set to 10 it works fine, but when I increase it to 20 I get
    io.lakefs.clients.api.ApiException: java.net.SocketTimeoutException: timeout
    The lakeFS server logs show:
    DEBUG  [2022-06-02T11:11:59+05:30]lakeFS/pkg/httputil/logging.go:78 pkg/httputil.DebugLoggingMiddleware.func1.1 HTTP call ended                               host="localhost:8000" method=POST path="/api/v1/repositories/btest/branches/68873464/objects?path=containerd.gz&storageClass=" request_id=4f5fdba2-5fa1-436f-aa64-d0003ca6146a sent_bytes=31 service_name=rest_api status_code=500 took=1m30.3184134s
    Do I need to increase the timeout? The OkHttpClient default is 10 seconds, right?
    32 replies
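    If the server really needs ~90 seconds at that concurrency, the quickest client-side change is to raise the generated client's OkHttp timeouts, which do default to 10 seconds. A sketch, assuming the ApiClient exposes the usual OpenAPI-generated timeout setters (values are in milliseconds and purely illustrative):
    import io.lakefs.clients.api.ApiClient;
    import io.lakefs.clients.api.Configuration;

    public class TimeoutConfig {
        static ApiClient buildClient() {
            // Raise the OkHttp timeouts before constructing API objects; the defaults
            // are 10 seconds, which the ~1m30s server-side handling clearly exceeds.
            ApiClient client = Configuration.getDefaultApiClient();
            client.setBasePath("http://localhost:8000/api/v1");
            client.setConnectTimeout(30_000);   // milliseconds
            client.setReadTimeout(120_000);     // milliseconds
            client.setWriteTimeout(120_000);    // milliseconds
            return client;
        }
    }
    A longer timeout only hides the latency, though; a 90-second upload at 20 threads probably also deserves a look at the server and storage side.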
  • Gal Bachar

    3 months ago
    Hey all, I couldn't find the ci:* actions in the "Actions and Permissions" table: https://docs.lakefs.io/reference/authorization.html#authorization. What does this action stand for (e.g. the ci:* action in the RepoManagementFullAccess policy)?
    6 replies