• mishraprafful

    5 months ago
    Hey, on a fresh installation of lakeFS, I am getting this error while trying to set up the initial admin user:
    Failed to setup DB: Dirty database version 1. Fix and force version.
    The Postgres version is aurora-postgres:12.8, and the error on the setup UI says "Unknown". Could you please help me with what I might be missing here? Thanks
    7 replies
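    That error text comes from golang-migrate, which lakeFS uses for schema migrations; it records a version number and a dirty flag in a schema_migrations table, and "Dirty database version 1" means migration 1 started but never finished. A minimal hedged sketch of inspecting and clearing that state, assuming direct psql access to the lakeFS database:

        # Check migration state; golang-migrate keeps it in schema_migrations.
        psql "$LAKEFS_DATABASE_CONNECTION_STRING" -c "SELECT version, dirty FROM schema_migrations;"
        # Clear the dirty flag so the migration can be retried on next startup.
        # Only do this after verifying migration 1 fully applied or fully rolled back.
        psql "$LAKEFS_DATABASE_CONNECTION_STRING" -c "UPDATE schema_migrations SET dirty = false;"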
  • Jennifer Cristina Evangelista Da Silva

    5 months ago
    Hi everyone, help please =[
    web | time="2022-04-20T19:42:44Z" level=warning msg="Could not access storage namespace" func="pkg/api.(*Controller).CreateRepository" file="build/pkg/api/controller.go:1231" error="EmptyStaticCreds: static credentials are empty" reason=unknown service=api_gateway storage_namespace="xyz"
    5 replies
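    EmptyStaticCreds is the AWS SDK reporting that the S3 block adapter was configured for static credentials but received an empty key pair. A minimal sketch of supplying them through lakeFS's environment variables; the values are placeholders:

        # lakeFS maps config keys to env vars: blockstore.s3.credentials.access_key_id
        # becomes LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID, and so on.
        export LAKEFS_BLOCKSTORE_TYPE=s3
        export LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID=<access-key-id>
        export LAKEFS_BLOCKSTORE_S3_CREDENTIALS_SECRET_ACCESS_KEY=<secret-access-key>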
  • mishraprafful

    5 months ago
    Hi, quick question: as suggested here (ref: https://docs.lakefs.io/reference/configuration.html#reference), the connection string should have sslmode=disable when connecting to the DB. Is SSL not supported right now, or is this just a recommendation and not a necessity? Thanks
    3 replies
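    For what it's worth, sslmode=disable is just the example shown in the docs, not a protocol limitation; the connection string takes the standard libpq/pgx SSL modes. A hedged example with SSL enabled (host and credentials are placeholders):

        # sslmode may be disable, require, verify-ca, or verify-full,
        # depending on what the Postgres/Aurora endpoint supports.
        export LAKEFS_DATABASE_CONNECTION_STRING="postgres://user:pass@db.example.com:5432/lakefs?sslmode=require"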
  • Ariel Shaqed (Scolnicov)

    5 months ago
    Here are the rules: S3 variables start with LAKEFS_BLOCKSTORE_S3_..., and your example doesn't include that "S3".
    2 replies
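    A hedged illustration of that naming rule, assuming the usual lakeFS convention (prefix LAKEFS_, dots in config keys become underscores):

        # config key                              -> environment variable
        # blockstore.type                         -> LAKEFS_BLOCKSTORE_TYPE
        # blockstore.s3.region                    -> LAKEFS_BLOCKSTORE_S3_REGION
        # blockstore.s3.credentials.access_key_id -> LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID
        export LAKEFS_BLOCKSTORE_S3_REGION=us-east-1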
  • Jennifer Cristina Evangelista Da Silva

    5 months ago
    I configured the credentials with environment variables, but it insists on trying to fetch this file. How can I remove this configuration, please?
    error="SharedCredsLoad: failed to load shared credentials file\ncaused by: FailedRead: unable to open file\ncaused by: open /tmp/.aws/credentials: no such file or directory" reason=unknown service=api_gateway
    12 replies
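    SharedCredsLoad is the AWS SDK falling through its credential chain: when no static credentials reach the S3 adapter, it tries the shared credentials file (~/.aws/credentials, here remapped to /tmp/.aws/credentials). A quick hedged check that the variables actually reached the lakeFS process:

        # A typo in the variable name is enough to send the SDK to the shared file.
        env | grep LAKEFS_BLOCKSTORE_S3_CREDENTIALS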
  • Sid Senthilnathan

    5 months ago
    Hello, does anyone have experience using Iceberg with lakeFS? We are trying to use the Iceberg Hadoop catalog instead of the regular Hive metastore, to avoid the complexity of managing table metadata separately from the underlying filesystem. We are following the Iceberg documentation and we've set these additional configurations:
    --conf "spark.sql.catalog.feature=org.apache.iceberg.spark.SparkCatalog" \
    --conf "spark.sql.catalog.feature.type=hadoop" \
    --conf "spark.sql.catalog.feature.warehouse=<lakefs://origin/>" \
    We expect that this will create a metastore with its root at lakefs://origin/, and that when we create feature branches we will be able to reference the Iceberg tables within them like feature.branch_name.schema_name.table_name (feature.`/branch_name`.schema_name.table_name in Spark syntax). And indeed SHOW TABLES IN feature.`/sid-test`.common returns the tables we want, but attempting to query the table with SELECT * FROM feature.`/sid-test`.common.paige_dimension_ai_module LIMIT 10 returns a "table or view not found" error. This all appears to work fine when we set the warehouse path to a location on S3. Any ideas what could be going on? cc @Oz Katz @Sander Hartlage
    24 replies
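    One hedged thing to check on the question above (a sketch, not a confirmed diagnosis): resolving lakefs:// paths at query time requires the lakeFS Hadoop FileSystem on Spark's classpath and configured; without it, catalog listings can succeed while table reads fail. Assuming the documented fs.lakefs.* settings, with placeholder endpoint and keys:

        # Requires the lakeFS Hadoop FileSystem (io.lakefs.LakeFSFileSystem) jar
        # on the Spark classpath; endpoint and keys below are placeholders.
        --conf "spark.hadoop.fs.lakefs.impl=io.lakefs.LakeFSFileSystem" \
        --conf "spark.hadoop.fs.lakefs.endpoint=https://<lakefs-host>/api/v1" \
        --conf "spark.hadoop.fs.lakefs.access.key=<lakefs-access-key>" \
        --conf "spark.hadoop.fs.lakefs.secret.key=<lakefs-secret-key>" \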
  • Temilola Onaneye

    4 months ago
    Hello, so when I try to start lakeFS with lakefs -c config.yaml run, it fails to start with the following error: could not connect to DB: failed to connect to host: ... The image below contains more details. This keeps recurring in the environment; sometimes rebooting the env solves it, but is this a known issue, and is there anything that can be done to curtail it permanently?
    5 replies
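    A hedged first step for triaging the recurring failure above, assuming database.connection_string in config.yaml is a standard Postgres URI: run the same connection through psql from the same host, which separates network/DNS flakiness from lakeFS itself.

        # Placeholder URI; substitute the connection string from config.yaml.
        psql "postgres://user:pass@db-host:5432/lakefs" -c "SELECT 1;"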
  • Edmondo Porcu

    4 months ago
    @Iddo Avneri is there a tutorial on how to use Spark tables with lakeFS? I was wondering what's the right approach. Should one create a clone of a database but point it at a different branch?
    17 replies
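    One common pattern for the question above, sketched under assumptions (not an official tutorial; the database, table, and repository names are placeholders, and reads go through the lakeFS S3 gateway): define the table once per branch, with each definition's LOCATION pointing at that branch's path.

        # Assumes Spark's s3a client is pointed at the lakeFS S3 gateway endpoint.
        spark-sql -e "
          CREATE DATABASE IF NOT EXISTS my_db_main;
          CREATE TABLE my_db_main.events (id BIGINT, ts TIMESTAMP)
            USING PARQUET
            LOCATION 's3a://example-repo/main/tables/events';
        "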
  • Jennifer Cristina Evangelista Da Silva

    4 months ago
    Hi, are there more examples in PySpark, similar to this?
    import io.treeverse.clients.LakeFSContext
        
    val commitID = "a1b2c3d4"
    val df = LakeFSContext.newDF(spark, "example-repo", commitID)
    1 reply
  • Houssam Kherraz

    4 months ago
    Hi! I'm trying to get started with lakeFS by connecting a local deployment to a remote S3 bucket. I'm running
    lakectl ingest --from s3://secret-bucket/prefix/ --to lakefs://my-repo/main/
    but I get the following error:
    physical address is not valid for block adapter: local
    400 Bad Request
    Does anyone know what I'm doing wrong? I confirmed I have read access to the bucket (aws s3 ls works).
    50 replies
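    The error above usually means the lakeFS server is running the local block adapter: lakectl ingest only stages pointers to the source objects, so the server itself must be able to resolve s3:// physical addresses. A hedged sketch of switching the server to the S3 adapter (credentials are placeholders):

        # The server, not lakectl, reads the ingested objects' s3:// addresses.
        export LAKEFS_BLOCKSTORE_TYPE=s3
        export LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID=<access-key-id>
        export LAKEFS_BLOCKSTORE_S3_CREDENTIALS_SECRET_ACCESS_KEY=<secret-access-key>
        lakefs -c config.yaml run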