# help
Hey folks! I am trying out lakeFS for the first time and have a Docker container running against S3. On the first run I can easily upload data and see the _lakefs/ and data/ folders in my bucket. However, if I shut down the container and try to rerun it, the existing repo is not showing in the UI, and if I try to create a new repo under the same path it gives me this error:
```
failed to create repository: found lakeFS objects in the storage namespace(s3://test-lakefs-1) key(_lakefs/dummy): storage namespace already in use
```
Is there something I am missing to configure properly when starting up the Docker container the second time for lakeFS to recognize the previously created repo? I am using this command from your quickstart:
```
docker run --pull always -p 8000:8000 \
   -e LAKEFS_BLOCKSTORE_TYPE='s3' \
   -e AWS_ACCESS_KEY_ID='YourAccessKeyValue' \
   -e AWS_SECRET_ACCESS_KEY='YourSecretKeyValue' \
   treeverse/lakefs run --local-settings
```
Thanks in advance from a lakefs-noob 🫶
Hi Andreas, thanks for trying lakeFS! Our quickstart is intended for a one-time run and demonstration of the product. It uses an internal DB inside the container, and that DB is erased when the container shuts down. When you point a fresh container at a previously used S3 bucket, you get the error because the folders you mentioned already exist there, but the new instance has no record of the repository. For the product to survive a redeployment or restart of the container, you need a persistent DB, as detailed in our deployment guide.
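As a rough illustration, here is a minimal sketch of a restart-safe setup backed by an external Postgres database. The hostname, credentials, database name, and secret below are placeholders, and the environment variable names follow the usual `LAKEFS_*` mapping of the configuration keys; please verify them against the deployment guide for your lakeFS version.

```
# Sketch only: persist lakeFS metadata in an external Postgres DB so the
# repository survives container restarts. All connection details and the
# auth secret below are placeholder values, not real settings.
docker run --pull always -p 8000:8000 \
   -e LAKEFS_BLOCKSTORE_TYPE='s3' \
   -e AWS_ACCESS_KEY_ID='YourAccessKeyValue' \
   -e AWS_SECRET_ACCESS_KEY='YourSecretKeyValue' \
   -e LAKEFS_DATABASE_TYPE='postgres' \
   -e LAKEFS_DATABASE_POSTGRES_CONNECTION_STRING='postgres://lakefs:lakefs@my-postgres-host:5432/lakefs?sslmode=disable' \
   -e LAKEFS_AUTH_ENCRYPT_SECRET_KEY='some-random-secret' \
   treeverse/lakefs run
```

With the metadata stored outside the container, restarting lakeFS against the same bucket should show the previously created repo instead of raising the "storage namespace already in use" error.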