# help
t
Hi everybody, I'm trying to deploy a lakeFS pod on a Kubernetes cluster. To start, I have deployed a PostgreSQL pod, which is up, and I also have an S3 endpoint. I pulled the lakeFS image into my registry and created a Dockerfile to add some env variables, like the S3 endpoint and credentials. After deploying with the Helm chart, I don't get any fatal-level error message, just a warning:
Failed to to get AWS account ID for BI
My pod is running but it isn't ready to use, and I don't have any more logs... I think the problem comes from my configuration in the values.yaml file, but I've tested several variants without success. Have any of you had a similar failed deployment?
b
Hi @Thomas, our Helm values include the lakeFS configmap, which supports passing any configuration to the lakeFS instance. You can also add extra env vars and secrets for lakeFS to use, without building an extra Docker image.
If lakeFS is not in a ready state, does it restart? Can you share the log or any specific configuration? I assume it is trying to access your DB, and without it, it doesn't serve any requests at startup.
t
About the Dockerfile: I first tried to add a `secret_access_key` variable in the blockstore part of the lakefsConfig, but I got an incorrect-key error. I'd be interested if you have the correct syntax for adding S3-related variables directly to the chart. As for the lakeFS pod, yes, it tried to restart, but I get this error:
Back-off restarting failed container
b
Sure, I can post a working example here later today. About the container restart: if you can share the logs, in case there are any, that would give us more information to work with.
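In the meantime, a few commands that usually surface more detail from a crash-looping pod (the pod name below is a placeholder - substitute your actual lakeFS pod name):
```sh
# list pods and find the lakeFS pod name
kubectl get pods

# events at the bottom often explain why the container keeps restarting
kubectl describe pod <lakefs-pod-name>

# logs from the current container
kubectl logs <lakefs-pod-name>

# logs from the previous (crashed) container instance
kubectl logs <lakefs-pod-name> --previous
```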
t
Thanks for your quick reply. I look forward to receiving your example. And about the logs, there is nothing more than the two error messages I sent...
b
A secrets file that populates the env with the required secrets - `lfs-pgs-secrets.yaml`:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: barak-lakefs-env-secrets
type: Opaque
data:
  LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID: "<base64 s3 key>"
  LAKEFS_BLOCKSTORE_S3_CREDENTIALS_SECRET_ACCESS_KEY: "<base64 s3 secret>"
  LAKEFS_DATABASE_POSTGRES_CONNECTION_STRING: "<base64 database connection string>"
```
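Note that everything under `data:` must be base64-encoded. For example (the key and connection string below are made-up placeholders, not real credentials):
```sh
# encode an S3 access key; -n avoids encoding a trailing newline
echo -n 'AKIAEXAMPLEKEY' | base64

# encode a postgres connection string
echo -n 'postgres://lakefs:mypassword@postgres-host:5432/lakefs?sslmode=disable' | base64
```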
A values file for Helm that includes the lakeFS configuration and our secrets - `lfs-pgs-values.yaml`:
```yaml
secrets:
  authEncryptSecretKey: "<encryption secret key>"

extraEnvVarsSecret: barak-lakefs-env-secrets

lakefsConfig: |
  database:
    type: postgres
  blockstore:
    type: s3
    s3:
      region: us-east-1
```
You can set up additional lakeFS configuration in the above map; for any secret, just assign it to the associated environment variable in the secrets file. The env var name for each lakeFS configuration key is LAKEFS_<the key path, uppercased, with '_' instead of '.'> - for example, `blockstore.s3.credentials.secret_access_key` becomes `LAKEFS_BLOCKSTORE_S3_CREDENTIALS_SECRET_ACCESS_KEY`, as in the secrets file above.
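To double-check that the secret values actually reached the pod, you can list the lakeFS env vars once it's up (the `deploy/my-lakefs` name assumes the release name used in the install command below - adjust if your deployment is named differently):
```sh
# show which LAKEFS_* env vars are set in the running container
# (note: this prints secret values to your terminal)
kubectl exec deploy/my-lakefs -- env | grep '^LAKEFS_'
```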
```sh
# first apply the secrets
$ kubectl apply -f lfs-pgs-secrets.yaml

# install our helm chart
$ helm install -f lfs-pgs-values.yaml my-lakefs lakefs/lakefs
```
The above assumes you have a working Postgres that lakeFS can connect to. Alternatively, you can use Helm to install one as well, just to try it out.
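For completeness - if you haven't added the lakeFS chart repo yet, and if you want a quick throwaway Postgres via the Bitnami chart, something like this sketch should work (the release names and `mypassword` value are just examples to adapt):
```sh
# add the chart repos (one-time)
helm repo add lakefs https://charts.lakefs.io
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# optional: a throwaway postgres for testing
helm install my-postgres bitnami/postgresql \
  --set auth.username=lakefs \
  --set auth.password=mypassword \
  --set auth.database=lakefs

# then point LAKEFS_DATABASE_POSTGRES_CONNECTION_STRING at it, e.g.
# postgres://lakefs:mypassword@my-postgres-postgresql:5432/lakefs?sslmode=disable

# watch the lakeFS pod become ready
kubectl get pods -w
```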