API call is made to the RDS service and then the returned value (valid for 15 minutes) is used as a password. Peeking at
(and the corresponding pgx library) it seems like I might need to tweak that method to get the token…curious if others have run into this before or have thoughts. Context: I’m trying to deploy a full stack using AWS CDK and didn’t want to create an environment variable for my ECS task definition with a hard-coded password as it would show up in the CloudFormation template. 😅
so quite possibly an option. Continuing down the IAM road for now, though.
ParseConfig currently recognizes the following environment variable
environment variable and connection string together end up working for me. (With ECS I can populate an environment “secret” from Secrets Manager so I don’t have to expose the password)
• I wasn’t able to get IAM auth to work at all (kept getting an obtuse “PAM authentication failed” error). I’m assuming it’s something on my side, but am giving up for now. 😄
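For what it's worth, a frequent cause of that “PAM authentication failed” error is the IAM side rather than the client: the caller needs an `rds-db:connect` policy scoped to the right DB resource and user, and the Postgres user must have been granted the `rds_iam` role (`GRANT rds_iam TO your_user;`). A sketch of the policy shape, where region, account ID, DB resource ID, and username are all placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-west-2:ACCOUNT_ID:dbuser:DB_RESOURCE_ID/DB_USERNAME"
    }
  ]
}
```

Note the resource uses the DB instance's *resource ID* (e.g. `db-ABCDEFGHIJKL`), not its instance name, which is an easy mismatch to miss.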
time="2022-08-19T06:15:45Z" level=error msg="Failed to migrate" func="pkg/db.(*DatabaseMigrator).Migrate" file="build/pkg/db/migration.go:48" direction=up error="get migrate database driver: pq: empty password returned by client" host=lakefs-[REDACTED].us-west-2.elb.amazonaws.com log_audit=API method=POST path=/api/v1/setup_lakefs request_id=b91296d3-e0ea-4a6a-b418-80730e1c25cb service_name=rest_api
(notice no password) and then I set
in my secrets to the ARN of my database password secret in Secrets Manager. And lakefs/pgx magically merges the password into my connection string. While this allows me to not hard-code my password in my CloudFormation/CDK stack, it does not necessarily support rolling credentials…but I suppose one could figure out a way to restart the container or similar on a credential change. Or maybe existing connections would be fine even if the password changes. 🤔