Vaibhav Kumar (10/19/2022, 12:56 PM)

Ariel Shaqed (Scolnicov) (10/19/2022, 1:10 PM)
$ airflow connections add conn_lakefs --conn-type=HTTP --conn-host=http://<LAKEFS_ENDPOINT> --conn-extra='{"access_key_id":"<LAKEFS_ACCESS_KEY_ID>","secret_access_key":"<LAKEFS_SECRET_ACCESS_KEY>"}'
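The `airflow connections add` command above can also be expressed as an environment variable, a standard Airflow mechanism (`AIRFLOW_CONN_<CONN_ID>` in URI form). A sketch using the same placeholders; mapping query parameters into the connection's extras this way assumes Airflow 2.x behavior:

```shell
# Equivalent connection via environment variable (sketch, Airflow 2.x).
# The URI scheme is the connection type; query parameters become "extra" fields.
export AIRFLOW_CONN_CONN_LAKEFS='http://<LAKEFS_ENDPOINT>/?access_key_id=<LAKEFS_ACCESS_KEY_ID>&secret_access_key=<LAKEFS_SECRET_ACCESS_KEY>'
```

This is convenient in containerized deployments, since the connection travels with the container environment rather than the metadata database.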
Vaibhav Kumar (10/19/2022, 1:12 PM)

Iddo Avneri (10/19/2022, 1:14 PM)

Ariel Shaqed (Scolnicov) (10/19/2022, 1:28 PM)

Vaibhav Kumar (10/19/2022, 5:11 PM)

Ariel Shaqed (Scolnicov) (10/19/2022, 5:22 PM)

Vaibhav Kumar (10/19/2022, 6:07 PM)

Ariel Shaqed (Scolnicov) (10/19/2022, 6:16 PM)

Vaibhav Kumar (10/19/2022, 6:27 PM)

Ariel Shaqed (Scolnicov) (10/19/2022, 6:30 PM)

Vaibhav Kumar (10/19/2022, 6:57 PM)

Amit Kesarwani (10/19/2022, 9:49 PM)

Ariel Shaqed (Scolnicov) (10/20/2022, 7:35 AM)
(… root in l. 239, user: "0:0", so I will not run it on my machine.)
Instead I investigated. Fundamentally, your problem is that Docker isolates different docker-compose files on separate networks by default. @Amit Kesarwani explains one way to reach a container on another network, or you could make them run on the same network. I would probably merge the two docker-compose files (so all the services live under the same services: key at the top level). If you want to keep them separate, you can instead attach a network to your Airflow containers: lakeFS will be on lakefs_default. See this StackOverflow answer (as well as the other answers on that question) for how to do it.

Vaibhav Kumar (10/20/2022, 5:57 PM)
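The "keep them separate" option Ariel describes could look like the compose fragment below. The service name here is illustrative, and lakefs_default assumes Docker Compose's <project>_default naming with a lakeFS project directory called lakefs:

```yaml
# Airflow docker-compose.yml (sketch; service name is an assumption).
services:
  airflow-webserver:
    # ... existing Airflow configuration ...
    networks:
      - lakefs_default        # join the lakeFS project's network

networks:
  lakefs_default:
    external: true            # created by the lakeFS docker-compose, not here
```

With this in place, the Airflow containers can reach lakeFS by its compose service name instead of going through the host.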