Aayush Bhasin
10/10/2024, 9:27 PM
I'm using lakectl local, i.e. my local directory has a .lakefs_ref.yaml file in a directory that's linked to a corresponding lakeFS repo, branch, and commit. Sometimes I want to run lakectl local checkout <path>, but I do not want it to overwrite untracked files in that directory. Is this something that is supported / on the roadmap? It would be similar to how git does not remove untracked paths when doing a git pull. I was able to do this by using lakectl fs download <lakefs url> <path>, so I'm just wondering if it's possible to implement with lakectl local as well. Thanks in advance!
mpn mbn
10/12/2024, 11:54 AM
Here is my `docker-compose.yaml`:
services:
  postgres:
    container_name: pg-lakefs
    image: postgres:13
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: lakefs
      POSTGRES_USER: lakefs
      POSTGRES_PASSWORD: lakefs
    volumes:
      - pg_data:/var/lib/postgresql/data

volumes:
  pg_data:
Here is my ACL `config.yaml`:
listen_address: ":8001"
database:
  type: "postgres"
  postgres:
    connection_string: "postgres://lakefs:lakefs@localhost:5432/lakefs?sslmode=disable"
encrypt:
  secret_key: "secret"
Here is my `.lakefs.yaml`:
database:
  type: "postgres"
  postgres:
    connection_string: "postgres://lakefs:lakefs@localhost:5432/lakefs?sslmode=disable"
blockstore:
  type: local
  local:
    path: ~/Code/lakefs/data
auth:
  remote_authenticator:
    enabled: true
    endpoint: http://localhost:8001/api/v1/auth
    default_user_group: "Developers"
  ui_config:
    RBAC: simplified
  encrypt:
    secret_key: "secret"
The logs I get when I run `lakefs run`:
INFO [2024-10-12T14:51:19+03:00]pkg/auth/service.go:953 pkg/auth.(*APIAuthService).CheckHealth Performing health check, this can take up to 20s
FATAL [2024-10-12T14:51:37+03:00]cmd/lakefs/cmd/run.go:123 cmd/lakefs/cmd.NewAuthService Auth API health check failed error="Get \"/healthcheck\": unsupported protocol scheme \"\""
Vibhath
10/14/2024, 8:45 AM
Gard Drasil
10/15/2024, 8:03 AM
time="2024-10-15T07:59:08Z" level=warning msg="Could not access storage namespace" func="pkg/api.(*Controller).CreateRepository" file="build/pkg/api/controller.go:2007" error="operation error S3: GetObject, https response error StatusCode: 400, RequestID: 0, HostID: , api error InvalidArgument: S3 API Requests must be made to API port." reason=unknown service=api_gateway storage_namespace="s3://test"
time="2024-10-15T08:01:40Z" level=error msg="Failed to get region for bucket, falling back to default region" func="pkg/block/s3.(*ClientCache).refreshBucketRegion" file="build/pkg/block/s3/client_cache.go:151" default_region=us-east-1 error="operation error S3: HeadBucket, https response error StatusCode: 400, RequestID: , HostID: , api error BadRequest: Bad Request" host="localhost:47098" method=POST operation_id=CreateRepository path=/api/v1/repositories user=everything-bagel
time="2024-10-15T08:01:40Z" level=error msg="failed to get S3 object bucket test-repo key dummy" func="pkg/logging.(*logrusEntryWrapper).Errorf" file="build/pkg/logging/logger.go:339" error="operation error S3: GetObject, https response error StatusCode: 400, RequestID: 0, HostID: , api error InvalidArgument: S3 API Requests must be made to API port." host="localhost:47098" method=POST operation=GetObject operation_id=CreateRepository path=/api/v1/repositories user=everything-bagel
time="2024-10-15T08:01:40Z" level=warning msg="Could not access storage namespace" func="pkg/api.(*Controller).CreateRepository" file="build/pkg/api/controller.go:2007" error="operation error S3: GetObject, https response error StatusCode: 400, RequestID: 0, HostID: , api error InvalidArgument: S3 API Requests must be made to API port." reason=unknown service=api_gateway storage_namespace="<s3://test-repo>"
Vibhath
10/15/2024, 5:03 PM
taylor schneider
10/15/2024, 7:59 PM
Aaron Taylor
10/17/2024, 7:03 PM
Davi Gomes
10/18/2024, 3:35 PM
lakefs://data-platform-silver/main/customers/ is not reflected in s3://data-platform-silver/main/customers
Jérôme Viveret
10/21/2024, 12:32 PM
Matthew Butler
10/21/2024, 6:27 PM
My credentials (access_key_id and secret_access_key) are changing somehow without my knowledge. I'll set up LakeFS and successfully log in one day, then a few days or weeks later the creds no longer work. I'm deploying LakeFS on Kubernetes.
Akshar Barot
10/22/2024, 7:59 AM
Vibhath
10/22/2024, 1:43 PM
Benoit Putzeys
10/23/2024, 9:29 AM
I set up my config.yaml and ran the lakefs command.
However, I get an error:
WARNING[2024-10-23T09:07:01Z]lakeFS/pkg/kv/dynamodb/store.go:199 pkg/kv/dynamodb.setupKeyValueDatabase Failed to create or detect KV table error="operation error DynamoDB: CreateTable, https response error StatusCode: 0, RequestID: , request send failed, Post \"https://dynamodb..amazonaws.com/\": dial tcp: lookup dynamodb..amazonaws.com: no such host" table_name=kvstore
INFO [2024-10-23T09:07:01Z]lakeFS/pkg/kv/dynamodb/store.go:165 pkg/kv/dynamodb.setupKeyValueDatabase.func1 Setup time table_name=kvstore took=7.253785ms
FATAL [2024-10-23T09:07:01Z]lakeFS/cmd/lakefs/cmd/run.go:159 cmd/lakefs/cmd.init.func9 Failed to open KV store error="setup failed: operation error DynamoDB: CreateTable, https response error StatusCode: 0, RequestID: , request send failed, Post \"https://dynamodb..amazonaws.com/\": dial tcp: lookup dynamodb..amazonaws.com: no such host"
I wanted to ask if you can reproduce it and help me resolve this?
Thanks in advance!
Vibhath
10/23/2024, 7:39 PM
Benoit Putzeys
10/24/2024, 12:12 PM
mpn mbn
10/24/2024, 6:41 PM
Haoming Jiang
10/24/2024, 9:37 PM
mpn mbn
10/25/2024, 8:03 AM
Vincent Caldwell
10/26/2024, 5:13 AM
Rudy Cortembert
10/27/2024, 9:18 PM
Parth Ghinaiya
10/28/2024, 8:14 PM
Andrij David
10/29/2024, 8:33 PM
Andrij David
10/29/2024, 8:46 PM
Haoming Jiang
10/30/2024, 1:11 AM
I see how to read data via lakefs/get_object(repository_id, reference_id, path), but I don't see how to write data.
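(Not part of the original thread: a minimal sketch of writing an object with the high-level lakefs Python package; the repository, branch, and path names here are placeholders, so check them against your setup.)
```python
# Sketch only: write and commit an object with the high-level `lakefs` package
# (pip install lakefs). Repository, branch, and path names are placeholders;
# credentials are picked up from lakectl config or environment variables.
import lakefs

branch = lakefs.repository("example-repo").branch("main")

# Upload bytes (or a str) to a path on the branch, then commit the change.
branch.object("datasets/example.csv").upload(data=b"id,value\n1,42\n")
branch.commit(message="add example.csv")
```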
mpn mbn
10/31/2024, 12:05 PM
lakectl fs rm -r lakefs://repo/branch/A
lakectl fs upload -r lakefs://repo/branch/A -s A
My question is: how can I do this using the Python lakefs package?
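(Not from the thread: a rough sketch of the same rm -r / upload -r flow with the high-level lakefs Python package; the repo, branch, and local directory A come from the commands above, everything else is an assumption.)
```python
# Sketch only: emulate `lakectl fs rm -r` and `lakectl fs upload -r -s A`
# with the high-level `lakefs` package. Adjust names and paths before using.
from pathlib import Path
import lakefs

branch = lakefs.repository("repo").branch("branch")

# rm -r lakefs://repo/branch/A: delete every object under the prefix.
for info in branch.objects(prefix="A/"):
    branch.object(info.path).delete()

# upload -r -s A: walk the local directory and upload each file under A/.
local_root = Path("A")
for file in local_root.rglob("*"):
    if file.is_file():
        dest = f"A/{file.relative_to(local_root).as_posix()}"
        branch.object(dest).upload(data=file.read_bytes())

branch.commit(message="replace A/ with local contents")
```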
Ocean Chang
11/06/2024, 8:12 AM
auth:
  remote_authenticator:
    enabled: true
    endpoint: https://testendpoint.com
    default_user_group: "Developers"
  ui_config:
    logout_url: /logout
    login_cookie_names:
      - Authorization
Boris
11/07/2024, 1:01 PM
Ocean Chang
11/08/2024, 2:23 AM
I can log in using the v1/auth/login API call or the Client from the SDK. They are successful with 200. The login API call returns the token and token_expiration. However, when subsequently trying to call /api/v1/repositories, I'm getting a 401 "error authenticating request".
Question: Do I need to attach the login token being returned in order to make subsequent calls? If so, how?
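(Not part of the thread: one common pattern, sketched here with requests, is to send the JWT returned by /api/v1/auth/login as a Bearer token on later calls; the host and credentials below are placeholders, so please verify against the API docs for your lakeFS version.)
```python
# Sketch only: log in, then reuse the returned JWT on subsequent API calls.
import requests

base = "http://localhost:8000/api/v1"  # placeholder lakeFS endpoint

resp = requests.post(
    f"{base}/auth/login",
    json={"access_key_id": "AKIA...", "secret_access_key": "..."},  # placeholders
)
resp.raise_for_status()
token = resp.json()["token"]

# Attach the token as a Bearer header on the next request.
repos = requests.get(f"{base}/repositories",
                     headers={"Authorization": f"Bearer {token}"})
print(repos.status_code, repos.json())
```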
11/08/2024, 7:09 PM
Mike Fang
11/09/2024, 1:42 AM
time="2024-11-09T01:33:57Z" level=warning msg="Could not access storage namespace" func="pkg/api.(*Controller).CreateRepository" file="lakeFS/pkg/api/controller.go:2016" error="operation error S3: PutObject, https response error StatusCode: 400, RequestID: GV2RCD8F49KSN5K3, HostID: P2Te8QubRyKCczc2nt/cJ3YnGfIJFDD2vJRKYoKC7JuDkMkEgN6woYVtsfChFfRhkO2HvM10uYE=, api error InvalidRequest: Content-MD5 OR x-amz-checksum- HTTP header is required for Put Object requests with Object Lock parameters" reason=unknown service=api_gateway storage_namespace="s3://nile-data-catalog-storefangmik-406016533510-dev/test-lakefs/"
Is there something I am missing with setting up S3 with lakeFS? I believe the bucket permissions should be set up correctly.
Object lock is usually default for S3 buckets; does it need to be turned off now for lakeFS?
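(Not from the thread: a quick, hedged way to confirm whether Object Lock is actually enabled on the bucket named in the log above, sketched with boto3; it assumes AWS credentials with permission to read the bucket's Object Lock configuration.)
```python
# Sketch only: check the bucket's Object Lock configuration with boto3.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "nile-data-catalog-storefangmik-406016533510-dev"  # from the log above

try:
    cfg = s3.get_object_lock_configuration(Bucket=bucket)
    print(cfg["ObjectLockConfiguration"])  # e.g. {'ObjectLockEnabled': 'Enabled', ...}
except ClientError as err:
    # "ObjectLockConfigurationNotFoundError" means Object Lock is not enabled.
    print(err.response["Error"]["Code"])
```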