g
```go
blobURL := s.ContainerUrl.NewBlockBlobURL(key.String())

get, err := blobURL.Download(ctx, 0, 0,
	azblob.BlobAccessConditions{}, false)
if err != nil {
	return Object{}, err
}

contentMd5String := base64.StdEncoding.EncodeToString(get.ContentMD5())

res := Object{
	ObjectMetadata: ObjectMetadata{
		UserMetadata:       get.NewMetadata(),
		CacheControl:       aws.String(get.CacheControl()),
		ContentDisposition: aws.String(get.ContentDisposition()),
		ContentEncoding:    aws.String(get.ContentEncoding()),
		ContentLanguage:    aws.String(get.ContentLanguage()),
		ContentMD5:         &contentMd5String,
		ContentType:        aws.String(get.ContentType()),
		ContentLength:      aws.Int64(get.ContentLength()),
		ETag:               aws.String(string(get.ETag())),
		// Expires: not supported
		// Tagging: not supported
	},
	ReadCloser: get.Body(azblob.RetryReaderOptions{}),
	Key:        key,
}
```
b
Authentication is done at the lakeFS level during startup: each adapter gets its own set of key/secret, file, etc...
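That per-adapter wiring could be sketched like this; note that `AdapterConfig`, `buildAdapterConfig`, and the placeholder values are all hypothetical names for illustration, not the actual lakeFS code:

```go
package main

import "fmt"

// AdapterConfig is a hypothetical per-adapter credential set,
// built once at startup from the lakeFS configuration.
type AdapterConfig struct {
	Key    string // access ID; for Azure this would be the storage account name
	Secret string // secret / account key
}

// buildAdapterConfig selects the credential set for the configured
// blockstore type. Values here are placeholders, not real settings.
func buildAdapterConfig(blockstoreType string) (AdapterConfig, error) {
	switch blockstoreType {
	case "s3":
		return AdapterConfig{Key: "my-aws-access-key-id", Secret: "my-aws-secret"}, nil
	case "azure":
		return AdapterConfig{Key: "my-storage-account", Secret: "my-account-key"}, nil
	default:
		return AdapterConfig{}, fmt.Errorf("unknown blockstore type %q", blockstoreType)
	}
}

func main() {
	cfg, err := buildAdapterConfig("azure")
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Key)
}
```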
g
It is peculiar, since the access ID is the name of the container and not, as in AWS, a user value. However, for @miko it should be easy:
1. Create a folder azure in block
2. Add an adapter.go in the same folder
3. Change the constant BlockstoreType to azure
4. Reimplement the needed methods in adapter.go
a
Sure! Also, it looks like you have a design for this. Would you like to write up a few lines (wherever works for you, an issue or Google Drive or anywhere) and e-meet one or two of us? Not compulsory (of course), but it can sometimes help to agree on the broad technical details before coding the PR. Thanks (and good luck)!
g
I'm at work now; after work I'll write down some code
and then come to you
a
Sure, whatever works for you is best!