# help
k
The lakeFS instructions state that it doesn’t support storage classes? Is that JUST on the PutObject operation? I ran `aws s3 cp s3://lakefs-testing/main s3://lakefs-testing/main --recursive --storage-class STANDARD_IA` and it seems to have copied the data to a different storage class. What is actually not supported?
a
Hi Kevin, it's been a while since I last touched that one. Sorry to hear you're having problems with it. The lakeFS putObject API supports a storageClass query param; that should be a useful workaround. Another workaround would be to use the linkPhysicalAddress API: with that API you deal directly with S3 for the data upload, so naturally you can set whatever storage class you prefer. Are you using the lakeFS S3 gateway? If so and it does not support storage class, that's a bug - please open an issue. Similarly, if you used lakectl and that didn't work - it's a bug. THANKS!
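A minimal sketch of what that first workaround could look like. The URL shape loosely follows the lakeFS object-upload route, but the helper name, route layout, and exact parameter spelling here are assumptions for illustration, not verified against a running server:

```python
from urllib.parse import urlencode

def upload_object_url(base, repo, branch, path, storage_class=None):
    """Hypothetical helper: build a lakeFS object-upload URL.

    Per the thread, a storageClass query param on the upload request
    selects the storage tier; the route shape is an assumption.
    """
    params = {"path": path}
    if storage_class is not None:
        params["storageClass"] = storage_class
    return (f"{base}/api/v1/repositories/{repo}/branches/{branch}"
            f"/objects?{urlencode(params)}")

# Example: target STANDARD_IA on the repo/branch from this thread.
print(upload_object_url("http://mylakefs.example.com:8000",
                        "lakefs-testing", "main",
                        "testdata/file.bin", "STANDARD_IA"))
```

You would then POST the object body to that URL with your usual lakeFS credentials; omitting `storage_class` leaves the param off entirely, so the server default applies.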
k
I guess that might be the issue. What is considered the “lakeFS S3 gateway”? I have been syncing data like normal to S3, so everything is in the STANDARD storage class. We have a directory in a project that’s archival data, so we are moving it to a long-term storage tier. I ran the command `aws s3 --endpoint-url http://mylakefs.example.com:8000 cp s3://lakefs-testing-bucket/main/testdata s3://lakefs-testing-bucket/main/testdata --recursive --storage-class STANDARD_IA`
That seems to have copied the file, and in the raw AWS console I see the file in the STANDARD_IA storage class.
a
Okay, so I understand that worked? By "using the s3 gateway" we mean "talking to lakeFS using the s3 protocol". I guess I'm not sure what is not supported. Shame on me, this is the first thing I implemented on lakeFS when I joined Treeverse. But can you point me at the docs, please?
k
Yeah, I can see a file with a name like `ab71242dabcs7529abd3738`, and its storage class is “STANDARD_IA”
That’s the docs I see
j
@Ariel Shaqed (Scolnicov) I think it means that the storage-classes functionality in our S3-compatible endpoint (the S3 gateway) is not implemented by lakeFS for all cloud providers. E.g., if the backing store is on Azure, we don’t support storage classes for it (I don’t know if that’s even a thing on Azure). But if the request has headers with that info, it will be passed through to the cloud provider, as in the case above.
a
Ouch, I see. I'll find a way to fix it at the beginning of next week. Thanks!