• Yoni Augarten

    1 year ago
    TIL you can use the UI to create a branch from an arbitrary commit! (see gif in comment)
    Yoni Augarten
    1 reply
  • Ariel Shaqed (Scolnicov)

    1 year ago
    Just released io.lakefs:hadoop-lakefs-assembly:0.1.0-RC.0 (for including directly in Spark) and io.lakefs:hadoop-lakefs:0.1.0-RC.0 (for building further libraries on top) so we can kick the tyres. Without the RC tags this will be https://docs.lakefs.io/integrations/spark.html#access-lakefs-using-the-lakefs-specific-hadoop-filesystem later today 🙂 🎉
    Ariel Shaqed (Scolnicov)
    1 reply
  • Barak Amar

    1 year ago
    TIL: file clone instead of full copy (depends on your filesystem). macOS's APFS supports file cloning via
    cp -c
    . A clone is copy-on-write: unlike a symbolic link, where modifying one name affects the source, the cloned data stays shared until one of the files is updated, and only then are new blocks written for the changes. Example of how to use it, and how long a copy takes versus a clone (a Go version of the clone is sketched after this entry): create a large file
    $ dd bs=1024 count=10485760 </dev/urandom > data
    copy the file
    $ time cp data data-cp
    cp data data-cp  0.01s user 4.17s system 79% cpu 5.269 total
    clone the file
    $ time cp -c data data-clone
    cp -c data data-clone  0.00s user 0.00s system 70% cpu 0.004 total
    Barak Amar
    Ariel Shaqed (Scolnicov)
    2 replies
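    For comparison, here is a minimal Go sketch of the same clone on macOS. It assumes golang.org/x/sys/unix, whose darwin build exposes Clonefile; the file names come from the shell example above, and this is an illustration, not code from lakeFS:
    // clone_darwin.go: run on macOS on an APFS volume.
    package main

    import (
        "fmt"
        "log"
        "time"

        "golang.org/x/sys/unix"
    )

    func main() {
        start := time.Now()
        // Clonefile creates a copy-on-write clone of "data" named "data-clone".
        // It fails if the destination already exists or the volume lacks clone support.
        if err := unix.Clonefile("data", "data-clone", 0); err != nil {
            log.Fatalf("clone failed: %v", err)
        }
        fmt.Printf("cloned in %v\n", time.Since(start))
    }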
  • sumesh kanayi

    1 year ago
    Hi folks, I am back with another question. I am trying to implement multipart uploading using a custom Manta adapter (https://apidocs.joyent.com/manta/api.html). Manta creates an upload ID similar to this:
    e1565962-d5b6-43ff-83f6-63dfeab21a83
    and trying to upload a part by running something like this:
    aws --endpoint-url http://s3.local.lakefs.io:8000 s3api upload-part --bucket demo1 --key main/large_test_file --part-number 1 --body temp_10MB_file --upload-id e1565962-d5b6-43ff-83f6-63dfeab21a83 --content-md5 exampleaAmjr+4sRXUwf0w==
    ends up with this error:
    ERROR  [2021-06-25T22:15:50+05:30]pkg/gateway/operations/putobject.go:168 pkg/gateway/operations.handleUploadPart part 1 upload failed                          error="invalid upload id format: encoding/hex: invalid byte: U+002D '-'" host="s3.local.lakefs.io:8000" method=PUT part_number=1 path=large_test_file ref=main repository=demo1 request_id=af90d942-20ee-42d8-aed8-098191675364 service_name=s3_gateway upload_id=e1565962-d5b6-43ff-83f6-63dfeab21a83 user=admin
    Is this by design, or is there a way I can work around it? (One possible workaround is sketched after this entry.)
    sumesh kanayi
    Oz Katz
    2 replies
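    One possible workaround, as a hedged Go sketch: the gateway hex-decodes upload IDs, which is why the '-' bytes in Manta's UUID fail validation. If the custom adapter controls the upload-ID string it returns to lakeFS, it can hex-encode Manta's native ID when the upload is created and decode it back before each call to Manta. The function names here are illustrative, not part of the lakeFS adapter API:
    package main

    import (
        "encoding/hex"
        "fmt"
    )

    // wrapUploadID hex-encodes Manta's native upload ID so it passes the
    // gateway's hex validation.
    func wrapUploadID(mantaID string) string {
        return hex.EncodeToString([]byte(mantaID))
    }

    // unwrapUploadID recovers the native Manta ID before talking to Manta.
    func unwrapUploadID(lakeFSID string) (string, error) {
        b, err := hex.DecodeString(lakeFSID)
        if err != nil {
            return "", fmt.Errorf("decode upload id: %w", err)
        }
        return string(b), nil
    }

    func main() {
        native := "e1565962-d5b6-43ff-83f6-63dfeab21a83" // ID returned by Manta
        wrapped := wrapUploadID(native)
        fmt.Println("return to lakeFS:", wrapped)
        back, _ := unwrapUploadID(wrapped)
        fmt.Println("use against Manta:", back)
    }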
  • Ariel Shaqed (Scolnicov)

    1 year ago
    Another option for managing and displaying our backlog. Recommend taking a look. https://devclass.com/2021/06/26/github-issues-tables-vs-boards/
    Ariel Shaqed (Scolnicov)
    1 reply
  • sumesh kanayi

    1 year ago
    Hi, I am back with a few more questions. Can someone tell me what
    type WalkFunc func(id string) error
    really does? I am calling it under
    walk
    for each object, but was unsure when it really comes into play. I read the comments but got confused 😦 Another question, around the copy method
    Copy(ctx context.Context, sourceObj, destinationObj ObjectPointer) error
    : during an S3 CopyObject, does this really get triggered, or does the backend just do a
    Get(ctx context.Context, obj ObjectPointer, expectedSize int64) (io.ReadCloser, error)
    followed by a
    Put(ctx context.Context, obj ObjectPointer, sizeBytes int64, reader io.Reader, opts PutOpts) error
    ? (A sketch of the Get-then-Put fallback is after this entry.)
    sumesh kanayi
    Barak Amar
    5 replies
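    On the Copy question, here is a minimal sketch (not lakeFS's actual code) of how an adapter without native server-side copy could implement Copy as a streamed Get followed by Put, using the signatures quoted above. ObjectPointer, PutOpts and the adapter struct are illustrative stand-ins for the real types in pkg/block:
    package adapter

    import (
        "context"
        "io"
    )

    // Illustrative stand-ins for the real pkg/block types.
    type ObjectPointer struct {
        StorageNamespace string
        Identifier       string
    }
    type PutOpts struct{}

    // Getter and Putter mirror the Get and Put signatures quoted above.
    type Getter interface {
        Get(ctx context.Context, obj ObjectPointer, expectedSize int64) (io.ReadCloser, error)
    }
    type Putter interface {
        Put(ctx context.Context, obj ObjectPointer, sizeBytes int64, reader io.Reader, opts PutOpts) error
    }

    type adapter struct {
        Getter
        Putter
    }

    // Copy streams the source object into the destination: bytes flow through
    // the reader, so the object is never buffered wholesale in memory.
    func (a *adapter) Copy(ctx context.Context, sourceObj, destinationObj ObjectPointer) error {
        r, err := a.Get(ctx, sourceObj, -1) // -1: size not known up front
        if err != nil {
            return err
        }
        defer r.Close()
        return a.Put(ctx, destinationObj, -1, r, PutOpts{})
    }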
  • Ariel Shaqed (Scolnicov)

    1 year ago
    Happy for any input on https://github.com/treeverse/lakeFS/issues/1847#issuecomment-871944539! Particularly if you have other options or can rule out any of the options I put up there. (Given the original level of support for using RocksDB I find it disappointing that there has been so little involvement on this issue.)
    Ariel Shaqed (Scolnicov)
    1 reply
  • sumesh kanayi

    1 year ago
    Thanks, team. I want to personally thank @Oz Katz @Guy Hardonag @Barak Amar @Yoni Augarten @Itai Admi for helping me write a custom adapter (https://github.com/sumeshkanayi/lakeFS/tree/manta-prev1.0.0/pkg/block/manta) for use with https://www.joyent.com/triton/object-storage. With all your support I got this working and am currently running a POC on it (https://github.com/sumeshkanayi/jupyter-lakefs-manta/blob/main/docker-compose.yml).
    sumesh kanayi
    Ariel Shaqed (Scolnicov)
    2 replies
  • Ariel Shaqed (Scolnicov)

    1 year ago
    Created a new label "customer-request". Hope it sticks!
    Ariel Shaqed (Scolnicov)
    1 reply
  • Ariel Shaqed (Scolnicov)

    1 year ago
    Hi fellow devs, this is about making the lakeFS metadata client work on Databricks (https://github.com/treeverse/lakeFS/issues/1847), and will likely make less sense if you're unfamiliar with the bug. My work on loading a second copy of org.rocksdb.rocksdbjni has stalled. While there is a path forward, it is both more expensive than anticipated and so far seems very unlikely to give us the desired advantage of an easy upgrade path for the rocksdbjni package. So instead I've decided with @Oz Katz to change course and implement a (very) minimal RocksDB reader on the JVM in Scala, with no JNI needed. I'm looking for someone to join me in writing this; please let me know!
    Ariel Shaqed (Scolnicov)
    4 replies