Thread
#dev
    Ariel Shaqed (Scolnicov)

    5 days ago
    Hi devs, Something I've been thinking about. We see Spark making a very large number of "statObject" and "listObject" calls - and I would like to speed those up. (This is not an immediate suggestion, or even a suggestion, I'm just trying to wrap my head around things and understand the feasible solution space!) Now almost all of these calls seem very racy: delete a file, then check whether its "directory" is empty; create a file, then check whether it needs a directory marker - stuff like that. The point about these calls is that the calling code does not expect a consistent answer! Suppose we could identify these calls on the Spark side (because some calls really need consistency - it's just that others don't...). Then we could ask lakeFS for an inconsistent ("eventually" consistent) answer! Would it be possible? How much faster would it be, say on DynamoDB and on PostgreSQL?
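    To make the idea concrete, here is a minimal sketch of what "ask for an eventually consistent answer" could look like. The `Store` class, the `consistency` parameter, and the primary/replica split are all invented for illustration - this is not the lakeFS API, just the shape of the proposal:

```python
# Hypothetical sketch: a metadata store that lets each stat call declare
# whether it tolerates a stale answer. All names here are made up.

STRONG, EVENTUAL = "strong", "eventual"

class Store:
    """Toy metadata store: a strongly consistent map plus a lagging replica."""
    def __init__(self):
        self.primary = {}    # always up to date
        self.replica = {}    # refreshed out of band; may lag behind

    def put(self, key, meta):
        self.primary[key] = meta        # replica catches up later

    def stat(self, key, consistency=STRONG):
        # An eventually consistent read may hit the cheaper, possibly stale copy
        # (on DynamoDB this is roughly the strongly- vs eventually-consistent
        # read cost difference the thread mentions).
        src = self.replica if consistency == EVENTUAL else self.primary
        return src.get(key)

store = Store()
store.put("data/part-0001", {"size": 1024})
print(store.stat("data/part-0001"))            # strong read sees the write
print(store.stat("data/part-0001", EVENTUAL))  # relaxed read may miss it
```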
    Itai Admi

    5 days ago
    I may be missing some details on the Spark side of things - but if it's ok with a non-consistent answer, can it be resolved on the client side somehow? For example, cache the latest answer, or assume the deletion worked, or anything else... The reason I'm asking about a client-side solution is that many of the DB calls are merely repo/branch/commit calls, so saving just the last one (the get on the staging token for the key) won't save that much time.
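    The client-side caching idea for those repo/branch/commit lookups could be as simple as a TTL cache in front of the expensive call. A rough sketch, with all names (`TTLCache`, `fetch_branch`) invented for illustration:

```python
import time

# Hypothetical client-side cache for lookups whose answers may be slightly
# stale, e.g. getRepo/getBranch-style calls.

class TTLCache:
    def __init__(self, ttl_seconds, fetch):
        self.ttl = ttl_seconds
        self.fetch = fetch            # the real (expensive) lookup
        self.entries = {}             # key -> (value, expiry timestamp)

    def get(self, key):
        hit = self.entries.get(key)
        now = time.monotonic()
        if hit is not None and now < hit[1]:
            return hit[0]             # fresh enough: skip the round trip
        value = self.fetch(key)
        self.entries[key] = (value, now + self.ttl)
        return value

calls = []
def fetch_branch(name):
    calls.append(name)                # count how often we really hit the server
    return {"branch": name, "commit": "abc123"}

cache = TTLCache(ttl_seconds=30, fetch=fetch_branch)
cache.get("main")
cache.get("main")                     # second call served from cache
print(len(calls))                     # only one real fetch happened
```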
    Ariel Shaqed (Scolnicov)

    5 days ago
    Agree that getRepo and getBranch are easily cached. I'm thinking more of statObject - harder to cache because the space of keys can be so large. Could we make that more efficient when Spark can say "I'm okay with eventual consistency here"? I know it will be 2x cheaper on DynamoDB... but if it's so slow that we end up retrying, then I've gained nothing. But if it's fast & cheap, maybe we can afford more of these calls?
    Barak Amar

    5 days ago
    As I see it, at the logical level in Spark there are several operations that can accept eventual consistency - but they will require a point in time, a bookmark saying "I want to wait at this point until all my writes/deletes are in place". The problem is that for a single statObject request we don't know whether this is the point in time that matters.
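    The bookmark idea could be sketched with sequence numbers: each write returns a bookmark, and a relaxed read is only trustworthy once the store has applied everything up to that bookmark. Everything below (`Log`, `catch_up`, `stat_ok_at`) is invented to illustrate the concept:

```python
# Illustrative sketch of the "bookmark" idea: writes get monotonically
# increasing sequence numbers; a reader that tolerates eventual consistency
# asks "does the current state include everything up to my bookmark?".

class Log:
    def __init__(self):
        self.seq = 0          # last sequence number handed to a writer
        self.applied = 0      # everything <= applied is visible to readers

    def write(self):
        self.seq += 1
        return self.seq       # the caller's bookmark for this write

    def catch_up(self):
        self.applied = self.seq   # background apply; happens "eventually"

    def stat_ok_at(self, bookmark):
        # A relaxed stat is safe only once the store has applied
        # everything up to the caller's bookmark.
        return self.applied >= bookmark

log = Log()
b = log.write()
print(log.stat_ok_at(b))   # False: not caught up, must wait or read strongly
log.catch_up()
print(log.stat_ok_at(b))   # True: an eventual read now includes our write
```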
    Ariel Shaqed (Scolnicov)

    5 days ago
    But perhaps we can figure it out on the Spark side using some extra information (say, another hook from OutputCommitter, or wrapping another OutputCommitter, or writing our own). Let's say we can add such a bit to every stat call. Can lakeFS use that bit to speed up that call?
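    The "wrapping another OutputCommitter" idea might look something like this: a delegating committer flips a per-commit flag, and stat calls issued while the flag is set carry the relaxed-consistency bit. The committer interface and the `relaxed_stats` flag are simplified inventions for the sketch, not Hadoop's actual API:

```python
# Hypothetical delegating committer: housekeeping stats during task commit
# (directory markers, empty-directory checks) are marked as tolerating
# eventual consistency; job commit stays strongly consistent.

class Committer:
    """Stand-in for an OutputCommitter-like interface."""
    def commit_task(self):
        pass
    def commit_job(self):
        pass

class RelaxedStatCommitter(Committer):
    def __init__(self, inner):
        self.inner = inner
        self.relaxed_stats = False   # the "bit" attached to stat calls

    def commit_task(self):
        # Per-task housekeeping doesn't need a strongly consistent answer,
        # so set the relaxed bit for the duration of the task commit.
        self.relaxed_stats = True
        try:
            self.inner.commit_task()
        finally:
            self.relaxed_stats = False

    def commit_job(self):
        # Job commit is the real synchronization point: keep it strong.
        self.inner.commit_job()

c = RelaxedStatCommitter(Committer())
print(c.relaxed_stats)   # False: bit is only set while a task commit runs
c.commit_task()
print(c.relaxed_stats)   # False again after the task commit finishes
```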