Having read the previous posts, I still feel I need clarification on a particular use case: constantly changing data in an S3 bucket.
I have data in my S3 bucket, and I used lakeFS to import this data into my repository.
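For context, the import step looked roughly like this. The repo, branch, bucket, and prefix names are placeholders, and I'm assuming the high-level `lakefs` Python SDK here (I understand `lakectl import` does the same thing from the CLI):

```python
import lakefs

# Placeholders: repo "my-repo", branch "main", bucket "my-bucket"
branch = lakefs.repository("my-repo").branch("main")

# Import everything under the bucket prefix into the branch as a single commit
importer = branch.import_data(commit_message="initial import from S3")
importer.prefix("s3://my-bucket/raw/", destination="raw/").run()
```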
Working in PySpark, I performed some transformations on this data and committed the result back to my repository.
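Simplified, that step was along these lines. The endpoint, keys, and the actual transformation are stand-ins for my real setup; the `s3a://` paths follow the documented `s3a://<repo>/<ref>/<path>` pattern for the lakeFS S3 gateway:

```python
import lakefs
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Point s3a:// at the lakeFS S3 gateway instead of AWS S3
# (endpoint and credentials are placeholders for my deployment).
spark = (
    SparkSession.builder
    .config("spark.hadoop.fs.s3a.endpoint", "https://lakefs.example.com")
    .config("spark.hadoop.fs.s3a.access.key", "<lakefs-access-key>")
    .config("spark.hadoop.fs.s3a.secret.key", "<lakefs-secret-key>")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Read the imported data from the branch
df = spark.read.parquet("s3a://my-repo/main/raw/")

# Stand-in for my actual transformation
transformed = df.withColumn("ingested_at", F.current_timestamp())
transformed.write.mode("overwrite").parquet("s3a://my-repo/main/curated/")

# Commit the staged changes on the branch
lakefs.repository("my-repo").branch("main").commit(message="add curated dataset")
```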
Now the data uploaded to my S3 bucket has changed. How do I read this new data and compare it with the historical data already committed to my repository?
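Ideally I'd like something like the sketch below: re-import the new bucket state onto its own branch, then compare the two refs in Spark. The branch name and paths are made up, and I'm not sure this is the intended pattern:

```python
import lakefs
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # s3a already pointed at lakeFS as above

repo = lakefs.repository("my-repo")

# Land the new bucket state on its own branch so main's history stays intact
branch = repo.branch("reimport-latest").create(source_reference="main", exist_ok=True)
branch.import_data(commit_message="re-import changed bucket state") \
      .prefix("s3://my-bucket/raw/", destination="raw/") \
      .run()

# Row-level comparison of the two refs in Spark
old_df = spark.read.parquet("s3a://my-repo/main/raw/")
new_df = spark.read.parquet("s3a://my-repo/reimport-latest/raw/")
added_rows = new_df.exceptAll(old_df)
removed_rows = old_df.exceptAll(new_df)
added_rows.show()
removed_rows.show()
```

I gather `lakectl diff lakefs://my-repo/main lakefs://my-repo/reimport-latest` would show the object-level differences between the two refs; what I'm after above is the row-level view in Spark.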
Will lakeFS take note of this data change that took place in my S3 bucket, or do I have to re-import every time the bucket changes?