Juarez Rudsatz 12/22/2022, 1:01 PM
Jonathan Rosenberg 12/22/2022, 1:02 PM
Juarez Rudsatz 12/22/2022, 1:16 PM
Jonathan Rosenberg 12/22/2022, 2:14 PM
Then you can use your own Spark job to read the Parquet file and delete the objects at the addresses specified in it (they will be relative to your storage namespace, i.e. the location where you initialized your repo).
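As an illustrative sketch of such a job (not an official lakeFS utility): the report path, the storage-namespace value, and the `address` column name are assumptions for the example, so adjust them to match your repo and the actual report schema.

```python
# Hypothetical PySpark job: read the GC report Parquet file and delete
# the listed objects from S3. Assumes the report has an `address` column
# holding paths relative to the repository's storage namespace, and that
# boto3 credentials are configured in the environment.
import boto3
from pyspark.sql import SparkSession

STORAGE_NAMESPACE = "s3://my-bucket/my-repo-prefix"  # hypothetical; use your repo's namespace
REPORT_PATH = "s3://my-bucket/gc-report/"            # hypothetical location of the Parquet report

spark = SparkSession.builder.appName("gc-sweep").getOrCreate()

# Split the namespace into bucket and key prefix.
bucket, prefix = STORAGE_NAMESPACE.replace("s3://", "").split("/", 1)

# Collect the relative addresses to the driver (fine for a sketch;
# a very large report would call for a distributed delete instead).
addresses = [row.address for row in
             spark.read.parquet(REPORT_PATH).select("address").collect()]

s3 = boto3.client("s3")
keys = [f"{prefix}/{addr}" for addr in addresses]
# delete_objects accepts at most 1000 keys per call, so delete in batches.
for i in range(0, len(keys), 1000):
    batch = [{"Key": k} for k in keys[i:i + 1000]]
    s3.delete_objects(Bucket=bucket, Delete={"Objects": batch})
```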
> By how much (in percent) will it increase the size of the data lake?
I’m not too sure of that, as it depends on the amount of data you write and change…
Iddo Avneri 12/22/2022, 3:13 PM