Jay Kataria
02/26/2023, 2:59 AM
make build
cd webui && /usr/local/bin/npm run build
> lakefs-ui@0.1.0 build /Users/jay/lakeFS/lakeFS/webui
> cross-env NODE_OPTIONS=--max-old-space-size=4096 vite build
(node:65479) ExperimentalWarning: The ESM module loader is experimental.
file:///Users/jay/lakeFS/lakeFS/webui/node_modules/vite/bin/vite.js:7
await import('source-map-support').then((r) => r.default.install())
^^^^^
SyntaxError: Unexpected reserved word
at Loader.moduleStrategy (internal/modules/esm/translators.js:81:18)
at async link (internal/modules/esm/module_job.js:37:21)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! lakefs-ui@0.1.0 build: `cross-env NODE_OPTIONS=--max-old-space-size=4096 vite build`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the lakefs-ui@0.1.0 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/jay/.npm/_logs/2023-02-26T02_49_34_631Z-debug.log
make: *** [gen-ui] Error 1
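The SyntaxError on top-level await above typically indicates an outdated Node.js: top-level await (which vite.js uses) landed unflagged in Node 14.8, and the "ESM module loader is experimental" warning suggests an even older release. A minimal version check, as a sketch (the required version is an assumption based on the error, not stated in the thread):

```shell
required="14.8.0"
current="$(node --version 2>/dev/null | sed 's/^v//')"
# sort -V orders version strings; if the required version sorts first
# (or ties), the installed Node is new enough for top-level await.
if [ -n "$current" ] \
   && [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "Node $current is new enough"
else
  echo "Node ${current:-not found} is too old; install >= $required (e.g. nvm install 16)"
fi
```

If the check fails, upgrading Node (for example via nvm) and re-running make build should get past this error.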
Any suggestions on this as well? Also, I am using a Mac for this.
Thanks a lot, looking forward to working on the issue.
Barak Amar
Ariel Shaqed (Scolnicov)
03/05/2023, 8:04 AM
Yoni Augarten
03/05/2023, 11:13 AM
Adi Polak
Idan Novogroder
03/09/2023, 9:49 AM
Ariel Shaqed (Scolnicov)
03/10/2023, 10:42 AM
checks-validator failing on CI but not locally. This PR, branch feature/5371-rbac-to-acl-upgrade. Strange behaviour: since yesterday I have regular CI failures in "Run Linters and Checkers", part of the "Test" action. make checks-validator fails on GH (example) but succeeds locally ("it works on my machine"). Each time, GH complains that some file is not updated. Some debugging runs yesterday showed that the recent saml_auth change to swagger.yml was missing.
Things I've tried:
• Compare the digest of my branch to origin. Equal.
• Re-run make gen, re-run make checks-validator. Still happens on GH.
• Re-pull the openapi-generator image on my machine (in case a new image was re-tagged). No changes.
Yesterday I ignored it and eventually it "went away" after a rebase. I'm not sure hoping that happens again is a strategy.
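The digest comparison and regeneration steps above could be scripted roughly like this (a sketch; the branch name comes from the message, and the remote name `origin` is an assumption):

```shell
# Fetch and compare the local branch digest with the remote's
git fetch origin
local_sha=$(git rev-parse feature/5371-rbac-to-acl-upgrade)
remote_sha=$(git rev-parse origin/feature/5371-rbac-to-acl-upgrade)
if [ "$local_sha" = "$remote_sha" ]; then
  echo "digests equal"
else
  echo "digests differ"
fi

# Regenerate code and re-run the validator locally
make gen
make checks-validator
```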
Any clues?
Vaibhav Kumar
03/10/2023, 12:35 PM
Robin Moffatt
03/10/2023, 5:34 PM
Ariel Shaqed (Scolnicov)
03/12/2023, 1:47 PM
Robin Moffatt
03/13/2023, 5:45 AM
Reference section: I was wondering if we ought to split out a Security section? Unless anyone objects, I'll set up a PR.
Robin Moffatt
03/13/2023, 11:19 AM
Vaibhav Kumar
03/14/2023, 6:06 PM
Robin Moffatt
03/14/2023, 6:37 PM
https://docs.lakefs.io/setup/, which seems to be a stale page based on the current nav that you see at the docs.lakeFS.io root.
a) Should I add a redirect_from to deploy/index.md to catch this and subpages on /setup?
b) Should we be deleting stale pages from our docs site so that users get a 404 (and Google un-indexes them), rather than this situation where nothing's "broken" but users see stale pages and a confusing nav that changes depending on which page you're on? (Am I right to think it's hosted on GitHub Pages, with https://github.com/treeverse/docs-lakeFS/tree/main/setup as the relevant page in this case?)
GitHub
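Option (a) above might look like the following, assuming the docs site uses the jekyll-redirect-from plugin (the title line is illustrative; the page's real front matter would be kept), added to deploy/index.md:

```yaml
---
title: Deploy lakeFS   # hypothetical; keep the page's existing front matter
redirect_from:
  - /setup/
---
```

With jekyll-redirect-from, each listed path emits a stub page that redirects to this one, so the stale URL resolves instead of serving old content.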
03/14/2023, 8:46 PM
Robin Moffatt
03/15/2023, 11:00 AM
Robin Moffatt
03/15/2023, 11:53 AM
commands.html already exists and is being served stale: https://github.com/treeverse/docs-lakeFS/blob/main/reference/cli.html / https://github.com/treeverse/docs-lakeFS/blob/main/reference/commands.html
Probably time I bump up the priority of https://github.com/treeverse/lakeFS/issues/5484.
Robin Moffatt
03/15/2023, 7:09 PM
$ lakectl fs rm lakefs://drones02/main/drone-registrations/Registations-RecFlyer-Active-2015.parquet
cannot write to protected branch
I get that a deletion is not a read operation, so write kinda makes sense, but it also kinda doesn't. I wonder if we should adopt something more akin to what Linux says:
$ rm /tmp/foo
rm: remove write-protected regular empty file '/tmp/foo'? y
rm: cannot remove '/tmp/foo': Operation not permitted
So then the lakeFS error would be something like: operation not permitted: branch is protected.
WDYT?
Vaibhav Kumar
03/16/2023, 5:07 PM
Vaibhav Kumar
03/17/2023, 7:29 PM
from pyspark.sql import SparkSession
spark = SparkSession\
    .builder\
    .appName("PythonPageRank")\
    .getOrCreate()
df = spark.read.csv("lakefs://test-repo/main/sample.csv")
df.show()
Spark-submit
spark-submit --conf spark.hadoop.fs.lakefs.impl=io.lakefs.LakeFSFileSystem --conf spark.hadoop.fs.lakefs.access.key='my_key' --conf spark.hadoop.fs.lakefs.secret.key='my_sec_key' --conf spark.hadoop.fs.lakefs.endpoint='http:localhost:8000' --packages io.lakefs:hadoop-lakefs-assembly:0.1.12 main.py
error
Caused by: io.lakefs.hadoop.shade.api.ApiException: Content type "text/html; charset=utf-8" is not supported for type: class io.lakefs.hadoop.shade.api.model.ObjectStats
at io.lakefs.hadoop.shade.api.ApiClient.deserialize(ApiClient.java:822)
at io.lakefs.hadoop.shade.api.ApiClient.handleResponse(ApiClient.java:1020)
at io.lakefs.hadoop.shade.api.ApiClient.execute(ApiClient.java:944)
at io.lakefs.hadoop.shade.api.ObjectsApi.statObjectWithHttpInfo(ObjectsApi.java:1478)
at io.lakefs.hadoop.shade.api.ObjectsApi.statObject(ObjectsApi.java:1451)
at io.lakefs.LakeFSFileSystem.getFileStatus(LakeFSFileSystem.java:775)
... 19 more
Traceback (most recent call last):
File "/Users/vaibhav/Desktop/project/sparksample/main.py", line 6, in <module>
df = spark.read.json("lakefs://test-repo/main/sample.json")
File "/usr/local/Cellar/apache-spark/3.3.2/libexec/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 284, in json
File "/usr/local/Cellar/apache-spark/3.3.2/libexec/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1321, in __call__
File "/usr/local/Cellar/apache-spark/3.3.2/libexec/python/lib/pyspark.zip/pyspark/sql/utils.py", line 190, in deco
File "/usr/local/Cellar/apache-spark/3.3.2/libexec/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o31.json.
: java.io.IOException: exists
at io.lakefs.LakeFSFileSystem.exists(LakeFSFileSystem.java:906)
at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$4(DataSource.scala:784)
at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$4$adapted(DataSource.scala:782)
at org.apache.spark.util.ThreadUtils$.$anonfun$parmap$2(ThreadUtils.scala:372)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1311)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1841)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1806)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
Caused by: io.lakefs.hadoop.shade.api.ApiException: Content type "text/html; charset=utf-8" is not supported for type: class io.lakefs.hadoop.shade.api.model.ObjectStatsList
at io.lakefs.hadoop.shade.api.ApiClient.deserialize(ApiClient.java:822)
at io.lakefs.hadoop.shade.api.ApiClient.handleResponse(ApiClient.java:1020)
at io.lakefs.hadoop.shade.api.ApiClient.execute(ApiClient.java:944)
at io.lakefs.hadoop.shade.api.ObjectsApi.listObjectsWithHttpInfo(ObjectsApi.java:1149)
at io.lakefs.hadoop.shade.api.ObjectsApi.listObjects(ObjectsApi.java:1120)
at io.lakefs.LakeFSFileSystem.exists(LakeFSFileSystem.java:873)
... 16 more
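The "Content type text/html is not supported" failures above usually mean the client reached the lakeFS web UI rather than the API. One likely culprit here (an assumption; not confirmed in the thread) is the endpoint value: 'http:localhost:8000' is missing the // after the scheme, and the lakeFS Hadoop filesystem expects the API base path /api/v1. A corrected invocation might look like:

```shell
# Sketch: same command as above, with the endpoint fixed to include
# the scheme separator and the /api/v1 API base path.
spark-submit \
  --conf spark.hadoop.fs.lakefs.impl=io.lakefs.LakeFSFileSystem \
  --conf spark.hadoop.fs.lakefs.access.key='my_key' \
  --conf spark.hadoop.fs.lakefs.secret.key='my_sec_key' \
  --conf spark.hadoop.fs.lakefs.endpoint='http://localhost:8000/api/v1' \
  --packages io.lakefs:hadoop-lakefs-assembly:0.1.12 \
  main.py
```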
Itai Admi
03/20/2023, 8:14 AM
Jonathan Rosenberg
03/21/2023, 4:07 PM
Robin Moffatt
03/22/2023, 11:09 AM
The license/cla check hasn't run. Is there a way to manually kick it off?
Robin Moffatt
03/23/2023, 4:12 PM
Ariel Shaqed (Scolnicov)
03/24/2023, 1:12 PM
"We have no reason to believe that the exposed key was abused and took this action out of an abundance of caution."
It also includes instructions on how to tell your SSH client to drop the compromised old public key so you can verify and start trusting the new key. AFAIK your SSH client should not normally be using this key; you are most likely using an EC* key instead.
Niro
03/28/2023, 12:30 PM
Ariel Shaqed (Scolnicov)
04/03/2023, 7:57 AM
.../_lakefs/logs/gc/expired_objects.
THANKS!
Ariel Shaqed (Scolnicov)
04/03/2023, 10:52 AM
"With great scale comes big data." And we're getting there. I think it's time for pack files like Git uses, so I made up this issue to design how we can do them. Packs serve as snapshots in big data (or cubes, if anyone still remembers OLAP): in this case, I would like us to hold many commits together in a single "file". The issue has a sketch, and I think we need to think about this direction. I believe it can help us with:
• GC on large repositories with many commits
• lakectl blame and other log-by-path operations
• Even FindMergeBase when the base is distant.
It won't be a short sprint, but I'd like us to think about this as a possible next step for lakeFS scalability.
Robin Moffatt
04/05/2023, 6:32 AM
Does anyone know what https://github.com/treeverse/lakeFS/blob/master/docker-compose.yaml is for these days? It looks like it was added a while ago.
I'd like to [re]move it so that new users aren't confused between it and the quickstart Docker Compose, but wanted to check first before doing so.
Guy Hardonag
04/05/2023, 6:42 AM