# help
a
Hello, I am trying to access files in a lakeFS repository or MinIO bucket. I have a CSV file in the repo; the repo name is testrepo and the branch name is 'main', connected to the S3 bucket 'test' in MinIO. All of these are part of the bagel multi-container setup. I am following the Spark integration document https://docs.lakefs.io/integrations/spark.html#two-tiered-spark-support

Here is the code I am trying in a Jupyter notebook from the bagel multi-container (Python 3):

```python
import findspark  # to locate pyspark and make it importable
findspark.init()
import pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").appName("CsvReader").getOrCreate()
spark.conf.set("fs.s3a.access.key", "AKIAIOSFODNN7EXAMPLE")
spark.conf.set("fs.s3a.secret.key", "mykey")
spark.conf.set("fs.s3a.endpoint", "http://lakefs:8000")
spark.conf.set("fs.s3a.path.style.access", "true")

df = spark.sql("select 'spark' as hello ")  # this statement works
path = "s3a://testrepo/main/Sample-Spreadsheet-10-rows.csv"
df2 = spark.read.csv(path)  # this is throwing the error below

df2.show()
```

Below is the complete error log:

```
---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
/tmp/ipykernel_32/1433648092.py in <module>
     14 df = spark.sql("select 'spark' as hello ")
     15 path = "s3a://testrepo/main/Sample-Spreadsheet-10-rows.csv"
---> 16 df2 = spark.read.csv(path)
     17 
     18 df2.show()

/usr/local/spark/python/pyspark/sql/readwriter.py in csv(self, path, schema, sep, encoding, quote, escape, comment, header, inferSchema, ignoreLeadingWhiteSpace, ignoreTrailingWhiteSpace, nullValue, nanValue, positiveInf, negativeInf, dateFormat, timestampFormat, maxColumns, maxCharsPerColumn, maxMalformedLogPerPartition, mode, columnNameOfCorruptRecord, multiLine, charToEscapeQuoteEscaping, samplingRatio, enforceSchema, emptyValue, locale, lineSep, pathGlobFilter, recursiveFileLookup, modifiedBefore, modifiedAfter, unescapedQuoteHandling)
    408             path = [path]
    409         if type(path) == list:
--> 410             return self._df(self._jreader.csv(self._spark._sc._jvm.PythonUtils.toSeq(path)))
    411         elif isinstance(path, RDD):
    412             def func(iterator):

/usr/local/spark/python/lib/py4j-0.10.9.2-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1307 
   1308         answer = self.gateway_client.send_command(command)
-> 1309         return_value = get_return_value(
   1310             answer, self.gateway_client, self.target_id, self.name)
   1311 

/usr/local/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
    109     def deco(*a, **kw):
    110         try:
--> 111             return f(*a, **kw)
    112         except py4j.protocol.Py4JJavaError as e:
    113             converted = convert_exception(e.java_exception)

/usr/local/spark/python/lib/py4j-0.10.9.2-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o325.csv.
: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2667)
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3431)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3466)
	at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
	at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$1(DataSource.scala:747)
	at scala.collection.immutable.List.map(List.scala:293)
	at org.apache.spark.sql.execution.datasources.DataSource$.checkAndGlobPathIfNecessary(DataSource.scala:745)
	at org.apache.spark.sql.execution.datasources.DataSource.checkAndGlobPathIfNecessary(DataSource.scala:577)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:408)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:274)
	at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:245)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:245)
	at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:571)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2571)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2665)
	... 29 more
```
a
Hi Ashwath!
a
I updated my post 🙂. Could you help me here?
👍🏼 1
a
Getting Spark to start running often frustrates me, so apologies if I ask many silly questions -- we will get there. :-) I suspect this line:
```
java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
```
It seems that Spark fails to load the S3A filesystem. Can you please share how you are running pyspark?
a
I opened a CLI for the notebook container within bagel, then ran `pip install findspark` to install Spark, and then started running the code above.
a
I think pyspark by default might not load the hadoop-aws libraries, and may require some additional CLI magic. But I am not an expert, so let's do these 2 things (I've already started on the first!):
1. I am running a compose and will load pyspark inside it to access S3A.
2. You might try to add `--jars /path/to/wherever/hadoop-aws.jar` (it should be inside the container) to your pyspark command line -- see the sketch below. See e.g. this blog post for why running PySpark on AWS can be tricky.
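If you are starting Spark from inside the notebook rather than via the pyspark CLI, the equivalent of `--jars` can be set on the session builder. Here is a minimal sketch, not verified on this container -- the package coordinates and the version (3.2.0) are assumptions and must match the Hadoop version bundled with your Spark:

```python
# Illustrative sketch only: fetch hadoop-aws at session startup instead of
# passing --jars on the command line. The version (3.2.0) is an assumption;
# it must match the Hadoop version your Spark build ships with.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local")
    .appName("CsvReader")
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.2.0")
    .getOrCreate()
)
```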
a
Getting an error that the jar is not found:
```
$ --jars /path/to/wherever/hadoop-aws.jar
/bin/sh: 1: --jars: not found
```
@Ariel Shaqed (Scolnicov) any other option, or am I doing something wrong?
a
Indeed. You will need hadoop-aws.jar on the container in use -- and I cannot find it there. I'm opening an issue, because I am quite confused about what is going on.
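If you want to check for yourself from the notebook, here is a quick sketch -- the directory is an assumption (/usr/local/spark matches the paths in the traceback above, but images vary):

```python
# Quick check for hadoop/aws jars on the Spark classpath.
# The jar directory path is an assumption based on the traceback above.
import glob

print(glob.glob("/usr/local/spark/jars/*hadoop-aws*"))
print(glob.glob("/usr/local/spark/jars/*aws-java-sdk*"))
```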
a
okie
a
Opened https://github.com/treeverse/lakefs/issues/2947 (I'd be very happy if you could go over it and comment, and possibly subscribe to updates too).
a
okie, good
j
Hi @Ashwath , the issue has been resolved. You're welcome to pull from master 🙂
👍🏼 1
a
Thanks for the report, @Ashwath! And thanks for the rapid fix, @Jonathan Rosenberg!
a
@Ariel Shaqed (Scolnicov) Thanks for working on the fix and for the update. I pulled the changes and rebuilt the Docker image without cache, but I still see the same error. Am I missing something in my code, or some installation step?
a
I am sorry about that. If you pulled, you should see commit `566efee` in your history. Please give me a few minutes to try it out.
a
The commit id is 566efee779ee7e41e9889ae7ed89e670e73b839f.
a
I can confirm that it does not appear to work for me, either.
a
Ok, good that you were able to reproduce the issue.
@Ariel Shaqed (Scolnicov) I added the jar manually and it worked.
a
Thanks! https://github.com/treeverse/lakeFS/pull/2951 does pretty much the same thing. We pulled it, so `master` should be good now.
a
@Ariel Shaqed (Scolnicov) the jar files pulled from master did not work for me. Also, when I rebuilt from scratch (deleted the complete folder and started with a new one), it did not bring in those jar files. Before the rebuild I saw the jars aws-java-sdk-bundle and hadoop-aws, but I have used 3 different jar files. I am not sure.
a
Hi Ashwath, really sorry you are having all these difficulties. The Jupyter notebook in the sample docker-compose is a highly anticipated feature, and we should make sure it works. I'm taking another look at it.
I am no longer able to reproduce your issue on HEAD. I did have to change the configuration variables (Hadoop configuration keys in Spark need to start with `spark.hadoop.`), but once I did that and set the example password, everything works. Please see this comment I added on the issue, with code that now works.
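For reference, a sketch of what the corrected notebook code might look like -- the endpoint and credentials below are the example values from earlier in this thread, not the actual ones from the issue comment:

```python
# Sketch based on the fix described above: Hadoop settings passed through
# Spark configuration must carry the "spark.hadoop." prefix. Endpoint and
# credentials are the example values from earlier in this thread.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local")
    .appName("CsvReader")
    .config("spark.hadoop.fs.s3a.access.key", "AKIAIOSFODNN7EXAMPLE")
    .config("spark.hadoop.fs.s3a.secret.key", "mykey")
    .config("spark.hadoop.fs.s3a.endpoint", "http://lakefs:8000")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

df2 = spark.read.csv("s3a://testrepo/main/Sample-Spreadsheet-10-rows.csv")
df2.show()
```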
a
@Ariel Shaqed (Scolnicov) It works because I have added the 3 jar files. I will check with a new instance and update you.
a
Thanks!
a
@Ariel Shaqed (Scolnicov) I did it from scratch and it is working perfectly. Maybe it would have worked even without rebuilding. I was looking for the jar files to show up on the notebook Home page; they were not there, so I thought my instance was not in sync with git.
@Ariel Shaqed (Scolnicov) Thank you for the quick fix and support.
a
Great to hear this! Thanks for reporting the original bug, and thanks @Barak Amar and @Jonathan Rosenberg for your work fixing it!
👍 1