spark_read_csv in sparklyr throws IllegalArgumentException

Asked: 2019-02-14 00:13:03

Tags: r sparklyr

I am new to sparklyr. I am trying to read a CSV file in R with spark_read_csv from the sparklyr API, but I am finding the error message hard to understand. Please advise; any help is much appreciated.

I am running Spark 2.3.2 on Ubuntu with OpenJDK 9.

Code:

library(sparklyr)

options(sparklyr.java9 = TRUE)
sc <- spark_connect(master = "local")
tweets_df <- spark_read_csv(sc = sc, name = "df", path = "all_sorted.csv",
                            header = TRUE, delimiter = ",")
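(Aside, not from the original post: sparklyr launches whatever JVM it resolves via JAVA_HOME, so it is worth confirming which Java the driver actually runs on before reading the trace. Both calls below are standard base R / sparklyr API:)

Sys.getenv("JAVA_HOME")  # Java seen on the R side

# JVM version the Spark driver is actually running on
invoke_static(sc, "java.lang.System", "getProperty", "java.version")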

Error:

Error: java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
Traceback:
1. spark_read_csv(sc = sc, name = "df", path = spark_normalize_path("all_sorted.csv"), header = TRUE, delimiter = ",")
2. spark_remove_table_if_exists(sc, name)
3. name %in% src_tbls(sc)
4. src_tbls(sc)
5. src_tbls.spark_connection(sc)
6. sdf_read_column(tbls, "tableName")
7. sc %>% invoke_static("sparklyr.Utils", "collectColumn", sdf, column, colType, separator$regexp) %>% sdf_deserialize_column(sc)
8. withVisible(eval(quote(`_fseq`(`_lhs`)), env, env))
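Note for readers (not part of the original question): the traceback shows the exception fires inside spark_remove_table_if_exists()/src_tbls(), i.e. while Spark's ClosureCleaner is inspecting closure bytecode, before the CSV file is even opened. An IllegalArgumentException thrown by the org.apache.xbean.asm5.ClassReader constructor is the classic symptom of running Spark 2.x on Java 9 or newer, whose class-file format the bundled ASM 5 parser rejects; Spark 2.3.x targets Java 8, and as this question demonstrates, sparklyr.java9 = TRUE does not cover this code path. A minimal sketch of the usual workaround, assuming OpenJDK 8 is installed at /usr/lib/jvm/java-8-openjdk-amd64 (an assumed path; adjust for your machine):

library(sparklyr)

# Point sparklyr at a Java 8 runtime before connecting.
# The path below is an assumption -- check with: ls /usr/lib/jvm
Sys.setenv(JAVA_HOME = "/usr/lib/jvm/java-8-openjdk-amd64")

sc <- spark_connect(master = "local")
tweets_df <- spark_read_csv(sc = sc, name = "df", path = "all_sorted.csv",
                            header = TRUE, delimiter = ",")

On a Java 8 JVM the ASM reader can parse the closure classes, so the same spark_read_csv() call should proceed normally.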

0 Answers:
