Error reading a .csv file in Spark with sqlContext.read

Date: 2016-04-07 20:04:30

Tags: apache-spark

I am trying to read a csv file into a dataframe in Spark, as follows:

  1. I launch spark-shell like this:

    spark-shell --jars .\spark-csv_2.11-1.4.0.jar;.\commons-csv-1.2.jar

    (I cannot download those dependencies directly, which is why I use --jars.)

  2. I read the csv file with the following command:

    val df_1 = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load("2008.csv")

    However, this is the error message I receive:

    scala> val df_1 = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load("2008.csv")
    java.lang.NoClassDefFoundError: org/apache/commons/csv/CSVFormat
            at com.databricks.spark.csv.package$.<init>(package.scala:27)
            at com.databricks.spark.csv.package$.<clinit>(package.scala)
            at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:235)
            at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:73)
            at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:162)
            at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:44)
            at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
            at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
            at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
            at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
            at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
            at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
            at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
            at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
            at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
            at $iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
            at $iwC$$iwC$$iwC.<init>(<console>:47)
            at $iwC$$iwC.<init>(<console>:49)
            at $iwC.<init>(<console>:51)
            at <init>(<console>:53)
            at .<init>(<console>:57)
            at .<clinit>(<console>)
            at .<init>(<console>:7)
            at .<clinit>(<console>)
            at $print(<console>)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:497)
            at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
            at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
            at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
            at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
            at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
            at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
            at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
            at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
            at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
            at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
            at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
            at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop
    .scala:997)
            at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:
    945)
            at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:
    945)
            at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
            at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
            at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
            at org.apache.spark.repl.Main$.main(Main.scala:31)
            at org.apache.spark.repl.Main.main(Main.scala)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:497)
            at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
            at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
            at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
            at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
            at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.lang.ClassNotFoundException: org.apache.commons.csv.CSVFormat
            at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
            ... 57 more
    

    And this is what I get after trying the first proposed solution:

    PS C:\Users\319413696\Desktop\graphX> spark-shell --packages com.databricks:spark-csv_2.11:1.4.0
    Ivy Default Cache set to: C:\Users\319413696\.ivy2\cache
    The jars for the packages stored in: C:\Users\319413696\.ivy2\jars
    :: loading settings :: url = jar:file:/C:/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar!/org/apache
    /ivy/core/settings/ivysettings.xml
    com.databricks#spark-csv_2.11 added as a dependency
    :: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
            confs: [default]
            found com.databricks#spark-csv_2.11;1.4.0 in local-m2-cache
            found org.apache.commons#commons-csv;1.1 in local-m2-cache
            found com.univocity#univocity-parsers;1.5.1 in local-m2-cache
    downloading file:/C:/Users/319413696/.m2/repository/com/databricks/spark-csv_2.11/1.4.0/spark-csv_2.11-1.4.0.jar ...
            [SUCCESSFUL ] com.databricks#spark-csv_2.11;1.4.0!spark-csv_2.11.jar (0ms)
    downloading file:/C:/Users/319413696/.m2/repository/org/apache/commons/commons-csv/1.1/commons-csv-1.1.jar ...
            [SUCCESSFUL ] org.apache.commons#commons-csv;1.1!commons-csv.jar (0ms)
    downloading file:/C:/Users/319413696/.m2/repository/com/univocity/univocity-parsers/1.5.1/univocity-parsers-1.5.1.jar ..
    .
            [SUCCESSFUL ] com.univocity#univocity-parsers;1.5.1!univocity-parsers.jar (15ms)
    :: resolution report :: resolve 671ms :: artifacts dl 31ms
            :: modules in use:
            com.databricks#spark-csv_2.11;1.4.0 from local-m2-cache in [default]
            com.univocity#univocity-parsers;1.5.1 from local-m2-cache in [default]
            org.apache.commons#commons-csv;1.1 from local-m2-cache in [default]
            ---------------------------------------------------------------------
            |                  |            modules            ||   artifacts   |
            |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
            ---------------------------------------------------------------------
            |      default     |   3   |   3   |   3   |   0   ||   3   |   3   |
            ---------------------------------------------------------------------
    
    :: problems summary ::
    :::: ERRORS
            Server access error at url https://repo1.maven.org/maven2/com/databricks/spark-csv_2.11/1.4.0/spark-csv_2.11-1.4
    .0-sources.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/com/databricks/spark-csv_2.11/1.4.0/spark-
    csv_2.11-1.4.0-sources.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url https://repo1.maven.org/maven2/com/databricks/spark-csv_2.11/1.4.0/spark-csv_2.11-1.4
    .0-src.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/com/databricks/spark-csv_2.11/1.4.0/spark-
    csv_2.11-1.4.0-src.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url https://repo1.maven.org/maven2/com/databricks/spark-csv_2.11/1.4.0/spark-csv_2.11-1.4
    .0-javadoc.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/com/databricks/spark-csv_2.11/1.4.0/spark-
    csv_2.11-1.4.0-javadoc.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url https://repo1.maven.org/maven2/org/apache/apache/15/apache-15.jar (java.net.SocketExc
    eption: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/org/apache/apache/15/apache-15.jar (java.n
    et.SocketException: Permission denied: connect)
    
            Server access error at url https://repo1.maven.org/maven2/org/apache/commons/commons-parent/35/commons-parent-35
    .jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/org/apache/commons/commons-parent/35/commo
    ns-parent-35.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url https://repo1.maven.org/maven2/org/apache/commons/commons-csv/1.1/commons-csv-1.1-sou
    rces.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/org/apache/commons/commons-csv/1.1/commons
    -csv-1.1-sources.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url https://repo1.maven.org/maven2/org/apache/commons/commons-csv/1.1/commons-csv-1.1-src
    .jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/org/apache/commons/commons-csv/1.1/commons
    -csv-1.1-src.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url https://repo1.maven.org/maven2/org/apache/commons/commons-csv/1.1/commons-csv-1.1-jav
    adoc.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/org/apache/commons/commons-csv/1.1/commons
    -csv-1.1-javadoc.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url https://repo1.maven.org/maven2/com/univocity/univocity-parsers/1.5.1/univocity-parser
    s-1.5.1-sources.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/com/univocity/univocity-parsers/1.5.1/univ
    ocity-parsers-1.5.1-sources.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url https://repo1.maven.org/maven2/com/univocity/univocity-parsers/1.5.1/univocity-parser
    s-1.5.1-src.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/com/univocity/univocity-parsers/1.5.1/univ
    ocity-parsers-1.5.1-src.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url https://repo1.maven.org/maven2/com/univocity/univocity-parsers/1.5.1/univocity-parser
    s-1.5.1-javadoc.jar (java.net.SocketException: Permission denied: connect)
    
            Server access error at url http://dl.bintray.com/spark-packages/maven/com/univocity/univocity-parsers/1.5.1/univ
    ocity-parsers-1.5.1-javadoc.jar (java.net.SocketException: Permission denied: connect)
    

5 answers:

Answer 0 (score: 2)

  1. Give the full path to the jars, and separate them with a comma (,) rather than a semicolon (;):

    spark-shell --jars fullpath\spark-csv_2.11-1.4.0.jar,fullpath\commons-csv-1.2.jar

  2. Make sure you have permission to write temporary files to the folder (DFS).
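The separator point can be sketched as follows: assemble --jars as a single comma-separated list of full paths, since a semicolon would be treated as the end of the shell command on Linux and is not a valid jar separator for spark-shell. JAR_DIR below is an example path, not one from the original post.

```shell
# Hypothetical sketch: build the --jars value with full paths,
# joined by commas (not semicolons).
JAR_DIR="/opt/spark/extra-jars"
JARS="$JAR_DIR/spark-csv_2.11-1.4.0.jar,$JAR_DIR/commons-csv-1.2.jar"
# Print the resulting command rather than launching spark-shell here.
echo "spark-shell --jars $JARS"
```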

Answer 1 (score: 0)

Download spark-csv into your .m2 directory, then use spark-shell --packages com.databricks:spark-csv_2.11:1.4.0

If you cannot download spark-csv directly, download it on another system and copy the whole .m2 directory over to your machine.
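A minimal sketch of that offline workflow: let Maven/Ivy populate ~/.m2 on a machine that has internet access, then copy the whole cache to the offline machine. All paths below are illustrative stand-ins, not taken from the original post.

```shell
# Hypothetical sketch: replicate the local Maven cache to an offline host.
SRC_M2="$HOME/.m2/repository"               # cache on the connected machine
DEST_M2="/tmp/offline-home/.m2/repository"  # stand-in for the offline machine
mkdir -p "$DEST_M2"
# In practice the copy happens via scp or removable media; plain cp here.
if [ -d "$SRC_M2" ]; then cp -R "$SRC_M2/." "$DEST_M2/"; fi
echo "cache available at $DEST_M2"
```

With the cache in place, spark-shell --packages resolves the artifacts from the local repository instead of the network, which is exactly what the Ivy log above shows ("found ... in local-m2-cache").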

Answer 2 (score: 0)

Instead of using sqlContext.read, I used the following code to convert my .csv file into a dataframe. Suppose the .csv file has 5 columns, like this:

case class Flight(arrDelay: Int, depDelay: Int, origin: String, dest: String, distance: Int)

Then:

val flights = sc.textFile("2008.csv").map(_.split(",")).map(p => Flight(p(0).trim.toInt, p(1).trim.toInt, p(2), p(3), p(4).trim.toInt)).toDF()
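One caveat with this approach: if the .csv file starts with a header line, calling .toInt on its fields will throw a NumberFormatException, so the header should be filtered out first. A hypothetical sketch of the parsing step on a plain Seq (the sample rows and the column order are made up to match the Flight case class, not taken from the real 2008.csv); in spark-shell, the same logic applies to sc.textFile("2008.csv"), finishing with .toDF().

```scala
// Same parsing approach as above, with the header row handled.
case class Flight(arrDelay: Int, depDelay: Int, origin: String, dest: String, distance: Int)

def parseFlights(lines: Seq[String]): Seq[Flight] = {
  val header = lines.head
  // Drop every copy of the header line, then parse each remaining row.
  lines.filter(_ != header).map { line =>
    val p = line.split(",")
    Flight(p(0).trim.toInt, p(1).trim.toInt, p(2).trim, p(3).trim, p(4).trim.toInt)
  }
}

// Illustrative sample data (values invented for the sketch).
val sample = Seq(
  "ArrDelay,DepDelay,Origin,Dest,Distance",
  "-14,8,IAD,TPA,810",
  "2,19,IND,BWI,515"
)
val flights = parseFlights(sample)
```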

Answer 3 (score: 0)

Milad Khajavi saved my day. After days of struggling to get spark-csv working on a cluster without internet access, I finally followed his idea of downloading the packages on a virtual machine. Then I copied the .ivy2 directory from the VM over to the other cluster. Now it works without any problem.

  

    Download spark-csv into your .m2 directory, then use spark-shell --packages com.databricks:spark-csv_2.11:1.4.0

    If you cannot download spark-csv directly, download it on another system and copy the whole .m2 directory over to your machine.

Answer 4 (score: 0)

I also ran into the same exception; it went away after downloading the spark-csv and commons-csv jars. You have to download both jars with the following commands (run from a terminal, not the scala> prompt):

spark-csv:

wget http://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.5.0/spark-csv_2.10-1.5.0.jar

commons-csv:

wget http://central.maven.org/maven2/org/apache/commons/commons-csv/1.1/commons-csv-1.1.jar

Copy these jars into the /tmp directory, then run spark-shell as follows:

  

spark-shell --jars /tmp/spark-csv_2.10-1.5.0.jar,/tmp/commons-csv-1.1.jar