Module not found: com.databricks#spark-csv_2.10;1.5.0

Date: 2018-11-21 11:28:36

Tags: pyspark bigdata spark-streaming

I have tried the following in Jupyter in order to read a CSV file in tabular format:

pyspark --packages com.databricks:spark-csv_2.10:1.5.0

I then see the following error in the logs (the full log, with more details, is included further down):

:::: WARNINGS
module not found: com.databricks#spark-csv_2.10;1.5.0

I have checked that spark-csv_2.10-1.5.0.jar and commons-csv-1.1.jar are already present.

If I ignore the warning, I get the error "NameError: name 'sc' is not defined" when I run the following command:

sqlContext = SQLContext(sc)
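
Presumably sc is undefined because the PYTHONSTARTUP script (pyspark/shell.py, see the last line of the log below) fails before it can create the context, so nothing defines sc in the notebook. A minimal sketch of building the contexts by hand instead - the app name here is an arbitrary placeholder:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = SparkConf().setAppName("csv-read")  # hypothetical app name
sc = SparkContext(conf=conf)               # what shell.py would normally create as sc
sqlContext = SQLContext(sc)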

I am really confused, so any suggestions are appreciated. The goal is to read the CSV file like this:

sqlContext = SQLContext(sc)
data = sqlContext.read.load('file:///path/file.csv', format='com.databricks.spark.csv', header='true',inferSchema='true')

Here is the log:

pyspark --packages com.databricks:spark-csv_2.10:1.5.0
/home/cloudera/.local/lib/python3.5/site-packages/requests/__init__.py:83: RequestsDependencyWarning: Old version of cryptography ([1, 3]) may cause slowdown.
warnings.warn(warning, RequestsDependencyWarning)
[I 10:32:29.300 NotebookApp] The port 8888 is already in use, trying another random port.
[I 10:32:29.311 NotebookApp] Serving notebooks from local directory: /home/cloudera/Downloads/coursera-master/big-data-4
[I 10:32:29.312 NotebookApp] 0 active kernels
[I 10:32:29.312 NotebookApp] The Jupyter Notebook is running at: http://localhost:8889/
[I 10:32:29.312 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
WARNING: content window passed to PrivateBrowsingUtils.isWindowPrivate. Use isContentWindowPrivate instead (but only for frame scripts).
pbu_isWindowPrivate@resource://gre/modules/PrivateBrowsingUtils.jsm:25:14
nsBrowserAccess.prototype.openURI@chrome://browser/content/browser.js:15192:21
NewNotebookWidget.prototype.new_notebook@http://localhost:8889/static/tree/js/main.min.js?v=cee9d5ded70fc8733bb888581c22f633:15194:17
.proxy/i@http://localhost:8889/static/tree/js/main.min.js?v=cee9d5ded70fc8733bb888581c22f633:4:5486
x.event.dispatch@http://localhost:8889/static/tree/js/main.min.js?v=cee9d5ded70fc8733bb888581c22f633:5:9954
x.event.add/y.handle@http://localhost:8889/static/tree/js/main.min.js?v=cee9d5ded70fc8733bb888581c22f633:5:6772
[I 10:32:35.674 NotebookApp] Creating new notebook in
[I 10:32:36.695 NotebookApp] Kernel started: 25ed0b47-e0f0-4191-b1bc-984679f2668c
Ivy Default Cache set to: /home/cloudera/.ivy2/cache
The jars for the packages stored in: /home/cloudera/.ivy2/jars
:: loading settings :: url = jar:file:/usr/lib/spark/lib/spark-assembly-1.6.0-cdh5.16.0-hadoop2.6.0-cdh5.16.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.databricks#spark-csv_2.10 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
[W 10:32:47.059 NotebookApp] Timeout waiting for kernel_info reply from 25ed0b47-e0f0-4191-b1bc-984679f2668c
:: resolution report :: resolve 8250ms :: artifacts dl 0ms
:: modules in use:
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 1 | 0 | 0 | 0 || 0 | 0 |
---------------------------------------------------------------------

:: problems summary ::
:::: WARNINGS
module not found: com.databricks#spark-csv_2.10;1.5.0

==== local-m2-cache: tried

  file:/home/cloudera/.m2/repository/com/databricks/spark-csv_2.10/1.5.0/spark-csv_2.10-1.5.0.pom

  -- artifact com.databricks#spark-csv_2.10;1.5.0!spark-csv_2.10.jar:

  file:/home/cloudera/.m2/repository/com/databricks/spark-csv_2.10/1.5.0/spark-csv_2.10-1.5.0.jar

==== local-ivy-cache: tried

  /home/cloudera/.ivy2/local/com.databricks/spark-csv_2.10/1.5.0/ivys/ivy.xml

==== central: tried

  https://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.5.0/spark-csv_2.10-1.5.0.pom

  -- artifact com.databricks#spark-csv_2.10;1.5.0!spark-csv_2.10.jar:

  https://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.5.0/spark-csv_2.10-1.5.0.jar

==== spark-packages: tried

  http://dl.bintray.com/spark-packages/maven/com/databricks/spark-csv_2.10/1.5.0/spark-csv_2.10-1.5.0.pom

  -- artifact com.databricks#spark-csv_2.10;1.5.0!spark-csv_2.10.jar:

  http://dl.bintray.com/spark-packages/maven/com/databricks/spark-csv_2.10/1.5.0/spark-csv_2.10-1.5.0.jar

    ::::::::::::::::::::::::::::::::::::::::::::::

    ::          UNRESOLVED DEPENDENCIES         ::

    ::::::::::::::::::::::::::::::::::::::::::::::

    :: com.databricks#spark-csv_2.10;1.5.0: not found

    ::::::::::::::::::::::::::::::::::::::::::::::
:::: ERRORS
Server access error at url https://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.5.0/spark-csv_2.10-1.5.0.pom (javax.net.ssl.SSLException: Received fatal alert: protocol_version)

Server access error at url https://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.5.0/spark-csv_2.10-1.5.0.jar (javax.net.ssl.SSLException: Received fatal alert: protocol_version)
:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
Exception in thread "main" java.lang.RuntimeException: [unresolved dependency: com.databricks#spark-csv_2.10;1.5.0: not found]
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1067)
at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:287)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:154)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
[IPKernelApp] WARNING | Unknown error in handling PYTHONSTARTUP file /usr/lib/spark/python/pyspark/shell.py:
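
Note that the ERRORS section above (javax.net.ssl.SSLException: Received fatal alert: protocol_version) points at the likely root cause: the old JVM shipped with this CDH 5 / Spark 1.6 setup offers a TLS version that repo1.maven.org no longer accepts, so Ivy cannot download the package at all. A possible workaround, assuming the jars can be fetched by some other means (a sketch, not verified on this exact setup): download them manually and start pyspark with --jars instead of --packages.

# Download spark-csv and its commons-csv dependency by hand (URLs as in the log above),
# then pass the local jars directly; further transitive jars may still be needed.
wget https://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.5.0/spark-csv_2.10-1.5.0.jar
wget https://repo1.maven.org/maven2/org/apache/commons/commons-csv/1.1/commons-csv-1.1.jar
pyspark --jars spark-csv_2.10-1.5.0.jar,commons-csv-1.1.jar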

2 Answers:

Answer 0 (score: 0):

I think you can read the CSV file in pyspark a different way:

spark.read.csv("yourPath", header=True)

This way there is no need to import any extra packages.
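
Expanded into a runnable sketch - assuming Spark 2.x, where the session may need to be created by hand in a plain notebook kernel (the app name is arbitrary):

from pyspark.sql import SparkSession

# Create the session that the pyspark shell would normally provide as spark.
spark = SparkSession.builder.appName("csv-read").getOrCreate()
data = spark.read.csv("file:///path/file.csv", header=True, inferSchema=True)
data.show(5)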

Answer 1 (score: 0):

For Spark 2.x, this library is built into Spark itself - https://github.com/databricks/spark-csv. If you are on a 2.x version, there is no need to import the library.
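
In other words, on Spark 2.x the original load call should only need the format string swapped for the built-in csv source - a sketch, assuming a SparkSession named spark as in the previous answer:

# The built-in csv data source replaces com.databricks.spark.csv on Spark 2.x.
data = spark.read.load('file:///path/file.csv', format='csv', header='true', inferSchema='true')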