How to access HDFS files on the headnode through Jupyter Notebook

Asked: 2019-01-21 10:34:08

Tags: apache-spark pyspark jupyter-notebook hdfs cluster-computing

I have set up a cluster with a headnode, and I successfully integrated a Jupyter notebook with it (using this answer).

I am also able to run pyspark successfully; for that I referred to this link.

Now I want to access the HDFS files on the headnode through the Jupyter notebook, but when I run the command below, which is supposed to fetch data from HDFS:

df = sqlContext.read.json('hdfs:///192.168.21.110/user/hdfs/ML/pass/Teleram_18/notefind/2018-12-14/')

I get the following error:

An error occurred while calling o29.json.
: java.io.IOException: Incomplete HDFS URI, no host: hdfs:///192.168.21.110/user/hdfs/ML/pass/Teleram_18/notefind/2018-12-14/
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:143)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.execution.datasources.DataSource$.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:705)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$15.apply(DataSource.scala:389)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$15.apply(DataSource.scala:389)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:344)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:388)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:397)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
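
Looking at the exception, the complaint is that the HDFS URI has no host: with hdfs:/// the authority part of the URI is empty, so 192.168.21.110 is parsed as the first path component rather than as the NameNode address. A minimal sketch of what I believe the read should look like, assuming the NameNode listens on the default RPC port 8020 (that port is an assumption; substitute whatever port the cluster actually uses):

# Assumed fix: put the NameNode host (and port) in the authority part of the URI
df = sqlContext.read.json('hdfs://192.168.21.110:8020/user/hdfs/ML/pass/Teleram_18/notefind/2018-12-14/')

Alternatively, if fs.defaultFS is set in core-site.xml, a plain path such as '/user/hdfs/ML/pass/Teleram_18/notefind/2018-12-14/' should also resolve against the default filesystem.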

What is actually going wrong here? One thing I noticed is that I have pyspark installed both for the regular headnode user and for the hdfs user on the headnode, and I am using the Jupyter notebook as the regular headnode user.

I submitted the application as the hdfs user on the headnode, and I am able to access the HDFS files from the spark shell as the hdfs user. What do I need to do so that the regular headnode user can access the HDFS files as well? There is no problem with my path; I can find the data using hadoop fs.

UPDATE: I found that the regular user runs Python 3.5 with PySpark 2.4, while the hdfs user runs Python 2.7 with PySpark 2.3.1. How can I resolve this?
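
If the interpreter mismatch turns out to be the culprit, I assume pinning the driver and the workers to the same Python before the SparkContext is created would help. A minimal sketch, assuming the interpreter lives at /usr/bin/python3.5 (the path is a guess; point it at whichever interpreter both sides should share):

import os

# Must be set before the SparkContext starts, so executors inherit the same interpreter.
# /usr/bin/python3.5 is a hypothetical path; adjust for the actual install.
os.environ["PYSPARK_PYTHON"] = "/usr/bin/python3.5"
os.environ["PYSPARK_DRIVER_PYTHON"] = "/usr/bin/python3.5"

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()  # picks up the environment variables set above
sqlContext = SQLContext(sc)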

0 Answers:

There are no answers