PySpark: Permission denied even though permissions are granted

Asked: 2018-02-14 08:22:55

Tags: python apache-spark permissions pyspark

I am new to PySpark. I want to rewrite an existing Python program to use Spark. Spark is already in use with SparkR here, so I am fairly sure that Python and Spark are installed correctly.

When I run a simple program in PySpark, everything seems to work. Here is a short example:

from datetime import datetime
from pyspark import SparkContext, SparkConf, SQLContext
from pyspark.sql.types import StructType, StructField, TimestampType, DoubleType, StringType

def lineToRow(line):
    (year, month, day, symbol, volume, price) = line.split("\t")
    # Python 2.x compatible timestamp generation
    dt = datetime(int(year), int(month), int(day))
    return (dt, symbol, float(price))

def loadObservations(sparkContext, sqlContext, path):
    textFile = sparkContext.textFile(path)
    rowRdd = textFile.map(lineToRow)
    schema = StructType([
        StructField('timestamp', TimestampType(), nullable=True),
        StructField('price', DoubleType(), nullable=True),
        StructField('group', StringType(), nullable=True),
    ])
    return sqlContext.createDataFrame(rowRdd, schema)

if __name__ == "__main__":
    conf = SparkConf().setMaster("local").setAppName("Test")
    sc = SparkContext.getOrCreate(conf=conf)

    sqlContext = SQLContext(sc)
    file = '../01-Daten/test.csv'

    obs = loadObservations(sc, sqlContext, file)
    print(obs)

The output is:

DataFrame[timestamp: timestamp, price: double, group: string]

The data in "test.csv":

Time,Price,Group
2017-12-01 0:00:00,20.17,Group1
2017-12-01 1:00:00,17.41,Group1
2017-12-01 2:00:00,16.08,Group2
2017-12-01 3:00:00,17.48,Group1
2017-12-01 4:00:00,20.48,Group2
2017-12-01 5:00:00,21.95,Group2
2017-12-01 6:00:00,24.59,Group2
2017-12-01 7:00:00,38.9,Group1
2017-12-01 8:00:00,46.94,Group2
2017-12-01 9:00:00,44.91,Group2
2017-12-01 10:00:00,43.29,Group1
2017-12-01 11:00:00,45.88,Group2
2017-12-01 12:00:00,42.5,Group2
2017-12-01 13:00:00,44.93,Group1
2017-12-01 14:00:00,45.6,Group2
2017-12-01 15:00:00,45.52,Group2
2017-12-01 16:00:00,48.4,Group1
2017-12-01 17:00:00,58.55,Group2
2017-12-01 18:00:00,46.99,Group2
2017-12-01 19:00:00,38.91,Group1
2017-12-01 20:00:00,36.03,Group1
2017-12-01 21:00:00,31.93,Group2
2017-12-01 22:00:00,31.01,Group2
2017-12-01 23:00:00,27.52,Group1

As soon as I try to look at the first element of "obs" (I change the print statement to "print(obs.first())"), I get an IOException with error=13.
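For context, here is a sketch of the only change in the main block, assuming everything else stays exactly as above; the comments explain why the error only shows up at this point:

    obs = loadObservations(sc, sqlContext, file)
    # createDataFrame is lazy, so nothing above ever starts a Spark job
    # or launches a Python worker process.
    # first() is an action: it triggers a job, and that is the moment Spark
    # tries to start the configured Python worker executable and fails
    # with error=13.
    print(obs.first())

The full traceback: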

  File "/usr/lib/python3/dist-packages/spyder/utils/site/sitecustomize.py", line 880, in runfile
    execfile(filename, namespace)

  File "/usr/lib/python3/dist-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "/home/cornelia/Dokumente/Arbeit/Programme/180124-ZR-Spark/02-Code/ZR-Analyse-Spark.py", line 57, in <module>
    print(obs.first())

  File "/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py", line 982, in first
    return self.head()

  File "/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py", line 970, in head
    rs = self.head(1)

  File "/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py", line 972, in head
    return self.take(n)

  File "/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py", line 476, in take
    return self.limit(num).collect()

  File "/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py", line 438, in collect
    port = self._jdf.collectToPython()

  File "/usr/local/lib/python3.6/dist-packages/py4j/java_gateway.py", line 1133, in __call__
    answer, self.gateway_client, self.target_id, self.name)

  File "/usr/local/lib/python3.6/dist-packages/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)

  File "/usr/local/lib/python3.6/dist-packages/py4j/protocol.py", line 319, in get_return_value
    format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o324.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 4, localhost, executor driver): java.io.IOException: Cannot run program "/usr/lib/python3.6": error=13, Permission denied
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:163)
    at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:89)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:65)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:128)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: error=13, Permission denied
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
    at java.lang.ProcessImpl.start(ProcessImpl.java:134)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    ... 29 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1517)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1505)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1504)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1504)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1732)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1687)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1676)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2029)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2050)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2069)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:336)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply$mcI$sp(Dataset.scala:2808)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:2805)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:2805)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2828)
    at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:2805)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Cannot run program "/usr/lib/python3.6": error=13, Permission denied
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:163)
    at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:89)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:65)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:128)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
Caused by: java.io.IOException: error=13, Permission denied
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
    at java.lang.ProcessImpl.start(ProcessImpl.java:134)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    ... 29 more

The folder in question ("/usr/lib/python3.6") and all of its subfolders have full permissions, as you can see here.

I have tried a lot of things and done plenty of Googling. Any idea how to solve this? I am running Python 3.6 on Ubuntu 17.10.

Thanks in advance.

1 Answer:

Answer 0 (score: 1)

I was referencing the wrong path (/usr/lib/python3.6). I needed to point to /usr/bin/python instead. Now everything works.
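Concretely, the configured path has to name the Python executable that Spark starts for its worker processes, not the library directory /usr/lib/python3.6; a directory cannot be executed, which is where "error=13, Permission denied" comes from even though the directory itself has full permissions. The question does not say where exactly the path was set, so the following is only a sketch assuming it is configured through the PYSPARK_PYTHON environment variable in the driver script:

import os
from pyspark import SparkConf, SparkContext

# PYSPARK_PYTHON must point at the Python binary used for the worker
# processes, not at the /usr/lib/python3.6 directory. It has to be set
# before the SparkContext is created.
os.environ["PYSPARK_PYTHON"] = "/usr/bin/python"  # or /usr/bin/python3 to match a Python 3 driver

conf = SparkConf().setMaster("local").setAppName("Test")
sc = SparkContext.getOrCreate(conf=conf)

Exporting PYSPARK_PYTHON in the shell before launching the script, or setting the spark.pyspark.python configuration property, has the same effect.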

Thanks everyone!