ImportError when running Spark

Asked: 2016-12-12 20:19:53

Tags: python apache-spark python-requests pyspark rdd

I am trying to use the Python package called "requests" in a program that uses pyspark. I have installed the package, and a plain Python program containing `import requests` works fine, but it does not work in the pyspark program, which fails with "ImportError: No module named requests".

def get_text(s):
    # Import inside the function so the import happens in the worker process
    import requests
    url = s
    data = requests.get(url).text
    return data

Calling the function:

newrdd=newrdd.map(get_text)

Error output:

16/12/12 15:42:33 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 48, node090.cm.cluster): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/local/hadoop-2/tmp/hadoop-yarn/nm-local-dir/usercache/wdps1615/appcache/application_1480500761259_0178/container_1480500761259_0178_01_000003/pyspark.zip/pyspark/worker.py", line 172, in     main
    process()
  File "/local/hadoop-2/tmp/hadoop-yarn/nm-local-dir/usercache/wdps1615/appcache/application_1480500761259_0178/container_1480500761259_0178_01_000003/pyspark.zip/pyspark/worker.py", line 167, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/local/hadoop-2/tmp/hadoop-yarn/nm-local-dir/usercache/wdps1615/appcache/application_1480500761259_0178/container_1480500761259_0178_01_000003/pyspark.zip/pyspark/serializers.py", line 133, in dump_stream
    for obj in iterator:
  File "/var/scratch/wdps1615/spark-2.0.2-bin-without-hadoop/python/lib/pyspark.zip/pyspark/rdd.py", line 1507, in func
  File "/var/scratch/wdps1615/Entitytext.py", line 45, in get_text
    import requests
ImportError: No module named requests


2 Answers:

Answer 0 (score: 1)

It looks like your pyspark application is being executed by a different Python interpreter. Make sure the requests package is installed for that interpreter; you can check whether requests is present in the following folder:

[PYSPARK_VENV]/lib/python2.7/site-packages/

Installing requests for that interpreter and restarting the application should solve the problem.
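As a quick diagnostic along these lines (not part of the original answer), you can print which interpreter is actually running and where it would load a given module from. The helper name `locate_module` is made up for this sketch; it works for any module name, not just requests:

```python
import sys
import importlib.util

def locate_module(name):
    """Return the path a module would be imported from, or None if it is missing."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# Which interpreter is executing this code (driver vs. worker may differ)?
print("interpreter:", sys.executable)

# Where would 'requests' be loaded from? None means an ImportError is coming.
print("requests:", locate_module("requests"))
```

Running the same check inside a task, e.g. `rdd.map(lambda _: locate_module("requests")).collect()`, shows what the executor interpreters see, which can differ from the driver.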

Answer 1 (score: 1)

I had the same problem, and this worked for me:

import sys
# Make the directory where requests is installed visible to this interpreter
sys.path.append('/usr/local/lib/python3.5/dist-packages')
import requests

You can also use python2.7 instead of python3.5, but you must make sure the package is installed with pip and available in that folder.
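A minimal, self-contained illustration of why this answer works (the directory and module `mylocalpkg` are invented for the demo): Python only imports from directories on `sys.path`, so appending the directory where a package actually lives makes it importable, even mid-program.

```python
import os
import sys
import tempfile

# Simulate a package installed in a directory the interpreter doesn't know about.
extra_dir = tempfile.mkdtemp()
with open(os.path.join(extra_dir, "mylocalpkg.py"), "w") as f:
    f.write("VALUE = 42\n")

try:
    import mylocalpkg          # fails: the directory is not on sys.path yet
except ImportError:
    pass

sys.path.append(extra_dir)     # same trick as the dist-packages line above
import mylocalpkg              # now succeeds

print(mylocalpkg.VALUE)
```

This is a workaround rather than a fix: the cleaner solution is still to install the package for the interpreter that the Spark workers actually use.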