Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties, setting default log level to "WARN"

Date: 2017-02-15 21:43:39

Tags: python hadoop apache-spark pyspark

I am new to Spark. I am using Spark 2.1.0 with Python 2.7, and it is not working. I have been looking for a solution to this problem for a week without success.

When I run pyspark from the command line, I get the following error:

Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:42:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/02/16 02:37:41 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
        at org.apache.hadoop.hive.conf.HiveConf$ConfVars.findHadoopBinary(HiveConf.java:2327)
        at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:365)
        at org.apache.hadoop.hive.conf.HiveConf.<clinit>(HiveConf.java:105)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at py4j.reflection.CurrentThreadClassLoadingStrategy.classForName(CurrentThreadClassLoadingStrategy.java:40)
        at py4j.reflection.ReflectionUtil.classForName(ReflectionUtil.java:51)
        at py4j.reflection.TypeUtil.forName(TypeUtil.java:243)
        at py4j.commands.ReflectionCommand.getUnknownMember(ReflectionCommand.java:175)
        at py4j.commands.ReflectionCommand.execute(ReflectionCommand.java:87)
        at py4j.GatewayConnection.run(GatewayConnection.java:214)
        at java.lang.Thread.run(Thread.java:745)
17/02/16 02:38:21 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/02/16 02:38:21 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
Traceback (most recent call last):
  File "C:\Spark\spark-2.1.0-bin-hadoop2.7\bin\..\python\pyspark\shell.py", line 43, in <module>
    spark = SparkSession.builder\
  File "C:\Spark\spark-2.1.0-bin-hadoop2.7\python\pyspark\sql\session.py", line 179, in getOrCreate
    session._jsparkSession.sessionState().conf().setConfString(key, value)
  File "C:\Spark\spark-2.1.0-bin-hadoop2.7\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 1133, in __call__
  File "C:\Spark\spark-2.1.0-bin-hadoop2.7\python\pyspark\sql\utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':"
>>>
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "C:\Python27\lib\atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "C:\Spark\spark-2.1.0-bin-hadoop2.7\python\pyspark\java_gateway.py", line 110, in killChild
    Popen(["cmd", "/c", "taskkill", "/f", "/t", "/pid", str(proc.pid)])
  File "C:\Python27\lib\subprocess.py", line 390, in __init__
    errread, errwrite)
  File "C:\Python27\lib\subprocess.py", line 640, in _execute_child
    startupinfo)
  File "C:\Spark\spark-2.1.0-bin-hadoop2.7\python\pyspark\context.py", line 236, in signal_handler
    raise KeyboardInterrupt()

1 Answer:

Answer 0 (score: 1)

The warning message says "Could not locate executable null\bin\winutils.exe in the Hadoop binaries."

I use Windows. In "Environment Variables" I added a variable "HADOOP_HOME" whose value is the path to the Hadoop directory. For example, my Hadoop is installed under c:\hadoop, so "HADOOP_HOME" gets the value "c:\hadoop".
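Alternatively, if you do not want to change the system-wide settings, you can set the variable from Python before the SparkSession is created, since the JVM that PySpark launches inherits the environment of the Python process. A minimal sketch, assuming the same example location c:\hadoop as above:

    import os

    # Point Hadoop at the local install; on Windows, winutils.exe
    # is expected under %HADOOP_HOME%\bin.
    os.environ["HADOOP_HOME"] = "c:\\hadoop"
    # Also put the bin directory on PATH so winutils.exe can be found.
    os.environ["PATH"] = "c:\\hadoop\\bin;" + os.environ.get("PATH", "")

    from pyspark.sql import SparkSession

    spark = SparkSession.builder \
        .master("local[*]") \
        .appName("winutils-check") \
        .getOrCreate()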

Restart your cmd window.

At this point, if the warning message becomes "Could not locate executable c:\hadoop\bin\winutils.exe in the Hadoop binaries.",

then you need to download hadoop-common-2.2.0-bin-master.zip from GitHub and copy winutils.exe into c:\hadoop\bin.
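Before restarting pyspark, a quick check like the following confirms that both the variable and the binary are in place (a minimal sketch; the paths are the example locations used above):

    import os

    hadoop_home = os.environ.get("HADOOP_HOME")  # e.g. c:\hadoop
    winutils = os.path.join(hadoop_home or "", "bin", "winutils.exe")

    # Both lines should report sensible values; otherwise the
    # HiveSessionState error above will appear again.
    print("HADOOP_HOME = %s" % hadoop_home)
    print("winutils.exe present: %s" % os.path.isfile(winutils))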

At this point, it should work properly.