Cleaning up Spark history logs

Date: 2017-03-15 18:29:38

Tags: apache-spark

We run a long-lived EMR cluster to which we submit Spark jobs. I see that over time HDFS fills up with Spark application logs, and that sometimes causes hosts to be marked unhealthy, since EMR / YARN(?) watches disk utilization.

Running hadoop fs -ls -R -h / shows [1], which makes it clear that no application logs are ever being deleted.

We have set spark.history.fs.cleaner.enabled to true (verified this in the Spark UI) and expected that this, together with the default cleaner interval (1d) and cleaner max age (7d) described at http://spark.apache.org/docs/latest/monitoring.html#spark-configuration-options, would take care of cleaning up these logs. It does not.
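
For reference, a couple of quick checks (a sketch, assuming EMR's default layout: Spark conf under /etc/spark/conf on the master node, event logs under /var/log/spark/apps in HDFS, as the listing below shows):

# Settings the history server was started with
grep cleaner /etc/spark/conf/spark-defaults.conf

# Total size of the event-log directory in HDFS
hdfs dfs -du -s -h /var/log/spark/apps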

Any ideas?

[1]

-rwxrwx---   2 hadoop spark      543.1 M 2017-01-11 13:13 /var/log/spark/apps/application_1484079613665_0001
-rwxrwx---   2 hadoop spark        7.8 G 2017-01-17 10:51 /var/log/spark/apps/application_1484079613665_0002.inprogress
-rwxrwx---   2 hadoop spark        1.4 G 2017-01-18 08:11 /var/log/spark/apps/application_1484079613665_0003
-rwxrwx---   2 hadoop spark        2.9 G 2017-01-20 07:41 /var/log/spark/apps/application_1484079613665_0004
-rwxrwx---   2 hadoop spark      125.9 M 2017-01-20 09:57 /var/log/spark/apps/application_1484079613665_0005
-rwxrwx---   2 hadoop spark        4.4 G 2017-01-23 10:19 /var/log/spark/apps/application_1484079613665_0006
-rwxrwx---   2 hadoop spark        6.6 M 2017-01-23 10:31 /var/log/spark/apps/application_1484079613665_0007
-rwxrwx---   2 hadoop spark       26.4 M 2017-01-23 11:09 /var/log/spark/apps/application_1484079613665_0008
-rwxrwx---   2 hadoop spark       37.4 M 2017-01-23 11:53 /var/log/spark/apps/application_1484079613665_0009
-rwxrwx---   2 hadoop spark      111.9 M 2017-01-23 13:57 /var/log/spark/apps/application_1484079613665_0010
-rwxrwx---   2 hadoop spark        1.3 G 2017-01-24 10:26 /var/log/spark/apps/application_1484079613665_0011
-rwxrwx---   2 hadoop spark        7.0 M 2017-01-24 10:37 /var/log/spark/apps/application_1484079613665_0012
-rwxrwx---   2 hadoop spark       50.7 M 2017-01-24 11:40 /var/log/spark/apps/application_1484079613665_0013
-rwxrwx---   2 hadoop spark       96.2 M 2017-01-24 13:27 /var/log/spark/apps/application_1484079613665_0014
-rwxrwx---   2 hadoop spark      293.7 M 2017-01-24 17:58 /var/log/spark/apps/application_1484079613665_0015
-rwxrwx---   2 hadoop spark        7.6 G 2017-01-30 07:01 /var/log/spark/apps/application_1484079613665_0016
-rwxrwx---   2 hadoop spark        1.3 G 2017-01-31 02:59 /var/log/spark/apps/application_1484079613665_0017
-rwxrwx---   2 hadoop spark        2.1 G 2017-02-01 12:04 /var/log/spark/apps/application_1484079613665_0018
-rwxrwx---   2 hadoop spark        2.8 G 2017-02-03 08:32 /var/log/spark/apps/application_1484079613665_0019
-rwxrwx---   2 hadoop spark        5.4 G 2017-02-07 02:03 /var/log/spark/apps/application_1484079613665_0020
-rwxrwx---   2 hadoop spark        9.3 G 2017-02-13 03:58 /var/log/spark/apps/application_1484079613665_0021
-rwxrwx---   2 hadoop spark        2.0 G 2017-02-14 11:13 /var/log/spark/apps/application_1484079613665_0022
-rwxrwx---   2 hadoop spark        1.1 G 2017-02-15 03:49 /var/log/spark/apps/application_1484079613665_0023
-rwxrwx---   2 hadoop spark        8.8 G 2017-02-21 05:42 /var/log/spark/apps/application_1484079613665_0024
-rwxrwx---   2 hadoop spark      371.2 M 2017-02-21 11:54 /var/log/spark/apps/application_1484079613665_0025
-rwxrwx---   2 hadoop spark        1.4 G 2017-02-22 09:17 /var/log/spark/apps/application_1484079613665_0026
-rwxrwx---   2 hadoop spark        3.2 G 2017-02-24 12:36 /var/log/spark/apps/application_1484079613665_0027
-rwxrwx---   2 hadoop spark        9.5 M 2017-02-24 12:48 /var/log/spark/apps/application_1484079613665_0028
-rwxrwx---   2 hadoop spark       20.5 G 2017-03-10 04:00 /var/log/spark/apps/application_1484079613665_0029
-rwxrwx---   2 hadoop spark        7.3 G 2017-03-10 04:04 /var/log/spark/apps/application_1484079613665_0030.inprogress

1 Answer:

Answer 0 (score: 14):

I hit this on emr-5.4.0; setting spark.history.fs.cleaner.interval to 1h got the cleaner running.

For reference, this is the end of my spark-defaults.conf file:

spark.history.fs.cleaner.enabled true
spark.history.fs.cleaner.maxAge  12h
spark.history.fs.cleaner.interval 1h

After making these changes, restart the Spark history server.
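
On EMR that means restarting it on the master node; a minimal sketch, assuming the service is named spark-history-server and the emr-5.x init system (upstart on Amazon Linux 1; newer, systemd-based releases use systemctl instead):

sudo stop spark-history-server
sudo start spark-history-server

# on systemd-based EMR releases:
# sudo systemctl restart spark-history-server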

One more clarification: setting these values at application run time, i.e. via spark-submit --conf, has no effect. Either set them at cluster creation time through the EMR configuration API, or edit spark-defaults.conf by hand, set the values, and restart the Spark history server.

Also note that logs are only cleaned up the next time a Spark application restarts. For example, if you have a long-running Spark streaming job, it will not delete any logs from that application's current run and will keep accumulating them; only when the job next restarts (because of a deploy, say) will the old logs be cleaned up.
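
For completeness, a sketch of passing these settings at cluster creation through the EMR configuration API via the AWS CLI; the release label, instance type, and instance count below are placeholders, and only the spark-defaults classification matters here:

aws emr create-cluster \
  --release-label emr-5.4.0 \
  --applications Name=Spark \
  --instance-type m4.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --configurations '[
    {
      "Classification": "spark-defaults",
      "Properties": {
        "spark.history.fs.cleaner.enabled": "true",
        "spark.history.fs.cleaner.maxAge": "12h",
        "spark.history.fs.cleaner.interval": "1h"
      }
    }
  ]'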