How to get detailed information about Spark stages and tasks

Time: 2019-01-21 21:05:07

Tags: apache-spark

I have set up an Apache Spark cluster with one master and one worker, and I use Python with Spyder as the IDE. So far everything works fine, but I need detailed information about how the tasks are distributed in the cluster. I know there is the Spark web UI, but I would like to get the information directly in the Spyder console, that is, which part of my code/script was executed by which worker/master. I think it should be possible to get more information with the Python package "socket" and socket.gethostname() (a rough sketch of that idea follows after my code). Any help is much appreciated. Here is my code:

import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.types import *
import matplotlib.pyplot as plt
from datetime import datetime
from pyspark.sql.functions import udf
import pyspark.sql.functions as F

#spark = SparkSession \
#    .builder \
#    .appName('weather_data') \
#    .getOrCreate()


# The master URL has to point at the standalone master, e.g. spark://<master-host>:7077
spark = SparkSession \
    .builder \
    .appName("weather_data_u") \
    .master('master_ip@...') \
    .getOrCreate()

# 'data' holds the weather DataFrame; the read step is not shown in the question
# (e.g. something like data = spark.read.csv(<path>, header=True, inferSchema=True))
data.show()
data.printSchema()

data_selected = data\
        .select(data['Date'],
                data['TemperatureHighC'],
                data['TemperatureAvgC'],
                data['TemperatureLowC'],
                data['DewpointHighC'],
                data['DewpointAvgC'],
                data['DewpointLowC'],
                data['HumidityAvg'],
                data['WindSpeedMaxKMH'],
                data['WindSpeedAvgKMH'],
                data['GustSpeedMaxKMH'],
                data['PrecipitationSumCM'])

data_selected.printSchema()
data_selected.show()


# UDF that parses a 'YYYY-MM-DD' date string into a timestamp
f = udf(lambda row: datetime.strptime(row, '%Y-%m-%d'), TimestampType())

data_selected = data_selected\
        .withColumn('date', f(data['Date'].cast(StringType())))\
        .withColumn('t_max', data['TemperatureHighC'].cast(DoubleType()))\
        .withColumn('t_mean', data['TemperatureAvgC'].cast(DoubleType()))\
        .withColumn('t_min', data['TemperatureLowC'].cast(DoubleType()))\
        .withColumn('dew_max', data['DewpointHighC'].cast(DoubleType()))\
        .withColumn('dew_mean', data['DewpointAvgC'].cast(DoubleType()))\
        .withColumn('dew_min', data['DewpointLowC'].cast(DoubleType()))\
        .cache()

data_selected.show()

t_mean_calculated = data_selected\
    .groupBy(F.date_format(data_selected.date, 'M'))\
    .agg(F.mean(data_selected.t_max))\
    .orderBy('date_format(date, M)')

t_mean_calculated = t_mean_calculated\
    .withColumn('month', t_mean_calculated['date_format(date, M)'].cast(IntegerType()))\
    .withColumnRenamed('avg(t_max)', 't_max_month')\
    .orderBy('month')\
    .drop(t_mean_calculated['date_format(date, M)'])\
    .select('month', 't_max_month')

t_mean_calculated = t_mean_calculated.collect()
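
What I have in mind with socket.gethostname() is roughly the sketch below (not tested on the cluster; the helper name and the mapPartitionsWithIndex approach are just my own idea of how it could look):

import socket

def tag_partition_with_host(index, rows):
    # This function runs on the executor that processes the partition,
    # so gethostname() returns that worker's hostname.
    host = socket.gethostname()
    count = sum(1 for _ in rows)
    yield (index, host, count)

# (partition index, worker hostname, number of rows) for every partition of data_selected
partition_hosts = data_selected.rdd \
    .mapPartitionsWithIndex(tag_partition_with_host) \
    .collect()

for index, host, count in partition_hosts:
    print('partition %d: %d rows processed on %s' % (index, count, host))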

1 Answer:

Answer 0 (score: 0)

As reported by @Jacek Laskowski himself, you can use the Spark Core local properties to change the job name shown in the web UI:

  • callSite.short
  • callSite.long

For example, my Spark application syncs several MySQL tables to S3, and I set

spark.sparkContext.setLocalProperty("callSite.short", currentTableName)

so that the current table name is reflected in the web UI.
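
A minimal sketch of how this looks around a per-table loop (the table names and the sync_table_to_s3 function are placeholders, not part of my real application; spark is the SparkSession from above):

# Placeholder table names; in my application these are the MySQL tables being synced.
for currentTableName in ['customers', 'orders', 'invoices']:
    # Jobs triggered after these calls are labelled with this call site in the web UI.
    spark.sparkContext.setLocalProperty('callSite.short', currentTableName)
    spark.sparkContext.setLocalProperty(
        'callSite.long', 'sync ' + currentTableName + ' from MySQL to S3')

    sync_table_to_s3(currentTableName)  # hypothetical per-table sync job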