I have a PySpark script, shown below.
#!/usr/bin/env python
from datetime import datetime
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext
conf = SparkConf()
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)
hivedb='MySql'
table='abc_123'
df = sqlContext.table("{}.{}".format(hivedb,table))
# Register the Data Frame as a TempTable
df.registerTempTable('mytempTable')
#Time:
date=datetime.now().strftime('%Y-%m-%d %H:%M:%S')
#Find min value ID:
min_id = sqlContext.sql("select nvl(min(id),0) as minval from mytempTable").collect()[0].asDict()['minval']
sc.stop()
Now I want to find out how much time each line of code takes, something like the following:
df = sqlContext.table("{}.{}".format(hivedb,table))
Time taken for `df` to create was 10 seconds
date=datetime.now().strftime('%Y-%m-%d %H:%M:%S')
Time taken for finding `date` was 1 second
min_id = sqlContext.sql("select nvl(min(id),0) as minval from mytempTable").collect()[0].asDict()['minval']
Time taken for `min_id` query to execute was 3 seconds
How can I achieve this? If possible, I would also like to print these timing values.
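One way to sketch this, using only the standard library (nothing Spark-specific), is a small context manager that prints the wall-clock time of whatever runs inside its `with` block. Note one caveat: `sqlContext.table(...)` and most DataFrame transformations are lazy, so timing that line alone will mostly show setup cost; the real work happens at actions such as `.collect()`. The `timed` helper and its label strings below are illustrative, not part of any Spark API.

```python
import time
from datetime import datetime
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print the wall-clock time spent inside the `with` block."""
    start = time.time()
    yield
    print("Time taken for {} was {:.2f} seconds".format(label, time.time() - start))

# Example: timing the `date` line from the script (runs without Spark)
with timed("finding `date`"):
    date = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
```

In the script itself you would wrap each statement the same way, e.g. `with timed("`min_id` query"): min_id = sqlContext.sql(...).collect()[0].asDict()['minval']`; the elapsed value can also be stored instead of printed if you want to log it.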