Iterating a function multiple times with a single SparkContext

Date: 2017-07-31 17:24:41

Tags: apache-spark pyspark spark-dataframe

My PySpark script works fine. The script fetches data from MySQL over JDBC and creates the corresponding Hive tables.

The PySpark script is below.

#!/usr/bin/env python
import sys
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext
conf = SparkConf()
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)

#Condition to specify exact number of arguments in the spark-submit command line
if len(sys.argv) != 8:
    print "Invalid number of args......"
    print "Usage: spark-submit import.py Arguments"
    exit()
table = sys.argv[1]
hivedb = sys.argv[2]
domain = sys.argv[3]
port=sys.argv[4]
mysqldb=sys.argv[5]
username=sys.argv[6]
password=sys.argv[7]

df = sqlContext.read.format("jdbc").option("url", "{}:{}/{}".format(domain,port,mysqldb)).option("driver", "com.mysql.jdbc.Driver").option("dbtable","{}".format(table)).option("user", "{}".format(username)).option("password", "{}".format(password)).load()

#Register dataframe as table
df.registerTempTable("mytempTable")

# create hive table from temp table:
sqlContext.sql("create table {}.{} as select * from mytempTable".format(hivedb,table))

sc.stop()
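
For clarity, here is a minimal sketch (with hypothetical values) of how the positional arguments map onto the JDBC URL that the script above assembles. Note that the domain argument must already carry the jdbc:mysql:// prefix, because the script only appends the port and database name to it.

# Hypothetical argument values, shown only to illustrate the URL format
# expected by the read above.
domain = "jdbc:mysql://dbhost.example.com"   # hypothetical host
port = "3306"
mysqldb = "sales"                            # hypothetical database name
url = "{}:{}/{}".format(domain, port, mysqldb)
print(url)   # jdbc:mysql://dbhost.example.com:3306/sales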

This PySpark script is invoked from the shell script below; I pass the table name to the shell script as an argument.

The shell script is below.

#!/bin/bash

source /home/$USER/spark/source.sh
[ $# -ne 1 ] && { echo "Usage : $0 table ";exit 1; }

table=$1

TIMESTAMP=`date "+%Y-%m-%d"`
touch /home/$USER/logs/${TIMESTAMP}.success_log
touch /home/$USER/logs/${TIMESTAMP}.fail_log
success_logs=/home/$USER/logs/${TIMESTAMP}.success_log
failed_logs=/home/$USER/logs/${TIMESTAMP}.fail_log

#Function to get the status of the job creation
function log_status
{
       status=$1
       message=$2
       if [ "$status" -ne 0 ]; then
                echo "`date +\"%Y-%m-%d %H:%M:%S\"` [ERROR] $message [Status] $status : failed" | tee -a "${failed_logs}"
                #echo "Please find the attached log file for more details"
                exit 1
                else
                    echo "`date +\"%Y-%m-%d %H:%M:%S\"` [INFO] $message [Status] $status : success" | tee -a "${success_logs}"
                fi
}

# For sql_spark.py
spark-submit --name "${table}" --master "yarn-client" --num-executors 2 --executor-memory 6g  --executor-cores 1 --conf "spark.yarn.executor.memoryOverhead=609" /home/$USER/spark/sql_spark.py ${table} ${hivedb} ${domain} ${port} ${mysqldb} ${username} ${password} > /tmp/logging/${table}.log 2>&1

g_STATUS=$?
log_status $g_STATUS "Spark job ${table} Execution"

echo "************************************************************************************************************************************************************************"

Now I have more than 200 tables in MySQL, so I have to run spark-submit 200 times to import all of the tables from MySQL into Hive.

Each spark-submit takes 10-12 seconds just to create the SparkContext, so across 200 submissions roughly 33-40 minutes are spent solely on SparkContext creation.

I want to reduce this overhead by using a single SparkContext to import all 200 tables from MySQL into Hive.

To try this, I moved the whole import into a function and loop over the list of tables, as shown below. I can meet my requirement with a single SparkContext this way, but I am not sure whether it is the correct approach.

New Spark script:

#!/usr/bin/env python
import sys
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext
conf = SparkConf()
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)

#Condition to specify exact number of arguments in the spark-submit command line
if len(sys.argv) != 8:
    print "Invalid number of args......"
    print "Usage: spark-submit import.py Arguments"
    exit()
args_file = sys.argv[1]
hivedb = sys.argv[2]
domain = sys.argv[3]
port=sys.argv[4]
mysqldb=sys.argv[5]
username=sys.argv[6]
password=sys.argv[7]

def testing(table, hivedb, domain, port, mysqldb, username, password):

    print "*********************************************************table = {} ***************************".format(table)
    df = sqlContext.read.format("jdbc").option("url", "{}:{}/{}".format(domain,port,mysqldb)).option("driver", "com.mysql.jdbc.Driver").option("dbtable","{}".format(table)).option("user", "{}".format(username)).option("password", "{}".format(password)).load()

    #Register dataframe as table
    df.registerTempTable("mytempTable")

    # create hive table from temp table:
    sqlContext.sql("create table {}.{} stored as parquet as select * from mytempTable".format(hivedb,table))

input = sc.textFile('/user/XXXXXXX/spark_args/%s' %args_file).collect()

for table in input:
    testing(table, hivedb, domain, port, mysqldb, username, password)

sc.stop()
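
For reference, a minimal sketch of the same loop with per-table error handling, so that one failing table does not abort the remaining imports, could look like the following. It reuses sc, sqlContext, and the argument parsing from the script above; the function name import_table and the failure list are illustrative additions, not part of the original script.

def import_table(table, hivedb, domain, port, mysqldb, username, password):
    # Load one MySQL table over JDBC and persist it as a Parquet table in Hive.
    df = sqlContext.read.format("jdbc") \
        .option("url", "{}:{}/{}".format(domain, port, mysqldb)) \
        .option("driver", "com.mysql.jdbc.Driver") \
        .option("dbtable", table) \
        .option("user", username) \
        .option("password", password) \
        .load()
    df.registerTempTable("mytempTable")
    sqlContext.sql("create table {}.{} stored as parquet as select * from mytempTable".format(hivedb, table))

failed = []
for table in sc.textFile('/user/XXXXXXX/spark_args/%s' % args_file).collect():
    try:
        import_table(table, hivedb, domain, port, mysqldb, username, password)
    except Exception as e:
        # Record the failure and continue with the remaining tables.
        failed.append((table, str(e)))

if failed:
    print("Failed tables: {}".format(failed))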

Can anyone suggest whether there is a better alternative, or whether what I am doing is completely wrong?

0 Answers:

No answers yet.