Read data from Excel and insert it into HIVE

Date: 2018-01-22 10:53:14

Tags: pandas hive pyspark pyspark-sql

I am reading data from an Excel sheet with pandas. I need to insert that data into HIVE using pyspark.

from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext
import pandas as pd

sparkConf = SparkConf().setAppName("App")
sc = SparkContext(conf=sparkConf)
sqlContext = HiveContext(sc)

excel_file = pd.ExcelFile("export_n_moreExportData10846.xls")
for sheet_name in excel_file.sheet_names:
    try:
        df = pd.read_excel(excel_file, header=None, squeeze=True, sheet_name=sheet_name)
        # The first fully populated row is used as the header; everything below it is the data
        for i, row in df.iterrows():
            if row.notnull().all():
                data = df.iloc[(i + 1):].reset_index(drop=True)
                data.columns = list(df.iloc[i])
                break
        # Convert columns to numeric where possible
        for c in data.columns:
            data[c] = pd.to_numeric(data[c], errors='ignore')
        print(data)  # I need to insert this data into HIVE
    except Exception:
        continue

2 Answers:

Answer 0: (score: 1)

You can save your pandas dataframe with the following code, provided the column types are compatible with Spark:

table_name = 'your_table_name'
df_spark = sqlContext.createDataFrame(data)

# Remove spaces from your column names
columns_with_spaces = filter(lambda x: ' ' in x, df_spark.columns)
for column in columns_with_spaces:
    new_column = column.replace(' ', '_')
    df_spark = df_spark.withColumnRenamed(column, new_column)

# Save to Hive
df_spark.write.mode('overwrite').saveAsTable(table_name)
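
To check that the table landed in Hive, you can read it back through the same HiveContext (the table name above is just a placeholder):

# Quick sanity check: query the newly created table
sqlContext.sql("SELECT * FROM your_table_name LIMIT 5").show()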

Answer 1: (score: 0)

You can check out the HadoopOffice library, which provides Excel read/write with many features on the major big data platforms (MR, Hive, Flink, Spark ...): https://github.com/ZuInnoTe/hadoopoffice/wiki
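
As a rough illustration, reading the workbook through the HadoopOffice Spark data source could look like the sketch below; the format identifier and option name are taken from the project wiki, and the returned schema is library-specific, so treat this as an unverified assumption for your setup rather than a definitive call:

# Assumed format/option names from the HadoopOffice wiki; may differ by version
df_excel = sqlContext.read \
    .format("org.zuinnote.spark.office.excel") \
    .option("read.locale.bcp47", "us") \
    .load("export_n_moreExportData10846.xls")

# Inspect the library-specific schema before deciding how to flatten it for Hive
df_excel.printSchema()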