How to change the HDFS block size of a DataFrame in pyspark

Asked: 2018-03-14 13:55:46

Tags: hadoop apache-spark pyspark hdfs

This seems related to: How to change hdfs block size in pyspark?

I can successfully change the HDFS block size with rdd.saveAsTextFile, but not with the corresponding DataFrame.write.parquet, so I am unable to save Parquet output with the block size I set.

I am not sure whether this is a bug in the pyspark DataFrame API or whether I am not setting the configuration correctly.

Here is my test code:

##########
# init
##########
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession

import hdfs
from hdfs import InsecureClient
import os

import numpy as np
import pandas as pd
import logging

os.environ['SPARK_HOME'] = '/opt/spark-2.2.1-bin-hadoop2.7'

block_size = 512 * 1024

conf = (SparkConf()
        .setAppName("myapp")
        .setMaster("spark://spark1:7077")
        .set('spark.cores.max', 20)
        .set("spark.executor.cores", 10)
        .set("spark.executor.memory", "10g")
        .set("spark.hadoop.dfs.blocksize", str(block_size))
        .set("spark.hadoop.dfs.block.size", str(block_size)))

spark = SparkSession.builder.config(conf=conf).getOrCreate()
spark.sparkContext._jsc.hadoopConfiguration().setInt("dfs.blocksize", block_size)
spark.sparkContext._jsc.hadoopConfiguration().setInt("dfs.block.size", block_size)

##########
# main
##########

# create DataFrame
df_txt = spark.createDataFrame([{'temp': "hello"}, {'temp': "world"}, {'temp': "!"}])

# save using DataFrameWriter, resulting in a 128MB block size

df_txt.write.mode('overwrite').format('parquet').save('hdfs://spark1/tmp/temp_with_df')

# save using rdd, resulting in a 512KB block size
client = InsecureClient('http://spark1:50070')
client.delete('/tmp/temp_with_rrd', recursive=True)
df_txt.rdd.saveAsTextFile('hdfs://spark1/tmp/temp_with_rrd')
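
For reference, the block size that the files were actually written with can be checked through the same hdfs client (a sketch: status() returns the WebHDFS file status, which includes a blockSize field; the paths are the ones used in the test above):

# print the HDFS block size of every file under the two output paths (sketch)
for path in ['/tmp/temp_with_df', '/tmp/temp_with_rrd']:
    for name in client.list(path):
        info = client.status(path + '/' + name)
        if info['type'] == 'FILE':
            print(path + '/' + name, info['blockSize'])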

2 Answers:

Answer 0 (score: 0):

Hadoop and Spark are two independent tools, each with its own strategy for how it works. Spark and Parquet work with data partitions, and block size is not meaningful to them. Do what Spark says, and then do what you want with the result inside HDFS.

You can change the number of Parquet partitions with:

df_txt.repartition(6).write.format("parquet").save("hdfs://...")
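
For example (a sketch building on the line above; df_txt and client come from the question, and the output path is made up for illustration), you can check how many part files a given partitioning produces:

# write with an explicit partition count, then list the resulting part files (sketch)
df_txt.repartition(6).write.mode('overwrite').format("parquet").save("hdfs://spark1/tmp/temp_repartitioned")
print([f for f in client.list('/tmp/temp_repartitioned') if f.startswith('part-')])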

Answer 1 (score: 0):

Found the answer from the following link:

https://forums.databricks.com/questions/918/how-to-set-size-of-parquet-output-files.html

I can successfully set the Parquet block size with spark.hadoop.parquet.block.size.

Here is the sample code:

# init
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
from hdfs import InsecureClient

block_size = 512 * 1024

conf = (SparkConf()
        .setAppName("myapp")
        .setMaster("spark://spark1:7077")
        .set('spark.cores.max', 20)
        .set("spark.executor.cores", 10)
        .set("spark.executor.memory", "10g")
        .set('spark.hadoop.parquet.block.size', str(block_size))
        .set("spark.hadoop.dfs.blocksize", str(block_size))
        .set("spark.hadoop.dfs.block.size", str(block_size))
        .set("spark.hadoop.dfs.namenode.fs-limits.min-block-size", str(131072)))

sc = SparkContext(conf=conf) 
spark = SparkSession(sc) 

# create DataFrame 
df_txt = spark.createDataFrame([{'temp': "hello"}, {'temp': "world"}, {'temp': "!"}]) 

# save using DataFrameWriter, resulting in a 512KB block size

df_txt.write.mode('overwrite').format('parquet').save('hdfs://spark1/tmp/temp_with_df')

# save using DataFrameWriter.csv, resulting in a 512KB block size
df_txt.write.mode('overwrite').csv('hdfs://spark1/tmp/temp_with_df_csv') 

# save using DataFrameWriter.text, resulting in a 512KB block size

df_txt.write.mode('overwrite').text('hdfs://spark1/tmp/temp_with_df_text')

# save using rdd, resulting in a 512KB block size
client = InsecureClient('http://spark1:50070') 
client.delete('/tmp/temp_with_rrd', recursive=True) 
df_txt.rdd.saveAsTextFile('hdfs://spark1/tmp/temp_with_rrd')
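
If rebuilding the SparkConf is inconvenient, a possible alternative (a sketch, not verified here) is to set the same keys on the Hadoop configuration of an already-running session right before the write, much like the question did for dfs.blocksize:

# per-session alternative (sketch): set the same keys on the live Hadoop configuration
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.setInt('parquet.block.size', block_size)
hadoop_conf.setInt('dfs.blocksize', block_size)
df_txt.write.mode('overwrite').format('parquet').save('hdfs://spark1/tmp/temp_with_df')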