I previously asked how to convert a scipy sparse matrix to a pyspark.sql.dataframe.DataFrame, and made some progress after reading the answer provided there, as well as this article. I eventually ended up with the following code for converting a scipy.sparse.csc_matrix to a pandas dataframe:
df = pd.DataFrame(csc_mat.todense()).to_sparse(fill_value=0)
df.columns = header
I then tried converting the pandas dataframe to a Spark dataframe using the suggested syntax:
spark_df = sqlContext.createDataFrame(df)
However, I get the following error:
ValueError: cannot create an RDD from type: <type 'list'>
I don't believe it has anything to do with the sqlContext, since I was able to convert another pandas dataframe of roughly the same size to a Spark dataframe without any problem. Any ideas?
Answer 0 (score: 0)
to_sparse(fill_value=0) is largely deprecated. Just use the plain variant

sqlContext.createDataFrame(pd.DataFrame(csc_mat.todense()))

and it will work fine as long as the types are compatible.
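
For reference, a minimal, self-contained sketch of this approach; the toy csc_mat, the header names, and the local SparkContext/SQLContext setup are assumptions for illustration only:

import numpy as np
import pandas as pd
from scipy.sparse import csc_matrix
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "csc_to_spark_example")  # hypothetical local setup
sqlContext = SQLContext(sc)

# Toy stand-ins for the real csc_mat and header (assumptions for illustration).
csc_mat = csc_matrix(np.array([[0.0, 1.0, 0.0],
                               [2.0, 0.0, 3.0]]))
header = ["a", "b", "c"]

# Densify, wrap in pandas with the header attached, then hand the result to Spark.
pdf = pd.DataFrame(csc_mat.todense(), columns=header)
spark_df = sqlContext.createDataFrame(pdf)
spark_df.show()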
Answer 1 (score: 0)
I'm not sure whether this question is still relevant for current pySpark versions, but here is the solution I worked out a few weeks after posting the question. The code is rather ugly and probably inefficient, but I'm posting it here due to the continued interest in this question.
from pyspark import SparkContext
from pyspark import SparkConf
from pyspark.sql import HiveContext
from py4j.protocol import Py4JJavaError

myConf = SparkConf(loadDefaults=True)
sc = SparkContext(conf=myConf)
hc = HiveContext(sc)

def chunks(lst, k):
    """Yield at most k chunks of close to equal size."""
    n = max(1, -(-len(lst) // k))  # ceiling division, so no rows are dropped
    for i in range(0, len(lst), n):
        yield lst[i: i + n]

def reconstruct_rdd(lst, num_parts):
    """Parallelize the list chunk by chunk and union the pieces into a single RDD."""
    prime_rdd = None
    for part, partition in enumerate(chunks(lst, num_parts)):  # partition is a list of lists
        print "Partition ", part, " started..."
        if prime_rdd is None:
            prime_rdd = sc.parallelize(partition)
        else:
            prime_rdd = prime_rdd.union(sc.parallelize(partition))
        print "Partition ", part, " complete!"
    return prime_rdd

def build_col_name_list(len_cols):
    """Reproduce Spark's default column names _1 .. _<len_cols>."""
    name_lst = []
    for i in range(1, len_cols + 1):
        name_lst.append("_" + str(i))
    return name_lst

def set_spark_df_header(header, sdf):
    """Rename the default _1 .. _n columns to the supplied header."""
    oldColumns = build_col_name_list(len(sdf.columns))
    newColumns = header
    sdf = reduce(lambda sdf, idx: sdf.withColumnRenamed(oldColumns[idx], newColumns[idx]),
                 xrange(len(oldColumns)), sdf)
    return sdf

def convert_pdf_matrix_to_sdf(pdf, sdf_header, num_of_parts):
    """Convert a pandas dataframe to a Spark dataframe, falling back to a chunked
    parallelize when the direct conversion fails."""
    try:
        sdf = hc.createDataFrame(pdf)
    except ValueError:
        lst = pdf.values.tolist()  # need a list of lists to parallelize
        try:
            rdd = sc.parallelize(lst)
        except Py4JJavaError:
            rdd = reconstruct_rdd(lst, num_of_parts)
        sdf = hc.createDataFrame(rdd)
        sdf = set_spark_df_header(sdf_header, sdf)
    return sdf
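
A hypothetical call, assuming df and header are the pandas dataframe and column list built in the question, and 10 is just an arbitrary chunk count:

spark_df = convert_pdf_matrix_to_sdf(df, header, 10)  # hypothetical usage
spark_df.show(5)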