Matrix multiplication A^T * A in PySpark

Date: 2017-06-03 21:01:55

Tags: pyspark matrix-multiplication

Yesterday I asked a similar question - Matrix Multiplication between two RDD[Array[Double]] in Spark - but I have since decided to switch to pyspark for this. I have made some progress with loading and reformatting the data - Pyspark map from RDD of strings to RDD of list of doubles - but the matrix multiplication itself is giving me trouble. Let me first share my progress so far:

matrix1.txt
1.2 3.4 2.3 
2.3 1.1 1.5
3.3 1.8 4.5
5.3 2.2 4.5
9.3 8.1 0.3
4.5 4.3 2.1 

Sharing files is difficult, but this is what my matrix1.txt file looks like: a space-delimited text file containing the values of the matrix. Next comes the code:

# do the imports for pyspark and numpy
from pyspark import SparkConf, SparkContext
import numpy as np

# loadmatrix is a helper function used to read matrix1.txt and format
# from RDD of strings to RDD of list of floats
def loadmatrix(sc):
    data = sc.textFile("matrix1.txt").map(lambda line: line.split()).map(lambda line: [float(x) for x in line])
    return(data) 

# this is the function I am struggling with, it should take a line of the 
# matrix (formatted as list of floats), compute an outer product with itself
def AtransposeA(line):
    # pseudocode for this would be...
    # outerprod = compute line * line^transpose     
    # return(outerprod)

# here is the main body of my file    
if __name__ == "__main__":
    # create the conf, sc objects, then use loadmatrix to read data
    conf = SparkConf().setAppName('SVD').setMaster('local')
    sc = SparkContext(conf = conf)
    mymatrix = loadmatrix(sc)

    # this is pseudocode for calling AtransposeA
    ATA = mymatrix.map(lambda line: AtransposeA(line)).reduce(elementwise add all the outerproducts)

    # the SVD of ATA is computed below
    U, S, V = np.linalg.svd(ATA)

    # ...

My approach is as follows - to do the matrix multiplication A^T * A, I create a function that computes outer products of the rows of A. The elementwise sum of all of those outer products is the product I want. I then call AtransposeA() inside map() so that it is executed on every row of the matrix, and finally I use reduce() to add up the resulting matrices.

What I am struggling with is what the AtransposeA function should actually look like. How can I do an outer product like this in pyspark?
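In numpy terms, what I have in mind for each row is something like np.outer(row, row), with the resulting matrices added elementwise in reduce - roughly the (untested) sketch below - but I am not sure this is the right way to express it in pyspark:

def AtransposeA(line):
    # outer product of one row with itself -> a d x d numpy array
    return np.outer(line, line)

ATA = mymatrix.map(AtransposeA).reduce(lambda m1, m2: m1 + m2)

Thanks in advance for your help!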

1 answer:

Answer 0: (score: 4)

First of all, consider why you want to use Spark for this. It sounds like all of your data fits in memory, in which case you can use numpy and pandas in a very straightforward way.

If your data is not structured so that the rows are independent, then it probably cannot be parallelized by sending groups of rows to different nodes, which is the whole point of using Spark.

Having said that... here is some pyspark (2.1.1) code that I think does what you want.

# imports and session assumed by this snippet: in the pyspark shell `spark` already
# exists; F is pyspark.sql.functions, and on Python 3 reduce comes from functools
from pyspark.sql import SparkSession, functions as F
from functools import reduce

spark = SparkSession.builder.appName("ATA").getOrCreate()

# read the matrix file as a space-separated CSV, letting Spark infer the column types
df = spark.read.csv("matrix1.txt", sep=" ", inferSchema=True)
df.show()
+---+---+---+
|_c0|_c1|_c2|
+---+---+---+
|1.2|3.4|2.3|
|2.3|1.1|1.5|
|3.3|1.8|4.5|
|5.3|2.2|4.5|
|9.3|8.1|0.3|
|4.5|4.3|2.1|
+---+---+---+
# for each column c2, take its dot product with every column c1 (a sum of
# elementwise products over the rows); each pass yields a one-row data frame
# that is one row of A^T * A
colDFs = []
for c2 in df.columns:
    colDFs.append( df.select( [ F.sum(df[c1]*df[c2]).alias("op_{0}".format(i)) for i,c1 in enumerate(df.columns) ] ) )
# now union those separate data frames to build the "matrix"
mtxDF = reduce(lambda a,b: a.select(a.columns).union(b.select(a.columns)), colDFs )
mtxDF.show()
+------------------+------------------+------------------+
|              op_0|              op_1|              op_2|
+------------------+------------------+------------------+
|            152.45|118.88999999999999|             57.15|
|118.88999999999999|104.94999999999999|             38.93|
|             57.15|             38.93|52.540000000000006|
+------------------+------------------+------------------+
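Since the question is ultimately after the SVD of A^T * A, note that this 3x3 result is small enough to collect back to the driver and hand to numpy; a rough sketch (assuming numpy is available on the driver):

import numpy

# collect the small result matrix to the driver as a plain numpy array
ATA = numpy.array(mtxDF.collect())
U, S, V = numpy.linalg.svd(ATA)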

This appears to give the same result you get from numpy.

a = numpy.genfromtxt("matrix1.txt")
numpy.dot(a.T, a)
array([[ 152.45,  118.89,   57.15],
       [ 118.89,  104.95,   38.93],
       [  57.15,   38.93,   52.54]])
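If you would rather check the agreement programmatically than by eye, a quick sanity check (reusing ATA collected from mtxDF above and a from the numpy snippet) could be:

numpy.allclose(ATA, numpy.dot(a.T, a))   # True when the two agree up to floating point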