How to retrieve a column from a PySpark dataframe and insert it as a new column into an existing PySpark dataframe?

Time: 2017-08-25 14:03:07

Tags: python-2.7 dataframe insert pyspark

The question: I have a PySpark dataframe like this:

df1:
+--------+
|index   |
+--------+
|     121|
|     122|
|     123|
|     124|
|     125|
|     121|
|     121|
|     126|
|     127|
|     120|
|     121|
|     121|
|     121|
|     127|
|     129|
|     132|
|     122|
|     121|
|     121|
|     121|
+--------+

I want to retrieve the index column from df1 and insert it into the existing dataframe df2 (which has the same number of rows).

df2:
+--------------------+--------------------+
|               fact1|               fact2|
+--------------------+--------------------+
|  2.4899928731985597|-0.19775025821959014|
|   1.029654847161142|  1.4878188087911541|
|  -2.253992428312965| 0.29853121635739804|
| -0.8866000393025826|  0.4032596563578692|
|0.027618408969029146|  0.3218421798358574|
|  -3.096711320314157|-0.35825821485752635|
|  3.1758221960731525| -2.0598630487806333|
|   7.401934592245097|  -6.359158142708468|
|  1.9954990843859282|  1.9352531243666828|
|   8.728444492631189|  -4.644796442599776|
|    3.21061543955211| -1.1472165049607643|
| -0.9619142291174212| -1.2487100946166108|
|  1.0681264788022142|  0.7901514935750167|
|  -1.599476182182916|  -1.171236788513644|
|   2.657843803002389|   1.456063339439953|
| -1.5683015324294765| -0.6126175010968302|
| -1.6735815834568026|  -1.176721177528106|
| -1.4246852948658484|   0.745873761554541|
|  3.7043534046759716|  1.3993120926240652|
|   5.420426369792451|  -2.149279759367474|
+--------------------+--------------------+

The goal is a new df2 with three columns: index, fact1, fact2.

Any ideas?

Thanks in advance.

1 Answer:

Answer 0 (score: 2)

Hope this helps!

import pyspark.sql.functions as f

df1 = sc.parallelize([[121],[122],[123]]).toDF(["index"])
df2 = sc.parallelize([[2.4899928731985597,-0.19775025821959014],[1.029654847161142,1.4878188087911541],
                        [-2.253992428312965,0.29853121635739804]]).toDF(["fact1","fact2"])

# since there is no common column between these two dataframes, add a
# row_index column to each so that they can be joined on it
df1 = df1.withColumn('row_index', f.monotonically_increasing_id())
df2 = df2.withColumn('row_index', f.monotonically_increasing_id())

df2 = df2.join(df1, on=["row_index"]).sort("row_index").drop("row_index")
df2.show()
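
One caveat worth flagging (this note and the helper below are mine, not part of the original answer): monotonically_increasing_id() only guarantees IDs that are increasing and unique, not consecutive, and the generated values depend on partitioning. On the tiny example above the two dataframes line up, but with more than one partition the IDs produced for df1 and df2 may not match row for row. A sketch of a more defensive variant using RDD.zipWithIndex(), which assigns consecutive 0-based indices in row order (the with_row_index helper is a hypothetical name, not a built-in):

from pyspark.sql import Row

def with_row_index(df, col_name="row_index"):
    # zipWithIndex() yields (row, index) pairs with consecutive 0-based
    # indices in row order; rebuild each Row with the index attached
    def attach_index(pair):
        row, idx = pair
        d = row.asDict()
        d[col_name] = idx
        return Row(**d)
    return df.rdd.zipWithIndex().map(attach_index).toDF()

df2 = with_row_index(df2).join(with_row_index(df1), on=["row_index"]) \
                         .sort("row_index").drop("row_index")
df2.show()

Either way, the result should be df2 with the index column from df1 appended alongside fact1 and fact2.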


Don't forget to let us know if it solved your problem :)