I am trying to derive a new value from a column's value plus another column's name. For example, given this:

+----+---+---+---+
|base|  1|  2|  3|
+----+---+---+---+
|  10| AA| aa| Aa|
|  20| BB| bb| Bb|
|  30| CC| cc| Cc|
+----+---+---+---+

I want to get this:

+--------+---+
|new_base|  v|
+--------+---+
|      11| AA|
|      21| BB|
|      31| CC|
|      12| aa|
|      22| bb|
|      32| cc|
|      13| Aa|
|      23| Bb|
|      33| Cc|
+--------+---+
Note: I am coding on Spark 2.4.
Answer 0 (score: 0)
We can solve this with the explode function.
# Importing requisite functions.
from pyspark.sql.functions import array, col, explode, struct, lit
# Creating the DataFrame
df = sqlContext.createDataFrame([(10,'AA','aa','Aa'),(20,'BB','bb','Bb'),(30,'CC','cc','Cc')],['base','1','2','3'])
df.show()
+----+---+---+---+
|base| 1| 2| 3|
+----+---+---+---+
| 10| AA| aa| Aa|
| 20| BB| bb| Bb|
| 30| CC| cc| Cc|
+----+---+---+---+
Write a function to melt (explode) the DataFrame:
def to_explode(df, by):
    # Filter dtypes and split into column names and type descriptions
    cols, dtypes = zip(*((c, t) for (c, t) in df.dtypes if c not in by))
    # Spark SQL supports only homogeneous columns
    assert len(set(dtypes)) == 1, "All columns have to be of the same type"
    # Create and explode an array of (column_name, column_value) structs
    kvs = explode(array([
        struct(lit(c).alias("key"), col(c).alias("val")) for c in cols
    ])).alias("kvs")
    return df.select(by + [kvs]).select(by + ["kvs.key", "kvs.val"])
Apply the function. Since the created column new_base carries a decimal part (its type defaults to double), we cast it explicitly to integer to avoid every number ending in .0.
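The application step itself did not survive the page's translation. As a rough stand-in, here is a pure-Python sketch (no Spark required) of what melting with to_explode and then building an integer new_base computes; the rows and column names come from the question, while the helper name melt_rows is illustrative, not part of the original answer:

```python
# Pure-Python sketch of the melt + new_base computation (no Spark required).
# Row data and column names are from the question; melt_rows is a
# hypothetical helper name, not from the original answer.
rows = [(10, 'AA', 'aa', 'Aa'), (20, 'BB', 'bb', 'Bb'), (30, 'CC', 'cc', 'Cc')]
value_cols = ['1', '2', '3']

def melt_rows(rows, value_cols):
    # Emit one (new_base, val) pair per value column: new_base is the
    # row's base plus the integer column name, cast to int so no value
    # ends in a trailing .0.
    out = []
    for row in rows:
        base, values = row[0], row[1:]
        for key, val in zip(value_cols, values):
            out.append((int(base + int(key)), val))
    return out

result = melt_rows(rows, value_cols)
# result is row-major: (11, 'AA'), (12, 'aa'), (13, 'Aa'), (21, 'BB'), ...
```

Note that this sketch walks the data row by row, so its ordering differs from the column-major order a union of per-column selections would produce; the set of pairs is the same.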
Answer 1 (score: 0)
Another typical approach, using the reduce function:
from functools import reduce
from pyspark.sql.functions import col
cols = df.columns[1:]
df_new = reduce(lambda d1, d2: d1.union(d2),
    [df.select((col('base') + int(c)).astype('int').alias('new_base'),
               col(c).alias('v')) for c in cols]
)
df_new.show()
+--------+---+
|new_base| v|
+--------+---+
| 11| AA|
| 21| BB|
| 31| CC|
| 12| aa|
| 22| bb|
| 32| cc|
| 13| Aa|
| 23| Bb|
| 33| Cc|
+--------+---+
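The reduce/union pattern above can be mimicked in plain Python, with list concatenation standing in for DataFrame.union: build one list of (new_base, v) pairs per value column, then fold them together. The variable names here are illustrative, not part of the original answer:

```python
from functools import reduce

# Pure-Python analogue of the reduce/union pattern: one "selection" of
# (base + int(column_name), value) pairs per value column, concatenated
# in column order (list + stands in for DataFrame.union).
rows = [(10, 'AA', 'aa', 'Aa'), (20, 'BB', 'bb', 'Bb'), (30, 'CC', 'cc', 'Cc')]
cols = ['1', '2', '3']

# One list per column c, like df.select(...) for c in cols.
parts = [[(r[0] + int(c), r[i + 1]) for r in rows] for i, c in enumerate(cols)]

df_new = reduce(lambda d1, d2: d1 + d2, parts)
# -> [(11, 'AA'), (21, 'BB'), (31, 'CC'),
#     (12, 'aa'), (22, 'bb'), (32, 'cc'),
#     (13, 'Aa'), (23, 'Bb'), (33, 'Cc')]
```

This reproduces the column-major order of the df_new.show() output above, since each per-column part is appended whole before the next column's part begins.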