+------+----------------------------------+------------------------------------+
|CallID|Customer                          |Response                            |
+------+----------------------------------+------------------------------------+
|     1|Ready to repay the amount.        |He is ready to pay $50 by next week.|
|     2|Mr. John's credit card is blocked.|Asked to verify last 3 transactions.|
|     3|Mr. Tom is unable to pay bills.   |Asked to verify registered email add|
+------+----------------------------------+------------------------------------+
I am selecting each column individually, running spell correction on it, and then joining the results back together. Here is my code:
1. Select the individual columns
from textblob import TextBlob
from itertools import islice
from pyspark.sql.functions import monotonically_increasing_id, col, asc

t = df.count()
newColumns = df.schema.names

# Collect the 'Customer' column, join the values with '|', spell-correct the
# whole string in one TextBlob call, then split it back into one value per row.
df_t = df.select(df['Customer'])
rows = df_t.rdd.collect()
s1 = ''
for i in range(t):
    s1 = s1 + '|' + str(rows[i][0])
text = str(TextBlob(s1).correct())

# The leading '|' leaves an empty first element, which islice drops from the
# first partition before rebuilding the DataFrame.
l = text.split('|')
rdd2 = sc.parallelize(l)
df1 = (rdd2.map(lambda x: (x,))
           .mapPartitionsWithIndex(lambda idx, it: islice(it, 1, None) if idx == 0 else it)
           .toDF([newColumns[1]]))

# Repeat the same steps for the 'Response' column.
df_t = df.select(df['Response'])
rows = df_t.rdd.collect()
s1 = ''
for i in range(t):
    s1 = s1 + '|' + str(rows[i][0])
text = str(TextBlob(s1).correct())

l = text.split('|')
rdd2 = sc.parallelize(l)
df2 = (rdd2.map(lambda x: (x,))
           .mapPartitionsWithIndex(lambda idx, it: islice(it, 1, None) if idx == 0 else it)
           .toDF([newColumns[2]]))
2. Rejoin them
df1 = df1.withColumn("id", monotonically_increasing_id())
df2 = df2.withColumn("id", monotonically_increasing_id())
dffinal = df2.join(df1, "id", "outer").orderBy('id', ascending=True).drop("id")
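As a side note, monotonically_increasing_id only guarantees ids that increase within each DataFrame, not ids that match across the two DataFrames, so a positional index built with zipWithIndex can be a safer join key. A rough sketch (the helper name with_row_index is illustrative, not part of the code above):

# Illustrative sketch only: attach a consecutive 0-based row index so that two
# DataFrames with identical row order can be joined positionally.
def with_row_index(df_in, col_name="row_idx"):
    return (df_in.rdd.zipWithIndex()
                 .map(lambda pair: pair[0] + (pair[1],))  # append the index to each Row
                 .toDF(df_in.columns + [col_name]))

df1_i = with_row_index(df1)
df2_i = with_row_index(df2)
dffinal_idx = df1_i.join(df2_i, "row_idx").orderBy("row_idx").drop("row_idx")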
3. Final result
+----------------------------------+------------------------------------+
|Customer                          |Response                            |
+----------------------------------+------------------------------------+
|Ready to repay the amount.        |He is ready to pay $50 by next week.|
|Mr. John's credit card is blocked.|Asked to verify last 3 transactions.|
|Mr. Tom is unable to pay bills.   |Asked to verify registered email add|
+----------------------------------+------------------------------------+
This approach is fine when there are only a few columns. But is there a way to write generic code that builds these DataFrames and joins them based on the number of columns (for example, driven by an array or list of column names)?
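For illustration, one possible generic shape (an assumption on my part, not working code from above; the helper name spell_correct_columns and the UDF-based approach are only a sketch):

# Illustrative sketch only: spell-correct every requested column with a UDF,
# so no per-column collect/split/rebuild is needed.
from textblob import TextBlob
from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

correct_udf = udf(lambda s: str(TextBlob(s).correct()) if s is not None else None,
                  StringType())

def spell_correct_columns(df_in, columns=None):
    # Default to every column; a subset can be passed explicitly.
    for c in (columns or df_in.columns):
        df_in = df_in.withColumn(c, correct_udf(col(c)))
    return df_in

df_corrected = spell_correct_columns(df, columns=['Customer', 'Response'])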
Answer 0 (score: 0)
## consider the DataFrames below ##
import pandas as pd

df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
                    'B': ['B0', 'B1', 'B2', 'B3'],
                    'C': ['C0', 'C1', 'C2', 'C3'],
                    'D': ['D0', 'D1', 'D2', 'D3']},
                   index=[0, 1, 2, 3])

df2 = pd.DataFrame({'E': ['B2', 'B3', 'B6', 'B7'],
                    'F': ['D2', 'D3', 'D6', 'D7'],
                    'G': ['F2', 'F3', 'F6', 'F7']},
                   index=[2, 3, 6, 7])

result = pd.concat([df1, df2], axis=1, sort=False)
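Note that pd.concat with axis=1 aligns the two frames on their index labels, so index positions present in only one frame (0, 1 and 6, 7 here) come back padded with NaN. If a purely positional side-by-side concatenation is wanted instead, the indexes can be reset first; a small illustrative variation (my assumption, not part of the answer above):

# Illustrative only: align positionally instead of by index label
result = pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)],
                   axis=1, sort=False)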