I'm trying to build a feature where I feed in a list of URLs that go through 301 redirects and it flattens the chains for me. I want to save the resulting list as a CSV so I can hand it to a developer who can implement the redirects and get rid of the 301 hops.
For example, my crawler produces a list of 301 redirect chains like this:
URL1 | URL2 | URL3 | URL4
example.com/url1 | example.com/url2 | |
example.com/url3 | example.com/url4 | example.com/url5 |
example.com/url6 | example.com/url7 | example.com/url8 | example.com/10
example.com/url9 | example.com/url7 | example.com/url8 |
example.com/url23 | example.com/url10 | |
example.com/url24 | example.com/url45 | example.com/url46 |
example.com/url25 | example.com/url45 | example.com/url46 |
example.com/url26 | example.com/url45 | example.com/url46 |
example.com/url27 | example.com/url45 | example.com/url46 |
example.com/url28 | example.com/url45 | example.com/url46 |
example.com/url29 | example.com/url45 | example.com/url46 |
example.com/url30 | example.com/url45 | example.com/url46 |
The output I want to get is:
URL1 | URL2
example.com/url1 | example.com/url2
example.com/url3 | example.com/url5
example.com/url4 | example.com/url5
example.com/url6 | example.com/10
example.com/url7 | example.com/10
example.com/url8 | example.com/10
example.com/url23 | example.com/url10
...
I have used the following code to convert the Pandas DataFrame into a list of lists:
import pandas as pd
import numpy as np

csv1 = pd.read_csv('Example_301_sheet.csv', header=None)

outlist = []

def link_flat(csv):
    # Append each row of the DataFrame to outlist as a plain Python list
    for row in csv.iterrows():
        index, data = row
        outlist.append(data.tolist())
    return outlist
This returns each row as a list, all of them nested inside one outer list, like this:
[['example.com/url1', 'example.com/url2', nan, nan],
['example.com/url3', 'example.com/url4', 'example.com/url5', nan],
['example.com/url6',
'example.com/url7',
'example.com/url8',
'example.com/10'],
['example.com/url9', 'example.com/url7', 'example.com/url8', nan],
['example.com/url23', 'example.com/url10', nan, nan],
['example.com/url24', 'example.com/url45', 'example.com/url46', nan],
['example.com/url25', 'example.com/url45', 'example.com/url46', nan],
['example.com/url26', 'example.com/url45', 'example.com/url46', nan],
['example.com/url27', 'example.com/url45', 'example.com/url46', nan],
['example.com/url28', 'example.com/url45', 'example.com/url46', nan],
['example.com/url29', 'example.com/url45', 'example.com/url46', nan],
['example.com/url30', 'example.com/url45', 'example.com/url46', nan]]
How can I match every URL in each nested list with the last URL in that same list, to produce the output shown above? (A rough sketch of the pairing I mean follows.)
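In plain Python, operating on the outlist produced above, the pairing would look roughly like this (only a sketch of the intent, not necessarily a good way to do it; variable names are illustrative):

# Pair every URL in a row with the last real URL in that same row,
# skipping the NaN padding and duplicate pairs.
pairs = []
for row in outlist:
    urls = [u for u in row if isinstance(u, str)]  # drop the NaN padding
    final = urls[-1]                               # last URL in the chain
    for u in urls[:-1]:
        if (u, final) not in pairs:
            pairs.append((u, final))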
Answer 0 (score: 2)
You need to use groupby + last to determine the last valid item in each row, then reshape your DataFrame with melt to build the two-column mapping.
df.columns = range(len(df.columns))

df = (
    df.assign(URL2=df.stack().groupby(level=0).last())  # last non-null URL in each row
      .melt('URL2', value_name='URL1')                  # unpivot every source column into URL1
      .drop(columns='variable')
      .dropna()
      .drop_duplicates()
      .query('URL1 != URL2')                            # discard rows that map a URL to itself
      .sort_index(axis=1)
      .reset_index(drop=True)
)
df
URL1 URL2
0 example.com/url1 example.com/url2
1 example.com/url3 example.com/url5
2 example.com/url6 example.com/10
3 example.com/url9 example.com/url8
4 example.com/url23 example.com/url10
5 example.com/url24 example.com/url46
6 example.com/url25 example.com/url46
7 example.com/url26 example.com/url46
8 example.com/url27 example.com/url46
9 example.com/url28 example.com/url46
10 example.com/url29 example.com/url46
11 example.com/url30 example.com/url46
12 example.com/url4 example.com/url5
13 example.com/url7 example.com/10
14 example.com/url7 example.com/url8
15 example.com/url45 example.com/url46
16 example.com/url8 example.com/10
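Since the goal is to hand the mapping to a developer as a CSV, one more line writes the flattened result out (the filename here is just a placeholder):

# Save the old-URL -> final-URL mapping; 'flattened_301s.csv' is an example name.
df.to_csv('flattened_301s.csv', index=False)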