Merging pandas Series (the Series themselves)

Date: 2016-11-22 04:53:32

Tags: list pandas merge group-by series

I have a pandas DataFrame in which one of the columns is itself a Series. For example:

df.head()

Col1    Col2  
1       ["name1","name2","name3"]  
1       ["name3","name2","name4"]  
2       ["name1","name2","name3"] 
2       ["name1","name5","name6"] 

I need to concatenate Col2 within each Col1 group. I want something like:
Col1    Col2  
1       ["name1","name2","name3","name4"]  
2       ["name1","name2","name3","name5","name6"]

I tried using groupby with

.agg({"Col2":lambda x: pd.Series.append(x)})

but this raises an error saying that two arguments are required. I also tried sum inside the agg function; that does not fail, but it does not remove the duplicates.
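(For reference, summing Python lists simply concatenates them, which is why the duplicates survive; a minimal plain-Python illustration, not from the original question:)

```python
# sum() with a list as the start value concatenates the inner
# lists but keeps any duplicate elements.
merged = sum([["name1", "name2"], ["name2", "name3"]], [])
print(merged)  # ['name1', 'name2', 'name2', 'name3']
```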

How can I do this?

2 answers:

Answer 0 (score: 1)

Yes, you can't use .aggby{} on categorical data like this. Anyway, here is my stab at the problem, with some help from numpy. (Written out step by step for clarity.)

import numpy as np
import pandas as pd

# Set group by ("Col1") unique values
groupby = df["Col1"].unique()

# Create empty dict to store values on each iteration
d = {}

for i,val in enumerate(groupby):

    # Set "Col1" key, to the unique value (e.g., 1)
    d.setdefault("Col1",[]).append(val)

    # Create empty list to store values from "Col2"
    col2_unis=[]

    # Create sub-DataFrame for each unique groupby value
    sdf = df.loc[df["Col1"]==val]

    # Loop through the 2D-array/Series "Col2" and append each
    # value to col2_unis (using a list comprehension)
    col2_unis.append([[j for j in array] for i,array in enumerate(sdf["Col2"].values)])

    # Set "Col2" key, to be unique values of col2_unis
    d.setdefault("Col2",[]).append(np.unique(col2_unis))

new_df = pd.DataFrame(d)

print(new_df)

A more condensed version looks like this:

d = {}
for i,val in enumerate(df["Col1"].unique()):
    d.setdefault("Col1",[]).append(val)
    sdf = df.loc[df["Col1"]==val]
    d.setdefault("Col2",[]).append(np.unique([[j for j in array] for i,array in enumerate(df.loc[df["Col1"]==val, "Col2"].values)]))
new_df = pd.DataFrame(d)
print(new_df)

See this related SO question to learn more about Python's dictionary .setdefault() method.
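The key trick in the code above is dict.setdefault, which returns the value already stored under a key, inserting the given default first if the key is missing. A tiny sketch:

```python
d = {}
# The first call inserts the default [] and returns it; later calls
# return the existing list, so append accumulates values per key.
d.setdefault("Col1", []).append(1)
d.setdefault("Col1", []).append(2)
print(d)  # {'Col1': [1, 2]}
```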

Answer 1 (score: 1)

You can use groupby with apply and a custom function: first flatten the nested lists with chain (the fastest solution), then remove duplicates with set, convert to a list, and finally sort:

import pandas as pd
from  itertools import chain

df = pd.DataFrame({'Col1':[1,1,2,2],
                   'Col2':[["name1","name2","name3"],
                           ["name3","name2","name4"],
                           ["name1","name2","name3"],
                           ["name1","name5","name6"]]})

print (df)
   Col1                   Col2
0     1  [name1, name2, name3]
1     1  [name3, name2, name4]
2     2  [name1, name2, name3]
3     2  [name1, name5, name6]
print (df.groupby('Col1')['Col2']
         .apply(lambda x: sorted(list(set(list(chain.from_iterable(x))))))
         .reset_index())
   Col1                                 Col2
0     1         [name1, name2, name3, name4]
1     2  [name1, name2, name3, name5, name6]

The solution can be even simpler, needing only chain, set and sorted:

print (df.groupby('Col1')['Col2']
         .apply(lambda x: sorted(set(chain.from_iterable(x))))
         .reset_index())

   Col1                                 Col2
0     1         [name1, name2, name3, name4]
1     2  [name1, name2, name3, name5, name6]
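Not part of the original answers, but on pandas 0.25 or newer a similar result can be sketched with DataFrame.explode, reusing the sample frame above (the method chain here is an assumption about your pandas version, not the answerer's code):

```python
import pandas as pd

df = pd.DataFrame({'Col1': [1, 1, 2, 2],
                   'Col2': [["name1", "name2", "name3"],
                            ["name3", "name2", "name4"],
                            ["name1", "name2", "name3"],
                            ["name1", "name5", "name6"]]})

# One row per list element, drop duplicate (Col1, Col2) pairs,
# then collect each group back into a sorted list.
res = (df.explode('Col2')
         .drop_duplicates()
         .groupby('Col1')['Col2']
         .apply(lambda s: sorted(s))
         .reset_index())
print(res)
```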