I have a dataset with person name (person_name), day, and shirt color (shirt_color) as columns.
Each person wore a shirt of a certain color on a given day. The number of days can be arbitrary.
For example, the input:
name day color
----------------
John 1 White
John 2 White
John 3 Blue
John 4 Blue
John 5 White
Tom 2 White
Tom 3 Blue
Tom 4 Blue
Tom 5 Black
Jerry 1 Black
Jerry 2 Black
Jerry 4 Black
Jerry 5 White
I need to find the most frequently worn color for each person.
For example, the result:
name color
-------------
Jerry Black
John White
Tom Blue
I'm currently doing the following to get the result. It works, but it is slow:
most_frequent_list = [[name, group.color.mode()[0]]
                      for name, group in data.groupby('name')]
most_frequent_df = pd.DataFrame(most_frequent_list, columns=['name', 'color'])
Now suppose I have a dataset with 5 million unique names. What is the best/fastest way to do this?
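For scale testing, here is a minimal sketch of how one might build a large synthetic frame; the sizes, seed, and values are made up for illustration and are not part of the question's data:
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_rows = 5_000_000                                       # hypothetical size, matching the scale in the question
names = rng.integers(0, 1_000_000, n_rows).astype(str)   # stand-ins for person names
colors = rng.choice(['White', 'Blue', 'Black'], n_rows)  # stand-ins for shirt colors
big = pd.DataFrame({'name': names, 'color': colors})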
Answer 0 (score: 6)
numpy.add.at and pandas.factorize
This is intended to be fast, but I've also tried to organize it for readability.
import numpy as np
import pandas as pd

i, r = pd.factorize(df.name)    # i: integer code per row, r: the unique names
j, c = pd.factorize(df.color)   # j: integer code per row, c: the unique colors
n, m = len(r), len(c)
b = np.zeros((n, m), dtype=np.int64)
np.add.at(b, (i, j), 1)         # b becomes the name-by-color count matrix
pd.Series(c[b.argmax(1)], r)    # most frequent color per name
John White
Tom Blue
Jerry Black
dtype: object
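For intuition, b is just the name-by-color contingency table; pd.crosstab builds the same table (more slowly) with labels, which can serve as a readable cross-check. A sketch:
counts = pd.crosstab(df.name, df.color)  # same counts as b, but with name/color labels
counts.idxmax(axis=1)                    # most frequent color per name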
groupby, size, and idxmax
df.groupby(['name', 'color']).size().unstack().idxmax(1)
name
Jerry Black
John White
Tom Blue
dtype: object
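If you want to see the intermediate step, .size().unstack() pivots the per-(name, color) counts into a name-by-color table; combinations that never occur become NaN, which idxmax skips:
df.groupby(['name', 'color']).size().unstack()  # counts per name and color; missing pairs are NaN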
Counter
¯\_(ツ)_/¯
from collections import Counter
df.groupby('name').color.apply(lambda c: Counter(c).most_common(1)[0][0])
name
Jerry Black
John White
Tom Blue
Name: color, dtype: object
Answer 1 (score: 4)
Update
This must be hard to beat (on the example dataframe it is about 10x faster than any of the proposed pandas solutions and 1.5x faster than the proposed numpy solution). The gist is to stay away from pandas and use itertools.groupby, which does a much better job when non-numerical data is involved.
from itertools import groupby
from collections import Counter
pd.Series({x: Counter(z[-1] for z in y).most_common(1)[0][0]
           for x, y in groupby(sorted(df.values.tolist()), key=lambda x: x[0])})
# Jerry Black
# John White
# Tom Blue
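Note that itertools.groupby only merges consecutive equal keys, which is why the rows are sorted first. A quick illustration:
from itertools import groupby

[(k, len(list(g))) for k, g in groupby(['a', 'b', 'a'])]          # [('a', 1), ('b', 1), ('a', 1)]
[(k, len(list(g))) for k, g in groupby(sorted(['a', 'b', 'a']))]  # [('a', 2), ('b', 1)]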
Old answer
Here is another approach. It is actually slower than the original one, but I'm keeping it here:
data.groupby('name')['color']\
    .apply(pd.Series.value_counts)\
    .unstack().idxmax(axis=1)
# name
# Jerry Black
# John White
# Tom Blue
Answer 2 (score: 4)
Using pd.Series.mode
df.groupby('name').color.apply(pd.Series.mode).reset_index(level=1,drop=True)
Out[281]:
name
Jerry Black
John White
Tom Blue
Name: color, dtype: object
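One caveat with pd.Series.mode: if a name has two or more colors tied for the top count, mode() returns all of them, so that name ends up with multiple rows in the grouped result. A small sketch of the tie behaviour:
import pandas as pd

pd.Series(['White', 'Blue', 'White', 'Blue']).mode()  # returns both 'Blue' and 'White'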
Answer 3 (score: 2)
How about doing two groupbys with transform(max)?
df = df.groupby(["name", "color"], as_index=False, sort=False).count()   # one row per (name, color), count stored in 'day'
idx = df.groupby("name", sort=False).transform(max)["day"] == df["day"]  # rows whose count equals their name's maximum
df = df[idx][["name", "color"]].reset_index(drop=True)
Output:
name color
0 John White
1 Tom Blue
2 Jerry Black
Answer 4 (score: 0)
Similar to @piRSquared's pd.factorize and np.add.at answer.
We encode the strings as integer codes in the same way:
i, r = pd.factorize(df.name)
j, c = pd.factorize(df.color)
n, m = len(r), len(c)
b = np.zeros((n, m), dtype=np.int64)
But instead of doing this:
np.add.at(b, (i, j), 1)
max_columns_after_add_at = b.argmax(1)
we obtain max_columns_after_add_at with a jitted function that does the accumulation and finds the maximum within the same loop:
@nb.jit(nopython=True, cache=True)
def add_at(x, rows, cols, val):
    max_vals = np.zeros((x.shape[0], ), np.int64)
    max_inds = np.zeros((x.shape[0], ), np.int64)
    for i in range(len(rows)):
        r = rows[i]
        c = cols[i]
        x[r, c] += 1
        if x[r, c] > max_vals[r]:
            max_vals[r] = x[r, c]
            max_inds[r] = c
    return max_inds
Then, finally, build the result (the full sequence is sketched below):
ans = pd.Series(c[max_columns_after_add_at], r)
So the difference lies in how the argmax(axis=1) step that would otherwise follow np.add.at() is done.
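Putting the pieces together (a sketch, assuming df is the example frame from the question and add_at is the jitted function defined above):
import numpy as np
import pandas as pd

i, r = pd.factorize(df.name)
j, c = pd.factorize(df.color)
b = np.zeros((len(r), len(c)), dtype=np.int64)
max_columns_after_add_at = add_at(b, i, j, 1)    # counts and per-row argmax in a single pass
ans = pd.Series(c[max_columns_after_add_at], r)  # most frequent color per name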
Timing analysis
import numpy as np
import numba as nb
m = 100000
n = 100000
rows = np.random.randint(low = 0, high = m, size=10000)
cols = np.random.randint(low = 0, high = n, size=10000)
So this:
%%time
x = np.zeros((m,n))
np.add.at(x, (rows, cols), 1)
maxs = x.argmax(1)
gives:
CPU times: user 12.4 s, sys: 38 s, total: 50.4 s
Wall time: 50.5 s
and this:
%%time
x = np.zeros((m,n))
maxs2 = add_at(x, rows, cols, 1)
gives:
CPU times: user 108 ms, sys: 39.4 s, total: 39.5 s
Wall time: 38.4 s
Answer 5 (score: 0)
For anyone who wants to turn the table above into a DataFrame and try out the posted answers, you can use this snippet. Copy-paste the table into a notebook cell as shown below, making sure to remove the hyphen row, and then convert the pasted lines into a list of tuples:
df = pd.DataFrame([tuple(i.split()) for i in l])  # l holds the pasted table, one string per line
headers = df.iloc[0]
new_df = pd.DataFrame(df.values[1:], columns=headers)
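For completeness, one way to build l before running the snippet above (a sketch; the triple-quoted paste and the name table_text are assumptions, since the original snippet does not show how l is created):
import pandas as pd

table_text = """name day color
John 1 White
John 2 White
John 3 Blue
John 4 Blue
John 5 White
Tom 2 White
Tom 3 Blue
Tom 4 Blue
Tom 5 Black
Jerry 1 Black
Jerry 2 Black
Jerry 4 Black
Jerry 5 White"""
l = table_text.splitlines()  # one string per table row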
Now, using new_df, you can try the answers above, for example the one by @piRSquared.
Answer 6 (score: 0)
Most of the test results discussed in the other answers are skewed, because they were measured on a very small test DataFrame. Pandas has some fixed but normally negligible setup time, and on a dataset this small it dominates the measurement.
On larger datasets, the fastest approach is to use pd.Series.mode() with agg():
df.groupby('name')['color'].agg(pd.Series.mode)
Test bench:
import numpy as np
import pandas as pd

arr = np.array([
    ('John', 1, 'White'),
    ('John', 2, 'White'),
    ('John', 3, 'Blue'),
    ('John', 4, 'Blue'),
    ('John', 5, 'White'),
    ('Tom', 2, 'White'),
    ('Tom', 3, 'Blue'),
    ('Tom', 4, 'Blue'),
    ('Tom', 5, 'Black'),
    ('Jerry', 1, 'Black'),
    ('Jerry', 2, 'Black'),
    ('Jerry', 4, 'Black'),
    ('Jerry', 5, 'White')],
    dtype=[('name', 'O'), ('day', 'i8'), ('color', 'O')])
from timeit import Timer
from itertools import groupby
from collections import Counter
df = pd.DataFrame.from_records(arr).sample(100_000, replace=True)
def factorize():
    i, r = pd.factorize(df.name)
    j, c = pd.factorize(df.color)
    n, m = len(r), len(c)
    b = np.zeros((n, m), dtype=np.int64)
    np.add.at(b, (i, j), 1)
    return pd.Series(c[b.argmax(1)], r)
t_factorize = Timer(lambda: factorize())
t_idxmax = Timer(lambda: df.groupby(['name', 'color']).size().unstack().idxmax(1))
t_aggmode = Timer(lambda: df.groupby('name')['color'].agg(pd.Series.mode))
t_applymode = Timer(lambda: df.groupby('name').color.apply(pd.Series.mode).reset_index(level=1,drop=True))
t_aggcounter = Timer(lambda: df.groupby('name')['color'].agg(lambda c: Counter(c).most_common(1)[0][0]))
t_applycounter = Timer(lambda: df.groupby('name').color.apply(lambda c: Counter(c).most_common(1)[0][0]))
t_itertools = Timer(lambda: pd.Series(
    {x: Counter(z[-1] for z in y).most_common(1)[0][0]
     for x, y in groupby(sorted(df.values.tolist()), key=lambda x: x[0])}))
n = 100
[print(r) for r in (
    f"{t_factorize.timeit(number=n)=}",
    f"{t_idxmax.timeit(number=n)=}",
    f"{t_aggmode.timeit(number=n)=}",
    f"{t_applymode.timeit(number=n)=}",
    f"{t_applycounter.timeit(number=n)=}",
    f"{t_aggcounter.timeit(number=n)=}",
    f"{t_itertools.timeit(number=n)=}",
)]
t_factorize.timeit(number=n)=1.325189442
t_idxmax.timeit(number=n)=1.0613339019999999
t_aggmode.timeit(number=n)=1.0495010750000002
t_applymode.timeit(number=n)=1.2837302849999999
t_applycounter.timeit(number=n)=1.9432825890000007
t_aggcounter.timeit(number=n)=1.8283823839999993
t_itertools.timeit(number=n)=7.0855046380000015