I calculate some aggregate functions using groupby and agg, because I need different aggregate functions for different variables, e.g. not the sum of everything, but the sum and mean of x, the mean of y, etc.
Is there a way to calculate a weighted average using agg? I found lots of examples, but none with agg.
I can calculate the weighted average manually, as in the code below (note the lines marked with **), but I wonder whether there is a more elegant and direct way?
Can I create my own function and use it with agg?
To be clear, I fully understand that there are other solutions, e.g.
Calculate weighted average with pandas dataframe
and many, many more. But, as I said, I am not sure how to implement those solutions with agg, and I need agg because I need to apply different aggregation functions to different columns (again, not the sum of everything, but the sum and mean of x, the mean of y, etc.).
Thanks!
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(5, 8, (1000, 4)), columns=['a', 'b', 'c', 'd'])

**df['c * b'] = df['c'] * df['b']**

g = df.groupby('a').agg(
    {'b': ['sum', lambda x: x.sum() / df['b'].sum(), 'mean'],
     'c': ['sum', 'mean'],
     'd': ['sum'],
     'c * b': ['sum']})
g.columns = g.columns.map('_'.join)

**g['weighted average of c'] = g['c * b_sum'] / g['b_sum']**
Answer 0 (score: 2)
It is possible, but really complicated:
import numpy as np
import pandas as pd

np.random.seed(234)
df = pd.DataFrame(np.random.randint(5, 8, (1000, 4)), columns=['a', 'b', 'c', 'd'])

# weighted mean of c per group, weighted by b (x is the 'b' column of one group)
wm = lambda x: (x * df.loc[x.index, "c"]).sum() / x.sum()
wm.__name__ = 'wa'

# share of the group's b total in the overall b total
f = lambda x: x.sum() / df['b'].sum()
f.__name__ = '%'

g = df.groupby('a').agg(
    {'b': ['sum', f, 'mean', wm],
     'c': ['sum', 'mean'],
     'd': ['sum']})
g.columns = g.columns.map('_'.join)
print(g)
d_sum c_sum c_mean b_sum b_% b_mean b_wa
a
5 2104 2062 5.976812 2067 0.344672 5.991304 5.969521
6 1859 1857 5.951923 1875 0.312656 6.009615 5.954667
7 2058 2084 6.075802 2055 0.342671 5.991254 6.085645
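As a side note (not part of the original answer): on pandas 0.25 or newer, the same table can be built with named aggregation, which avoids setting __name__ on the lambdas and the g.columns.map('_'.join) step. A minimal sketch reusing df, wm and f from above; b_pct is a stand-in name for the b_% column, since % is not a valid keyword:

# minimal sketch, assumes pandas >= 0.25 (named aggregation)
g2 = df.groupby('a').agg(
    b_sum=('b', 'sum'),
    b_pct=('b', f),
    b_mean=('b', 'mean'),
    b_wa=('b', wm),
    c_sum=('c', 'sum'),
    c_mean=('c', 'mean'),
    d_sum=('d', 'sum'))
print(g2)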
Solution with apply:

def func(x):
    # x is the sub-DataFrame for one group of 'a'
    b1 = x['b'].sum()
    b2 = x['b'].sum() / df['b'].sum()
    # weighted mean of c, weighted by b
    b3 = (x['b'] * x['c']).sum() / x['b'].sum()
    b4 = x['b'].mean()
    c1 = x['c'].sum()
    c2 = x['c'].mean()
    d1 = x['d'].sum()
    cols = ['b sum', 'b %', 'wa', 'b mean', 'c sum', 'c mean', 'd sum']
    return pd.Series([b1, b2, b3, b4, c1, c2, d1], index=cols)

g = df.groupby('a').apply(func)
print(g)
b sum b % wa b mean c sum c mean d sum
a
5 2067.0 0.344672 5.969521 5.991304 2062.0 5.976812 2104.0
6 1875.0 0.312656 5.954667 6.009615 1857.0 5.951923 1859.0
7 2055.0 0.342671 6.085645 5.991254 2084.0 6.075802 2058.0
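If you only want to double-check the weighted averages themselves, np.average with its weights argument computes the same per-group numbers (a small verification sketch, not part of the original answer):

# weighted mean of c per group, weighted by b, via np.average
check = df.groupby('a').apply(lambda x: np.average(x['c'], weights=x['b']))
print(check)

This should match the wa column of both solutions above.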