I am trying to construct a new column that takes the value 1 the first time an element of column "type" appears with a particular value of column "xx", and 0 in any other case.
The original dataframe (df) I am working with is built as follows:
logger.Info("Hello {0}", jObject); // No structured logging, becomes string.Format
logger.Info("Hello {$myobj}", jObject); // Structured logging that forces JObject.ToString
logger.Info("Hello {myobj}", jObject.ToString()); // Converts to string upfront
import numpy as np
import pandas as pd

idx = [np.array(['Jan-18', 'Jan-18', 'Feb-18', 'Mar-18', 'Mar-18', 'Mar-18', 'Apr-18', 'Apr-18', 'May-18', 'Jun-18', 'Jun-18', 'Jun-18',
                 'Jul-18', 'Aug-18', 'Aug-18', 'Sep-18', 'Sep-18', 'Oct-18', 'Oct-18', 'Oct-18', 'Nov-18', 'Dec-18', 'Dec-18']),
       np.array(['A', 'B', 'B', 'A', 'B', 'C', 'A', 'B', 'B', 'A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'A', 'B', 'C'])]
data = [{'xx': 1000}, {'xx': 1000}, {'xx': 1200}, {'xx': 800}, {'xx': 800}, {'xx': 800}, {'xx': 1000}, {'xx': 1000}, {'xx': 800}, {'xx': 1200}, {'xx': 1200}, {'xx': 1200},
        {'xx': 1000}, {'xx': 1000}, {'xx': 1000}, {'xx': 1600}, {'xx': 1600}, {'xx': 1000}, {'xx': 800}, {'xx': 800}, {'xx': 1000}, {'xx': 1600}, {'xx': 1600}]
df = pd.DataFrame(data, index=idx, columns=['xx'])
df.index.names = ['date', 'type']
df = df.reset_index()
df['date'] = pd.to_datetime(df['date'], format='%b-%y')
df = df.set_index(['date', 'type'])
df['xx'] = df.xx.astype('float')
The result I am looking for is:
xx yy
date type
2018-01-01 A 1000.0 1.0
B 1000.0 1.0
2018-02-01 B 1200.0 1.0
2018-03-01 A 800.0 1.0
B 800.0 1.0
C 800.0 1.0
2018-04-01 A 1000.0 0.0
B 1000.0 0.0
2018-05-01 B 800.0 0.0
2018-06-01 A 1200.0 1.0
B 1200.0 0.0
C 1200.0 1.0
2018-07-01 A 1000.0 0.0
2018-08-01 B 1000.0 0.0
C 1000.0 1.0
2018-09-01 A 1600.0 1.0
B 1600.0 1.0
2018-10-01 C 1000.0 0.0
A 800.0 0.0
B 800.0 0.0
2018-11-01 A 1000.0 0.0
2018-12-01 B 1600.0 0.0
C 1600.0 1.0
The code I tried does not work; the error message it gives is:

ValueError: Wrong number of items passed 0, placement implies 1
I tried other approaches, such as nth(0), but they did not work either. Any suggestions on how to solve this are very welcome.
Answer 0 (score: 5)
Try:
# Within each 'type' group (index level 1), flag the first occurrence of each xx value
df['yy'] = (df.groupby(level=1).xx
              .apply(lambda x: (~x.duplicated()).astype(int)))
df['yy']
Output:
date type
2018-01-01 A 1
B 1
2018-02-01 B 1
2018-03-01 A 1
B 1
C 1
2018-04-01 A 0
B 0
2018-05-01 B 0
2018-06-01 A 1
B 0
C 1
2018-07-01 A 0
2018-08-01 B 0
C 1
2018-09-01 A 1
B 1
2018-10-01 C 0
A 0
B 0
2018-11-01 A 0
2018-12-01 B 0
C 1
Name: yy, dtype: int32
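If you prefer the result to come back already aligned with df's index as a plain integer column, a transform-based variant of the same idea should also work; this is a sketch of mine, not part of the original answer:

# Same first-occurrence logic; transform aligns the result with df's index directly
df['yy'] = df.groupby(level=1).xx.transform(lambda x: (~x.duplicated()).astype(int))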
Answer 1 (score: 5)
Use reset_index + groupby + cumcount, then compare with eq(0) and cast with astype:

df['yy'] = df.reset_index().groupby(['type','xx']).cumcount().eq(0).astype(int).values
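For reference (my illustration, not part of the original answer): cumcount numbers the rows of each (type, xx) group starting from zero, so eq(0) is True exactly for the first occurrence. The trailing .values is needed because the grouped result carries the flat RangeIndex of the reset frame rather than df's MultiIndex.

tmp = df.reset_index()
# 0 for the first row of each (type, xx) group, 1 for the second, and so on
counts = tmp.groupby(['type', 'xx']).cumcount()
print(counts.head(8).tolist())  # expected: [0, 0, 0, 0, 0, 0, 1, 1]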
Answer 2 (score: 4)
Using duplicated on a combined key: generate a list of (type, xx) tuples and wrap it in a pandas.Series, because I want to use the pandas.Series.duplicated method, then use numpy.where to choose between 0 and 1. Note: this does not use groupby, so it should be more efficient.
s = pd.Series([*zip(df.index.get_level_values('type'), df.xx)])
df.assign(id=np.where(s.duplicated(), 0, 1))
xx id
date type
2018-01-01 A 1000.0 1
B 1000.0 1
2018-02-01 B 1200.0 1
2018-03-01 A 800.0 1
B 800.0 1
C 800.0 1
2018-04-01 A 1000.0 0
B 1000.0 0
2018-05-01 B 800.0 0
2018-06-01 A 1200.0 1
B 1200.0 0
C 1200.0 1
2018-07-01 A 1000.0 0
2018-08-01 B 1000.0 0
C 1000.0 1
2018-09-01 A 1600.0 1
B 1600.0 1
2018-10-01 C 1000.0 0
A 800.0 0
B 800.0 0
2018-11-01 A 1000.0 0
2018-12-01 B 1600.0 0
C 1600.0 1
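The tuple Series is one way to build a combined key; the same mask can also be obtained by testing duplicates directly on the two columns after resetting the index (my variant, not from the original answer):

# True for every repeated (type, xx) combination, False for its first appearance
mask = df.reset_index().duplicated(subset=['type', 'xx']).values
df = df.assign(id=np.where(mask, 0, 1))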
Answer 3 (score: 1)
IIUC
# Index labels of the first row of each (type, xx) group
idx = df.groupby([df.index.get_level_values(1), df.xx]).head(1).index
df.loc[:, 'new'] = 0
df.loc[idx, 'new'] = 1
df
Out[869]:
xx new
date type
2018-01-01 A 1000.0 1
B 1000.0 1
2018-02-01 B 1200.0 1
2018-03-01 A 800.0 1
B 800.0 1
C 800.0 1
2018-04-01 A 1000.0 0
B 1000.0 0
2018-05-01 B 800.0 0
2018-06-01 A 1200.0 1
B 1200.0 0
C 1200.0 1
2018-07-01 A 1000.0 0
2018-08-01 B 1000.0 0
C 1000.0 1
2018-09-01 A 1600.0 1
B 1600.0 1
2018-10-01 C 1000.0 0
A 800.0 0
B 800.0 0
2018-11-01 A 1000.0 0
2018-12-01 B 1600.0 0
C 1600.0 1
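head(1).index collects the (date, type) labels of the first row of each (type, xx) group; assuming those labels are unique, the same column can be built in one step with isin (my condensed variant, not from the original answer):

# 1 where the row's (date, type) label belongs to the first row of its (type, xx) group
first = df.groupby([df.index.get_level_values(1), df.xx]).head(1).index
df['new'] = df.index.isin(first).astype(int)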