Efficiently processing 3 million Pandas DataFrame rows

Time: 2017-06-12 22:42:38

Tags: python csv pandas optimization dataframe

I need to process a large CSV file with 3 million rows and 7 columns. The DataFrame's shape: (3421083, 7)

My plan is to delete all rows that contain certain values (customer IDs). Here is how I went about it:

import pandas as pd

# keep track of iterations
track = 0

# import all transactions (transactions.csv)
transactions = pd.read_csv('transactions.csv')

# select all orders that are electronics orders and put them into a df
is_electronics = transactions[transactions.type == "electronics"]

# list that will store the users to remove from transactions.csv
users_to_remove = []

# iterate to add the appropriate values:

# add every user that ordered electronics to the list
for user in is_electronics.user_id:
    users_to_remove.append(user)

# delete those users from the transactions, one user at a time
for user in users_to_remove:
    transactions = transactions[transactions.user_id != user]
    track += 1
    if track == 100000:
        print(track)
        track = 0

transactions.to_csv('not_electronics.csv', index=False)

This operation takes a very long time to run; after an hour it still had not finished.

I have a quad-core desktop i5 at 3.2 GHz with 8 GB of RAM, but in Activity Monitor the machine is only using 5 GB of RAM and about 40% of the CPU.

Is there any way to speed this computation up, either by changing the code or by using a different library?

I also have a GPU (a GTX 970). Could I use it for this processing?

Thanks.

1 Answer:

Answer 0 (score: 5)

Use isin to drop all of those users in a single vectorized pass:

# boolean mask of the electronics orders
is_electronics = transactions.type == 'electronics'

# unique user ids that ordered electronics at least once
users_to_remove = transactions.loc[is_electronics, 'user_id'].unique()

# keep only the rows whose user_id is not in that set
transactions[~transactions.user_id.isin(users_to_remove)]
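For context, here is a minimal end-to-end sketch of how the isin approach slots into the original script (assuming the same transactions.csv layout and column names as in the question):

import pandas as pd

transactions = pd.read_csv('transactions.csv')

# one vectorized membership test instead of one filter pass per user
is_electronics = transactions.type == 'electronics'
users_to_remove = transactions.loc[is_electronics, 'user_id'].unique()
not_electronics = transactions[~transactions.user_id.isin(users_to_remove)]

not_electronics.to_csv('not_electronics.csv', index=False)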

Removed the prior suggestion, as it was not as safe.

For posterity, here is @DSM's suggestion:

import numpy as np

# compare against the raw numpy arrays
is_electronics = transactions.type.values == 'electronics'
users = transactions.user_id.values

# keep rows whose user never bought electronics
transactions[~np.in1d(users, users[is_electronics])]
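Why either vectorized version is so much faster: the original loop filters the entire 3.4-million-row frame once per user, so the work grows roughly as rows × unique users, whereas isin and np.in1d build a hash table of the offending user ids once and scan the rows a single time. A rough, hypothetical benchmark on synthetic data shaped like the question's frame (the user count and type mix are assumptions) illustrates the gap:

import numpy as np
import pandas as pd
from time import perf_counter

# synthetic stand-in for transactions.csv: same row count,
# hypothetical 500k users and 2% electronics orders
rng = np.random.default_rng(0)
n = 3_421_083
transactions = pd.DataFrame({
    'user_id': rng.integers(0, 500_000, size=n),
    'type': rng.choice(['electronics', 'clothing', 'food'],
                       size=n, p=[0.02, 0.49, 0.49]),
})

start = perf_counter()
bad_users = transactions.loc[transactions.type == 'electronics', 'user_id'].unique()
not_electronics = transactions[~transactions.user_id.isin(bad_users)]

# finishes in a fraction of a second, versus the 1+ hour loop
print(f'isin took {perf_counter() - start:.3f}s')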