Optimizing iteration and substitution on a large dataset

Date: 2019-07-10 08:40:15

Tags: python pandas performance numpy itertools

I made a post here, but since I have no answer there so far, I thought I might also try here, as I find it relevant.

I have the following code:

import pandas as pd
import numpy as np
import itertools 
from pprint import pprint

# Importing the data
df=pd.read_csv('./GPr.csv', sep=',',header=None)
data=df.values
res = np.array([[i for i in row if i == i] for row in data.tolist()], dtype=object)

# This function will make the subsets of a list 
def subsets(m,n):
    z = []
    for i in m:
        z.append(list(itertools.combinations(i, n)))
    return(z)

# Make the subsets of size 2 
l=subsets(res,2)
l=[val for sublist in l for val in sublist]
Pairs=list(dict.fromkeys(l)) 

# Modify the pairs: 
mod=[':'.join(x) for x in Pairs]

# Define new lists
t0=res.tolist()
t0=list(map(tuple,t0))  # materialize the map: a lazy map iterator would be exhausted after the first pair
t1=Pairs
t2=mod

# Make substitutions
result = []
for v1, v2 in zip(t1, t2):
    out = []
    for i in t0:
        common = set(v1).intersection(i)
        if set(v1) == common:
            out.append(tuple(list(set(i) - common) + [v2]))
        else:
            out.append(tuple(i))
    result.append(out)

pprint(result, width=200)  

# Delete duplicates
d = {tuple(x): x for x in result} 
remain= list(d.values())  

What it does is the following. First, we import the csv file we want to work with (linked here). You will see that it is a list of elements, and for each element we find the subsets of size 2. Then we modify the subsets and call the result mod: what it does is take ('a','b') and turn it into 'a:b'. Then, for each pair, we go through the original data and substitute the joined form wherever the pair can be found. Finally, we delete all the duplicates.
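As a minimal sketch of the steps described above, using a single made-up three-item row rather than the real GPr.csv data:

```python
import itertools

# One hypothetical row of items (not from the real file)
row = ('a', 'b', 'c')

# Size-2 subsets of the row
pairs = list(itertools.combinations(row, 2))   # [('a','b'), ('a','c'), ('b','c')]

# Modified pairs: ('a','b') -> 'a:b'
mod = [':'.join(p) for p in pairs]             # ['a:b', 'a:c', 'b:c']

# For each pair, replace its two elements in the row with the joined form
for pair, joined in zip(pairs, mod):
    rest = [x for x in row if x not in pair]
    print(tuple(rest + [joined]))
# ('c', 'a:b')
# ('b', 'a:c')
# ('a', 'b:c')
```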

The code works fine for a small amount of data. The problem is that the file I have contains 30082 pairs, each of which should be scanned against ~49000 lists to make the substitutions. I run this in Jupyter, and after a while the kernel dies. How can I optimize this?
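For illustration of where the memory goes: the loop above materializes one full copy of the ~49000 rows per pair. A generator-based variant (a sketch only; whether this alone is enough for the 30082-pair workload is an assumption) keeps just one substituted row in memory at a time:

```python
def substitutions(rows, pairs, mods):
    """Lazily yield, for each (pair, mod), the substituted version of every row.

    Nothing is materialized: memory use stays at one row's worth of data
    instead of len(pairs) * len(rows) tuples. The remaining items are
    sorted here for deterministic output (the original used an unordered set).
    """
    for pair, joined in zip(pairs, mods):
        pair_set = set(pair)
        for row in rows:
            if pair_set <= set(row):
                yield tuple(sorted(set(row) - pair_set)) + (joined,)
            else:
                yield tuple(row)

# Toy data, not the real GPr.csv contents
rows = [('a', 'b', 'c'), ('b', 'c', 'd')]
pairs = [('b', 'c')]
mods = [':'.join(p) for p in pairs]
for out in substitutions(rows, pairs, mods):
    print(out)
# ('a', 'b:c')
# ('d', 'b:c')
```

Each yielded tuple can then be written to disk (or deduplicated on the fly with a set) instead of being appended to one giant result list.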

1 answer:

Answer 0: (score: 0)

Tested on the whole file.

Here you go:


For the rows:

import pandas as pd
import numpy as np
import itertools

# Importing the data
df=pd.read_csv('./GPr_test.csv', sep=',',header=None)

# set new data frame
df2 = pd.DataFrame()
pd.options.display.max_colwidth = 200


for index, row in df.iterrows():
    # clean data
    clean_list = [x for x in list(row.values) if str(x) != 'nan']
    # create combinations
    items_combinations = list(itertools.combinations(clean_list, 2))
    # create set combinations
    joint_items_combinations = [':'.join(x) for x in items_combinations]

    # collect rest of item names
    # handle first element
    if index == 0:
        additional_names = list(df.loc[1].values)
        additional_names = [x for x in additional_names if str(x) != 'nan']
    else:
        additional_names = list(df.loc[index-1].values)
        additional_names = [x for x in additional_names if str(x) != 'nan']

    # get set data
    result = []
    for combination, joint_combination in zip(items_combinations, joint_items_combinations):
        set_data = [item for item in clean_list if item not in combination] + [joint_combination]
        result.append((set_data, additional_names))

    # add data to data frame
    data = pd.DataFrame({"result": result})
    df2 = df2.append(data)  # note: DataFrame.append is deprecated in newer pandas; pd.concat is the replacement


df2 = df2.reset_index().drop(columns=['index'])

Output:

chicken cinnamon    ginger  onion   soy_sauce
cardamom    coconut pumpkin