NumPy vs. nested dictionaries: which is more efficient in terms of runtime and memory?

Asked: 2015-06-09 11:31:18

Tags: python numpy dictionary

I am new to NumPy. I came across the following SO question: Why NumPy instead of Python lists?

The final comment on that question seems to indicate that NumPy can be slower on certain datasets.

I am working with a 1650 * 1650 * 1650 dataset. It essentially holds, for each movie in the MovieLens dataset, the similarity values with every other movie, along with the movie IDs.

My choice is between a 3D NumPy array and nested dictionaries. On a reduced 100 * 100 * 100 dataset, there was no significant difference in runtime.
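For what it's worth, here is a rough sketch (my own, not from the question) of what each candidate layout looks like and what a dense array of this shape would cost in memory; the shape and dtype are assumptions based on the sizes mentioned above:

    import numpy as np

    # Option 1: dense 3D float64 array, indexed by integer movie IDs.
    # 1650**3 cells at 8 bytes each is roughly 36 GB, so a dense cube
    # of this shape may not fit in RAM.
    n = 1650
    print(n ** 3 * 8 / 1e9)                 # ~35.9 GB

    sim_array = np.zeros((100, 100, 100))   # reduced size for testing
    sim_array[1, 2, 3] = 0.87               # write a similarity value
    value = sim_array[1, 2, 3]              # read it back

    # Option 2: nested dictionaries keyed by movie IDs. Unset entries
    # cost nothing, but every stored float carries dict/object
    # overhead, typically far more than 8 bytes per value.
    sim_dict = {}
    sim_dict.setdefault(1, {}).setdefault(2, {})[3] = 0.87
    value = sim_dict[1][2][3]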

Please find the IPython code snippet below:

for id1 in range(1, count + 1):
    # Ratings for movie id1, indexed by user_id
    data1 = df[df.movie_id == id1].set_index('user_id')[cols]
    for id2 in range(1, count + 1):
        if id1 != id2:
            # Ratings for movie id2, indexed by user_id
            data2 = df[df.movie_id == id2].set_index('user_id')[cols]
            sim = calculatePearsonCorrUnified(data1, data2)
        else:
            sim = 1  # a movie is perfectly similar to itself
        sim_matrix_panel[id1]['Sim'][id2] = sim



import numpy as np
from math import sqrt

def calculatePearsonCorrUnified(df1, df2):
    sim_score = 0

    # IDs present in both frames (common users for two movies,
    # or common movies for two users)
    common_movies_or_users = [temp_id for temp_id in df1.index
                              if temp_id in df2.index]
    n = len(common_movies_or_users)
    if n == 0:
        return sim_score

    # Ratings corresponding to user_1 / movie_1, present in the common list
    rating1 = df1.loc[df1.index.isin(common_movies_or_users)]['rating'].values
    # Ratings corresponding to user_2 / movie_2, present in the common list
    rating2 = df2.loc[df2.index.isin(common_movies_or_users)]['rating'].values

    sum1 = sum(rating1)
    sum2 = sum(rating2)

    # Sum up the squares
    sum1Sq = sum(np.square(rating1))
    sum2Sq = sum(np.square(rating2))

    # Sum up the products
    pSum = sum(np.multiply(rating1, rating2))

    # Calculate the Pearson score
    num = pSum - (sum1 * sum2 / n)
    den = sqrt(float(sum1Sq - pow(sum1, 2) / n) * float(sum2Sq - pow(sum2, 2) / n))
    if den == 0:
        return 0
    sim_score = num / den

    return sim_score
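As an aside (my own sketch, not from the post): once the two rating arrays are aligned, NumPy's corrcoef computes the same Pearson score in one vectorized call, avoiding the Python-level sum() loops above:

    import numpy as np

    def pearson_vectorized(rating1, rating2):
        # Pearson correlation of two equal-length 1-D rating arrays;
        # equivalent to the hand-rolled sums above, assuming both
        # arrays are non-empty.
        r = np.corrcoef(rating1, rating2)[0, 1]
        # corrcoef yields NaN when either array has zero variance,
        # mirroring the den == 0 guard above.
        return 0 if np.isnan(r) else r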

What is the most precise way to measure the runtime of each of these options?
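For reference, the standard-library timeit module (or IPython's %timeit magic) is a common way to get repeatable timings; build_sim_matrix_numpy and build_sim_matrix_dict below are hypothetical wrappers around the loop shown above, one per storage layout:

    import timeit

    # number=3 runs each variant three times and returns total seconds
    t_array = timeit.timeit('build_sim_matrix_numpy()',
                            globals=globals(), number=3)
    t_dict = timeit.timeit('build_sim_matrix_dict()',
                           globals=globals(), number=3)

    print('numpy array: %.3f s per run' % (t_array / 3))
    print('nested dict: %.3f s per run' % (t_dict / 3))

    # In IPython, %timeit build_sim_matrix_numpy() does the same interactively.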

Any pointers would be greatly appreciated.
