Given transactions between nodes in a (possibly large, ~2+ GB) JSON file, with ~1 million nodes and ~10 million transactions, each transaction involving 10-1000 nodes, e.g.
{"transactions":
[
{"transaction 1": ["node1","node2","node7"], "weight":0.41},
{"transaction 2": ["node4","node2","node1","node3","node10","node7","node9"], "weight":0.67},
{"transaction 3": ["node3","node10","node11","node2","node1"], "weight":0.33},...
]
}
What is the most elegant and efficient pythonic way to convert this into a node affinity matrix, where the affinity aggregates the weighted transactions between the nodes?
affinity [i,j] = weighted transaction count between nodes[i] and nodes[j] = affinity [j,i]
e.g.
affinity[node1, node7] = [0.41 (transaction1) + 0.67 (transaction2)] / 2 = affinity[node7, node1]
Note: the affinity matrix is symmetric, so it is sufficient to compute only the lower triangle.
The values are not representative *** the example below only illustrates the structure!
        node1 | node2 | node3 | node4 | ...
node1     1      .4      .1      .9    ...
node2     .4     1       .6      .3    ...
node3     .1     .6      1       .7    ...
node4     .9     .3      .7      1     ...
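The worked example can be checked with a tiny sketch over the three sample transactions (node names kept as strings):

```python
# The three sample transactions as (node set, weight) pairs
transactions = [
    ({"node1", "node2", "node7"}, 0.41),
    ({"node4", "node2", "node1", "node3", "node10", "node7", "node9"}, 0.67),
    ({"node3", "node10", "node11", "node2", "node1"}, 0.33),
]

def affinity(a, b):
    # Average weight over the transactions containing both nodes; 0 if none do
    weights = [w for nodes, w in transactions if a in nodes and b in nodes]
    return sum(weights) / len(weights) if weights else 0.0

print(affinity("node1", "node7"))  # (0.41 + 0.67) / 2
```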
Answer 0 (score: 2)
First, I would clean up the data so that each node is represented by an integer, and start from a dict like this
data=[{'transaction': [1, 2, 7], 'weight': 0.41},
{'transaction': [4, 2, 1, 3, 10, 7, 9], 'weight': 0.67},
{'transaction': [3, 10, 11, 2, 1], 'weight': 0.33}]
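Getting from the raw JSON above to this list of dicts could look like the following sketch (the key names follow the question's sample schema; `str.removeprefix` needs Python 3.9+):

```python
import json

# Hypothetical cleanup step: pull out each transaction's node list and weight,
# and turn the "nodeN" strings into the integer N to match the dicts above.
raw = json.loads("""
{"transactions": [
  {"transaction 1": ["node1", "node2", "node7"], "weight": 0.41},
  {"transaction 2": ["node4", "node2", "node1", "node3", "node10", "node7", "node9"], "weight": 0.67},
  {"transaction 3": ["node3", "node10", "node11", "node2", "node1"], "weight": 0.33}
]}
""")

data = []
for entry in raw["transactions"]:
    weight = entry.pop("weight")
    (nodes,) = entry.values()  # the one remaining value is the node list
    data.append({"transaction": [int(n.removeprefix("node")) for n in nodes],
                 "weight": weight})

print(data[0])  # {'transaction': [1, 2, 7], 'weight': 0.41}
```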
Not sure whether this is pythonic enough, but it should be self-explanatory
def weight(i, j, data_item):
    # weight of this transaction if it contains both i and j, else 0
    return data_item["weight"] if i in data_item["transaction"] and j in data_item["transaction"] else 0

def affinity(i, j):
    if j < i:  # matrix is symmetric
        return affinity(j, i)
    # average the weights of all transactions containing both nodes
    weights = [weight(i, j, data_item) for data_item in data
               if weight(i, j, data_item) != 0]
    if len(weights) == 0:
        return 0
    return sum(weights) / float(len(weights))
ln = 10  # number of node ids covered by the matrix (node 11 from transaction 3 is left out)
A = [[affinity(i, j) for j in range(1, ln + 1)] for i in range(1, ln + 1)]
To view the affinity matrix
import numpy as np
print(np.array(A))
[[ 0.47  0.47  0.5   0.67  0.    0.    0.54  0.    0.67  0.5 ]
 [ 0.47  0.47  0.5   0.67  0.    0.    0.54  0.    0.67  0.5 ]
 [ 0.5   0.5   0.5   0.67  0.    0.    0.67  0.    0.67  0.5 ]
 [ 0.67  0.67  0.67  0.67  0.    0.    0.67  0.    0.67  0.67]
 [ 0.    0.    0.    0.    0.    0.    0.    0.    0.    0.  ]
 [ 0.    0.    0.    0.    0.    0.    0.    0.    0.    0.  ]
 [ 0.54  0.54  0.67  0.67  0.    0.    0.54  0.    0.67  0.67]
 [ 0.    0.    0.    0.    0.    0.    0.    0.    0.    0.  ]
 [ 0.67  0.67  0.67  0.67  0.    0.    0.67  0.    0.67  0.67]
 [ 0.5   0.5   0.5   0.67  0.    0.    0.67  0.    0.67  0.5 ]]
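At the stated scale (~10⁶ nodes, ~10⁷ transactions) the nested Python loops above are prohibitively slow, since every matrix cell rescans every transaction. A sketch of a vectorized alternative: build a node-by-transaction incidence matrix `B`, then both the per-pair weight sums and the shared-transaction counts fall out of matrix products. It is shown dense with NumPy for the toy data; at real scale the same algebra would be done with `scipy.sparse` matrices.

```python
import numpy as np

data = [{'transaction': [1, 2, 7], 'weight': 0.41},
        {'transaction': [4, 2, 1, 3, 10, 7, 9], 'weight': 0.67},
        {'transaction': [3, 10, 11, 2, 1], 'weight': 0.33}]

n_nodes = 11  # node ids in the sample are 1-based, up to node11

# B[i, t] = 1 iff node i+1 takes part in transaction t
B = np.zeros((n_nodes, len(data)))
for t, item in enumerate(data):
    B[[n - 1 for n in item['transaction']], t] = 1
w = np.array([item['weight'] for item in data])

weight_sum = (B * w) @ B.T  # sum of weights of transactions shared by i and j
count = B @ B.T             # number of transactions shared by i and j
affinity = np.divide(weight_sum, count,
                     out=np.zeros_like(weight_sum), where=count > 0)

print(np.round(affinity[:10, :10], 2))  # upper-left 10x10 block, cf. the matrix above
```

Restricting to the first 10 rows and columns reproduces the matrix above, since node11 never changes the average for a pair of other nodes.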