I have a very large network (roughly 500 million lines) that I want to read and analyze in NetworkX, stored as a gzipped weighted edge list (Node1 Node2 Weight). So far I have tried reading it with:
import gzip
import networkx as nx

# Open and read the gzipped file
with gzip.open(network, 'rb') as fh:
    # Read the weighted edge list into a directed graph
    G = nx.read_weighted_edgelist(fh, create_using=nx.DiGraph())
But since it is so large, I am running into memory problems. I would like to know whether there is a way to read the file pandas-style, i.e. in fixed-size chunks. Thanks for your help.
Edit:
Here is a small extract of my edge list file (Node1 Node2 Weight):
30879005 5242 11
44608582 2295986 4
24935102 737450 1
42230925 1801294 1
20926179 2332390 1
40959246 1100438 1
3291058 3226104 1
23192021 5818064 1
16328715 7695005 1
11561383 2102983 1
1886716 1378893 2
23192021 5818065 1
2060097 2060091 1
7176482 3222203 2
46586813 1599030 1
35151866 35151866 1
12420680 1364416 5
612044 92878 1
16260783 3373725 1
26475759 85310 1
21149725 17011789 1
1312990 105320 1
23898296 1633222 3
3635610 2103011 1
12737940 4114680 1
18210502 10816500 1
45999903 45999903 1
8689446 1977413 1
5998987 3453478 3
Answer 0 (score: 3):
Read the data into a pandas df as a CSV:
import pandas as pd
df = pd.read_csv(path_to_edge_list, sep='\s+', header=None, names=['Node1', 'Node2', 'Weight'])
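For a file of this size it can also help to pin the column dtypes rather than let pandas infer them; a minimal variant of the same call, where int64 is only an assumption about the range of your IDs and weights:

# Hypothetical variant: explicit dtypes avoid type inference on ~500 million rows
df = pd.read_csv(path_to_edge_list,
                 sep='\s+',
                 header=None,
                 names=['Node1', 'Node2', 'Weight'],
                 dtype={'Node1': 'int64', 'Node2': 'int64', 'Weight': 'int64'})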
Now create an nx DiGraph and use a list comprehension to build a list of (node1, node2, weight) tuples as the data:
In [150]:
import networkx as nx
G = nx.DiGraph()
G.add_weighted_edges_from([tuple(x) for x in df.values])
G.edges()
Out[150]:
[(16328715, 7695005),
(42230925, 1801294),
(40959246, 1100438),
(12737940, 4114680),
(3635610, 2103011),
(16260783, 3373725),
(45999903, 45999903),
(7176482, 3222203),
(8689446, 1977413),
(11561383, 2102983),
(21149725, 17011789),
(18210502, 10816500),
(3291058, 3226104),
(23898296, 1633222),
(46586813, 1599030),
(2060097, 2060091),
(5998987, 3453478),
(44608582, 2295986),
(12420680, 1364416),
(612044, 92878),
(30879005, 5242),
(23192021, 5818064),
(23192021, 5818065),
(1312990, 105320),
(20926179, 2332390),
(26475759, 85310),
(24935102, 737450),
(35151866, 35151866),
(1886716, 1378893)]
Proof that we have the weight attribute:
In [153]:
G.get_edge_data(30879005,5242)
Out[153]:
{'weight': 11}
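As a quick usage note (standard NetworkX subscript access, not part of the original answer), the same weight can also be read directly:

# Subscript access returns the edge's attribute dict
G[30879005][5242]['weight']   # 11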
To read the edge list in chunks, set the chunksize parameter in read_csv and use the code above to add the edges and weights for each chunk.
Edit:
So to read in chunks, you can do this:
import pandas as pd
import networkx as nx

G = nx.DiGraph()
# Read the edge list in chunks of 10,000 rows and add each chunk's weighted edges
for d in pd.read_csv(path_to_edge_list, sep='\s+', header=None, names=['Node1', 'Node2', 'Weight'], chunksize=10000):
    G.add_weighted_edges_from([tuple(x) for x in d.values])
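Since the file in the question is gzip-compressed, pandas can also read it directly: read_csv infers the compression from a .gz extension (or you can pass compression='gzip' explicitly), so there is no need to decompress it first. A minimal sketch along those lines, assuming network is the path to the gzipped edge list, and using itertuples so each chunk streams into the graph without building an intermediate list:

import pandas as pd
import networkx as nx

G = nx.DiGraph()

# read_csv reads the gzipped edge list directly; compression is inferred
# from the .gz extension (or pass compression='gzip' explicitly).
reader = pd.read_csv(network,               # path to the gzipped edge list (from the question)
                     sep='\s+',
                     header=None,
                     names=['Node1', 'Node2', 'Weight'],
                     chunksize=1000000)     # tune the chunk size to the available memory

for chunk in reader:
    # itertuples(index=False, name=None) yields plain (node1, node2, weight) tuples
    # without materialising a list or a NumPy array for the whole chunk first.
    G.add_weighted_edges_from(chunk.itertuples(index=False, name=None))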