For testing purposes, I am using a Windows 7 laptop with the following specifications:
I am also using neo4j-enterprise-3.0.2 and connect my PyCharm Python program to the database over a Bolt connection. I create many nodes and relationships. I noticed that the creation of nodes and relationships slows down considerably after a certain time, and that beyond a certain point there is almost no progress at all.
I checked the following.
This is what the health monitor shows me:
Question: why does the creation of nodes and relationships slow down so much? Is it due to the size of the graph (the dataset seems rather small)? Is it related to the Bolt connection and the transactions with the database? Does it have to do with the growing RAM usage? And how can I prevent this?
I created this simple example to illustrate the problem:
from neo4j.v1 import GraphDatabase

# Bolt driver
driver = GraphDatabase.driver("bolt://localhost")
session = driver.session()

# Start with an empty database
stmtDel = "MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n,r"
session.run(stmtDel)

# Add a uniqueness constraint
stmt = 'CREATE CONSTRAINT ON (d:Day) ASSERT d.name IS UNIQUE'
session.run(stmt)

# Create many nodes - run either option 1 or option 2

# Option 1: create the nodes one by one. This is slow in execution
# but keeps the RAM flat (no increase).
# for i in range(1, 80001):
#     stmt1 = 'CREATE (d:Day) SET d.name = {name}'
#     params = {"name": str(i)}
#     print(i)
#     session.run(stmt1, params)

# Option 2: speed things up by batching many statements per transaction - e.g. 1000.
# This is very fast but blows up the RAM in no time, even with the
# page cache limited to 2 GB.
tx = session.begin_transaction()
for i in range(1, 80001):
    stmt1 = 'CREATE (d:Day) SET d.name = {name}'
    params = {"name": str(i)}
    tx.run(stmt1, params)  # add a statement to the current transaction
    print(i)
    if divmod(i, 1000)[1] == 0:  # every thousand statements, commit the block and start a new one
        tx.commit()
        tx.close()  # fixed: the original had `tx.close` without parentheses, which does nothing
        tx = session.begin_transaction()  # it seems that reusing the session keeps growing the RAM
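As an aside, a common way to keep client memory bounded for a bulk load like this is to send the names in fixed-size parameter batches, each consumed by a single `UNWIND ... CREATE` statement, instead of one `CREATE` call per node. The following is only a sketch of that idea; the `chunked` helper and the `UNWIND` statement are my own illustration and are not part of the original question (the driver calls are shown commented out because they need a running server):

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical usage with the session from the example above (not run here):
# stmt_batch = 'UNWIND {names} AS name CREATE (d:Day) SET d.name = name'
# for batch in chunked((str(i) for i in range(1, 80001)), 1000):
#     session.run(stmt_batch, {"names": batch})
```

With one round-trip per thousand names, the driver only ever holds one batch of parameters in memory at a time, instead of accumulating 80,000 buffered statements in a transaction object.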
Answer 0 (score: 0)
What I did at first: I actually switched back to the py2neo package for batched transaction processing, and that worked fine without overloading my memory. I believe there must be an issue with memory management in the neo4j.v1 package.
Later: I used a newer version of the neo4j package and the problem no longer occurs. It must have been a bug in an earlier version.