I'm new to Neo4j / graph databases and am trying to reproduce the tutorial from the Cypher cookbook: http://docs.neo4j.org/chunked/stable/cypher-cookbook-similarity-calc.html
The random data set contains 100 foods and 1500 people; every person is related to foods through ATE relationships that carry an integer "times" property. Food and Person nodes are labeled and have a "name" property, which is auto-indexed:
neo4j-sh (?)$ dbinfo -g "Primitive count"
{
"NumberOfNodeIdsInUse": 1600,
"NumberOfPropertyIdsInUse": 151600,
"NumberOfRelationshipIdsInUse": 150000,
"NumberOfRelationshipTypeIdsInUse": 1
}
neo4j-sh (?)$ index --indexes
Node indexes:
node_auto_index
Relationship indexes:
relationship_auto_index
Running the modified query from the cookbook in neo4j-shell never completes (perhaps because there are too many nodes/relationships?):
EXPORT name="Florida Goyette"
MATCH (me:Person { name: {name}})-[r1:ATE]->(food)<-[r2:ATE]-(you:Person)
WITH me,count(DISTINCT r1) AS H1,count(DISTINCT r2) AS H2,you
MATCH (me)-[r1:ATE]->(food)<-[r2:ATE]-(you)
RETURN SUM((1-ABS(r1.times/H1-r2.times/H2))*(r1.times+r2.times)/(H1+H2)) AS similarity
LIMIT 100;
So I started looking at how I could restrict the query to the "first" 100 people earlier on, and came up with:
EXPORT name="Florida Goyette"
MATCH (me:Person { name: {name} })-[r1:ATE]->(food)
WITH me, food
MATCH (food)<-[r2:ATE]-(you)
WHERE me <> you
WITH me, you
LIMIT 100
MATCH (me)-[r1:ATE]->(food)<-[r2:ATE]-(you)
WITH me, count(DISTINCT r1) AS H1, count(DISTINCT r2) AS H2, you
MATCH (me)-[r1:ATE]->(food)<-[r2:ATE]-(you)
WITH me, you, SUM((1-ABS(r1.times/H1-r2.times/H2))*(r1.times+r2.times)/(H1+H2)) AS similarity
RETURN me.name, you.name, similarity
ORDER BY similarity DESC;
But this query performs very poorly even on a warmed-up cache:
100 rows
16038 ms
Is there any chance of making a query like this fast enough for "real-time" use?
System and Neo4j
Windows 7 (64-bit), Intel Core i7-2600K, 8 GB RAM, Neo4j database on an SSD drive.
Neo4j Community Edition: 2.1.0-M01 (also tested on 2.0.1 stable)
neo4j-community.options
-Xmx2048m
-Xms2048m
neo4j.properties
neostore.nodestore.db.mapped_memory=200M
neostore.relationshipstore.db.mapped_memory=200M
neostore.propertystore.db.mapped_memory=200M
neostore.propertystore.db.strings.mapped_memory=330M
neostore.propertystore.db.arrays.mapped_memory=330M
node_auto_indexing=true
node_keys_indexable=name
relationship_auto_indexing=true
relationship_keys_indexable=times
Cypher dump of my data (503 kB compressed)
Profile output
ColumnFilter(symKeys=["similarity", "you", "you.name", "me", "me.name"], returnItemNames=["me.name", "you.name", "similarity"], _rows=100, _db_hits=0)
Sort(descr=["SortItem(similarity,false)"], _rows=100, _db_hits=0)
Extract(symKeys=["me", "you", "similarity"], exprKeys=["me.name", "you.name"], _rows=100, _db_hits=200)
ColumnFilter(symKeys=["me", "you", " INTERNAL_AGGREGATEcb085cf5-8982-4a83-ba3d-9642de570c59"], returnItemNames=["me", "you", "similarity"], _rows=100, _db_hits=0)
EagerAggregation(keys=["me", "you"], aggregates=["(INTERNAL_AGGREGATEcb085cf5-8982-4a83-ba3d-9642de570c59,Sum(Divide(Multiply(Subtract(Literal(1),AbsFunction(Subtract(Divide(Property(r1,times(1)),H1),Divide(Property(r2,times(1)),H2)))),Add(Property(r1,times(1)),Property(r2,times(1)))),Add(H1,H2))))"], _rows=100, _db_hits=40000)
SimplePatternMatcher(g="(you)-['r2']-(food),(me)-['r1']-(food)", _rows=10000, _db_hits=0)
ColumnFilter(symKeys=["me", "you", " INTERNAL_AGGREGATE677cd11c-ae53-4d7b-8df6-732ffed28bbf", " INTERNAL_AGGREGATEb5eb877c-de01-4e7a-9596-03cd94cfa47a"], returnItemNames=["me", "H1", "H2", "you"], _rows=100, _db_hits=0)
EagerAggregation(keys=["me", "you"], aggregates=["( INTERNAL_AGGREGATE677cd11c-ae53-4d7b-8df6-732ffed28bbf,Distinct(Count(r1),r1))", "( INTERNAL_AGGREGATEb5eb877c-de01-4e7a-9596-03cd94cfa47a,Distinct(Count(r2),r2))"], _rows=100, _db_hits=0)
SimplePatternMatcher(g="(you)-['r2']-(food),(me)-['r1']-(food)", _rows=10000, _db_hits=0)
ColumnFilter(symKeys=["me", "food", "you", "r2"], returnItemNames=["me", "you"], _rows=100, _db_hits=0)
Slice(limit="Literal(100)", _rows=100, _db_hits=0)
Filter(pred="NOT(me == you)", _rows=100, _db_hits=0)
SimplePatternMatcher(g="(you)-['r2']-(food)", _rows=100, _db_hits=0)
ColumnFilter(symKeys=["food", "me", "r1"], returnItemNames=["me", "food"], _rows=1, _db_hits=0)
Filter(pred="Property(me,name(0)) == {name}", _rows=1,_db_hits=148901)
TraversalMatcher(start={"label": "Person", "producer": "NodeByLabel", "identifiers": ["me"]}, trail="(me)-[r1:ATE WHERE true AND true]->(food)", _rows=148901, _db_hits=148901)
Answer 0 (score: 1)
You are performing the same MATCH several times. Would this be better?
EXPORT name="Florida Goyette"
MATCH (me:Person { name: {name}})-[r1:ATE]->(food)<-[r2:ATE]-(you:Person)
WITH me,r1,r2,count(DISTINCT r1) AS H1,count(DISTINCT r2) AS H2,you
LIMIT 100
RETURN SUM((1-ABS(r1.times/H1-r2.times/H2))*(r1.times+r2.times)/(H1+H2)) AS similarity;
Answer 1 (score: 1)
You are using the wrong index type. Create a label (schema) index with:
CREATE INDEX ON :Person(name)
Inspect schema indexes and constraints with:
Neo4j-shell:
schema
schema ls -l :User
or the Neo4j browser:
:schema
:schema ls -l :User
The query itself may still need optimization, but start here.
Answer 2 (score: 0)
On Windows, the memory mapping lives inside the heap, so increase the heap size to 4G.
You don't need the legacy auto-indexes, just the new schema indexes as jjaderberg described.
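As a sketch, assuming the 8 GB machine described above, the heap settings in neo4j-community.options would become:
-Xmx4096m
-Xms4096m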
How many rows does this return?
MATCH (me:Person { name: {name}})-[r1:ATE]->(food)<-[r2:ATE]-(you:Person) RETURN count(*)
And how many does this one?
MATCH (me:Person { name: {name}})-[r1:ATE]->(food)<-[r2:ATE]-(you:Person)
WITH me,count(DISTINCT r1) AS H1,count(DISTINCT r2) AS H2,you
MATCH (me)-[r1:ATE]->(food)<-[r2:ATE]-(you)
RETURN COUNT(*)
You can also avoid matching twice:
MATCH (me:Person { name: {name}})-[r1:ATE]->(food)<-[r2:ATE]-(you:Person)
WITH me,
collect([r1,r2]) as rels,
count(DISTINCT r1) AS H1,
count(DISTINCT r2) AS H2,
you
RETURN me, you,
       reduce(a=0, r IN rels |
         a + (1-ABS(r[0].times/H1 - r[1].times/H2)) *
             (r[0].times + r[1].times) / (H1+H2)
       ) AS similarity
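The reduce() here is an ordinary fold over the collected pairs. A minimal Python sketch, assuming hypothetical rels as (r1.times, r2.times) tuples and H1/H2 passed in as plain integers:

```python
from functools import reduce

# Fold the collected (r1.times, r2.times) pairs into one similarity score,
# matching the accumulator expression in the Cypher reduce().
def similarity(rels, h1, h2):
    return reduce(
        lambda a, r: a + (1 - abs(r[0] / h1 - r[1] / h2))
                       * (r[0] + r[1]) / (h1 + h2),
        rels,
        0,
    )

print(similarity([(3, 3), (1, 2)], 2, 2))
```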
By the way, it would be great if you created a GraphGist with your domain, use case, and some sample data!