neo4j - Batch insert using neo4j rest graph db

Date: 2014-03-20 07:38:35

Tags: java neo4j

I am using version 2.0.1.

I need to insert several hundred thousand nodes. My Neo4j graph database is on a standalone server, and I am using the RestAPI from the neo4j-rest-graphdb library to do this.

However, I am seeing slow performance. I split my queries into batches, sending 500 Cypher statements per HTTP call. These are the results I get:

10:38:10.984 INFO commit
10:38:13.161 INFO commit
10:38:13.277 INFO commit
10:38:15.132 INFO commit
10:38:15.218 INFO commit
10:38:17.288 INFO commit
10:38:19.488 INFO commit
10:38:22.020 INFO commit
10:38:24.806 INFO commit
10:38:27.848 INFO commit
10:38:31.172 INFO commit
10:38:34.767 INFO commit
10:38:38.661 INFO commit

And so on. The query I am using is:

MERGE (a{main:{val1},prop2:{val2}}) MERGE (b{main:{val3}}) CREATE UNIQUE (a)-[r:relationshipname]-(b);

My code is:

private RestAPI restAPI;
private RestCypherQueryEngine engine;
private GraphDatabaseService graphDB = new RestGraphDatabase("http://localdomain.com:7474/db/data/");

...

restAPI = ((RestGraphDatabase) graphDB).getRestAPI();
engine = new RestCypherQueryEngine(restAPI);

...

    Transaction tx = graphDB.getRestAPI().beginTx();

    try {
        int ctr = 0;
        while (isExists) {
            ctr++;
            // execute query here through engine.query()
            if (ctr % 500 == 0) {
                tx.success();
                tx.close();
                tx = graphDB.getRestAPI().beginTx();
                LOGGER.info("commit");
            }
        }
        tx.success();
    } catch (FileNotFoundException | NumberFormatException | ArrayIndexOutOfBoundsException e) {
        tx.failure();
    } finally {
        tx.close();
    }
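For reference, the commented-out step would pass a parameter map whose keys match the `{val1}`/`{val2}`/`{val3}` placeholders in the Cypher statement above. A minimal sketch of building such a map (the values here are made up, and the `engine.query()` call is shown only as a comment since it needs a running server):

```java
import java.util.HashMap;
import java.util.Map;

public class ParamsSketch {
    public static void main(String[] args) {
        // Keys must match the {val1}/{val2}/{val3} placeholders in the
        // MERGE ... CREATE UNIQUE statement.
        Map<String, Object> params = new HashMap<>();
        params.put("val1", "a_value");
        params.put("val2", "another_value");
        params.put("val3", "yet_another_value");

        // With the library, each statement in the loop would then be:
        //   engine.query("MERGE (a{main:{val1},prop2:{val2}}) ...", params);
        System.out.println(params.size());
        System.out.println(params.get("val1"));
    }
}
```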

Thanks!

Updated benchmark. I am sorry, the benchmark I posted was inaccurate and did not correspond to 500 queries. My ctr variable did not actually count the number of Cypher queries.

So now I get 500 queries every 3 seconds, and that 3 seconds keeps increasing as well. It is still very slow compared to embedded Neo4j.

1 answer:

Answer 0 (score: 4)

If you are able to use Neo4j 2.1.0-M01 (don't use it in production!!), you can benefit from a new feature. If you create/generate a CSV file like this:

val1,val2,val3
a_value,another_value,yet_another_value
a,b,c
....

then you just need to run the following code:

final GraphDatabaseService graphDB = new RestGraphDatabase("http://server:7474/db/data/");
final RestAPI restAPI = ((RestGraphDatabase) graphDB).getRestAPI();
final RestCypherQueryEngine engine = new RestCypherQueryEngine(restAPI);
final String filePath = "file://C:/your_file_path.csv";
engine.query("USING PERIODIC COMMIT 500 LOAD CSV WITH HEADERS FROM '" + filePath
    + "' AS csv MERGE (a{main:csv.val1,prop2:csv.val2}) MERGE (b{main:csv.val3})"
    + " CREATE UNIQUE (a)-[r:relationshipname]->(b);", null);

You must make sure that the file can be accessed from the machine where the server is installed.
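If you are generating the CSV yourself, plain `java.io`/`java.nio` is enough. A small sketch (the file name and row values are just examples) that writes a header row matching the one expected by `LOAD CSV WITH HEADERS` above:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CsvSketch {
    public static void main(String[] args) throws IOException {
        Path csv = Paths.get("neo4j_import.csv"); // example path
        try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(csv))) {
            out.println("val1,val2,val3"); // header row used by LOAD CSV WITH HEADERS
            out.println("a_value,another_value,yet_another_value");
            out.println("a,b,c");
        }
        // sanity check: first line is the header
        System.out.println(Files.readAllLines(csv).get(0));
    }
}
```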

Have a look at my server plugin, which does this for you on the server. If you build it and drop it into the plugins folder, you can use the plugin from Java like this:

final RestAPI restAPI = new RestAPIFacade("http://server:7474/db/data");
final RequestResult result = restAPI.execute(RequestType.POST, "ext/CSVBatchImport/graphdb/csv_batch_import",
    new HashMap<String, Object>() {
        {
            put("path", "file://C:/.../neo4j.csv");
        }
    });

Edit

You can also use a BatchCallback in the Java REST wrapper to boost performance; it also removes the transactional boilerplate code. You could write a script like this:

final RestAPI restAPI = new RestAPIFacade("http://server:7474/db/data");
int counter = 0;
List<Map<String, Object>> statements = new ArrayList<>();
while (isExists) {
    statements.add(new HashMap<String, Object>() {
        {
            put("val1", "abc");
            put("val2", "abc");
            put("val3", "abc");
        }
    });
    if (++counter % 500 == 0) {
        restAPI.executeBatch(new Process(statements));
        statements = new ArrayList<>();
    }
}
// flush any remaining statements that did not fill a full batch of 500
if (!statements.isEmpty()) {
    restAPI.executeBatch(new Process(statements));
}

static class Process implements BatchCallback<Object> {

    private static final String QUERY = "MERGE (a{main:{val1},prop2:{val2}}) MERGE (b{main:{val3}}) CREATE UNIQUE (a)-[r:relationshipname]-(b);";

    private List<Map<String, Object>> params;

    Process(final List<Map<String, Object>> params) {
        this.params = params;
    }

    @Override
    public Object recordBatch(final RestAPI restApi) {
        for (final Map<String, Object> param : params) {
            restApi.query(QUERY, param);
        }
        return null;
    }    
}
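The batching logic in the loop above is independent of Neo4j, so it can be sketched standalone. This example (the chunk size of 500 matches the answer; the 1234 work items are illustrative) splits a list into consecutive batches, mirroring the `++counter % 500 == 0` flushing:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkSketch {
    // Split a list into consecutive chunks of at most `size` elements,
    // mirroring the flush-every-500 pattern in the loop above.
    static <T> List<List<T>> chunk(List<T> items, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            chunks.add(new ArrayList<>(items.subList(i, Math.min(i + size, items.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> work = new ArrayList<>();
        for (int i = 0; i < 1234; i++) work.add(i);
        List<List<Integer>> batches = chunk(work, 500);
        System.out.println(batches.size());        // two full batches plus a remainder
        System.out.println(batches.get(2).size()); // size of the last, partial batch
    }
}
```

Each batch would then be handed to `restAPI.executeBatch(new Process(batch))`, so the last partial batch is not silently dropped.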