Titan graph database is too slow with 100,000 vertices; how do I optimize the indexing?

Date: 2014-09-16 03:58:15

Tags: database optimization graph titan

Here is the indexing code:

```java
g = TitanFactory.build().set("storage.backend", "cassandra")
        .set("storage.hostname", "127.0.0.1").open();

TitanManagement mgmt = g.getManagementSystem();

// Each property key gets its own single-key composite index.
PropertyKey db_local_name = mgmt.makePropertyKey("db_local_name")
        .dataType(String.class).make();
mgmt.buildIndex("byDb_local_name", Vertex.class).addKey(db_local_name)
        .buildCompositeIndex();

PropertyKey db_schema = mgmt.makePropertyKey("db_schema")
        .dataType(String.class).make();
mgmt.buildIndex("byDb_schema", Vertex.class).addKey(db_schema)
        .buildCompositeIndex();

PropertyKey db_column = mgmt.makePropertyKey("db_column")
        .dataType(String.class).make();
mgmt.buildIndex("byDb_column", Vertex.class).addKey(db_column)
        .buildCompositeIndex();

PropertyKey type = mgmt.makePropertyKey("type").dataType(String.class)
        .make();
mgmt.buildIndex("byType", Vertex.class).addKey(type)
        .buildCompositeIndex();

PropertyKey value = mgmt.makePropertyKey("value")
        .dataType(Object.class).make();
mgmt.buildIndex("byValue", Vertex.class).addKey(value)
        .buildCompositeIndex();

PropertyKey index = mgmt.makePropertyKey("index")
        .dataType(Integer.class).make();
mgmt.buildIndex("byIndex", Vertex.class).addKey(index)
        .buildCompositeIndex();

mgmt.commit();
```

This code looks up three existing vertices, then adds a new vertex with three edges connecting it to them, running on a 3 GHz PC with 2 GB of RAM. It gets through 830 vertices in 3 hours, and I have 100,000 rows of data, so it is far too slow. Here is the code:

```java
for (Object[] rowObj : list) {
    // TXN_ID
    Iterator<Vertex> iter = g.query()
            .has("db_local_name", "Report Name 1")
            .has("db_schema", "MPS").has("db_column", "txn_id")
            .has("value", rowObj[0]).vertices().iterator();
    if (iter.hasNext()) {
        vertex1 = iter.next();
        logger.debug("vertex1=" + vertex1.getId() + ","
                + vertex1.getProperty("db_local_name") + ","
                + vertex1.getProperty("db_schema") + ","
                + vertex1.getProperty("db_column") + ","
                + vertex1.getProperty("type") + ","
                + vertex1.getProperty("index") + ","
                + vertex1.getProperty("value"));
    }
    // TXN_TYPE
    iter = g.query().has("db_local_name", "Report Name 1")
            .has("db_schema", "MPS").has("db_column", "txn_type")
            .has("value", rowObj[1]).vertices().iterator();
    if (iter.hasNext()) {
        vertex2 = iter.next();
        logger.debug("vertex2=" + vertex2.getId() + ","
                + vertex2.getProperty("db_local_name") + ","
                + vertex2.getProperty("db_schema") + ","
                + vertex2.getProperty("db_column") + ","
                + vertex2.getProperty("type") + ","
                + vertex2.getProperty("index") + ","
                + vertex2.getProperty("value"));
    }
    // WALLET_ID
    iter = g.query().has("db_local_name", "Report Name 1")
            .has("db_schema", "MPS").has("db_column", "wallet_id")
            .has("value", rowObj[2]).vertices().iterator();
    if (iter.hasNext()) {
        vertex3 = iter.next();
        logger.debug("vertex3=" + vertex3.getId() + ","
                + vertex3.getProperty("db_local_name") + ","
                + vertex3.getProperty("db_schema") + ","
                + vertex3.getProperty("db_column") + ","
                + vertex3.getProperty("type") + ","
                + vertex3.getProperty("index") + ","
                + vertex3.getProperty("value"));
    }

    vertex4 = g.addVertex(null);
    vertex4.setProperty("db_local_name", "Report Name 1");
    vertex4.setProperty("db_schema", "MPS");
    vertex4.setProperty("db_column", "amount");
    vertex4.setProperty("type", "indivisual_0");
    vertex4.setProperty("value", rowObj[3].toString());
    vertex4.setProperty("index", i);

    vertex1.addEdge("data", vertex4);
    logger.debug("vertex1 added");
    vertex2.addEdge("data", vertex4);
    logger.debug("vertex2 added");
    vertex3.addEdge("data", vertex4);
    logger.debug("vertex3 added");
    i++;
    g.commit();
}
```

Is there any way to optimize this code?

1 Answer:

Answer 0 (score: 1):

For completeness, this question was answered on the Aurelius Graphs mailing list:

https://groups.google.com/forum/#!topic/aureliusgraphs/XKT6aokRfFI

Basically:

  1. Build and use a real composite index over the keys that are queried together: `mgmt.buildIndex("by_local_name_schema_value", Vertex.class).addKey(db_local_name).addKey(db_schema).addKey(value).buildCompositeIndex();` (see the first sketch after this list).
  2. Don't call `g.commit()` after every loop iteration; commit in batches instead, something like: `if (++counter % 10000 == 0) g.commit()` (second sketch below).
  3. Enable `storage.batch-loading` if it isn't enabled already (third sketch below).
  4. If 2 GB of RAM is all you can throw at Cassandra, consider using BerkeleyDB instead. Cassandra prefers 4 GB minimum and would probably like "more".
  5. I don't know the nature of your data, but perhaps you can pre-sort it and load it with `BatchGraph`, as described in the "Powers of Ten - Part I" blog post and on the wiki; using `BatchGraph` would relieve you of maintaining the transactions described in point 2 above (fourth sketch below).
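
A minimal sketch of point 1, using the same management API as the question (the keys are assumed to already exist; `db_column`, which the query also filters on, could be added the same way):

```java
TitanManagement mgmt = g.getManagementSystem();

// One composite index covering the keys that are always queried together,
// so each lookup is a single index hit instead of an intersection of
// several single-key indexes.
mgmt.buildIndex("by_local_name_schema_value", Vertex.class)
        .addKey(mgmt.getPropertyKey("db_local_name"))
        .addKey(mgmt.getPropertyKey("db_schema"))
        .addKey(mgmt.getPropertyKey("value"))
        .buildCompositeIndex();
mgmt.commit();
```

Note that an index added after the keys already hold data is not usable until it has been reindexed and enabled, so it is easiest to define it up front on a fresh graph.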
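
Point 2 sketched against the loop from the question; `counter` and the batch size of 10,000 are taken from the answer's suggestion:

```java
int counter = 0;
for (Object[] rowObj : list) {
    // ... the same lookups, vertex creation, and edge creation as above ...

    // One transaction per 10,000 rows instead of one per row.
    if (++counter % 10000 == 0) {
        g.commit();
    }
}
g.commit(); // flush the final partial batch
```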
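
Points 3 and 4 are configuration changes made when opening the graph. A sketch, assuming the embedded BerkeleyDB JE backend and an illustrative storage directory:

```java
g = TitanFactory.build()
        .set("storage.backend", "berkeleyje")      // point 4: embedded BerkeleyDB, no separate server competing for the 2 GB of RAM
        .set("storage.directory", "/data/titandb") // illustrative path for the BerkeleyDB files
        .set("storage.batch-loading", true)        // point 3: relaxes consistency checks to speed up bulk loading
        .open();
```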
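
For point 5, `BatchGraph` is a Blueprints wrapper that buffers writes and commits automatically at a fixed interval, which removes the manual commit handling from point 2. A minimal sketch; the ID type and buffer size here are assumptions:

```java
import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.util.wrappers.batch.BatchGraph;
import com.tinkerpop.blueprints.util.wrappers.batch.VertexIDType;

// Wrap the Titan graph; BatchGraph commits on its own every 10,000 mutations.
BatchGraph<TitanGraph> batch = new BatchGraph<TitanGraph>(g, VertexIDType.NUMBER, 10000);

Vertex v = batch.addVertex(1L); // the ID keys BatchGraph's vertex cache; it is not a Titan ID
v.setProperty("db_column", "amount");

batch.commit(); // flush whatever is left in the buffer
```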