How to insert data into Druid via Tranquility

Time: 2016-03-14 05:51:10

Tags: druid

By following the tutorial at http://druid.io/docs/latest/tutorials/tutorial-loading-streaming-data.html, I am able to insert data into Druid through the Kafka console.

Kafka console

The spec file looks like this:

examples/indexing/wikipedia.spec

[
  {
    "dataSchema" : {
      "dataSource" : "wikipedia",
      "parser" : {
        "type" : "string",
        "parseSpec" : {
          "format" : "json",
          "timestampSpec" : {
            "column" : "timestamp",
            "format" : "auto"
          },
          "dimensionsSpec" : {
            "dimensions": ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"],
            "dimensionExclusions" : [],
            "spatialDimensions" : []
          }
        }
      },
      "metricsSpec" : [{
        "type" : "count",
        "name" : "count"
      }, {
        "type" : "doubleSum",
        "name" : "added",
        "fieldName" : "added"
      }, {
        "type" : "doubleSum",
        "name" : "deleted",
        "fieldName" : "deleted"
      }, {
        "type" : "doubleSum",
        "name" : "delta",
        "fieldName" : "delta"
      }],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "DAY",
        "queryGranularity" : "NONE"
      }
    },
    "ioConfig" : {
      "type" : "realtime",
      "firehose": {
        "type": "kafka-0.8",
        "consumerProps": {
          "zookeeper.connect": "localhost:2181",
          "zookeeper.connection.timeout.ms" : "15000",
          "zookeeper.session.timeout.ms" : "15000",
          "zookeeper.sync.time.ms" : "5000",
          "group.id": "druid-example",
          "fetch.message.max.bytes" : "1048586",
          "auto.offset.reset": "largest",
          "auto.commit.enable": "false"
        },
        "feed": "wikipedia"
      },
      "plumber": {
        "type": "realtime"
      }
    },
    "tuningConfig": {
      "type" : "realtime",
      "maxRowsInMemory": 500000,
      "intermediatePersistPeriod": "PT10m",
      "windowPeriod": "PT10m",
      "basePersistDirectory": "\/tmp\/realtime\/basePersist",
      "rejectionPolicy": {
        "type": "messageTime"
      }
    }
  }
]

I start the realtime node via:

java -Xmx512m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Ddruid.realtime.specFile=examples/indexing/wikipedia.spec -classpath config/_common:config/realtime:lib/* io.druid.cli.Main server realtime
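
For reference, the Kafka console producer itself can be started with something like this (a sketch, assuming a local Kafka 0.8 broker on the default port 9092 and the wikipedia topic used in the spec above):

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic wikipedia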

In the Kafka console, I paste and enter the following:

{"timestamp": "2013-08-10T01:02:33Z", "page": "Good Bye", "language" : "en", "user" : "catty", "unpatrolled" : "true", "newPage" : "true", "robot": "false", "anonymous": "false", "namespace":"article", "continent":"North America", "country":"United States", "region":"Bay Area", "city":"San Francisco", "added": 57, "deleted": 200, "delta": -143}

Then I perform a query by creating select.json and running curl -X POST 'http://localhost:8084/druid/v2/?pretty' -H 'content-type: application/json' -d @select.json

select.json

 {
   "queryType": "select",
   "dataSource": "wikipedia",
   "dimensions":[],
   "metrics":[],
   "granularity": "all",
   "intervals": [
     "2000-01-01/2020-01-02"
   ],

   "filter" : {"type":"and",
        "fields" : [
                { "type": "selector", "dimension": "user", "value": "catty" }
        ]
   },

   "pagingSpec":{"pagingIdentifiers": {}, "threshold":500}
 }

I am able to get the following result:

[ {
  "timestamp" : "2013-08-10T01:02:33.000Z",
  "result" : {
    "pagingIdentifiers" : {
      "wikipedia_2013-08-10T00:00:00.000Z_2013-08-11T00:00:00.000Z_2013-08-10T00:00:00.000Z" : 0
    },
    "events" : [ {
      "segmentId" : "wikipedia_2013-08-10T00:00:00.000Z_2013-08-11T00:00:00.000Z_2013-08-10T00:00:00.000Z",
      "offset" : 0,
      "event" : {
        "timestamp" : "2013-08-10T01:02:33.000Z",
        "continent" : "North America",
        "robot" : "false",
        "country" : "United States",
        "city" : "San Francisco",
        "newPage" : "true",
        "unpatrolled" : "true",
        "namespace" : "article",
        "anonymous" : "false",
        "language" : "en",
        "page" : "Good Bye",
        "region" : "Bay Area",
        "user" : "catty",
        "deleted" : 200.0,
        "added" : 57.0,
        "count" : 1,
        "delta" : -143.0
      }
    } ]
  }
} ]

So it seems I have set up Druid correctly.

Now, I would like to insert data via an HTTP endpoint. According to How realtime data input to Druid?, it seems the recommended way is to use Tranquility.

Tranquility

I started the indexing service via:

java -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath config/_common:config/overlord:lib/*: io.druid.cli.Main server overlord

conf/server.json looks like:

{
   "dataSources" : [
      {
         "spec" : {
            "dataSchema" : {
                "dataSource" : "wikipedia",
                "parser" : {
                    "type" : "string",
                    "parseSpec" : {
                      "format" : "json",
                      "timestampSpec" : {
                        "column" : "timestamp",
                        "format" : "auto"
                      },
                      "dimensionsSpec" : {
                        "dimensions": ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"],
                        "dimensionExclusions" : [],
                        "spatialDimensions" : []
                      }
                    }
                },
                "metricsSpec" : [{
                    "type" : "count",
                    "name" : "count"
                }, {
                    "type" : "doubleSum",
                    "name" : "added",
                    "fieldName" : "added"
                }, {
                    "type" : "doubleSum",
                    "name" : "deleted",
                    "fieldName" : "deleted"
                }, {
                    "type" : "doubleSum",
                    "name" : "delta",
                    "fieldName" : "delta"
                }],
                "granularitySpec" : {
                    "type" : "uniform",
                    "segmentGranularity" : "DAY",
                    "queryGranularity" : "NONE"
                }
            },
            "tuningConfig" : {
               "windowPeriod" : "PT10M",
               "type" : "realtime",
               "intermediatePersistPeriod" : "PT10M",
               "maxRowsInMemory" : "100000"
            }
         },
         "properties" : {
            "task.partitions" : "1",
            "task.replicants" : "1"
         }
      }
   ],
   "properties" : {
      "zookeeper.connect" : "localhost",
      "http.port" : "8200",
      "http.threads" : "8"
   }
}

Then I start the Tranquility server using:

bin/tranquility server -configFile conf/server.json

I then perform a POST of the following, with content-type set to application/json:

{"timestamp": "2013-08-10T01:02:33Z", "page": "Selamat Pagi", "language" : "en", "user" : "catty", "unpatrolled" : "true", "newPage" : "true", "robot": "false", "anonymous": "false", "namespace":"article", "continent":"North America", "country":"United States", "region":"Bay Area", "city":"San Francisco", "added": 57, "deleted": 200, "delta": -143}

I get the following response:

{"result":{"received":1,"sent":0}}

It seems Tranquility received the data but failed to send it to Druid!
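
One way to check whether Tranquility actually created an indexing task is to ask the overlord for its running tasks (a sketch, assuming the overlord's default port 8090 and the /druid/indexer/v1/runningTasks endpoint of Druid's indexer API):

curl 'http://localhost:8090/druid/indexer/v1/runningTasks'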

I tried running curl -X POST 'http://localhost:8084/druid/v2/?pretty' -H 'content-type: application/json' -d @select.json again, but I did not get the row I inserted via Tranquility.

Any idea why? Thanks.

4 Answers:

Answer 0 (score: 3)

This usually happens when the data you send falls outside the window period. If you are inserting data manually, use the exact current UTC timestamp (in milliseconds). This is easy if you generate the data with a script; just make sure it uses the current UTC time.
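
A minimal sketch of that advice, reusing the assumed /v1/post/wikipedia endpoint from the question (note that the spec's "format": "auto" accepts ISO-8601 timestamps as well as milliseconds):

# Stamp the event with the current UTC time so it falls inside windowPeriod
TS=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
curl -X POST 'http://localhost:8200/v1/post/wikipedia' -H 'content-type: application/json' \
  -d "{\"timestamp\": \"$TS\", \"page\": \"Good Bye\", \"user\": \"catty\", \"added\": 57, \"deleted\": 200, \"delta\": -143}"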

Answer 1 (score: 2)

It is very difficult to set up Druid so that realtime data insertion works properly.

The best bet I found is to use https://github.com/implydata. Imply is a set of wrappers around Druid that makes it easier to use.

However, Imply's realtime insertion is not perfect either. In my experiment, after inserting 30 million items through realtime ingestion, I hit an OutOfMemoryException, which caused the previously inserted 30 million rows of data to be lost.

Details on the data loss are at: https://groups.google.com/forum/#!topic/imply-user-group/95xpYojxiOg

An issue has been filed: https://github.com/implydata/distribution/issues/8

Answer 2 (score: 0)

Druid's streaming window period is very short (10 minutes). Events outside that window are ignored.
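
If your events can legitimately be older than that, one option is to widen the window in the spec's tuningConfig (a sketch; a larger windowPeriod keeps realtime tasks open longer before handoff):

"tuningConfig" : {
   "windowPeriod" : "PT1H",
   "type" : "realtime",
   "intermediatePersistPeriod" : "PT10M",
   "maxRowsInMemory" : "100000"
}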

Answer 3 (score: 0)

Another reason data is not inserted is that the coordinator/overlord ran out of memory.
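
For example, the overlord from the question could be restarted with a larger heap (a sketch; the right size depends on your ingestion volume):

java -Xmx4g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath config/_common:config/overlord:lib/*: io.druid.cli.Main server overlord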