Why does Cygnus not connect to MongoDB on another virtual machine?

Asked: 2015-12-30 14:50:01

Tags: mongodb fiware-cygnus

Good morning,

I have the following set of virtual machines:

  • VM A
    • Generic Enablers Orion and Cygnus
    • IP: 10.10.0.10
  • VM B
    • MongoDB
    • IP: 10.10.0.17
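
To rule out a plain networking problem between the two machines, a quick reachability check can be run from VM A first (just a sketch; it assumes MongoDB on VM B listens on its default port 27017):

# from VM A: is VM B reachable, and does anything answer on the MongoDB port?
ping -c 3 10.10.0.17
telnet 10.10.0.17 27017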

The Cygnus configuration is:

/usr/cygnus/conf/cygnus_instance_mongodb.conf

#####
#
# Configuration file for apache-flume
#
#####
# Copyright 2014 Telefonica Investigación y Desarrollo, S.A.U
# 
# This file is part of fiware-connectors (FI-WARE project).
# 
# cosmos-injector is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# cosmos-injector is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
# 
# You should have received a copy of the GNU Affero General Public License along with fiware-connectors. If not, see
# http://www.gnu.org/licenses/.
# 
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es

# Who to run cygnus as. Note that you may need to use root if you want
# to run cygnus in a privileged port (<1024)
CYGNUS_USER=cygnus

# Where is the config folder
CONFIG_FOLDER=/usr/cygnus/conf

# Which is the config file
CONFIG_FILE=/usr/cygnus/conf/agent_mongodb.conf

# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters 
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME=cygnusagent

# Name of the logfile located at /var/log/cygnus. It is important to put the extension '.log' in order for the log rotation to work properly
LOGFILE_NAME=cygnus.log

# Administration port. Must be unique per instance
ADMIN_PORT=8081

# Polling interval (seconds) for the configuration reloading
POLLING_INTERVAL=30
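
After editing this file (and the agent file below), Cygnus has to be restarted so the changes take effect. A sketch, assuming the RPM installation whose init script is named cygnus:

# restart the Cygnus service and check it is listening on the expected ports
# (5050 is the notification port, 8081 the admin port configured above)
sudo service cygnus restart
netstat -ntlp | grep -E ':5050|:8081'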

/usr/cygnus/conf/agent_mongodb.conf

#####
#
# Copyright 2014 Telefónica Investigación y Desarrollo, S.A.U
# 
# This file is part of fiware-connectors (FI-WARE project).
# 
# fiware-connectors is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# fiware-connectors is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
# 
# You should have received a copy of the GNU Affero General Public License along with fiware-connectors. If not, see
# http://www.gnu.org/licenses/.
# 
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es

#=============================================
# To be put in APACHE_FLUME_HOME/conf/agent.conf
#
# General configuration template explaining how to set up a sink of each of the available types (HDFS, CKAN, MySQL).

#=============================================
# The next three fields set the sources, sinks and channels used by Cygnus. You could use different names than the
# ones suggested below, but in that case make sure you keep coherence in properties names along the configuration file.
# Regarding sinks, you can use multiple types at the same time; the only requirement is to provide a channel for each
# one of them (this example shows how to configure 3 sink types at the same time). Even, you can define more than one
# sink of the same type and sharing the channel in order to improve the performance (this is like having
# multi-threading).
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel

#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# Timestamp interceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# Destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf

# ============================================
# OrionMongoSink configuration
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# true if the grouping feature is enabled for this sink, false otherwise
cygnusagent.sinks.mongo-sink.enable_grouping = false
# the FQDN/IP address where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_host = 10.10.0.17:27017
# a valid user in the MongoDB server
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above
cygnusagent.sinks.mongo-sink.mongo_password = 
# prefix for the MongoDB databases
cygnusagent.sinks.mongo-sink.db_prefix = hvds_
# prefix for the MongoDB collections
cygnusagent.sinks.mongo-sink.collection_prefix = hvds_
# true if collection names are based on a hash, false for human readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
# specify if the sink will use a single collection for each service path, for each entity or for each attribute
cygnusagent.sinks.mongo-sink.data_model = collection-per-entity  
# how the attributes are stored, either per row either per column (row, column)
cygnusagent.sinks.mongo-sink.attr_persistence = column

#=============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
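
Since mongo_host points at the address of VM B, the MongoDB server there also has to accept connections on that interface; many default installations only bind to 127.0.0.1, which rejects clients coming from another machine. A sketch of the relevant setting, assuming the classic key=value format of /etc/mongod.conf on VM B:

# /etc/mongod.conf on VM B (path and format are assumptions)
# bind to the VM's own address (or 0.0.0.0) so remote clients such as Cygnus can connect,
# then restart the mongod service
bind_ip = 10.10.0.17
port = 27017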

When executing the following steps:

I subscribe to the sensor whose data I want to be stored:

(curl http://10.10.0.10:1026/NGSI10/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
    {
    "entities": [
        {
            "type": "Sensor",
            "isPattern": "false",
            "id": "sensor003"
        }
    ],
    "attributes": [
        "potencia_max",
        "potencia_min",
        "coste",
        "co2"
    ],
    "reference": "http://localhost:5050/notify",
    "duration": "P1M",
    "notifyConditions": [
        {
            "type": "ONTIMEINTERVAL",
            "condValues": [
                "PT5S"
            ]
        }
    ]
}
EOF
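
To confirm that Orion is actually delivering each notification to Cygnus, the Cygnus log configured above can be followed while the update below is sent (a minimal sketch):

# on VM A: watch for incoming notifications and any persistence errors
tail -f /var/log/cygnus/cygnus.log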

Then I create or update this data:

(curl http://10.10.0.10:1026/NGSI10/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
    "contextElements": [
        {
            "type": "Sensor",
            "isPattern": "false",
            "id": "sensor003",
            "attributes": [
                {
                    "name":"potencia_max",
                    "type":"float",
                    "value":"1000"
                },
                {
                    "name":"potencia_min",
                    "type":"float",
                    "value":"200"
                },
                {
                    "name":"coste",
                    "type":"float",
                    "value":"0.24"
                },
                {
                    "name":"co2",
                    "type":"float",
                    "value":"12"
                }
            ]
        }
    ],
    "updateAction": "APPEND"
}
EOF
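
Whether anything reached VM B can also be checked directly with the mongo shell; with db_prefix = hvds_ and default_service = def_serv, the database to look for would be hvds_def_serv (a sketch, run from VM A against VM B):

# list the databases held by the MongoDB on VM B
mongo --host 10.10.0.17 --port 27017 --eval "printjson(db.adminCommand('listDatabases'))"

# once the database exists, list its collections
mongo --host 10.10.0.17 --port 27017 hvds_def_serv --eval "printjson(db.getCollectionNames())"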

The expected result is returned, but when we access the database on VM B to check whether the data has been created and stored, we find that it has not:

  admin (empty)
  local 0.078GB
  localhost (empty)

If we look at the database on VM A instead, we can see that the databases have been created there:

  admin (empty)
  hvds_def_serv 0.078GB
  hvds_qsg 0.078GB
  local 0.078GB
  orion 0.078GB
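
Since the hvds_ databases show up on VM A rather than on VM B, it is worth confirming which mongo_host the running Cygnus instance is really using, for example by looking at the MongoDB sink start-up lines in its log (the exact message wording may vary between Cygnus versions):

# look for the MongoDB sink configuration picked up at start-up
grep -i mongo /var/log/cygnus/cygnus.log | head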

Could you tell me how this can be solved?

Thanks in advance for your help.

EDIT 1

I subscribe to sensor005:

(curl http://10.10.0.10:1026/NGSI10/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
    "entities": [{
        "type": "Sensor",
        "isPattern": "false",
        "id": "sensor005"
    }],
    "attributes": [
        "muestreo"
    ],
    "reference": "http://localhost:5050/notify",
    "duration": "P1M",
    "notifyConditions": [{
        "type": "ONCHANGE",
        "condValues": [
            "muestreo"
        ]
    }],
    "throttling": "PT1S"
}
EOF

Then I update the data:

(curl http://10.10.0.10:1026/NGSI10/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
    "contextElements": [
        {
            "type": "Sensor",
            "isPattern": "false",
            "id": "sensor005",
            "attributes": [
                {
                    "name":"magnitud",
                    "type":"string",
                    "value":"energia"
                },
                {
                    "name":"unidad",
                    "type":"string",
                    "value":"Kw"
                },
                {
                    "name":"tipo",
                    "type":"string",
                    "value":"electrico"
                },
                {
                    "name":"valido",
                    "type":"boolean",
                    "value":"true"
                },
                {
                    "name":"muestreo",
                    "type":"hora/kw",
                    "value": {
                        "tiempo": [
                            "10:00:31",
                            "10:00:32",
                            "10:00:33",
                            "10:00:34",
                            "10:00:35",
                            "10:00:36",
                            "10:00:37",
                            "10:00:38",
                            "10:00:39",
                            "10:00:40",
                            "10:00:41",
                            "10:00:42",
                            "10:00:43",
                            "10:00:44",
                            "10:00:45",
                            "10:00:46",
                            "10:00:47",
                            "10:00:48",
                            "10:00:49",
                            "10:00:50",
                            "10:00:51",
                            "10:00:52",
                            "10:00:53",
                            "10:00:54",
                            "10:00:55",
                            "10:00:56",
                            "10:00:57",
                            "10:00:58",
                            "10:00:59",
                            "10:01:60"
                        ],
                        "kw": [
                            "200",
                            "201",
                            "200",
                            "200",
                            "195",
                            "192",
                            "190",
                            "189",
                            "195",
                            "200",
                            "205",
                            "210",
                            "207",
                            "205",
                            "209",
                            "212",
                            "215",
                            "220",
                            "225",
                            "230",
                            "250",
                            "255",
                            "245",
                            "242",
                            "243",
                            "240",
                            "220",
                            "210",
                            "200",
                            "200"
                        ]
                    }
                }
            ]
        }
    ],
    "updateAction": "APPEND"
}
EOF

These are the two logs that are generated:

/var/log/contextBroker/contextBroker.log

/var/log/cygnus/cygnus.log

EDIT 2

/var/log/cygnus/cygnus.log with DEBUG enabled
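
For reference, DEBUG level was enabled the usual Flume way before capturing that log (a sketch, assuming the standard Apache Flume log4j.properties shipped under /usr/cygnus/conf):

# /usr/cygnus/conf/log4j.properties (assumed location)
# raise the root logger from INFO to DEBUG, then restart Cygnus
flume.root.logger=DEBUG,LOGFILE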

0 Answers:

No answers yet.