I am trying to set up a local MongoDB crawler for my Watson Discovery service. MongoDB is up and running. I downloaded the JDBC connector (mongodb-driver-3.4.2.jar) and placed it in /opt/ibm/crawler/connectorFramework/crawler-connector-framework-0.1.18/lib/java/database/.
Here is how I modified the configuration files:
In crawler.conf, under the main "input_adapter" section, I changed the following values:
crawl_config_file = "connectors/database.conf",
crawl_seed_file = "seeds/database-seed.conf",
extra_jars_dir = "database",
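Put together, the relevant part of input_adapter in crawler.conf would look roughly like this (a sketch based only on the three values above; any other keys in the section are left as shipped):

```conf
input_adapter {
  crawl_config_file = "connectors/database.conf",
  crawl_seed_file = "seeds/database-seed.conf",
  extra_jars_dir = "database",
  # ... remaining keys unchanged ...
}
```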
In seeds/database-seed.conf, in the seed > attribute section, the url part looks like this:
{
name="url",
value="mongo://localhost:27017/local/tweets?per=1000"
},
(I also tried mongodb instead of mongo.)
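For reference, a standard MongoDB connection string (per the MongoDB documentation) takes the form mongodb://host:port/database, so a plain connection string for the host and database in the seed URL above would be:

```conf
mongodb://localhost:27017/local
```

Note that the collection and the per parameter in the seed URL are conventions of the crawler's seed format, not part of the standard MongoDB URI.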
In connectors/database.conf, the first section of the file looks like this:
crawl_extender {
attribute = [
{
name="protocol",
value="mongo"
},
{
name="collection",
value="SomeCollection"
}
],
(Here too, I also tried mongodb instead of mongo.)
When I run the crawler command, this is my output:
pish@ubuntu-crawler:~$ crawler crawl --config ./crawler-config/config/crawler.conf
2017-08-02 04:29:10,206 INFO: Connector Framework service will start and connect to crawler on port 35775
2017-08-02 04:29:10,460 INFO: This crawl is running in CrawlRun mode
2017-08-02 04:29:10,460 INFO: Running a crawl...
2017-08-02 04:29:10,465 INFO: URLs matching these patterns will be not be processed: (?i)\.(xlsx?|pptx?|jpe?g|gif|png|mp3|tiff)$
2017-08-02 04:29:10,500 INFO: HikariPool-1 - Starting...
2017-08-02 04:29:10,685 INFO: HikariPool-1 - Start completed.
2017-08-02 04:29:12,161 ERROR: There was a problem processing URL mongo://localhost:27017/local/tweets?per=1000: Couldn't load JDBC driver :
2017-08-02 04:29:17,184 INFO: HikariPool-1 - Shutdown initiated...
2017-08-02 04:29:17,196 INFO: HikariPool-1 - Shutdown completed.
2017-08-02 04:29:17,198 INFO: The service for the Connector Framework Input Adapter was signaled to halt.
Attempting to shutdown the crawler cleanly.
What is missing or wrong in my crawler setup?
Answer (score: 0)
In the end, it turned out I also had to specify the connection string in one of the configuration files. It works now.
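For anyone hitting the same "Couldn't load JDBC driver" error: one place to try is the crawl_extender attribute list in connectors/database.conf shown in the question. The attribute name connection_string below is an assumption, not taken from the product documentation; check the database.conf template shipped with your crawler for the exact key:

```conf
crawl_extender {
  attribute = [
    {
      name="protocol",
      value="mongo"
    },
    {
      name="collection",
      value="SomeCollection"
    },
    {
      # hypothetical attribute name; verify against the shipped template
      name="connection_string",
      value="mongodb://localhost:27017/local"
    }
  ],
  # ... rest of the section unchanged ...
}
```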