My MongoDB instance is generating gigabytes of logs (through the QUERY component), even though I have set the global verbosity level to 0 and all components to -1 (verbosity inheritance).
This has a significant impact on the instance's performance.
Below is a sample of the log records, together with the output of db.stats() and db.getLogComponents().
PS / EDIT (since I have already seen the votes...): I have tried everything I could find online (a YAML config file with quiet = true, I checked that the profiling level is 0, I even tried setting the log components explicitly to 0), but nothing works and mongo keeps logging these queries in huge volumes... Why does mongo keep producing these log entries? Any suggestions on how to turn this off? Thanks.
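More concretely, the attempts mentioned above look roughly like this (a sketch, not my exact setup; the slowms value of 100 shown in the comment is just the MongoDB default, assuming it was never changed):

    // Tried from the mongo shell:
    db.setLogLevel(0)               // global verbosity = 0
    db.setLogLevel(0, "query")      // also tried setting the QUERY component to 0 explicitly
    db.getProfilingStatus()         // returns { "was" : 0, "slowms" : 100 }, i.e. the profiler level is 0

    // mongod.conf additionally contains:
    //   systemLog:
    //     quiet: true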
2016-02-23T09:14:27.089+0100 I QUERY [conn526] query jdigger.stacktraces query: { hashcode: 730309037 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:2 reslen:1114 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 268ms
2016-02-23T09:14:27.089+0100 I QUERY [conn546] query jdigger.stacktraces query: { hashcode: 1 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:4 reslen:232 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 269ms
2016-02-23T09:14:27.089+0100 I QUERY [conn532] query jdigger.stacktraces query: { hashcode: -1176121626 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:3 reslen:2162 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 268ms
2016-02-23T09:14:27.089+0100 I QUERY [conn533] query jdigger.stacktraces query: { hashcode: -1452854181 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:1 reslen:888 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 269ms
2016-02-23T09:14:27.089+0100 I QUERY [conn529] query jdigger.stacktraces query: { hashcode: 401721954 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:2 reslen:776 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 269ms
2016-02-23T09:14:27.091+0100 I QUERY [conn524] query jdigger.stacktraces query: { hashcode: -73774731 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:1 reslen:311 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 269ms
2016-02-23T09:14:27.195+0100 I QUERY [conn534] query jdigger.stacktraces query: { hashcode: 1 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:4 reslen:232 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 278ms
2016-02-23T09:14:27.203+0100 I QUERY [conn525] query jdigger.stacktraces query: { hashcode: 1 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:4 reslen:232 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 268ms
2016-02-23T09:14:27.204+0100 I QUERY [conn528] query jdigger.stacktraces query: { hashcode: 401721954 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:2 reslen:776 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 269ms
2016-02-23T09:14:27.204+0100 I QUERY [conn547] query jdigger.stacktraces query: { hashcode: 196127445 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:2 reslen:1004 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 269ms
2016-02-23T09:14:27.204+0100 I QUERY [conn535] query jdigger.stacktraces query: { hashcode: -1176121626 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:3 reslen:2162 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 269ms
2016-02-23T09:14:27.205+0100 I QUERY [conn544] query jdigger.stacktraces query: { hashcode: 401721954 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:2 reslen:776 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 269ms
2016-02-23T09:14:27.207+0100 I QUERY [conn543] query jdigger.stacktraces query: { hashcode: 1 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:4 reslen:232 locks:{Global:{acquireCount:{r:800}},Database:{acquireCount:{r:400}},Collection:{acquireCount:{r:400}}} 269ms
db.stats()
{
    "db" : "jdigger",
    "collections" : 2,
    "objects" : 4821385,
    "avgObjSize" : 284.73536857147894,
    "dataSize" : 1372818835,
    "storageSize" : 395268096,
    "numExtents" : 0,
    "indexes" : 4,
    "indexSize" : 81928192,
    "ok" : 1
}
db.getLogComponents()
{
    "verbosity" : 0,
    "accessControl" : { "verbosity" : -1 },
    "command" : { "verbosity" : -1 },
    "control" : { "verbosity" : -1 },
    "executor" : { "verbosity" : -1 },
    "geo" : { "verbosity" : -1 },
    "index" : { "verbosity" : -1 },
    "network" : {
        "verbosity" : -1,
        "asio" : { "verbosity" : -1 },
        "bridge" : { "verbosity" : -1 }
    },
    "query" : { "verbosity" : -1 },
    "replication" : { "verbosity" : -1 },
    "sharding" : { "verbosity" : -1 },
    "storage" : {
        "verbosity" : -1,
        "journal" : { "verbosity" : -1 }
    },
    "write" : { "verbosity" : -1 },
    "ftdc" : { "verbosity" : -1 }
}
Answer 0 (score: 0)
MongoDB's profiler automatically logs any query that takes longer than 100 ms, so that is probably what you are seeing. You can turn it off with db.setProfilingLevel(0); see https://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
However, I would also consider adding an index to your collection that is optimized for your query, because right now the query is reading every document in the collection (COLLSCAN).
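For example, a single-field index on hashcode (the field these queries filter on) should turn the COLLSCAN into an index scan. A minimal sketch, assuming the database and collection names visible in the log above (jdigger.stacktraces):

    // Run once from the mongo shell; with ~4.8M documents the build may take a moment.
    db.getSiblingDB("jdigger").stacktraces.createIndex({ hashcode: 1 })

    // Afterwards the same query should report planSummary: IXSCAN instead of COLLSCAN:
    db.getSiblingDB("jdigger").stacktraces.find({ hashcode: 730309037 }).explain("executionStats")

Once the queries use the index they should also drop well below the 100 ms slow-operation threshold, which would stop the log spam on its own.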
Answer 1 (score: 0)
I am not sure whether this should be considered a workaround or an actual solution, but I was able to achieve my goal, i.e. reduce the overall logging overhead, by using db.setProfilingLevel(0, 1000).
As someone pointed out, Mongo seems to profile "long-lasting" queries by default, regardless of the profiling level setting or any other parameter.
Since most of my queries take around 200 ms, setting the threshold to 1 second works great for me. Given the implementation details of my use case, 200 ms is not necessarily alarming, so I am not really going against the "norm" here.
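For reference, the call and a quick sanity check look like this (a sketch; 1000 ms is simply the threshold that made sense for my workload):

    // Keep the profiler off (level 0) but raise the slow-operation threshold to 1000 ms,
    // so only operations slower than one second end up in the log.
    db.setProfilingLevel(0, 1000)    // returns the previous settings, e.g. { "was" : 0, "slowms" : 100, "ok" : 1 }

    db.getProfilingStatus()          // now reports { "was" : 0, "slowms" : 1000 }

As far as I can tell, the slowms threshold applies to the whole mongod process rather than a single database, and it is reset on restart unless it is also set in the config file (operationProfiling.slowOpThresholdMs).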