Nginx + php-fpm + mongoDB(+ mongodb php-lib)
I tried to compare MongoDB's compression ratios, but the results were not what I expected. Here is my experiment.
/etc/mongod.conf
# mongod.conf  // default settings
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

#security:
#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:
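For context, the config above leaves storage.wiredTiger at its defaults (the default block compressor is snappy). If the compressor were set globally instead of per collection, it would go in this same file; this is only a sketch and not part of the original setup:

storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
  wiredTiger:
    collectionConfig:
      blockCompressor: zlib        # none | snappy | zlib
    indexConfig:
      prefixCompression: true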
Compression was set when creating the collection in the mongoDB shell:
mongoDB shell> db.createCollection("test", {storageEngine: {wiredTiger: {configString: 'block_compressor=none,prefix_compression=false'}}})
There are six compression option combinations in total:
block_compressor = none, snappy, or zlib // prefix_compression = false or true
When checked with db.printCollectionStats(), the options were applied correctly.
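For reference, this is roughly how one of the six variants can be created and the applied options confirmed from the shell (a sketch; the collection name test_zlib_prefix is just a placeholder):

// Create one collection per compressor/prefix combination.
db.createCollection("test_zlib_prefix", {
    storageEngine: {
        wiredTiger: { configString: "block_compressor=zlib,prefix_compression=true" }
    }
})
// Read back the effective creationString from the collection stats and
// print only the compression-related options.
var cs = db.test_zlib_prefix.stats().wiredTiger.creationString
print(cs.split(",").filter(function (opt) {
    return /block_compressor|prefix_compression/.test(opt)
}).join(", "))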
The inserted data comes to 100KB * 100,000 = about 9GB.
However, the db.test.storageSize() results were:
block_compressor none = 10653536256 (bytes)
block_compressor snappy = 10653405184 (bytes)
block_compressor zlib = 6690177024 (bytes)
Compared with no compression, zlib saved about 40%. However, none and snappy come out the same. (prefix_compression makes no difference either.)
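For reference, the effective ratio can also be read straight from the collection stats (a sketch assuming the collection is named test):

// stats().size is the uncompressed data size; stats().storageSize is the on-disk size.
var s = db.test.stats()
print("logical size (bytes): " + s.size)
print("storage size (bytes): " + s.storageSize)
print("storage/logical ratio: " + (s.storageSize / s.size).toFixed(2))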
What settings do I need to add?
+ UPDATE
snappy + false
"compression" : {
"compressed pages read" : 0,
"compressed pages written" : 0,
"page written failed to compress" : 100007,
"page written was too small to compress" : 1025
}
zlib + false
"compression" : {
"compressed pages read" : 0,
"compressed pages written" : 98881,
"page written failed to compress" : 0,
"page written was too small to compress" : 924
}
What does "page written failed to compress" mean, and how do I fix it?
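For reference, these counters sit in the wiredTiger.compression section of the collection stats; a sketch for pulling them and seeing what fraction of written pages actually got compressed (assuming the collection is named test):

var c = db.test.stats().wiredTiger.compression
var written = c["compressed pages written"] + c["page written failed to compress"]
// With the snappy numbers above this prints 0 of 100007; with zlib, 98881 of 98881.
print("compressed pages written: " + c["compressed pages written"] + " of " + written)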
+ update2
MongoDB server version in use: 4.0.9
Code that inserts each data document:
$result = $collection->insertOne( ['num'=> (int)$i ,
'title' => "$i",
'main' => "$i",
'img' => "$t",
'user'=>"$users",
'like'=> 0,
'time'=> "$date" ] );
---Variable Description---
$i = 1 ~ 100,000 (incremented by 1)
$t = 100KB (102,400 bytes) random string
$users = random 10 characters from "12134567890abcdefghij"
$date = real-time server date (ex = 2019:05:18 xx.xx.xx)
Indexes:
db.test.createIndex( { "num":1 } )
db.test.createIndex( { "title":1 } )
db.test.createIndex( { "user":1 } )
db.test.createIndex( { "like":1 } )
db.test.createIndex( { "time":1 } )
The full collection stats are too long, so I am only including two of them.
snappy + false
"creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
snappy + true
"creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=true,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
Thank you for your attention.
Answer 0 (score: 1)
One thing that jumps out is that you are using allocation_size=4KB. With that allocation size your disk blocks are too small to compress, so they are not being compressed. Increase allocation_size and compression will kick in.
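A sketch of what the answer suggests, recreating the collection with a larger allocation unit (the specific sizes here are illustrative and not taken from the answer):

// Drop and recreate the test collection so that a compressed page can save
// at least one (larger) allocation unit on disk.
db.test.drop()
db.createCollection("test", {
    storageEngine: {
        wiredTiger: {
            configString: "block_compressor=snappy,allocation_size=16KB,leaf_page_max=64KB"
        }
    }
})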