Bad Redis SUNION performance with Cm_Cache_Backend_Redis

Date: 2017-03-12 23:39:46

Tags: performance magento redis

We have a Magento EE 1.14 installation with Redis caching in place which, despite all the troubleshooting and best-practice configuration we have applied, still gives us persistent performance problems.

The cache and FPC configuration is:

    <cache>
        <backend>Mage_Cache_Backend_Redis</backend>
        <backend_options>
            <server>127.0.0.1</server> <!-- or absolute path to unix socket -->
            <port>6379</port>
            <persistent>cache-db0</persistent> <!-- Specify a unique string like "cache-db0" to enable persistent connections. -->
            <database>0</database>
            <password></password>
            <force_standalone>0</force_standalone>  <!-- 0 for phpredis, 1 for standalone PHP -->
            <connect_retries>3</connect_retries>    <!-- Reduces errors due to random connection failures -->
            <read_timeout>20</read_timeout>         <!-- Set read timeout duration -->
            <automatic_cleaning_factor>0</automatic_cleaning_factor> <!-- Disabled by default -->
            <compress_data>1</compress_data>  <!-- 0-9 for compression level, recommended: 0 or 1 -->
            <compress_tags>1</compress_tags>  <!-- 0-9 for compression level, recommended: 0 or 1 -->
            <compress_threshold>20480</compress_threshold>  <!-- Strings below this size will not be compressed -->
            <compression_lib>gzip</compression_lib> <!-- Supports gzip, lzf and snappy -->
            <use_lua>0</use_lua> <!-- Set to 1 if Lua scripts should be used for some operations -->
        </backend_options>
    </cache>
    <full_page_cache>
        <backend>Mage_Cache_Backend_Redis</backend>
        <backend_options>
            <server>127.0.0.1</server> <!-- or absolute path to unix socket -->
            <port>6379</port>
            <persistent>cache-db2</persistent> <!-- Specify a unique string like "cache-db0" to enable persistent connections. -->
            <database>2</database> <!-- Separate database 2 to keep FPC separately -->
            <password></password>
            <force_standalone>0</force_standalone>  <!-- 0 for phpredis, 1 for standalone PHP -->
            <connect_retries>3</connect_retries>    <!-- Reduces errors due to random connection failures -->
            <lifetimelimit>57600</lifetimelimit>    <!-- 16 hours of lifetime for cache record -->
            <compress_data>0</compress_data>        <!-- DISABLE compression for EE FPC since it already uses compression -->
            <auto_expire_lifetime></auto_expire_lifetime> <!-- Force an expiry (Enterprise_PageCache will not set one) -->
            <auto_expire_refresh_on_load></auto_expire_refresh_on_load> <!-- Refresh keys when loaded (Keeps cache primed frequently requested resources) -->
        </backend_options>
    </full_page_cache>

In normal operation Redis performs well and there are no issues. The only time the site runs into trouble is after the daily import job that updates product stock and pricing. Immediately after that job, Redis performance degrades noticeably, and Magento sometimes produces error reports with "read error on connection". The problem disappears once the garbage-collection script has run.
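If the behaviour here matches how Cm_Cache_Backend_Redis manages tags (one data key per cache record plus one Redis set per tag, where only the data key carries a TTL), then a bulk import that writes many short-lived records leaves the tag sets full of stale ids until the garbage-collection script prunes them. A minimal Python sketch of that mechanism; the `save`/`expire`/`garbage_collect` names, the in-memory dicts, and the `CATALOG_PRODUCT` tag are illustrative stand-ins, not the backend's real API:

```python
data = {}  # stands in for the per-record data keys (expirable via TTL)
tags = {}  # stands in for the per-tag sets (members have no TTL)

def save(record_id, value, record_tags):
    """Store a record and register its id in every tag set it belongs to."""
    data[record_id] = value
    for t in record_tags:
        tags.setdefault(t, set()).add(record_id)

def expire(record_id):
    """TTL expiry removes only the data key; tag sets keep the stale id."""
    data.pop(record_id, None)

def garbage_collect():
    """What the GC script does: drop ids that no longer have a data key."""
    for t, ids in tags.items():
        tags[t] = {i for i in ids if i in data}

# A bulk import writes many records, most of which soon expire...
for i in range(10_000):
    save(f"rec{i}", "payload", ["CATALOG_PRODUCT"])
for i in range(9_900):
    expire(f"rec{i}")

stale = len(tags["CATALOG_PRODUCT"])
print(stale)  # 10000: every id the import wrote is still in the tag set
garbage_collect()
print(len(tags["CATALOG_PRODUCT"]))  # 100: only live records remain
```

Any tag-based operation that unions these sets (Magento's id lookups by tag use SUNION) has to walk all 10,000 members until GC runs, which would fit the pattern of the problem vanishing after the garbage-collection script executes.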

While profiling Redis operations we noticed that SUNION calls in particular perform badly. Below are comparative stats from two Magento/Redis installations; the first is fine, the second is the one with problems:

Stats from the 'good' Magento/Redis server:

used_memory_human:6.70G
instantaneous_ops_per_sec:155
db0:keys=43699,expires=7462,avg_ttl=27636661
db2:keys=6375,expires=6375,avg_ttl=47526572
db3:keys=68969,expires=18510,avg_ttl=75156076
cmdstat_hget:calls=3742856691,usec=27239921284,usec_per_call=7.28
*cmdstat_sunion:calls=806716,usec=160088795,usec_per_call=198.45

Stats from our problem installation:

used_memory_human:54.33M
instantaneous_ops_per_sec:269
db0:keys=3895,expires=988,avg_ttl=4073916
db2:keys=7875,expires=561,avg_ttl=18645058
db4:keys=101,expires=23,avg_ttl=6494711
db5:keys=591,expires=8,avg_ttl=65237814
cmdstat_hget:calls=35389067,usec=237642652,usec_per_call=6.72
*cmdstat_sunion:calls=14018,usec=517646874,usec_per_call=36927.30

You can see the terrible average SUNION performance: 36927.30 versus 198.45 microseconds per call.
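The per-call figure in `commandstats` is simply total microseconds divided by call count, and since SUNION's cost grows with the total number of set members it reads, a roughly 186x jump per call suggests far larger (or staler) tag sets on the problem server. A quick check of the arithmetic:

```python
# usec_per_call reported by INFO commandstats is usec / calls
good = 160088795 / 806716   # 'good' server's SUNION totals
bad = 517646874 / 14018     # problem server's SUNION totals

print(round(good, 2))     # matches the 198.45 reported above
print(round(bad, 2))      # matches the 36927.30 reported above
print(round(bad / good))  # problem server is roughly 186x slower per call
```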

redis-cli info:

# Server
redis_version:3.2.8
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:6ee7659e3cbceeef
redis_mode:standalone
os:Linux 3.13.0-107-generic x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.4
process_id:6267
run_id:b27c19a312f347b10e4afe68692ffa0d22613f1c
tcp_port:6379
uptime_in_seconds:182016
uptime_in_days:2
hz:20
lru_clock:12966704

Any ideas what could be causing this?

0 Answers:

There are no answers.