I have a load balancer and two EC2 instances running nginx + PHP-FPM to serve my website, and I've configured Redis to store PHP sessions. By running "KEYS *" in redis-cli, I realized that PHP is creating a lot of empty sessions beyond the legitimate ones. Even if I close the browser, clear all cookies, and don't run any PHP command or open any URL, it keeps creating empty sessions. The problem is that the session expiry time is 15 hours, so in that window it creates more sessions than it removes, since it's creating about 30 empty sessions per hour. The only way I could stop it from creating new sessions was by stopping PHP-FPM on my instances.
My guess is that it has something to do with the load balancer health check. I've added my nginx.conf and php.ini below so you can see how I'm handling the load balancer checks and my PHP session settings.
keys *
1) "PHPREDIS_SESSION:22u4tot1ilj2jn2pegsvsa9455"
2) "PHPREDIS_SESSION:u9c530pk3h0kr0moigf9a030c7"
...
316) "PHPREDIS_SESSION:d3t36ou13ljuj5ntt2l2b6sne0"
317) "PHPREDIS_SESSION:5kbn03dn01qdn405pg43bbd1i3"
Only 1 session has data; the other 316 are empty. Running "TTL key" shows the expiry time is the same as what I set in php.ini.
My PHP code is just a session_start(); for testing purposes.
My php.ini:
session.use_strict_mode = 0
session.use_cookies = 1
session.cookie_secure = 1
session.use_only_cookies = 1
session.name = PHPSESSID
session.cookie_lifetime = 54000
session.cookie_path = /
session.cookie_domain = .domain.xxx
session.cookie_httponly = 1
session.serialize_handler = php
session.gc_probability = 1
session.gc_divisor = 50
session.gc_maxlifetime = 54000
session.cache_limiter = nocache
session.cache_expire = 900
session.use_trans_sid = 0
I've checked phpinfo() and nothing is overriding these settings.
Nginx.conf:
# This is the block that responds to load balancer requests and serves the website.
# (upstream blocks must live at the http level, outside any server block)
upstream php-fpm {
    server 127.0.0.1:9000;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name localhost;
    root /var/www/html;

    location /nginx-health {
        access_log off;
        return 200 "healthy\n";
    }

    try_files $uri $uri/ @rewrite;

    location @rewrite {
        rewrite ^/(.*)$ /index.php?param=$1;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_intercept_errors on;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-fpm;
    }
}
# This block serves a websocket listener. It handles connections directly
# to the node, without passing through the load balancer.
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name sub.domain.xxx;
    root /var/www/html;

    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 720m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP;
    ssl_prefer_server_ciphers on;

    ssl_certificate /etc/letsencrypt/live/sub.domain.xxx/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sub.domain.xxx/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:4555;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
I tried creating a cronjob to fetch the Redis keys, test their values, and delete the empty ones, but I saw that running the KEYS command is really bad for production environments. Does anyone have an idea how to fix this issue?
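For reference, here's a rough sketch of what I was attempting in the cronjob, reworked to iterate with SCAN (via `redis-cli --scan`) instead of KEYS, since SCAN fetches keys in small batches and doesn't block the server. It assumes redis-cli on the same host and phpredis's default `PHPREDIS_SESSION:` key prefix:

```shell
# Reads key names on stdin and deletes any whose stored value is empty.
delete_if_empty() {
  while IFS= read -r key; do
    # STRLEN returns 0 when the session payload is an empty string
    if [ "$(redis-cli STRLEN "$key")" = "0" ]; then
      redis-cli DEL "$key"
    fi
  done
}

# Stream matching keys incrementally into the filter (skip if redis-cli is absent):
if command -v redis-cli >/dev/null 2>&1; then
  redis-cli --scan --pattern 'PHPREDIS_SESSION:*' | delete_if_empty
fi
```

This only treats the symptom, though; the empty sessions would still get created.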
Answer 0 (score: 0)
I can't tell you exactly what's going on, but I have some suggestions:
KEYS is an O(n) operation, but if your instance is small, the cost is trivial. Keep an eye on your slow log to see whether your KEYS operations are actually taking too long; my guess is they aren't.
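As an illustration, the slow log can be inspected from redis-cli like this (the 10000µs threshold is just an example value, not a recommendation):

```shell
# Log any command that takes longer than 10ms (threshold is in microseconds)
redis-cli CONFIG SET slowlog-log-slower-than 10000
# Show the 10 most recent slow entries; KEYS would appear here if it's a problem
redis-cli SLOWLOG GET 10
# Number of slow entries accumulated so far
redis-cli SLOWLOG LEN
```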