OK, so I have an Express-driven API, and I also have socket.io running to receive/send realtime events... everything works just dandy. I need to cluster my app. I set everything up based on the code below. The workers start up, they receive connections, and everything works fine, except for the fact that now I can't "blast" to all socket.io connections. Here is the setup (taken from this):
var express = require('express'),
    cluster = require('cluster'),
    net = require('net'),
    sio = require('socket.io'),
    sio_redis = require('socket.io-redis');

var port = 3000,
    num_processes = require('os').cpus().length;

if (cluster.isMaster) {
    // This stores our workers. We need to keep them to be able to reference
    // them based on source IP address. It's also useful for auto-restart,
    // for example.
    var workers = [];

    // Helper function for spawning worker at index 'i'.
    var spawn = function(i) {
        workers[i] = cluster.fork();

        // Optional: restart worker on exit.
        workers[i].on('exit', function(code, signal) {
            console.log('respawning worker', i);
            spawn(i);
        });
    };

    // Spawn workers.
    for (var i = 0; i < num_processes; i++) {
        spawn(i);
    }

    // Helper function for getting a worker index based on IP address.
    // This is a hot path so it should be really fast. The way it works
    // is by converting the IP address to a number by removing the dots,
    // then compressing it to the number of slots we have.
    //
    // Compared against "real" hashing (from the sticky-session code) and
    // "real" IP number conversion, this function is on par in terms of
    // worker index distribution, only much faster.
    var workerIndex = function(ip, len) {
        var _ip = ip.split(/['.'|':']/),
            arr = [];
        for (var el in _ip) {
            if (_ip[el] == '') {
                arr.push(0);
            } else {
                arr.push(parseInt(_ip[el], 16));
            }
        }
        return Number(arr.join('')) % len;
    };

    // Create the outside-facing server listening on our port.
    var server = net.createServer({ pauseOnConnect: true }, function(connection) {
        // We received a connection and need to pass it to the appropriate
        // worker. Get the worker for this connection's source IP and pass
        // it the connection.
        var worker = workers[workerIndex(connection.remoteAddress, num_processes)];
        worker.send('sticky-session:connection', connection);
    }).listen(port);
} else {
    // Note we don't use a port here because the master listens on it for us.
    var app = express();

    // Here you might use middleware, attach routes, etc.

    // Don't expose our internal server to the outside.
    var server = app.listen(0, 'localhost'),
        io = sio(server);

    // Tell Socket.IO to use the redis adapter. By default, the redis
    // server is assumed to be on localhost:6379. You don't have to
    // specify them explicitly unless you want to change them.
    io.adapter(sio_redis({ host: 'localhost', port: 6379 }));

    // Here you might use Socket.IO middleware for authorization etc.

    // Listen to messages sent from the master. Ignore everything else.
    process.on('message', function(message, connection) {
        if (message !== 'sticky-session:connection') {
            return;
        }

        // Emulate a connection event on the server by emitting the
        // event with the connection the master sent us.
        server.emit('connection', connection);
        connection.resume();
    });
}
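(The sticky-routing hash in the master can be exercised on its own, without spinning up the cluster. The sketch below is a standalone copy of the `workerIndex` function from the listing above; note the quirks it inherits: the regex character class splits on `'`, `.`, `|` and `:` — covering both IPv4 dots and IPv6 colons — and `parseInt` uses base 16 so IPv6 hex groups parse cleanly.)

```javascript
// Standalone copy of the workerIndex hash from the master code above.
// It maps a source IP to a worker slot so the same client always
// reaches the same worker ("sticky sessions").
function workerIndex(ip, len) {
  var _ip = ip.split(/['.'|':']/), // splits on ', ., | and :
      arr = [];
  for (var el in _ip) {
    if (_ip[el] == '') {
      arr.push(0); // empty groups (e.g. from "::1") count as 0
    } else {
      arr.push(parseInt(_ip[el], 16)); // base 16 handles IPv6 hex groups
    }
  }
  return Number(arr.join('')) % len;
}

// The same address always lands on the same worker index in [0, len).
console.log(workerIndex('192.168.1.1', 4));
console.log(workerIndex('::1', 4)); // IPv6 loopback works too
```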
So I connect from various machines to test concurrency, the workers do their thing, and all is well, but when I get an IO connection I'm logging the TOTAL "connections" count, and it is always 1 per instance. I need a way to say

allClusterForks.emit(stuff)

I get the connection on the correct worker pid, but "ALL CONNECTIONS" always returns 1.
io.on('connection', function(socket) {
    console.log('Connected to worker %s', process.pid);
    console.log('Adapter ROOMS %s ', io.sockets.adapter.rooms);
    console.log('Adapter SIDS %s ', io.sockets.adapter.sids);
    console.log('SOCKETS CONNECTED %s ', Object.keys(io.sockets.connected).length);
});
Using Redis MONITOR I can see the subscribes/unsubscribes coming in:

1454701383.188231 [0 127.0.0.1:63150] "subscribe" "socket.io#/#gXJscUUuVQGzsYJfAAAA#"
1454701419.130100 [0 127.0.0.1:63167] "subscribe" "socket.io#/#geYSvYSd5zASi7egAAAA#"
1454701433.842727 [0 127.0.0.1:63167] "unsubscribe" "socket.io#/#geYSvYSd5zASi7egAAAA#"
1454701444.630427 [0 127.0.0.1:63150] "unsubscribe" "socket.io#/#gXJscUUuVQGzsYJfAAAA#"
These are connections coming in from 2 different machines. I expected that, by using the socket.io-redis adapter, these subscriptions would come in on the same redis connection, but they are different.

Am I completely missing something? There is a surprising lack of documentation/articles on this that isn't completely outdated/wrong/ambiguous.
EDIT: Node v5.3.0, Redis v3.0.6, Socket.io v1.3.7
Answer (score: 0)
So, in case anyone comes across this: I found that actually "viewing" the count of connected sockets across processes is not a thing, but broadcasting or emitting to them is. So I was basically only "testing" something that can't work. Everything works as expected. I will be rewriting the socket.io-redis adapter to allow checking the count across processes.

There was a pull request a few years ago to implement support for what I was trying to do. https://github.com/socketio/socket.io-redis/pull/15 I might try to clean that up and resubmit it.