Scaling socket.io between servers


What are the approaches for scaling socket.io applications? I see the following problem, which I don't understand how to solve:

  • How can a scaled socket.io app broadcast to a room? In other words, how will one socket.io server know about the sockets connected to that room on other servers?

It is hard for me to imagine how this should work. Perhaps a shared store holding all the necessary information, such as Redis -- is this a possibility?

EDIT: I found this article: http://www.ranu.com.ar/2011/11/redisstore-and-rooms-with-socketio.html

Based on it I did the following:

    var pub = redis.createClient();
    var sub = redis.createClient();
    var store = redis.createClient();
    pub.auth("pass");
    sub.auth("pass");
    store.auth("pass");

    io.configure(function () {
        io.enable('browser client minification');  // send minified client
        io.enable('browser client etag');          // apply etag caching logic based on version number
        io.enable('browser client gzip');          // gzip the file
        io.set('log level', 1);                    // reduce logging
        io.set('transports', [                     // enable all transports (optional if you want flashsocket)
            'websocket'
          , 'flashsocket'
          , 'htmlfile'
          , 'xhr-polling'
          , 'jsonp-polling'
        ]);
        var RedisStore = require('socket.io/lib/stores/redis');
        io.set('store', new RedisStore({redisPub: pub, redisSub: sub, redisClient: store}));
    });

But I get the following error:

      Error: Uncaught, unspecified 'error' event.
     at RedisClient.emit (events.js:50:15)
     at Command.callback (/home/qwe/chat/io2/node_modules/socket.io/node_modules/redis/index.js:232:29)
     at RedisClient.return_error (/home/qwe/chat/io2/node_modules/socket.io/node_modules/redis/index.js:382:25)
     at RedisReplyParser.<anonymous> (/home/qwe/chat/io2/node_modules/socket.io/node_modules/redis/index.js:78:14)
     at RedisReplyParser.emit (events.js:67:17)
     at RedisReplyParser.send_error (/home/qwe/chat/io2/node_modules/socket.io/node_modules/redis/lib/parser/javascript.js:265:14)
     at RedisReplyParser.execute (/home/qwe/chat/io2/node_modules/socket.io/node_modules/redis/lib/parser/javascript.js:124:22)
     at RedisClient.on_data (/home/qwe/chat/io2/node_modules/socket.io/node_modules/redis/index.js:358:27)
     at Socket.<anonymous> (/home/qwe/chat/io2/node_modules/socket.io/node_modules/redis/index.js:93:14)
     at Socket.emit (events.js:67:17)

My Redis credentials are definitely correct.

EDIT: Very strange, but with Redis authorization disabled everything works, so the question is still valid. Additionally, I have a question about how to get information (for example the user name) for all the participants of a group (room) when using this RedisStore mode -- is it possible to implement this? Ideally it could be done through Redis Pub/Sub.

You can use socket.io-cluster for this: https://github.com/muchmala/socket.io-cluster

It seems like a good solution for this would be to enable peer-to-peer communication between the different socket.io instances on the express servers; this way they can all stay in sync with one another. You can do this using something like scuttlebutt and its event emitter – Travis Kaufman Jun 13 '15 at 18:57

Try adding this code:

pub.on('error', function (err) {
  console.error('pub', err.stack);
});
sub.on('error', function (err) {
  console.error('sub', err.stack);
});
store.on('error', function (err) {
  console.error('store', err.stack);
});

It won't fix it, but it should at least give you a more useful error.


I suggest not using RedisStore. It has a CPU-usage problem because of its poor use of pub/sub, which makes it unscalable (it can handle less load than one plain node.js instance running socket.io, which makes it pretty useless).

I personally used Redis as a data store to keep the room list there and implemented my own room functions (Redis is an in-memory key-value database, but it has a persistence mechanism). When you want room data, you just fetch it from that same Redis, and that's it.

However, to run Socket.io in multiple instances, you also need a load balancer like HAProxy or Nginx to distribute work across the multiple node.js ports; otherwise your users will still hit only one node.js process. This is a lot of work. If you also have a web frontend in another language, that's even more work, because some networks block all ports except 80 and 443. You can read more about these things at:

http://book.mixu.net/node/ch13.html
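To make the "keep room data in Redis yourself" idea from the answer above concrete, here is a minimal sketch. A real deployment would use the `redis` npm client against a shared Redis server; here an in-memory stub mimics the three set commands needed (SADD, SREM, SMEMBERS) so the logic is runnable on its own. All names (`fakeRedis`, `roomKey`, the helper functions) are illustrative, not from any library.

```javascript
// Stand-in for a shared Redis client (hypothetical; swap in require('redis')
// so every server instance talks to the same store).
const fakeRedis = {
  sets: new Map(),
  sadd(key, member) {
    if (!this.sets.has(key)) this.sets.set(key, new Set());
    this.sets.get(key).add(member);
  },
  srem(key, member) {
    if (this.sets.has(key)) this.sets.get(key).delete(member);
  },
  smembers(key) {
    return this.sets.has(key) ? [...this.sets.get(key)] : [];
  }
};

const roomKey = room => 'room:' + room;

// Every server instance writes joins/leaves to the same Redis,
// so any instance can answer "who is in this room?".
function joinRoom(room, userName)  { fakeRedis.sadd(roomKey(room), userName); }
function leaveRoom(room, userName) { fakeRedis.srem(roomKey(room), userName); }
function roomMembers(room)         { return fakeRedis.smembers(roomKey(room)); }

joinRoom('lobby', 'alice');
joinRoom('lobby', 'bob');
leaveRoom('lobby', 'alice');
console.log(roomMembers('lobby')); // → [ 'bob' ]
```

With a real Redis behind these helpers, this also answers the question from the EDIT above: any instance can list a room's participants by reading the shared set, independently of which instance each participant's socket is connected to.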


Another possible solution is to use an alternative like PubNub to scale real-time interaction. I came across a similar problem when developing Mote.io and decided to go with a hosted solution instead of building a load balancer. I now work for PubNub.

PubNub will take care of the data-sync problem you're talking about. Normally you would need to sync Redis across servers, or load-balance your clients to the same instance to make sure they all get the same messages. PubNub abstracts this away so you don't need to worry about it.

Real-time Chat Apps in 10 Lines of Code

Enter Chat and press enter
<div><input id="input" placeholder="you-chat-here" /></div>

Chat Output
<div id="box"></div>

<script src="http://cdn.pubnub.com/pubnub.min.js"></script>
<script>(function(){
var box = PUBNUB.$('box'), input = PUBNUB.$('input'), channel = 'chat';
PUBNUB.subscribe({
    channel  : channel,
    callback : function(text) { box.innerHTML = (''+text).replace( /[<>]/g, '' ) + '<br>' + box.innerHTML }
});
PUBNUB.bind( 'keyup', input, function(e) {
    (e.keyCode || e.charCode) === 13 && PUBNUB.publish({
        channel : channel, message : input.value, x : (input.value='')
    })
} )
})()</script>


Using RabbitMQ

I achieved socket.io application scaling using RabbitMQ. In my current setup, I run two replicas of a socket.io application container in Docker Swarm and communicate with them; each message is tagged with the ID of the container that handled it.

How to

RabbitMQ is a message broker; basically, it keeps all the instances of the application backend in sync. Each backend instance pushes its messages to RabbitMQ, and they are consumed by every instance. The RabbitMQ handler in Node.js is given below.

// `rabbitMQHandler` is this project's RabbitMQ wrapper (see the repo linked below).
var os = require('os');

function rabbitHandler(io) {
  rabbitMQHandler('amqp://test_rabbit', function (err, options) {

    if (err) {
      throw err;
    }

    options.onMessageReceived = onMessageReceived;

    io.on('connection', websocketConnect);

    function websocketConnect(socket) {

      console.log('New connection');
      io.emit('start', {ipHost: os.hostname()});

      socket.on('disconnect', socketDisconnect);
      socket.on('message', socketMessage);

      function socketDisconnect(e) {
        console.log('Disconnect ', e);
      }

      function socketMessage(text) {
        var message = {text: text, date: new Date(), ip: os.hostname()};
        // Instead of emitting the message on the socket directly,
        // push it to RabbitMQ so every instance receives it.
        options.emitMessage(message);
      }
    }

    // Called for every message fanned out from any backend instance.
    function onMessageReceived(message) {
      io.emit('message', message);
    }

  });
}

There's no change in the socket client whatsoever.

The whole project, with a Docker image and docker-compose files, is available at the following link. You can try it out there.

https://github.com/saqibahmed515/chat-scaling

You can use the Redis adapter (socket.io-redis) instead of RedisStore for Socket.IO 1.x.
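For reference, a minimal server setup with that adapter looks roughly like the following. This is a configuration sketch, not a runnable standalone script: it assumes `npm install socket.io socket.io-redis` and a Redis server reachable on localhost.

```javascript
var io = require('socket.io')(3000);
var redisAdapter = require('socket.io-redis');

// The adapter relays broadcasts and room events through Redis pub/sub,
// so io.to('someRoom').emit(...) reaches sockets on every server instance.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));
```

Unlike the old RedisStore, the adapter only forwards broadcast packets between processes rather than keeping all socket data in Redis, which is what makes it the recommended approach for 1.x.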
