How to deploy multiple Node.js Socket.IO servers with Nginx and SSL

A late post this time. I spent a good part of the past week figuring out how to deploy one or more Socket.IO-based Node.js servers using Nginx. Since it was about deployment, SSL was an important factor too. I write this post because of the sheer amount of Googling and trial-and-error I had to go through before I finally had a working solution. The primary problems in finding a straightforward solution via search engines were:

1. SSL: especially since my application requires POST requests to be redirected as they are, which a simple rewrite command in Nginx wouldn't accomplish.

2. Socket servers: deploying a simple Node.js server is one thing; deploying a socket-based server is something different altogether.

3. Multiple socket servers: this was undoubtedly the trickiest part. Redirecting the incoming request to the appropriate server, and presenting it in a format that server understands, was the most difficult job (at least for me). I experimented with a lot of Nginx rewriting, Node.js namespacing, etc., before I finally found an answer that worked.

So, here's the procedure involved.

Step 1. Write your Node.js code (obviously)

For the sake of this tutorial, I will use two simple echo servers. Each is programmed to listen to its own port.

Server-1 (node1.js)


//PORT to connect to
const PORT = 3001;

//Instantiate socket server
var app = require('http').createServer().listen(PORT);
var io = require('socket.io').listen(app);

//Simple echo server
io.on('connection', function(socketconnection){
	socketconnection.send("Connected to Server-1");
	
	socketconnection.on('message', function(message){
		socketconnection.send(message);
	});
});

Server-2 (node2.js)


//PORT to connect to
const PORT = 3002;

//Instantiate socket server
var app = require('http').createServer().listen(PORT);
var io = require('socket.io').listen(app);

//Simple echo server
io.on('connection', function(socketconnection){
	socketconnection.send("Connected to Server-2");
	
	socketconnection.on('message', function(message){
		socketconnection.send(message);
	});
});
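
Before putting Nginx in front of these, you can sanity-check either server directly on its port. Here is a rough sketch of such a test, assuming the socket.io-client package is installed (the port shown is Server-1's):

//quick-test.js - connect straight to Server-1, bypassing Nginx
var io = require('socket.io-client');
var conn = io.connect('http://localhost:3001');

conn.on('message', function(message){
	//Should print "Connected to Server-1", then the echoed message
	console.log(message);
});
conn.emit('message', 'hello');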


Step 2. Install PM2 for running Node.js servers

There are many options for this; I use PM2 for its simple commands and nice-looking interface. PM2 also helps ‘watch’ the Node.js servers you deploy, so that they can be restarted in case of failure or code changes. It also offers many other options for the way you want your server to function, but I won’t go into those details here.

Install PM2 using

npm install pm2 -g

Once installed, you can start your servers (with watching enabled) as follows:

pm2 start --watch node1.js
pm2 start --watch node2.js

Depending on your setup, you might have to use sudo with the above commands. Click here if you want to know more PM2 commands.
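
If you have several servers to manage, PM2 can also read all of them from a single process file. Here is a rough sketch of such a file; the filename and options shown are just one way to set it up, and nothing later in this post depends on it:

// ecosystem.config.js - a minimal PM2 process file
module.exports = {
	apps: [
		{ name: 'node1', script: './node1.js', watch: true },
		{ name: 'node2', script: './node2.js', watch: true }
	]
};

You would then start everything with

pm2 start ecosystem.config.js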

Step 3. Install and start Nginx

Pretty straightforward.

sudo apt-get install nginx
sudo service nginx start

Step 4. Get your SSL certificates

You can either use unverified, self-signed certificates (good for development/testing) or buy certificates from a vendor like Comodo (essential for deployment).

To generate self-signed ones, do

sudo mkdir /etc/nginx/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt

This will put your certificates in /etc/nginx/ssl.

Step 5. Configure Nginx

Make a file called server_nginx.conf and put the following code in it:


#Upstream Node Server-1
upstream node1 {
	server 127.0.0.1:3001;
}

#Upstream Node Server-2
upstream node2 {
	server 127.0.0.1:3002;
}

#To redirect all HTTP traffic (keeping requests like POST intact)
#to HTTPS
server {
	listen 80;
	server_name localhost;

	location / {
		return 307 https://localhost$request_uri;
	}
}


#The actual HTTPS server
server {
	listen 443 ssl;
	server_name localhost;

	#SSL certificates
	ssl_certificate /etc/nginx/ssl/nginx.crt;
	ssl_certificate_key /etc/nginx/ssl/nginx.key;

	#For Server-1
	location /server1/ {
		#Configure proxy to pass data to upstream node1
		proxy_pass http://node1/socket.io/;
		#HTTP version 1.1 is needed for WebSocket upgrades
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection "upgrade";
	}

	#For Server-2
	location /server2/ {
		#Configure proxy to pass data to upstream node2
		proxy_pass http://node2/socket.io/;
		#HTTP version 1.1 is needed for WebSocket upgrades
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection "upgrade";
	}
}

You will now have to make Nginx aware of this configuration by symlinking this file into /etc/nginx/sites-enabled. To do that, run

sudo ln -s /path/to/server_nginx.conf /etc/nginx/sites-enabled/

Then restart Nginx

sudo service nginx restart
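
If Nginx complains or the new configuration doesn't seem to take effect, you can check the file for syntax errors with

sudo nginx -t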

That's about it. But before we go to the client code, a few points to note about the above Nginx .conf file:

1. You could use a rewrite directive for the HTTP-to-HTTPS redirect instead of return 307; I do it this way because a 307 redirect preserves non-GET requests (such as POSTs) as they are. If you know a better way to go about it, do let me know :-).

2. Defining the Node.js servers in upstream blocks is good practice, especially since it makes it easy to load-balance between identical Node.js servers in the future (see the sketch after this list). More on that here.

3. If you buy SSL certificates from a vendor, point the ssl_certificate and ssl_certificate_key directives at those files instead.

4. Look at the two proxy_pass directives. The trailing /socket.io/ (including the trailing slash) is essential, since it replaces the /server1/ or /server2/ prefix of the original URI and presents the request in a format that Socket.IO will understand.

5. Depending on your application, you might have to add more directives to the .conf file. For example, if your application involves long intervals where no reading takes place on the socket connection, you might have to increase the proxy read timeout (again, see the sketch after this list).

6. Of course, during actual deployment, you will have to replace localhost with your actual domain name.
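
To make points 2 and 5 concrete, here is a rough sketch of what a load-balanced upstream and a longer read timeout could look like. The second port and the timeout value are placeholders for illustration, not part of the setup above.

#Hypothetical: two identical copies of Server-1 behind one upstream.
#ip_hash keeps each client on the same backend, which matters for
#Socket.IO's long-polling transport.
upstream node1 {
	ip_hash;
	server 127.0.0.1:3001;
	server 127.0.0.1:3003;
}

#Inside the HTTPS server block:
location /server1/ {
	proxy_pass http://node1/socket.io/;
	proxy_http_version 1.1;
	proxy_set_header Upgrade $http_upgrade;
	proxy_set_header Connection "upgrade";
	#Placeholder value; raise it if your connections sit idle for long
	proxy_read_timeout 300s;
}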

Finally, here's some client code to connect to a server and send a message:


//Connect to Server-1 through the Nginx proxy
var io = require('socket.io-client');
var conn = io.connect('https://localhost', {path: '/server1'});

//Send a message and log whatever comes back
conn.emit('message', 'Some message');
conn.on('message', function (data){
	console.log(data);
});

Note the 'path' parameter that's passed during connection, and the explicit use of https in the address.
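
If you are testing against the self-signed certificate from Step 4, Node will normally refuse the connection because the certificate isn't signed by a trusted authority. Depending on your socket.io-client version, you may be able to switch verification off for testing only; roughly along these lines (treat this as a sketch, and never do it in production):

//For local testing with self-signed certificates only
var conn = io.connect('https://localhost', {
	path: '/server1',
	rejectUnauthorized: false
});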

That's it for this time :-D. Hope this helps someone, and saves them the trouble of searching the internet for a whole day trying to accomplish (what is really) a trivial thing. Thanks for reading!

17 thoughts on “How to deploy multiple Node.js Socket.IO servers with Nginx and SSL”

  1. Is it possible for a socket on Node server 1 to send a message to a socket on Node server 2? I mean, how does Node 1 identify that the socket it wants to send a message to is connected to Node 2?

    1. Both the Node servers are complete servers on their own, not just interfaces. So every client will EITHER talk only with Node-1 OR with Node-2, never both. My application doesn't require any communication between Node-1 and Node-2, since all the information they need is obtained from Redis. But if yours does, you can use Redis for the same: every time Node-1 wants to notify Node-2 of something, it would just publish to a channel that Node-2 is listening on.
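
      For example, here is a rough sketch of that publish/subscribe idea, using the node redis client's older callback-style API (the channel name is just an example):

      //On Node-2: listen for notifications from Node-1
      var redis = require('redis');
      var sub = redis.createClient();
      sub.subscribe('node1-events');
      sub.on('message', function(channel, message){
      	console.log('Node-1 says: ' + message);
      });

      //On Node-1: notify Node-2
      var pub = require('redis').createClient();
      pub.publish('node1-events', 'something happened');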

  2. Sachin, thank you for the detailed post. My setup is somewhat different from yours. I have a single upstream block load balancing the connection to three Node.js servers (running on different VMs).

    I am confused about this directive:

    proxy_pass http://node1/socket.io/;

    In my case, the single location block (/) deals with all the requests, not just ones from socket.io clients. What exactly does this directive do and, more to the point, what will its effect be on regular HTTPS requests?

    1. If you aren't using Socket.IO, then you don't need the '/socket.io/' part. The proxy_pass directive tells Nginx to direct all requests that come to '/server1/' to the Node-1 server. Since the Node server is running on localhost (and therefore has no direct connection to the internet), Nginx acts as a _proxy_ that relays all relevant traffic to and from the Node server, essentially acting as a communicator of sorts.
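
      For instance, if the Node server were a plain HTTP app rather than a Socket.IO one, the location block could simply be something like:

      location / {
      	proxy_pass http://node1;
      }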

      1. Great post!! I’m trying to do basically this exact setup, except I get an error when adding the proxy_pass directive.

        I’m actually already using proxy_pass to successfully redirect to my second node server… but I can’t seem to get a connection between my second server’s socket client and server. I get an error when changing proxy_pass.

  3. Hey thank you for this article.
    I wonder:
    I'm using an indefinite number of forked Node processes (depending on how many are needed).
    Each of them relies on Socket.IO just like your example. With your approach I could just configure the server_nginx.conf file for, let's say, 20 servers. But what if the count of my Node processes exceeds this number? Is there a way to do this more generically/dynamically?

    1. It might be very tricky to do what you suggest, at least the way you think of it. Every time you change the Nginx settings, you have to restart the server, so a dynamic setup with Nginx may not be a good idea. What you _could_ do, however, is define an upper bound on the number of Node servers in the Nginx conf file. For each one you would define a port number, as I have shown. You may not actually have a Node server listening on each of those defined ports. Keep a 'pool' of free ports in memory (say, in Redis). Whenever you fork a Node process, have it take a free port from Redis and start listening on it. Nginx will already be ready to send data its way. Does this make sense to you?
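
      As a rough sketch of that idea (again using the older callback-style node redis client; the key name is just an example), each forked process could start up like this:

      var redis = require('redis').createClient();
      //Take one port out of the pre-populated pool of free ports
      redis.lpop('free-ports', function(err, port){
      	var app = require('http').createServer().listen(parseInt(port, 10));
      	var io = require('socket.io').listen(app);
      	//...same echo logic as in node1.js...
      });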

      1. I thought maybe it's possible to achieve this via wildcards in the Nginx settings, but your solution seems okay (since I cannot launch a vast number of Node apps on my server anyway).
        I wouldn't need Redis, right? It would be sufficient to store the free-port information in the master Node process?

  4. Hey, sorry for the lame comment, but: I had to install Nginx with Homebrew because I am on a Mac trying to test this out. For some reason '/etc/nginx/sites-enabled' does not exist, and even if I create the directory, link the server_nginx.conf file, and run nginx, it does not pick up the new config file.

    Any thoughts? If not no worries I will keep pushing. Thanks so much for your post.

  5. Hi, thanks for the post. But my setup is somewhat different. I have two Node applications running on port 3000 on two different EC2 servers, with Nginx proxy-passing to port 3000 over HTTPS (and a 301 from HTTP to HTTPS as well). I also have an ELB in front of both of these servers. I've tried the AWS Application Load Balancer since it has built-in WebSocket support, and tried adding proxy protocol behaviour to the AWS ELB, but nothing has worked. I am not using any Redis and am facing the same problem. Any idea what I should do?
