Weekly product update: tips for configuring SSL termination on Cloud Load Balancers
We like to keep you up-to-date with what's happening at Mailgun each week, but sometimes the projects we're working on are all in the research and development phase and we don't have much to share. That happened this week, so nothing big to report. But that doesn't mean we haven't been busy. Since Mailgun customers are developers like us, we thought we would share what we learned doing a common maintenance task: adding SSL termination to the Cloud Load Balancers that handle our control panel and website traffic. We found some interesting things, especially around performance, that may be of use in your own application. If that's not your thing, don't worry, come back next week for more new features and improvements.
Adding SSL termination to Rackspace Cloud Load Balancers
A common requirement for SaaS providers (like Mailgun and many of our customers) is to keep track of sessions by IP address. When passing traffic through a load balancer, there are different ways to make sure the original IP address is preserved. In the past, we used an F5 load balancer that preserved the client's source address, so the $remote_addr variable we used in our nginx config (taken from the socket) held the client's IP. When we recently moved to Rackspace Cloud Load Balancers to serve traffic for our website and control panel, we noticed that Cloud Load Balancers provide the load balancer's IP for $remote_addr instead of the client's IP address. So we needed another way to preserve this information.
Cloud Load Balancers can add an X-Forwarded-For header with the client's IP, however this option doesn't work for SSL, which we use for our control panel. Why wouldn't it work? Simple. In order to add a header to a request sent over SSL, the load balancer first needs to decrypt it. By default, a load balancer is not equipped to do this, so the header is not added. Luckily, Cloud Load Balancers offer an SSL termination option that allows you to decrypt SSL traffic before passing it to the destination servers. This also has the advantage of offloading SSL processing from your application servers. To make it work you just need to enable SSL termination for your load balancer and provide it with an SSL private key and certificate.
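Once SSL termination is on and the load balancer adds X-Forwarded-For, nginx can restore the client IP via its realip module. A minimal sketch, assuming nginx is built with ngx_http_realip_module; the load balancer and backend addresses here are placeholders for your own:

```nginx
http {
    # Trust X-Forwarded-For only when the request arrives from the
    # load balancer (10.0.0.1 is a placeholder for its address).
    set_real_ip_from 10.0.0.1;
    real_ip_header   X-Forwarded-For;

    server {
        listen 80;

        location / {
            # $remote_addr now reflects the original client IP,
            # so IP-based session tracking keeps working.
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```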
This is actually a cool example of separation of concerns. Instead of scattering SSL-related configuration across your server config files, you put it in the load balancer and it just works.
An important thing to keep in mind is that after you enable SSL termination, all the HTTPS traffic coming through the load balancer reaches your servers as plain HTTP. It seems obvious, yet it's pretty common to forget about this little detail. We actually ran into this issue ourselves when testing on our staging environment. The solution is to use the X-Forwarded-Proto header, which the load balancer sets to "https" for HTTPS traffic. Here is a good example of how you do it in nginx.
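A sketch of what that nginx configuration might look like, checking X-Forwarded-Proto and redirecting anything that did not arrive over SSL (the backend address is a placeholder):

```nginx
server {
    listen 80;

    # The load balancer sets X-Forwarded-Proto to "https" for
    # traffic that arrived over SSL; redirect everything else.
    if ($http_x_forwarded_proto != "https") {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```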
A surprising performance result using SSL termination
Before we implemented the change, we wanted to make sure that performance didn't suffer when using SSL termination. To check for this, we ran some performance tests on our staging environment and got some unexpected results: load balancers with SSL termination actually performed better on average than load balancers without it.
To run the test, we performed a GET on three different URLs:
- GET /cp
- GET /cp/log
- GET /cp/domains
[Chart: Avg. performance (in milliseconds) without SSL termination]
[Chart: Avg. performance (in milliseconds) with SSL termination]
While there was one instance where performance with SSL was slower, on average performance was surprisingly better: average load time dropped from 3957 milliseconds to 3811 milliseconds, a decrease of 3.7%. Not bad, considering that SSL termination adds steps to the process of transferring a request from the client to the server and back.
That's it for this week. We hope that the next time you need to add SSL termination to your load balancers, you'll find this information useful. Till next week.