I recently had a client ask about website hosting that could sustain a large amount of hypothetical traffic. The details were unclear, so I didn’t have much to work with – but it got me thinking:
How much traffic could my hosting setup sustain?
I’ve always wondered about the answer to that question myself. While I don’t consider myself much of a dev-ops person, I still have to deal with questions like these. So I set out to find a possible answer.
My Current Setup
Let’s talk about my current site (archive.jplhomer.org). I’m using DigitalOcean for hosting (they’re fantastic, by the way). I’m running their smallest-sized droplet:
- 512MB RAM
- 1 CPU
- 20GB SSD storage
- Ubuntu 14.04
It works just fine for my needs – I don’t get much traffic on my blog currently. I’m also running two other sites on this droplet, and they don’t have any problems performing.
While the specs on this droplet are fine for at least a few sites, what would happen if one of these sites were to get hit with a bunch of traffic?
Testing Digital Ocean’s Waters: 512MB Droplet
I stumbled upon this cool service called Loader.io. They offer load testing for websites and applications by sending a large number of requests to a site over a given period of time.
First, I signed up for Loader.io and entered this domain (archive.jplhomer.org). Note: This was kind of risky, as a failure would mean my site would be inaccessible. Low stakes, though.
The first thing it will ask you to do is verify that you own the domain (so you don’t go around trying to crash other people’s servers). I chose to verify using the DNS method, since I use CloudFlare for my DNS and they have near-instantaneous TTL (more on CloudFlare in a bit).
This was easy enough to add to CloudFlare using their DNS Settings area:
Once I added the TXT record to CloudFlare, I waited a couple of seconds and then hit “Verify” back on Loader.io.
Now, it was time to run my first test. Loader.io lets you run up to 10,000 connections (on the free plan). I set the test to hit my site over the course of one minute.
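To get a feel for what that means on the server side, here’s a quick back-of-the-envelope calculation (plain Python, nothing Loader.io-specific):

```python
# 10,000 clients spread evenly over a one-minute test.
clients = 10_000
duration_s = 60

avg_rate = clients / duration_s
print(f"~{avg_rate:.0f} requests/second on average")  # ~167 requests/second on average
```

Even as an average, that’s far more sustained load than a small VPS typically sees serving uncached dynamic pages.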
How did it go? Not well.
As shown in the screenshot above, the test lasted a mere eight seconds before halting due to errors. After about six seconds, Loader.io had sent 82 clients to my site successfully before it started failing (for another 230 clients).
Granted, that is a lot of traffic – especially for my blog – but DigitalOcean’s smallest droplet is definitely not cut out for this kind of traffic.
Let Me Upgrade You: 4GB Droplet
At work, we’re hosting quite a few clients on another one of DigitalOcean’s droplets. I would call it “medium-sized”:
- 4GB RAM
- 2 CPUs
- 60GB SSD storage
- Ubuntu 14.04
Clearly some upgrades in the RAM department here. So as not to accidentally take down my company’s production server, I decided to spin up my own droplet of that size.
Side note: How awesome is it that I can spin up a decent-sized virtual server to play around with for an hour and only pay 6 cents?
I decided to boot this droplet with a version of WordPress pre-installed.
And off I went! Within a minute, I was able to SSH into the droplet, complete the WordPress installation, and set up a hostname for the droplet at load.archive.jplhomer.org.
I decided to not install any third-party caching plugins at this point, because I was truly curious what 10,000 requests would do to a WordPress install (with fresh MySQL queries each time).
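If you want to reproduce a miniature version of this kind of test locally, here’s a toy harness using only the Python standard library. The local stub server stands in for the droplet – it’s nothing like Loader.io’s infrastructure, just an illustration of firing concurrent requests and counting successes:

```python
import http.server
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# A stand-in "origin" server; a real test would target the droplet's URL instead.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html>hello</html>")

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def hit(_):
    """Make one GET request; report whether it succeeded."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

# 200 requests from 20 concurrent workers.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(hit, range(200)))

server.shutdown()
print(f"{sum(results)}/{len(results)} requests succeeded")
```

Against a trivial local handler every request succeeds; against a real WordPress install doing fresh MySQL queries per page load, the failure count is the interesting number.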
I set up a new hostname and test in Loader.io for this new instance and gave it a whirl.
Whomp, whomp. While it lasted a little longer than my puny 512MB droplet, the test still failed after about 12 seconds. This time, 467 clients were successful while 557 timed out.
This makes sense, right? Loading a WordPress homepage involves a handful of database queries, and that number of queries multiplied by the total number of clients adds up quickly for a standalone database server.
Caching to the Rescue
Luckily, it’s considered good practice to implement caching on your production websites. Using WordPress as a specific example, there are a number of popular third-party caching plugins to try out.
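The payoff of page caching is easy to see in miniature. This sketch (plain Python – not how WP Super Cache is actually implemented) counts how many simulated database hits it takes to serve 1,000 page views with and without a cache:

```python
from functools import lru_cache

# Pretend each page render costs one database round trip.
DB_QUERIES = {"count": 0}

def render_homepage_uncached():
    DB_QUERIES["count"] += 1  # one "query" per render
    return "<html>...posts...</html>"

@lru_cache(maxsize=None)
def render_homepage_cached():
    return render_homepage_uncached()

for _ in range(1000):
    render_homepage_uncached()
print(DB_QUERIES["count"])  # 1000 queries without caching

DB_QUERIES["count"] = 0
for _ in range(1000):
    render_homepage_cached()
print(DB_QUERIES["count"])  # 1 query with caching
```

A thousand visitors cost a thousand renders uncached, but only one render cached – which is exactly why the next test went so differently.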
I went back to my dummy WordPress site and installed the plugin that is featured on the Install screen: WP Super Cache. After a few clicks, I had the plugin installed and caching the live site.
Did it help?
Hey-oh! It sure did. I finally had my first complete test.
Let’s look at some of the statistics. At its slowest, my server responded in 5,449ms. Super slow. But at its quickest, my server responded in 11ms. That’s darn fast.
So, the average latency for a 4GB droplet running WordPress with WP Super Cache enabled: 51ms
What About Wordfence?
I typically don’t use WP Super Cache, because I find its user interface confusing and annoying. I recently found a cool plugin called Wordfence Security (its UI isn’t anything to write home about, either), which does a bunch of stuff – caching is one of those things.
I installed Wordfence and enabled Basic caching (their advanced caching was apparently incompatible with my server).
After re-running the same test on Loader.io, here’s what I got:
It succeeded, and with a slightly lower average response time.
So, the average latency for a 4GB droplet running WordPress with Wordfence Basic caching enabled: 12ms
CloudFlare Is My Friend
I mentioned CloudFlare earlier in this post and for good reason: they’re pretty great. CloudFlare offers protection against DDoS attacks for sites around the globe. Along with that, they offer what’s pretty close to a Content Delivery Network (CDN). For free.
This means I can plug in any number of my sites to CloudFlare and have them handle the traffic coming to my site (and serve up static assets like images, scripts, plain HTML files, etc). Since WordPress is a dynamic CMS, CloudFlare won’t serve a static version of my entire site from their global network, but they do a good job of serving most of the heavier parts.
So, I was curious: Would having CloudFlare enabled for this dummy WordPress site help in a high-traffic situation?
When Loader.io hits a site homepage, for example, it’s not loading all of the assets required for that page – just the HTML from that homepage. This means any regular user visiting the homepage is also going to hit my server with additional requests for however many assets they need to load (images, scripts, stylesheets).
This is where I could get into real trouble if I didn’t use some sort of CDN.
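To put a rough number on it (all figures here are illustrative, not measured): if a page pulls in a couple dozen assets, the origin’s load multiplies fast unless a CDN absorbs the static requests.

```python
# Illustrative figures only – the asset count and visitor rate are invented.
assets_per_page = 24                         # images, scripts, stylesheets
requests_per_visitor = 1 + assets_per_page   # the HTML page plus its assets
visitors_per_minute = 800

origin_requests_no_cdn = visitors_per_minute * requests_per_visitor
origin_requests_with_cdn = visitors_per_minute * 1  # CDN serves the static assets

print(origin_requests_no_cdn)    # 20000
print(origin_requests_with_cdn)  # 800
```

Under those made-up numbers, a CDN turns 20,000 origin requests per minute into 800 – the difference between the HTML alone and the HTML plus everything it references.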
First, I ran a larger test – without CloudFlare in front. My latency was pretty awful, and it looks like I had quite a few timeouts (1,472). I also tried visiting the site in my browser during the test, and I was greeted with this message:
Clearly, a 4GB droplet can’t handle ~20,000 requests in a one-minute span.
But I wanted to try it again – this time, with CloudFlare enabled. I re-ran the same test:
Preparing for Heavy Traffic
Knowing what I know now, I think I can make the following assumptions:
- A WordPress site on a 512MB droplet from DigitalOcean would never survive heavy traffic
- A WordPress site on a 4GB “medium-sized” droplet from DigitalOcean might survive heavy traffic if caching were enabled
- You must use a CDN to serve your static assets if you’re planning to get a lot of traffic
I hope that continuing to use a medium-sized droplet along with caching and CloudFlare will do the trick for my sites that are expecting traffic.
You know what, though? Until heavy traffic actually arrives (which is not often for me), I won’t know what will work.
What other things could I do to help my website take on heavy traffic? A couple options:
- Set up a load balancer (ELB in the Amazon ecosystem, or my own using HAProxy on DigitalOcean)
- Run multiple, mirrored application servers behind the load balancer
- Set up a dedicated database server running MySQL
- Rely on someone else to host my site
- Hire someone smarter than me to host it
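For the first two options, a minimal HAProxy sketch would look something like this – the backend names and private IPs are hypothetical placeholders, not a tested production config:

```
# /etc/haproxy/haproxy.cfg (fragment)
frontend www
    bind *:80
    default_backend wordpress

backend wordpress
    balance roundrobin
    # Hypothetical private IPs of two mirrored application droplets
    server app1 10.0.0.11:80 check
    server app2 10.0.0.12:80 check
```

The `check` keyword makes HAProxy health-check each backend, so a dead app server stops receiving traffic.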
Of course, those things cost money (in hardware or people costs).
But as with many things in the web development world: I’ll try something until it fails, and then I’ll find another way.