Scaling static content

I read a post today on the Plenty of Fish blog about the Content Delivery Network (CDN) he uses for his static content. He posts graphs showing that he has exceeded 1.1 TB (139 Mbps) of traffic in a day, peaking at a fraction under 3,000 hits a second. All very impressive.

Anyway, it made me think about how we’ll scale our static content. I think that in the very near future it will move off our main server onto a dedicated server. I’ll start with the cheapest hardware and increase the specification as we need to.

I’m guessing that with a million members we would have about 30 GB of photos and something in the region of 600 GB a day of traffic. I’m pretty sure that we could serve this off a single server; one factor strongly in our favour is that a small number of photos would account for most of this traffic, so it would all be nicely cached in memory.
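A quick sanity check on those numbers (a minimal sketch; the 3x peak-to-average factor is my assumption, not a measured figure):

```python
# Back-of-envelope check of the bandwidth estimate above.
# Assumed inputs: 600 GB/day of photo traffic, and a peak
# factor of ~3x average for busy evenings (my guess).

GB = 10**9
SECONDS_PER_DAY = 24 * 60 * 60

daily_bytes = 600 * GB
avg_mbps = daily_bytes * 8 / SECONDS_PER_DAY / 10**6
peak_factor = 3  # assumed ratio of peak to average traffic
peak_mbps = avg_mbps * peak_factor

print(f"average: {avg_mbps:.0f} Mbps, assumed peak: {peak_mbps:.0f} Mbps")
# → average: 56 Mbps, assumed peak: 167 Mbps
```

At roughly 56 Mbps average the estimate fits within a 100 Mbps connection, though evening peaks would be worth keeping an eye on.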

ServePath sell a dedicated, unmetered 100 Mbps connection for $1500 per month and I think that the server would cost around $500 a month so this would all be very affordable.

I watched an excellent video on how YouTube scaled. For their static content they reported a big gain in moving from Apache to lighttpd to complement their CDN, but I think Fab Swingers is unlikely to be serving that volume of traffic, so Apache will do fine.

They also mentioned in passing that they switched from ext3 to ReiserFS because they had far too many files in a directory. I don’t want to make that switch, but I think I should restructure the photos directory; it presently has over 4,000 subdirectories, and as we scale up this is likely to get too big.
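One common way to restructure is a two-level hashed layout, so no single directory ever holds more than a tiny fraction of the files. A minimal sketch (the function name, bucket scheme, and hash choice are my assumptions, not our actual code):

```python
import hashlib
import os

def photo_path(root: str, photo_id: str) -> str:
    """Map a photo id to a two-level hashed directory.

    With 256 x 256 buckets, a million photos averages about
    15 files per leaf directory, so no directory grows huge.
    (Hypothetical scheme for illustration.)
    """
    digest = hashlib.md5(photo_id.encode()).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], photo_id + ".jpg")

print(photo_path("/var/photos", "123456"))
# → /var/photos/e1/0a/123456.jpg
```

Hashing the id rather than using the id directly keeps the buckets evenly filled even if ids are assigned sequentially.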

It was also good to hear that our architecture is very similar to YouTube’s (Linux, Apache [on the app servers], MySQL, Python), so I am pretty comfortable that we will cope.

I’m much more concerned about the app server side. Interestingly, he said that their aim was to complete every request in under 100 ms. We’re presently way over that, but I’ve got a few ideas for improvement.
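Before optimising, it helps to measure where the time goes. A minimal sketch of per-request timing (the decorator and the handler are hypothetical, not our actual code):

```python
import functools
import time

def timed(handler):
    """Log how long a request handler takes, in milliseconds."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return handler(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{handler.__name__}: {elapsed_ms:.1f} ms")
    return wrapper

@timed
def view_profile(member_id):  # hypothetical handler
    time.sleep(0.02)  # stand-in for real work (DB queries, templating)
    return f"profile {member_id}"

view_profile(42)
```

Logging every handler like this would make it obvious which pages blow the 100 ms budget and by how much.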
