Cloud Computing has gained a lot of momentum this year. We’re hearing about more new platforms all the time, and all the big players are working hard to carve out a chunk of this space. Cloud computing originally promised us unlimited scalability at a lower cost than we could achieve ourselves, but I’m starting to see cloud technologies promoted as a “green” technology, too.
According to an article on ecoinsite.com, cloud vendors with worldwide networks could choose to steer traffic to data centers where it’s dark (thus, cooler) to cut cooling costs. Since this typically corresponds to off-peak electricity rates, researchers from MIT and Carnegie Mellon University believe that a strategy like this could cut energy costs by 40%.
Clearly, this is cause for great celebration, but how ready are our systems for “follow the moon” computing?
One of the tricky bits that crossed my mind was increased latency. As important as processing speed is, latency can be even more important to a user’s web experience. Most of the “speed up your app” talks and articles I’ve seen in the last year or so stress the importance of moving static resource files to some sort of Content Delivery / Distribution Network (CDN). In addition to offloading HTTP requests, CDNs improve your application’s speed by caching copies of your static files all around the globe so that wherever your users are, there’s a copy of those files somewhere nearby.
“Follow the moon” is going to take us in exactly the opposite direction (at least for dynamic content). While we may still serve static content from a CDN, we’re now going to locate our processing in an off-peak area to serve our peak-period users.
While this might not seem like a big problem (given that we routinely access web sites from around the globe right now), I believe the added latency is going to adversely affect most contemporary web architectures.
A quick, “back of the napkin” calculation can give us a rough idea of the sort of latency we’re talking about. The circumference of the earth is around 25,000 miles, so the farthest apart two points on its surface can be is about 12,500 miles. Given the speed of light (186,000 miles per second), which is the fastest we could hope to communicate from here to there, we’re looking at a communication time of at least 12,500 / 186,000 ≈ 0.067 seconds (67 ms) each way, for a round trip of roughly 134 ms.
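The napkin math above can be sketched in a few lines, using the same rough figures (25,000-mile circumference, vacuum speed of light):

```python
# Back-of-the-napkin latency estimate: best-case one-way and round-trip
# times for a signal crossing half the globe at the speed of light.

EARTH_CIRCUMFERENCE_MILES = 25_000       # rough figure
SPEED_OF_LIGHT_MPS = 186_000             # miles per second, in a vacuum

one_way_miles = EARTH_CIRCUMFERENCE_MILES / 2            # 12,500 miles
one_way_ms = one_way_miles / SPEED_OF_LIGHT_MPS * 1000   # ~67 ms
round_trip_ms = 2 * one_way_ms                           # ~134 ms

print(f"one way: {one_way_ms:.0f} ms, round trip: {round_trip_ms:.0f} ms")
```

Keep in mind this is a hard physical floor: real traffic takes longer routes, hops through routers, and travels through fiber, where light moves more slowly than in a vacuum.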
Taking a couple of quick, random measurements shows that we’re not too far off. I pinged http://www.auda.org.au/ in 233ms and http://bc.whirlpool.net.au/ in 248ms; the gap between those numbers and the theoretical minimum shows the additional overhead incurred in all the intermediate routers.
If your application is “chatty”, you’re going to notice this sort of delay. The AJAX-style asynchronous UIs favored in modern web apps will insulate the user a bit, since the page doesn’t become totally unresponsive during these calls, but these UIs also tend to generate a lot of HTTP requests as the various UI elements update and refresh, and I believe that the overwhelming majority of them are going to show a significant slowdown.
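To see how fast this adds up, here’s a purely illustrative calculation: the request count and per-request server time below are made-up numbers, combined with the round-trip estimate from earlier.

```python
# Illustrative only: cumulative cost of sequential HTTP requests from a
# chatty UI. Request count and server time are assumed, not measured.

ROUND_TRIP_MS = 134    # best-case round trip from the napkin estimate
SERVER_TIME_MS = 20    # assumed per-request processing time
requests = 8           # assumed number of sequential UI refresh calls

total_ms = requests * (ROUND_TRIP_MS + SERVER_TIME_MS)
print(f"{requests} sequential calls: {total_ms} ms of wall-clock time")
```

Even with generous assumptions, a page that refreshes its widgets one request at a time spends over a second waiting on the wire alone.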
Although increased latency means that you may have a hard time moving some applications to “follow the moon”, there are steps you can take now to prepare your architecture to withstand these changes.
Partition your application to give yourself the greatest deployment flexibility. If you can find the large chunks of work in your app and encapsulate them such that you can call them and go away until processing is done, then these partitions are excellent choices to be deployed to a “follow the moon” cloud.
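One minimal sketch of that “call it and go away” shape — all names here are illustrative, not from any particular platform — is a submit/poll wrapper around a long-running chunk of work:

```python
import threading
import uuid

# A minimal "submit and come back later" wrapper around a long-running
# job, so the heavy processing can live in a far-away data center
# without the caller blocking on it. Names are illustrative.

_results = {}

def submit(job, *args):
    """Start the job in the background and return a ticket immediately."""
    ticket = str(uuid.uuid4())

    def run():
        _results[ticket] = job(*args)

    threading.Thread(target=run).start()
    return ticket

def is_done(ticket):
    return ticket in _results

def result(ticket):
    return _results[ticket]
```

A partition exposed behind an interface like this doesn’t care whether its worker runs in the next rack or on the night side of the planet; the caller only pays the latency cost when it eventually checks back for the result.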
Finally, when assembling components and partitions into an application, use messaging to tie the pieces together. Messaging is well-supported on most cloud platforms, and its asynchronous nature minimizes the effect of network latency. When you send a message off to be processed, you don’t necessarily care that it’s going to take an extra quarter of a second to get there and back, as long as you know when it’s done.
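The shape of that decoupling can be sketched with Python’s standard-library queues. This stands in for a durable cloud queue service (`queue.Queue` is in-process only), but it shows the key property: the sender enqueues work and moves on, and completions arrive whenever the worker finishes.

```python
import queue
import threading

# Tie partitions together with messaging: the sender enqueues work and
# moves on; a worker (possibly half a world away) drains the queue and
# posts completions to a reply queue. In-process stand-in for a real
# durable queue service.

work = queue.Queue()
done = queue.Queue()

def worker():
    while True:
        msg = work.get()
        if msg is None:          # sentinel: shut down
            break
        done.put(("processed", msg))

threading.Thread(target=worker, daemon=True).start()

# The sender doesn't wait for processing; it just enqueues and moves on.
for i in range(3):
    work.put(f"order-{i}")
work.put(None)

# Later, collect completions as they arrive.
results = [done.get() for _ in range(3)]
```

An extra quarter-second between `work.put` and the worker picking the message up changes nothing about this code; that insensitivity to latency is exactly what makes messaging a good fit for “follow the moon” deployments.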
These changes will take some time to sink in, but we’re going to see more and more cloud-influenced architectures in the coming years, so you’d do well to get ready.