“Follow the moon” architecture

Cloud Computing has gained a lot of momentum this year.  We're hearing about more new platforms all the time, and all the big players are working hard to carve out a chunk of this space.  Cloud computing originally promised us unlimited scalability at a lower cost than we could achieve ourselves, but I'm starting to see cloud technologies promoted as a "green" technology, too.

According to an article on ecoinsite.com, cloud vendors with worldwide networks could choose to steer traffic to data centers where it's dark (thus, cooler) to cut cooling costs.  Since this typically corresponds to off-peak electricity rates, researchers from MIT and Carnegie Mellon University believe that a strategy like this could cut energy costs by 40%.

Clearly, this is cause for great celebration, but how ready are our systems for "follow the moon" computing?

One of the tricky bits that crossed my mind was increased latency.  As important as processing speed is, latency can be even more important to a user's web experience.  Most of the "speed up your app" talks and articles I've seen in the last year or so stress the importance of moving static resource files to some sort of Content Delivery / Distribution Network (CDN).  In addition to offloading HTTP requests, CDNs improve your application's speed by caching copies of your static files all around the globe, so that wherever your users are, there's a copy nearby.

"Follow the moon" is going to take us in exactly the opposite direction (at least for dynamic content).  While we may still serve static content from a CDN, we're now going to locate our processing in an off-peak area to serve our peak-period users.

While this might not seem like a big problem (given that we routinely access web sites from around the globe right now), I believe the added latency is going to adversely affect most contemporary web architectures.

A quick, "back of the napkin" calculation gives a rough idea of the sort of latency we're talking about.  The circumference of the earth is around 25,000 miles, so the far side of the globe is about 12,500 miles away.  Given the speed of light, which is the fastest we could hope to communicate from here to there, we're looking at a one-way time of at least 12,500 / 186,000 = 0.067 seconds (about 67 ms each way), or roughly 134 ms for a round trip.
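The arithmetic above is easy to sketch out in a few lines.  This is strictly a best-case estimate: real signals travel through fiber at roughly two-thirds the speed of light, over routes that are far from straight.

```python
# Back-of-the-napkin latency floor for an antipodal round trip.
# Assumes signals travel at the vacuum speed of light in a straight
# line -- real networks are meaningfully slower than this.

EARTH_CIRCUMFERENCE_MI = 25_000
SPEED_OF_LIGHT_MI_PER_S = 186_000

one_way_mi = EARTH_CIRCUMFERENCE_MI / 2              # ~12,500 miles to the far side
one_way_ms = one_way_mi / SPEED_OF_LIGHT_MI_PER_S * 1000
round_trip_ms = 2 * one_way_ms

print(f"one way:    {one_way_ms:.0f} ms")    # ~67 ms
print(f"round trip: {round_trip_ms:.0f} ms")  # ~134 ms
```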

Taking a couple of quick, random measurements shows that we're not too far off.  I pinged http://www.auda.org.au/ in 233ms and http://bc.whirlpool.net.au/ in 248ms; the gap between those numbers and the theoretical minimum shows the additional overhead incurred in all the intermediate routers.

If your application is "chatty", you're going to notice this sort of delay.  The AJAX-style asynchronous UI favored in modern web apps will cushion the user a bit by not becoming totally unresponsive during these calls.  On the other hand, these UIs tend to generate a lot of HTTP requests as the various elements update and refresh, so I believe the overwhelming majority of UIs are going to show a significant slowdown.

Although increased latency means that you may have a hard time moving to "follow the moon" on some applications, there are steps you can take that will prepare your architecture so it's able to withstand these changes.

Partition your application to give yourself the greatest deployment flexibility.  If you can find the large chunks of work in your app and encapsulate them such that you can call them and go away until processing is done, then these partitions are excellent choices to be deployed to a "follow the moon" cloud.

Finally, when assembling components and partitions into an application, use messaging to tie the pieces together.  Messaging is well-supported on most cloud platforms, and its asynchronous nature minimizes the effect of network latency.  When you send a message off to be processed, you don't necessarily care that it's going to take an extra quarter of a second to get there and back, as long as you know when it's done.
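Here's a minimal sketch of that idea, using Python's standard-library queue as a stand-in for a real cloud messaging service (SQS, Azure queues, and the like).  The names and message shapes are illustrative, not from any particular platform, but the key property is the same: the sender enqueues work and moves on instead of blocking on a slow, far-away call.

```python
# Message-based decoupling: the sender fires off work and doesn't wait
# for the round trip; a worker (imagine it in a "follow the moon" data
# center) drains the queue and records results.

import queue
import threading

work_queue: "queue.Queue" = queue.Queue()
results = []

def worker():
    while True:
        message = work_queue.get()
        if message is None:          # sentinel: shut down
            break
        # Pretend this is the expensive, far-away processing step.
        results.append({"id": message["id"], "status": "done"})
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# Fire off work without waiting for any individual round trip.
for i in range(3):
    work_queue.put({"id": i, "payload": f"job-{i}"})

work_queue.join()                    # later: confirm everything finished
work_queue.put(None)
print(results)
```

The extra quarter-second per message disappears into the queue; the sender's responsiveness never depends on how far away the worker is.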

These changes will take some time to sink in, but we're going to see more and more cloud-influenced architectures in the coming years, so you'd do well to get ready.

The role of QA – it’s not just testing anymore

In most development organizations, Quality Assurance is treated as an under-appreciated necessity.  In organizations that develop software while not considering themselves "development organizations", it's quite possible that you won't even find a QA group.  QA, it seems, just can't get no respect.

Yet QA, if it's executed well, can give your organization the confidence to move boldly and quickly, because QA done right is all about controlling the fear of the unknown.

Too often, QA is viewed as a mechanical exercise.  It's all about writing test plans and clicking through applications.  But this view is short-sighted, and it misses the context that makes a great QA team a strategic partner in a development shop.  In order to really create an effective QA organization, I believe it's crucial to keep an eye on the big picture.  Software is inherently unreliable, and your job is to reduce uncertainty.

I know this is blasphemy, especially coming from a software developer.  And yes, there once was a time when software was entirely predictable and deterministic.  When I began programming, it could fairly be said that doing the same thing twice in a row would yield the same results.  This is no longer the case.  The incredible list of factors that contribute to variations in outcome ("it works on my machine") grows longer every day.  If you weren't already twitching, multi-core CPUs are working hard to make each trip through the execution pipeline a new and exciting journey (CPU performance is no longer exactly deterministic).

Testing, once the cornerstone of QA, is now only anecdotally interesting.  My only advice on testing is to try to grasp the sheer size of the total space of possible testing scenarios, so that you come to realize you can't possibly test every combination of function point, hardware, and software.  This will help you pick the scenarios you test strategically.  Automate where you can, and use your brightest testers to look for weaknesses in the product.
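To get a feel for the size of that space, just multiply out a modest set of test dimensions.  The dimensions and counts below are hypothetical, but the multiplicative blowup is the point:

```python
# Exhaustive testing grows multiplicatively with every dimension you
# add.  These counts are made up for illustration.

from math import prod

dimensions = {
    "function points": 50,
    "browsers": 5,
    "operating systems": 4,
    "locales": 10,
    "user roles": 6,
}

total_scenarios = prod(dimensions.values())
print(f"{total_scenarios:,} combinations")  # 60,000 -- for a small app
```

Add one more five-valued dimension and you're at 300,000 scenarios, which is why picking strategically beats trying to cover everything.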

Beyond traditional testing, true QA requires that you find ways to make the product better.  Obviously, you can write feature requests, but think bigger than that, too.  Build testability into the application so that your team is better equipped to reduce uncertainty.  Use active verification, logging crosschecks, and so on to provide tangible evidence that your system is really doing what you think it's supposed to do.
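As a sketch of what a logging crosscheck might look like: the function and checks below are illustrative, not from any particular framework.  The idea is that the system records what it did and then verifies its own records are consistent, rather than relying solely on external tests.

```python
# "Active verification": the code logs each unit of work and then
# crosschecks that everything it took in is accounted for on the way
# out, producing evidence (not just hope) that no work was lost.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")

def process_orders(orders):
    processed = []
    for order in orders:
        processed.append(order["id"])
        log.info("processed order %s", order["id"])
    # Crosscheck: every order in must appear in the output.
    assert len(processed) == len(orders), "order count mismatch -- lost work?"
    return processed

print(process_orders([{"id": 1}, {"id": 2}, {"id": 3}]))  # [1, 2, 3]
```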

Testing is passive, and it's becoming less effective all the time.  Quality Assurance is active, so get involved early and make sure that the testing you do actually means something.

Deborah Kurata blogging on OO basics

Deborah Kurata is well-known to us old-timers who used to read her excellent articles on OO techniques years ago.  She's been flying under the radar for a while, but she's recently started blogging (great news)!

Over the last couple weeks, Deborah has been doing something of a quick primer on OO for her readers, based on some of her earlier writings.  While I expect most readers here already consider themselves OO experts, I really encourage everyone to read through the series on her blog.

If you're new to OO, these will be a great way to improve your skills.  If you're experienced in OO, though, I'd like you to consider something else.

Notice the simplicity and elegance in Deborah's descriptions of objects.  Compare this to the big, framework-based, computer-generated, uber-designs we deal with every day.  There's a reason every developer prefers green-field development to brown-field development, and there's a reason we see a sensation like Ruby on Rails every few years.

The simplicity and beauty of starting over is liberating.  Starting a brand-new object hierarchy with only the "real" properties of those objects brings a clarity to our design that we rarely see in Enterprise applications.

I urge you to experience this feeling of simplicity from time to time, and remember what it feels like.  When you start looking at frameworks, tools, standards, and all the other trappings of grown-up development, consider how these things move us away from the simplicity of Deborah's definitions.  If you can implement frameworks and tools that stay out of your way and leave the purity of real objects visible to the developer, please give some thought to how that's going to help connect your developers to your customers (you know -- the folks who pay for you to build this stuff in the first place).

Now, go read those articles!

1&1 – Unlimited bandwidth hosting

Web host 1&1 just announced that they're lifting the bandwidth caps on all their hosting plans.  The previous caps were high enough that most customers never bumped into them, but anyone who was ever "slash-dotted" will appreciate this move.

As you may have noticed, I use 1&1 to host this site, and I also run a couple of club sites on them.  Although I saw some reliability problems with 1&1 a couple years ago, I have to say they've been pretty good since then.  I use mon.itor.us to watch uptime on all these sites, and I haven't seen any major issues in a long time.

If you need a host, include these guys in your eval list.
