Tilting the web — really?

Here's a gem for you, courtesy of Engadget:

Firefox 3.6 will support accelerometers, make the internet seasick (video).

Right.  I can't wait to write CSS for this.  Are you kidding me??

If you look at the picture of this browser, you can see that the frame of the browser window stays locked in place on the screen, and sure enough - as the screen is tilted, the rendering within the browser is tilted to compensate.

Can this really be considered a feature people want or need on their devices?  Here's the bottom line, Firefox:  You're running on a rectangular device, no matter how it's tilted.  There are only two reasonable layouts on a rectangular device:  portrait and landscape.  For everything in between, I'm going to go out on a limb here and stipulate that if someone is holding a device at an angle, it's probably because the viewer is at an angle, and thus wants to turn the device to match their own angle of view.

Let's also not forget that a screen rendered at an arbitrary angle is going to introduce all sorts of fun graphical artifacts as you re-render at a constantly-changing angle.  Remember that our fonts and graphics are optimized for a rectangular pixel grid, and anything other than a square-on orientation is going to require interpolated rendering of everything on the screen (which, this being a mobile device, is probably of lower resolution than our desktops).  Take a look at this image:

[Image: Engadget's mobile home page tilted on screen]

Ignore, if you're able, the weird optical illusion that makes the rest of your screen now look like it's tilted to the right.  This image, in fact, shows the mobile version of Engadget's home page tilted about fifteen degrees to the left.  Note that while this is certainly still readable, the fonts show significant pixelation -- they're blurry and jagged.

You can also see that the screen is cut off on four corners.  This is because the size of the canvas that will fit perfectly inscribed within the fixed physical screen changes as the angle of tilt changes.  I'm sure there's a geometric formula that can be used to figure this out, and it would be more or less complicated depending on whether you hold the aspect ratio of the canvas constant (which could work right up to the point where you switch from portrait to landscape).  In any event, the size is going to change, and that means that you either change the size of the HTML canvas, reflowing text, images, etc., or you do still more interpolation and rendering to account for the smaller canvas size you're painting.  Either way, I think the user loses.
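
As a rough illustration of that geometry (a back-of-the-napkin sketch only, not anything Firefox actually does), here's how the largest same-aspect canvas that still fits on a fixed screen shrinks as the tilt angle grows:

    // A rough sketch (not Firefox's actual algorithm): given a fixed screen of
    // screenW x screenH pixels, how big can a canvas with the same aspect ratio
    // be once it's rotated by tiltRadians and must still fit entirely on screen?
    function inscribedCanvasSize(
      screenW: number,
      screenH: number,
      tiltRadians: number
    ): { width: number; height: number } {
      const cos = Math.abs(Math.cos(tiltRadians));
      const sin = Math.abs(Math.sin(tiltRadians));
      const aspect = screenW / screenH; // hold the canvas aspect ratio constant

      // The axis-aligned bounding box of a w x h rectangle rotated by theta is
      // (w*cos + h*sin) x (w*sin + h*cos); both dimensions must fit the screen.
      const heightLimitedByWidth = screenW / (aspect * cos + sin);
      const heightLimitedByHeight = screenH / (aspect * sin + cos);
      const height = Math.min(heightLimitedByWidth, heightLimitedByHeight);

      return { width: aspect * height, height };
    }

    // At 15 degrees of tilt on a 320 x 480 screen:
    console.log(inscribedCanvasSize(320, 480, (15 * Math.PI) / 180));

For a 320 x 480 screen tilted fifteen degrees, that works out to roughly a 236 x 354 canvas -- nearly half the screen area gone -- and the number keeps changing as the angle does.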

Can you just picture the bugs that are going to come in once people start using this?

  • Your app blinks when I tilt the screen.
  • When I tilt my screen, I can see everything just fine, but when I tilt it back, the left side of the text is cut off.
  • When I tilt on Firefox, your app works fine, but when I tilt on Chrome, it doesn't work at all.
  • I got a migraine because of your app.
  • What's with the screen swimming around??

Can anyone, anywhere really come up with a scenario where I, as a user of a mobile device, am going to hold my device at an angle and hope that my software can figure out how to render a pixelated, ugly version of the app I'm using at an angle that no longer matches the angle of my screen, and (dare I add) keyboard?

On behalf of all web developers everywhere on Earth, Firefox, I implore you:

Stop the madness!!

The construction analogy

I did it again - just like so many others before me.

Last night, Rob Conery dropped a rant about an industry heavyweight who, Rob felt, was doing us all a giant disservice by suggesting that perhaps less could be more, process-wise.  So, of course, I chimed in, and I used the tired, old construction analogy -- you know -- building software is like constructing a house, right?

In my defense, the specific point I was trying to make was that the software industry isn't mature in the same way construction is.  Where civil engineering has building codes, for instance, software has no equivalent binding set of rules governing how it's built -- just a nebulous collection of "best practices".

Then, this morning, I read another post on the IxDA discussion board trying to explain how software design (specifically, functional specifications) is like the function an architect performs when someone builds a home.

Clearly, there are holes in that analogy, and I pointed out a couple in a comment on that site.  The complexity (in terms of options) in construction pales next to software development in all but trivial applications.  As I mentioned in my comment, a buyer could sit with an architect and describe parts of a home in pretty general terms, and be quite satisfied with the resulting home, because there already exists a rich vocabulary of assumptions about how homes are supposed to work.

Software interface design is automatically more complicated than designing physical buildings, in part because we’re emulating “real” things in software.  All the metaphors and affordances we design into our software introduce nearly unlimited complexity, such that any conversation about what a customer wants to see in software has to occur at a much finer level of detail.

Beyond specifying behavior, the process of construction proceeds quite differently in software, as well.  When building a home, you quickly come to understand that some aspects of construction are literally “set in stone” very early in the project, and more importantly, these things are intuitively understood by the customer.  If a homeowner wishes to change a foundation or the frame of a home, he’s not surprised to learn that this will be an expensive change; anyone can see how one is built upon another.

Building codes also help constrain choices (and therefore, costs) when building a home.  There’s simply no arguing about how studs are spaced, or the materials that are used for different parts of the home, because these are mandated by building codes.  In software, everything is negotiable, which implies that it must be specified.  Once again, the analogy breaks down.

As many times as I’ve seen the construction analogy, it’s clear that a lot of people want it to work.  Just remember that it only works to a point.

Cloud pricing: is Azure competitive?

Now that Azure's pricing model is starting to take shape, we're starting to see what Microsoft meant when they said Azure would be priced "competitively".  This week, Oakleaf Systems compared Azure to Amazon's cloud offerings, highlighting the costs for developers to get started on these platforms (Amazon Undercuts Windows Azure Table and SQL Azure Costs with Free SimpleDB Quotas).

I believe Microsoft may be able to get away with pricing at a slight premium to Amazon and GAE, in large part because SQL Azure's compatibility with existing SQL Server code makes it an absolute killer feature.  Nevertheless, Azure pricing has to be aggressive, or the value proposition for Azure starts to look weak.  I think it's especially important to make the platform available to developers at a free or near-free price point; otherwise, Microsoft just won't be able to build the mind-share and experienced developer base it needs to grow.  I'll continue to keep an eye on this space.

CSLA – free anti-obsolescence toolkit

Rocky Lhotka recently announced another CSLA upgrade.  One of the features in this release is support for data annotations that work in both WinForms and Silverlight.  Yesterday, Rocky blogged about this implementation (Leveraging data annotation attributes in CSLA .NET).

While this feature is pretty cool all by itself, it's easy to miss the real win here.  Looking back at the features Rocky's added to CSLA over the last few years, you see a constant stream of technology-enabling features.  If you've built to this framework (and stayed reasonably current with new releases), you've gotten a free pass to take advantage of everything from remoting to WCF to data binding to Silverlight.

Here's a news flash for you -- that's exactly why you get on board with a framework.  You don't get this with design patterns, and you don't get this with software factories.  Frankly, you don't get this with a lot of other frameworks, either.

Well done, Rocky, and thanks.

Self-Managing distributed applications

I recently read about the SELFMAN project on ReadWriteWeb (Coming Soon: Internet Apps that Heal Themselves).  This project, started in 2006, aims to standardize and package the infrastructure needed to manage large-scale distributed applications.  So far, they've produced some sample / reference applications using components written in Java.  I don't know if the bits themselves will find their way to your project any time soon, but if you're working on large-scale applications, you should definitely take a look at their approach.

As far as I can tell, much of the actual software that's been produced thus far deals with managing an application where you control all the pieces.  The techniques used by this software are interesting in and of themselves, but I'm really more interested to see whether any of these techniques can be applied more broadly to help federated systems cope with unreliable components.

Right now, we're in the early days of applied cloud computing, and we're seeing a fairly regular parade of large-scale, highly-publicized cloud failures from big-name players like Google, Amazon, Facebook, and Twitter.  As we deploy more applications into the cloud and construct these applications to integrate more seamlessly with all the other applications in the cloud, we start to introduce some really insidious dependencies.  How can we ensure that our application doesn't crash every time Twitter burps?

At present, this sort of resilience must be custom-crafted in each and every application, which is time-consuming and error-prone.  I'd love to see more thought put into standardized approaches like those used by SELFMAN so that enterprise resiliency can become the norm rather than the exception.
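
Until something like that exists, the defensive code tends to look like the sketch below: a minimal, hand-rolled circuit breaker (all names hypothetical and purely illustrative -- this is not SELFMAN's implementation) that stops hammering a failing dependency and degrades gracefully instead of crashing.

    // A minimal circuit breaker -- the kind of one-off resilience code each
    // application ends up hand-crafting for itself today. Illustrative only.
    class CircuitBreaker<T> {
      private failures = 0;
      private openedAt = 0;

      constructor(
        private maxFailures = 3,       // trip after this many consecutive failures
        private resetAfterMs = 30_000  // leave the breaker open this long
      ) {}

      async call(action: () => Promise<T>, fallback: () => T): Promise<T> {
        const open =
          this.failures >= this.maxFailures &&
          Date.now() - this.openedAt < this.resetAfterMs;
        if (open) return fallback();   // dependency looks down; don't even try

        try {
          const result = await action();
          this.failures = 0;           // success closes the breaker
          return result;
        } catch {
          this.failures++;
          this.openedAt = Date.now();
          return fallback();           // degrade gracefully instead of crashing
        }
      }
    }

    // Hypothetical stand-in for the real external call (a Twitter feed, say).
    async function fetchLatestTweets(): Promise<string[]> {
      throw new Error("service unavailable"); // simulate the dependency "burping"
    }

    const breaker = new CircuitBreaker<string[]>();
    breaker
      .call(() => fetchLatestTweets(), () => [])  // fallback: empty feed, not a crash
      .then((tweets) => console.log(`showing ${tweets.length} tweets`));

Multiply that by every external dependency in every application, and the appeal of a standardized, framework-level approach is obvious.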

“Follow the moon” architecture

Cloud Computing has gained a lot of momentum this year.  We're hearing about more new platforms all the time, and all the big players are working hard to carve out a chunk of this space.  Cloud computing originally promised us unlimited scalability at a lower cost than we could achieve ourselves, but I'm starting to see cloud technologies promoted as a "green" technology, too.

According to an article on ecoinsite.com, cloud vendors with worldwide networks could choose to steer traffic to data centers where it's dark (thus, cooler) to cut cooling costs.  Since this typically corresponds to off-peak electricity rates, researchers from MIT and Carnegie Mellon University believe that a strategy like this could cut energy costs by 40%.

Clearly, this is cause for great celebration, but how ready are our systems for "follow the moon" computing?

One of the tricky bits that crossed my mind was increased latency.  As important as processing speed is, latency can be even more important to a user's web experience.  Most of the "speed up your app" talks and articles I've seen in the last year or so stress the importance of moving static resource files to some sort of Content Delivery / Distribution Network (CDN).  In addition to offloading HTTP requests, CDNs improve your application's speed by caching copies of your static files all around the globe so that wherever your users are, there's a copy of each file somewhere nearby.

"Follow the moon" is going to take us in exactly the opposite direction (at least for dynamic content).  While we may still serve static content from a CDN, we're now going to locate our processing in an off-peak area to serve our peak-period users.

While this might not seem like a big problem (given that we routinely access web sites from around the globe right now), I believe the added latency is going to adversely affect most contemporary web architectures.

A quick, "back of the napkin" calculation can give us a rough idea of the sort of latency we're talking about.  The circumference of the earth is around 25,000 miles.  Given the speed of light, which is the fastest we could hope to communicate from here to there, we're looking at a communication time of at least 12,500 / 186,000 = 0.067 seconds (67 ms each way), for a round trip of 135ms.

Taking a couple quick, random measurements shows that we're not too far off.  I pinged http://www.auda.org.au/ in 233ms and http://bc.whirlpool.net.au/ in 248ms, which shows the additional overhead incurred in all the intermediate routers.

If your application is "chatty", you're going to notice this sort of delay.  The AJAX-style asynchronous UI favored in modern web apps will cushion the user a bit by not becoming totally unresponsive during these calls, but on the other hand, these UIs tend to generate a lot of HTTP requests as the various UI elements update and refresh, and I believe the overwhelming majority of UIs are going to show a significant slowdown.

Although increased latency means that you may have a hard time moving to "follow the moon" on some applications, there are steps you can take that will prepare your architecture so it's able to withstand these changes.

Partition your application to give yourself the greatest deployment flexibility.  If you can find the large chunks of work in your app and encapsulate them such that you can call them and go away until processing is done, then these partitions are excellent choices to be deployed to a "follow the moon" cloud.

Finally, when assembling components and partitions into an application, use messaging to tie the pieces together.  Messaging is well-supported on most cloud platforms, and its asynchronous nature minimizes the effect of network latency.  When you send a message off to be processed, you don't necessarily care that it's going to take an extra quarter of a second to get there and back, as long as you know when it's done.
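
As a sketch of what that can look like (the QueueClient interface below is a hypothetical stand-in for whatever queue or service bus your cloud platform actually exposes), the calling code hands the work off and reacts to a completion message later, so an extra hundred milliseconds of network latency never blocks anyone:

    // Illustrative only: enqueue work for wherever the cloud is running tonight,
    // and handle the result whenever it arrives.
    interface QueueClient {
      enqueue(queue: string, payload: unknown): Promise<string>; // returns a message id
      onCompleted(queue: string, handler: (id: string, result: unknown) => void): void;
    }

    async function submitOrderForProcessing(
      queue: QueueClient,
      order: { id: string; total: number }
    ): Promise<void> {
      // Nothing here blocks on the distant data center; we just hand off the work.
      const messageId = await queue.enqueue("orders-to-process", order);
      console.log(`order ${order.id} submitted as message ${messageId}`);
    }

    function wireUpCompletionHandler(queue: QueueClient): void {
      // The UI (or the next processing step) reacts when the work is done,
      // whether that took 50 ms or 500 ms.
      queue.onCompleted("orders-processed", (id, result) => {
        console.log(`message ${id} finished:`, result);
      });
    }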

These changes will take some time to sink in, but we're going to see more and more cloud-influenced architectures in the coming years, so you'd do well to get ready.

The role of QA – it’s not just testing anymore

In most development organizations, Quality Assurance is treated as an under-appreciated necessity.  In organizations that develop software while not considering themselves "development organizations", it's quite possible that you won't even find a QA group.  QA, it seems, just can't get no respect.

Yet QA, if it's executed well, can give your organization the confidence to move boldly and quickly without fear, because QA done right is all about controlling the fear of the unknown.

Too often, QA is viewed as a mechanical exercise.  It's all about writing test plans and clicking through applications.  But this view is short-sighted, and it misses the context that makes a great QA team a strategic partner in a development shop.  In order to really create an effective QA organization, I believe it's crucial to keep an eye on the big picture.  Software is inherently unreliable, and your job is to reduce uncertainty.

I know - this is blasphemy - especially coming from a software developer, and yes, there once was a time when software was entirely predictable and deterministic.  When I began programming, it could fairly be said that doing the same thing twice in a row would yield the same results.  This is no longer the case.  The incredible list of factors that contribute to variations in outcome ("it works on my machine") grows longer every day.  And if you weren't already twitching, multi-core CPUs are working hard to make each trip through the execution pipeline a new and exciting journey (CPU performance is no longer exactly deterministic).

Testing, once the cornerstone of QA, is now only anecdotally interesting.  My only advice on testing is to try to grasp the sheer size of the total space of possible testing scenarios, so that you come to realize you can't possibly test every combination of function point, hardware, and software.  This will help you focus on picking the scenarios you test strategically.  Automate where you can, and use your brightest testers to look for weaknesses in the product.
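
To put a number on that (a toy inventory with made-up counts, just to show the shape of the problem):

    // Made-up counts for a modest product -- the point is the multiplication.
    const functionPoints = 200;   // distinct features / code paths
    const browsers = 6;           // or OS versions, runtimes, drivers...
    const hardwareProfiles = 10;  // CPUs, core counts, memory, screen sizes
    const locales = 30;

    // Exercising every feature in every environment, once:
    const scenarios = functionPoints * browsers * hardwareProfiles * locales;
    console.log(scenarios); // 360,000 -- before considering feature interactions,
                            // data variations, or timing-dependent behavior.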

Beyond traditional testing, true QA requires that you find ways to make the product better.  Obviously, you can write feature requests, but think bigger than that, too. Build testability into the application so that your team is better equipped to reduce uncertainty.   Use active verification, logging crosschecks, and so on to provide tangible evidence that your system is really doing what you think it's supposed to do.
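
Here's one small, hypothetical example of what a logging crosscheck can look like (illustrative names, not from any particular product): after a batch runs, prove that every record is accounted for, and say so in the log.

    // Active verification: the log carries evidence, not just an absence of errors.
    interface BatchResult {
      processed: number;
      rejected: number;
    }

    function verifyBatch(
      inputCount: number,
      result: BatchResult,
      log: (msg: string) => void
    ): void {
      const accountedFor = result.processed + result.rejected;
      log(
        `batch crosscheck: in=${inputCount} processed=${result.processed} rejected=${result.rejected}`
      );
      if (accountedFor !== inputCount) {
        // Surface the discrepancy loudly rather than hoping someone notices later.
        log(`CROSSCHECK FAILED: ${inputCount - accountedFor} records unaccounted for`);
      }
    }

    // After a nightly import:
    verifyBatch(1_000, { processed: 990, rejected: 10 }, console.log);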

Testing is passive, and it's becoming less effective all the time.  Quality Assurance is active, so get involved early and make sure that the testing you do actually means something.

Deborah Kurata blogging on OO basics

Deborah Kurata is well-known to us old-timers who used to read her excellent articles on OO techniques years ago.  She's been flying under the radar for a while, but she's recently started blogging (great news)!

Over the last couple of weeks, Deborah has been doing something of a quick primer on OO for her readers, based on some of her earlier writings.  While I expect most readers here already consider themselves OO experts, I really encourage everyone to read through this series.

If you're new to OO, these will be a great way to improve your skills.  If you're experienced in OO, though, I'd like you to consider something else.

Notice the simplicity and elegance in Deborah's descriptions of objects.  Compare this to the big, framework-based, computer-generated, uber-designs we deal with every day.  There's a reason every developer prefers green-field development to brown-field development, and there's a reason we see a sensation like Ruby on Rails every few years.

The simplicity and beauty of starting over is liberating.  Starting a brand-new object hierarchy with only the "real" properties of those objects brings a clarity to our design that we rarely see in Enterprise applications.

I urge you to experience this feeling of simplicity from time to time, and remember what it feels like.  When you start looking at frameworks, tools, standards, and all the other trappings of grown-up development, consider how these things move us away from the simplicity of Deborah's definitions.  If you can implement frameworks and tools that stay out of your way and leave the purity of real objects visible to the developer, please give some thought to how that's going to help connect your developers to your customers (you know -- the folks who pay for you to build this stuff in the first place).

Now, go read those articles!

1&1 – Unlimited bandwidth hosting

Web host 1&1 just announced that they're lifting the bandwidth caps on all their hosting plans.  The previous caps were high enough that they rarely affected most customers, but anyone who was ever "slash-dotted" will appreciate this move.

As you may have noticed, I use 1&1 to host this site, and I also run a couple of club sites on them.  Although I saw some reliability problems with 1&1 a couple years ago, I have to say they've been pretty good since then.  I use mon.itor.us to watch uptime on all these sites, and I haven't seen any major issues in a long time.

If you need a host, include these guys in your eval list.
