Apple is really pushing its luck

Apple has been the undisputed darling of electronics marketing since the introduction of the iPod.  Everything they touch turns to gold, and they've built the mystique of the Apple brand into a legendary golden goose.

But it's been a tough week for Apple.

People have been grumbling about the arbitrary and seemingly random approval process for apps on the iPhone, but the applesauce really hit the fan this week when Apple revoked apps that work with Google Voice:

There's No App For That

This has set off a small firestorm among developers.

Is this the beginning of the end for Apple?  Probably not.  This isn't the first time Apple has made some consumers mad, but there are only so many times they can pull stunts like this before it starts to catch up with them.

Backwards compatibility can kill you

"Release early, release often."  This is the Web 2.0 mantra, and it's also a major guiding principle behind agile development proceses.

In product development, conventional wisdom has it that first-to-market or first-mover advantage is hugely important.  But for software products, this can kill you by painting your product into a corner from which it never recovers.  Just about every software product is burdened with backward compatibility issues, and for many products, compatibility is paramount: customers create files with these products, integrate them into their environments, and come to depend on the software acting the way it does.

With its very first release, then, a software company can find itself wearing an anchor.  Let's look at some examples.

Microsoft Windows

Windows is the definitive example of a product that's become hamstrung by backward compatibility.  Compatibility was one of the things that killed Vista.  It turns out that you can only slap so many coats of paint over a dry-rotted wall before you need to rebuild the frame.  Even security (another noted weakness) would be better if Microsoft had really had the option to start over.

In the meantime, Apple was able to come out with a fresh operating system, and they're now cleaning Microsoft's clock in high-end systems.

Smartphones

Let's look at another example: smartphones. Palm and BlackBerry were the first big players in this market, and Windows Mobile came along after that. Since those early days, all three of these OSes have remained essentially the same, and that's now making them look very, very dated. The iPhone and, to a lesser degree, Google's Android are eating their lunch.

Palm's situation had gotten so dire, in fact, that many predicted their demise. Out of that desperation, though, Palm launched the Pre, which (surprise) is based on a brand-new OS. This isn't a coincidence: Palm simply wouldn't have been able to get to the Pre by migrating Palm OS incrementally, taking care not to ruffle any feathers along the way.

Lessons Learned

These disruptive changes are risky. Again, it's not surprising that it took a near-collapse on Palm's part to force them to roll the big dice on the Pre. This was a bet-the-company move, and Palm couldn't find the stones to make it until they had little to lose by failing. That's not a knock on Palm; on the contrary, it's an acknowledgment of how truly bold it is when a company innovates like this while it's still on top.

So what's the lesson here?

Simple. When you make platform decisions, understand that you're going to be living with them for a while. Your designs and decisions have the potential to stick around and haunt you for a very long time. But when you do determine that it's time for a change, you need to be able to cut the chains and move on.

It's never too late to learn. Microsoft is demonstrating that they've learned this lesson (to some extent) with Windows 7. They've just announced that they're going to include a virtualized, licensed copy of Windows XP so that users can run old software in that old OS. Not only will this make customers happy (because they can keep their old apps), it also sets the stage for Microsoft to finally kick some of that old compatibility code to the curb. Good riddance.

What's the albatross around your neck? What's it going to take for you to finally get rid of it?

My development fabric is unraveled

I'm a few days into working with the Azure July CTP, using Steve Marx's excellent PDC presentation as a bit of a primer.  I've been following along with his demos, and everything was working just fine for a while.  I had a working Azure app with an MVC front-end, and I was reading and writing images as blobs from the local development storage pools.

Then, the next thing I knew, I was busted:

Role instances did not start within the time allowed.

I shut down the Development Fabric, as instructed, and even restarted my machine, to no avail whatsoever.  I started Googling this error and found a handful of other people who've seen it, too.  I even found a bug logged on Microsoft's Connect site, but no solution has presented itself.  There's an event (3006: Parser error) logged every time I try to start the app, but that's the sum total of the clues I've got to go on.

So far, I've tried tearing out all the stuff I've added to the project since it was last working (which didn't help at all), I've tried blowing away and recreating the storage pools (no joy), and I've tried creating a brand-new Azure project (which worked).  Thus, I'm forced to conclude that something caused this particular project to be irreparably hosed, but I've still got no idea what caused the problem.

This could really slow me down...

Azure isn’t supposed to do this!

I'm doing some research on Azure (finally playing with the bits), and in the process, spending some time cruising Steve Marx's blog.  Steve presented some Azure material at PDC earlier this year, and there's a lot of great content on his blog if you're working with Azure for the first time.  One of the things you'll learn if you watch his PDC sessions is that his blog is itself built on Azure and running in the cloud, so it should be a showcase for all of Azure's scalability claims.

Thus, it was with great surprise that I clicked a link and saw this Azure app apparently taking a dirt nap:

This webpage is not available.

The webpage at http://blog.smarx.com/?ct=1!8!c21hcng-/1!48!MjUyMTc1NjA1Nzk4NDI2NzA1MiBhdC1sYXN0LS1zcGFtLQ-- might be temporarily down or it may have moved permanently to a new web address.

I refreshed a couple of times, to no avail, and finally tried going back to the home page, which worked fine.  I'm still not sure exactly what went wrong, but it would appear that the god-awful token that was used to track my navigation got lost in the cloud somewhere.

The lesson here?  You're still going to need to build your production Azure apps defensively and make sure that customer-facing hiccups are handled in a user-friendly fashion.  As a user, I don't know (or care) whether this error was an Azure failure or a failure of the app that's hosted on Azure.  This isn't a dreadful error when I'm browsing a blog, but it could have been if I'd been paying bills or making an online purchase.
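
To put a little code behind that advice, here's a minimal Python sketch of what I mean by "defensively."  The fetch_with_fallback helper, the retry counts, and the friendly message are all my own invention (nothing Azure-specific); the shape is the point: retry the transient stuff, then degrade gracefully instead of surfacing a raw error page.

```python
import time
import requests

# Hypothetical friendly message -- in a real app this would be a proper error view.
FRIENDLY_MESSAGE = "We're having trouble loading this page. Please try again in a moment."

def fetch_with_fallback(url, attempts=3, backoff_seconds=1.0):
    """Call a cloud-hosted endpoint defensively: retry transient failures,
    then fall back to a user-friendly message instead of a raw error."""
    for attempt in range(1, attempts + 1):
        try:
            response = requests.get(url, timeout=5)
            response.raise_for_status()
            return response.text
        except requests.RequestException:
            if attempt == attempts:
                # Out of retries: show the user something graceful,
                # and log the details for the operations team instead.
                return FRIENDLY_MESSAGE
            time.sleep(backoff_seconds * attempt)  # simple linear backoff

# Usage: fetch_with_fallback("http://blog.smarx.com/")
```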

Great scalability blog

I just found a site called highscalability.com via feedly.  There's some great stuff in here, including the article I read first: Stack Overflow Architecture.  If you're building big apps or apps that you hope will be big someday, you're going to want to add this blog to your reader.

Revenge of the Private Cloud

Back in February, amidst the news about Microsoft's new Azure platform, I asked why the concept of Private Clouds seemed to be either dismissed or ignored as a viable Enterprise strategy (see also "PDC Reactions" and "Could Azure be self-hosted?").

Yesterday, I learned that there's hope, after all.

I happened to be in the office for one of our internal MS development "user group" meetings, and we were lucky enough to have Brian Prince stop in to talk to us about Azure.  The talk was great - he demo'ed a super-simple sample app that really helped make some of the Azure concepts real, but one of the things that caught my attention was only mentioned in passing: Microsoft is bringing Azure technology to private data centers.

Point #1 -- boot from VHD.  This is a cool little novelty item in Windows 7, but it turns out that its origin is really the Azure team.  This is how they spin up instances for you, and it was a cool enough idea that the Windows team built it into Win 7.

Point #2 -- Yes, Microsoft is going to power private clouds.  Mark my words: this is going to be big.  In fact, why don't you leave yourself a reminder to come back here in about five years so I can say, "I told you so."  This is absolutely one of those technical announcements that's completely underwhelming at introduction because it's so disruptive that people don't know how to deal with it.

When you read about MS's Private Cloud products, watch for the bit about chargebacks.  You'll probably miss this the first time you read it.  Again, this line-item doesn't mean too much until you consider that this technology is coming from Azure.  You know -- Azure -- the platform where you can rent capacity and Microsoft sends you a bill at the end of the month.  How'd you like to be able to do that for your internal customers?  I thought so.

So what's an architect to do if this technology won't be fully appreciated for a number of years?

Simple.  Learn the basics of the technology and start architecting for the cloud even if your app and the cloud aren't quite ready for one another.  Specifically,

  • Look at how cloud apps scale.  The way storage and processing scale in the cloud is different from what you're used to.  Figure out what that means, and design to be compatible with it.
  • Revisit your storage assumptions.  Azure is going to support SQL Server, but it's clear that SQL Server isn't the 100% brain-dead automatic end-all-be-all storage choice that it once was.
  • Get comfortable with messaging and queuing.  This is how cloud components are connected (see the sketch after this list).
  • Get comfortable with threads.  This should be on your list anyway, because of the rise of multi-core processors, but if you understand thread management, you'll be more comfortable with processing threads distributed across the cloud.
  • Think about chargebacks.  If you're suddenly able to track processing and storage utilization with great ease and accuracy, what does that mean for your shared applications?  How would you split up costs for services that are shared across departments?
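
To make the messaging and chargeback points a little more concrete, here's a minimal sketch using nothing but Python's standard queue and threading modules.  The department names and the per-message "usage" counter are made up for illustration -- this isn't the Azure queue or billing API, just the shape of the pattern:

```python
import queue
import threading
from collections import defaultdict

# Decouple producers from workers with a queue, and meter usage per
# department so the work can be charged back later. The names here
# (department, payload) are illustrative, not any real API.
work_queue = queue.Queue()
usage_by_department = defaultdict(int)
usage_lock = threading.Lock()

def worker():
    while True:
        item = work_queue.get()
        if item is None:              # sentinel: shut this worker down
            work_queue.task_done()
            break
        department, payload = item
        # ... do the actual processing of `payload` here ...
        with usage_lock:
            usage_by_department[department] += 1   # crude unit of chargeback
        work_queue.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for n in range(10):
    work_queue.put(("marketing" if n % 2 else "finance", f"job-{n}"))

work_queue.join()
for _ in threads:
    work_queue.put(None)   # stop the workers
for t in threads:
    t.join()

print(dict(usage_by_department))   # e.g. {'finance': 5, 'marketing': 5}
```

The point is the decoupling: producers don't know or care which worker handles a message, and because every unit of work passes through one place, metering it for chargebacks comes nearly for free.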

There are some really powerful concepts here that are just starting to emerge, and now is the time to start wrapping your brain around them.

What can your enterprise do with private clouds?

How long??

I love these dialog boxes:

[Screenshot: a progress dialog whose time-remaining estimate might as well say "forever"]

How hard, exactly, would it be to figure out that a process that had been running at a rate of "x" has suddenly slowed to an entirely unreasonable pace -- so slow, in fact, that it has probably stopped, and will most likely either fail outright or eventually resume at a reasonable speed?

Apparently, it's pretty difficult.
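
For what it's worth, a sane estimate doesn't take much: smooth the observed rate and admit when nothing has moved for a while.  Here's a rough Python sketch of the idea -- the class, its thresholds, and its wording are all hypothetical, not anything Windows actually does:

```python
import time

class TransferEstimator:
    """Rough sketch of a sane 'time remaining' estimate: smooth the observed
    rate, and admit the estimate is unreliable when progress appears stalled."""

    def __init__(self, total_bytes, smoothing=0.3, stall_seconds=10):
        self.total_bytes = total_bytes
        self.smoothing = smoothing
        self.stall_seconds = stall_seconds
        self.rate = None                  # smoothed bytes per second
        self.last_bytes = 0
        self.last_progress_time = time.monotonic()

    def update(self, bytes_done):
        now = time.monotonic()
        elapsed = now - self.last_progress_time
        if bytes_done > self.last_bytes and elapsed > 0:
            instant_rate = (bytes_done - self.last_bytes) / elapsed
            # Exponential smoothing keeps one slow sample from swinging the estimate.
            self.rate = (instant_rate if self.rate is None
                         else self.smoothing * instant_rate + (1 - self.smoothing) * self.rate)
            self.last_bytes = bytes_done
            self.last_progress_time = now

    def remaining_text(self, bytes_done):
        # If nothing has moved for a while, say so instead of quoting years.
        if time.monotonic() - self.last_progress_time > self.stall_seconds:
            return "Transfer appears to be stalled"
        if not self.rate:
            return "Estimating..."
        seconds = (self.total_bytes - bytes_done) / self.rate
        return f"About {int(seconds // 60)} min {int(seconds % 60)} sec remaining"
```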

Add Lightness

I recently read a review of a new Lotus (the car, not the spreadsheet), and the reviewer recalled the defining design tenet of Colin Chapman, who explained that his goal was always to "add lightness".

This isn't as straightforward as one might imagine in car design -- we don't easily give up all the conveniences we've come to expect over the years (radios, antilock brakes, windshields, etc.), and government regulations insist that our cars meet certain safety standards.

These are certainly good things for us and our cars, but they've resulted in substantial bloat over the years.  It's simply easier to make a car safer by adding more steel, for instance.  And when performance suffers, it's far easier to bore out the motor or add a couple more cylinders than it is to find and trim excess weight.

A typical sports car has a power-to-weight ratio of 1:10 or better (that is, 10 lbs. per horsepower).  At that ratio, adding 10 hp is roughly equivalent to dropping 100 lbs.  Which do you think is easier?  How about if we make it 50 hp or 500 lbs?  There's simply no way to remove 500 lbs from a modern car without making some compromises and/or incurring extra costs by employing exotic materials.
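
The arithmetic, for a hypothetical 3,000 lb, 300 hp car (numbers chosen only because they sit right at that 10 lbs-per-hp mark):

```python
weight_lb, power_hp = 3000, 300              # hypothetical car at exactly 10 lb/hp

baseline    = weight_lb / power_hp            # 10.00 lb per hp
more_power  = weight_lb / (power_hp + 10)     # ~9.68 lb per hp
less_weight = (weight_lb - 100) / power_hp    # ~9.67 lb per hp

print(baseline, more_power, less_weight)      # the last two are nearly identical
```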

So why bother "adding lightness"?  Why not just bulk up with a bigger motor?  Part of the answer is the feedback loop you start.  A bigger motor will itself weigh more, so part of its additional power is offset by its own weight.  You'll need bigger brakes to slow this extra weight down.  Bigger brakes will call for larger wheels to clear the discs.  Maybe a bulkier frame to accommodate the power.  Beefier suspension pieces to deal with the extra weight.  Pretty soon you've eaten up a fair bit of the power you just added, and the weight of your car is up dramatically.

The rest of the answer, though, can only be appreciated by driving a light car.  Rather than rely on huge tires to grip the road and a huge engine to move you, a light sports car is quick because it's light.  A Lotus is more nimble, tossable, and agile than its competitors in large part because it doesn't carry as much mass.  There's less inertia to overcome when changing directions, and the driver feels much more connected to the road.

There's got to be a software lesson here, doesn't there?

Indeed, there is.  In our world, we add lightness by adding simplicity.  Just as we don't want to live without antilock brakes, nobody really wants to go back to running DOS or green-screen applications, but where are the places where we've accrued bulk unintentionally?

Do you remember your reaction the first time you saw Ruby on Rails, or maybe even MS Access?  A simple approach to software can make us question how we ended up with all the bulk we see in many of our applications.  How many lines of code exist simply because "that's the way we always do it"?  If you start simpler, can you do without some of the layers we take for granted?

How do you add lightness to your software?

Data Smells

Developers are familiar with "code smells" -- the little signs you see upon superficial examination of code that lead you to fear deeper pathological problems.  Over time, many developers become pretty good at spotting these signs, and volumes have been written about how to address these problems once they're detected.

But code smells aren't the only signs that problems are lurking in your system.  Most systems with even moderately complex data models can hide all sorts of problems in their data.

A good system, of course, will be coded defensively, such that it can tolerate, or maybe even fix, bad data.  This is feasible and practical in small to mid-sized systems, but it becomes increasingly difficult as systems grow larger and more complicated.  In all but trivially small applications, bad data is a very real problem.

Like bad code, bad data is sometimes bad in very subtle ways.  Database constraints can (and should) be used to prevent obvious problems with things like unique IDs, foreign-key references, required values, and so on.  This is a minimal requirement, but it won't help you deal with data that violates complex business rules (for example, an order with a status of "placed" must have an associated invoice).
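
To make that distinction concrete, here's a small, self-contained sketch using SQLite from Python.  The orders/invoices schema is entirely hypothetical; the constraints catch the obvious stuff, while the business rule ("a placed order must have an invoice") needs its own query because it spans more than one table:

```python
import sqlite3

# Illustrative schema only -- the table and column names are made up.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        status   TEXT NOT NULL CHECK (status IN ('draft', 'placed', 'shipped'))
    );
    CREATE TABLE invoices (
        invoice_id INTEGER PRIMARY KEY,
        order_id   INTEGER NOT NULL REFERENCES orders(order_id)
    );
""")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, 'placed'), (2, 'placed'), (3, 'draft')])
db.execute("INSERT INTO invoices VALUES (100, 1)")

# Find placed orders that violate the business rule: no invoice attached.
violations = db.execute("""
    SELECT o.order_id
    FROM orders o
    LEFT JOIN invoices i ON i.order_id = o.order_id
    WHERE o.status = 'placed' AND i.invoice_id IS NULL
""").fetchall()

print(violations)   # [(2,)] -- order 2 is 'placed' but has no invoice
```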

Typically, you'll find examples of data rule violations when you're diagnosing errors, or maybe when you're doing reporting or data analysis.  When an instance of bad data is discovered, you've really got only two ways to deal with the problem: fix the code, or fix the data.

Fixing the code is often our first reaction, since we're (generally) more comfortable working in code than in data.  We'll go to the source of the error and change the code to tolerate this particular class of bad data, but it's important to ask ourselves whether this is truly a fix for the problem:

  • Just because we fix the code in one place, how can we know we won't blow up somewhere else because of the same bad data?
  • If there's really a business rule governing this data, how are we helping by tolerating violations to those rules?
  • Are the business rules governing this data known at all (I know this sounds silly, but it's going to be a valid question more often than you might think)?
  • How did the bad data get into the system to begin with (is there a real bug upstream that's allowing bad data to be created)?

So in some cases, at least part of the solution is to fix the bad data.  Again, there are some questions you should ask of your system before you dive headlong into SQL:

  • Are there other instances of this data corruption?  How many?  (A quick way to answer this is sketched after this list.)
  • What are the circumstances of the problem?  Is there a way to predict the scope or context of the problem?  Perhaps the context can lead you to the source of the data corruption.
  • Can the data be fixed at all?  Sometimes, the damage is irreversible, and repairs can be quite difficult.
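
Continuing the hypothetical orders/invoices example, here's one way to answer the first two questions before touching anything: count the violations and bucket them by some piece of context (a made-up created_on date here) to see whether the corruption clusters around a particular release or import job.

```python
import sqlite3

# Hypothetical, self-contained example of scoping the damage before fixing it.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders   (order_id INTEGER PRIMARY KEY, status TEXT NOT NULL,
                           created_on TEXT NOT NULL);
    CREATE TABLE invoices (invoice_id INTEGER PRIMARY KEY, order_id INTEGER NOT NULL);
""")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, 'placed', '2009-06-02'), (2, 'placed', '2009-07-14'),
                (3, 'placed', '2009-07-20'), (4, 'draft',  '2009-07-21')])
db.execute("INSERT INTO invoices VALUES (100, 1)")

# Count rule violations and group them by month to see how widespread they are.
rows = db.execute("""
    SELECT strftime('%Y-%m', o.created_on) AS month, COUNT(*) AS bad_orders
    FROM orders o
    LEFT JOIN invoices i ON i.order_id = o.order_id
    WHERE o.status = 'placed' AND i.invoice_id IS NULL
    GROUP BY month ORDER BY month
""").fetchall()

print(rows)   # [('2009-07', 2)] -- both bad rows showed up in July
```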

As you may have gathered by now, the sooner these issues can be nipped in the bud, the better off you'll be.  I'll cover some strategies to help you with this in a future post.
