Reboot needed – really??


It just happened to me again.  I came into work and was greeted by the cold, grey screen that told me my PC had rebooted.  Windows Update had struck again.

I've got a PC that hosts my development environment in a VM, so every time this happens, not only do I need to restart my VM (which takes for.ev.er...), but I then have to wait for the VM to install the same damned update and reboot itself (which also takes for.ev.er...).  This process alone typically eats a good hour or so.

Last night, though, I was running SQL scripts -- one in my VM, and another on a second PC in my cube that I use just for SQL Server.  These were long-running data migration scripts supporting testing and bug-fix verifications that really needed to be done ASAP.  Both machines were dispatched by Windows Update in the middle of the scripts, and both scripts had to be restarted.

This time, I'm losing an entire day because of Windows Update.

I've only got one question: Why the hell does Windows need to reboot every single time it installs any kind of update?

I've used Ubuntu for months on end, and I've seen it install all sorts of updates; rarely did one require a reboot.


In terms of operating system enhancements, there's nothing I've wanted this badly since plug & play.  I'm really trying to understand how the Windows product planners keep missing this.   I'm picturing the product planning meeting: all the great minds gathered around the conference room table, and a list of enhancements up on the board.  Somewhere up there, between WinFS and Internet Explorer 14 (the one that finally supports all the W3C standards), there's my bullet point:  Windows Updates without reboots.

"Nope.  Gonna have to pass on that this time.  We need another new 3-D task switcher, so 'Updates without reboots' is just going to have to push to the next release."

Really??

I don't have the foggiest idea how many engineers are on the Windows team, but it's difficult to imagine that there isn't a spare rocket scientist somewhere who could banish this problem to the scrap heap of forgotten PC memories, right there next to QEMM where it belongs.

Back in the '90s, I used to work with a guy who ran one of the early Linux distros, and he'd brag that his Linux box had been up for six months straight, or something like that.  That's fifteen years ago, folks.

Is it possible that Microsoft hasn't fixed this problem because, even after all these years of trying to get it right, Windows still needs to be rebooted every once in a while?

Wouldn't that be sad?


Table variables in SQL Server

Working in SQL requires a mind-shift from procedural thinking to set-based thinking.  Although you can get SQL to operate one record at a time, it's usually slower, and almost always painful, to force SQL to work like that.

One of the implications of set-based operations is that traditional single-value variables can't quite handle many processing needs -- you need a collection-like variable, and traditionally, that's meant a temporary ("temp") table.  Temp table implementations are pretty similar across database engines, but they're also slightly cumbersome.  In many cases, SQL Server's table variables are a better solution.

Although Microsoft introduced table variables with SQL Server 2000, I hadn't had occasion to try them out until recently.  Like cursors, temporary tables have usually been just enough of a pain that I avoided them until I really needed one.


-- create a traditional temp table
CREATE TABLE #mystuff
(
    id INT,
    stuff VARCHAR(50)
)

-- do stuff with your table

-- clean up when you're done
DROP TABLE #mystuff

The thing that always messed me up when using temp tables was the DROP TABLE at the end.  When developing, debugging, or working with ad-hoc SQL, I frequently missed the DROP, and the mistake surfaced shortly afterwards when I tried to CREATE a table that already existed.
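
One defensive idiom that helps here -- a general T-SQL pattern, shown against the example table above -- is to check tempdb for a leftover copy before creating the table:

-- drop the temp table if a previous run left it behind
IF OBJECT_ID('tempdb..#mystuff') IS NOT NULL
    DROP TABLE #mystuff

It's a band-aid, though; you still have to remember to write it.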

Table variables eliminate the need for the DROP, because they are reclaimed automatically as soon as they go out of scope:


-- note: DECLARE, not CREATE, for a table variable
DECLARE @mystuff TABLE
(
    id INT,
    stuff VARCHAR(50)
)

-- do stuff with your table

Other advantages: for small tables, table variables are more efficient than "real" temp tables, and unlike temp tables, they can be used in user-defined functions.  Like temp tables, they support constraints, identity columns, and defaults -- plenty of power to help you work through set-based problems.
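
To illustrate that last point, here's a minimal sketch of a multi-statement table-valued function whose return value is itself a table variable (the function name and the dbo.SourceStuff table are invented for the example):

-- hypothetical example: the function's return value, @results,
-- is a table variable populated inside the function body
CREATE FUNCTION dbo.GetStuff (@maxId INT)
RETURNS @results TABLE
(
    id INT,
    stuff VARCHAR(50)
)
AS
BEGIN
    INSERT INTO @results (id, stuff)
    SELECT id, stuff
    FROM dbo.SourceStuff   -- assumed source table
    WHERE id <= @maxId

    RETURN
END

Once created, it can be queried like any other table source: SELECT * FROM dbo.GetStuff(100).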

For more on table variables, see the following articles:

Table Variables In T-SQL

Should I use a #temp table or a @table variable?


New syncing options for WinMo phones


Microsoft and Google have each announced syncing tools for Windows Mobile phones recently, but based on what I'm seeing, I'm sticking with a service you've probably never heard of.

Microsoft announced "My Phone" last week, and today announced that it will be available for free.  At present, it's in limited beta, but I'd expect it to be unleashed on willing participants pretty soon.  The service looks to be pretty limited, though -- a great way to back up your phone, but not much beyond that.

Today, though, Google announced plans to support syncing of contacts and calendars to a bunch of phones, including Windows Mobile.  Woot!  A closer look at the fine print, though, confirms that this (like all new Google features) is a beta feature, and may not be completely baked yet.  Two-way sync, in particular, seems to be iffy for Windows Mobile devices.


So who needs these services, anyway? WinMo phones already sync to Outlook just fine, don't they?

Personally, I want something like this because I don't use Outlook anymore.

A few months ago, in a fit of Vista-inspired disgust, I loaded Ubuntu on my home desktop and vowed never to be stuck on a single desktop OS again.  Until then, Outlook had been the only way I could sync contacts and calendars to my Windows Mobile phone, so the switch kicked off a long quest for a wireless sync option.  I was amazed at how difficult this turned out to be, but I ended up with a service I've been pretty happy with.

Continue reading "New syncing options for WinMo phones"

Countdown to release, according to Microsoft


A while ago, I pointed out a Microsoft development team that was doing a great job of giving us glimpses inside the sausage factory.  In that instance, the Windows Home Server team was showing us how bugs are managed late in the release cycle.

Now, Microsoft is opening up again - on a larger scale this time.  A blogger on the Windows 7 team has written a really informative post on the process the Win 7 team expects to use to move from Beta to General Availability.

If you ship commercial software, you need to understand the terms used in this article -- and so do your developers and your boss.  These milestones are the dates that drive your release, and it's critical that everyone in your company shares an understanding of what they mean.

If you develop in-house software, you may not use these exact terms, but you should still understand the concepts.  The idea scales down, too: smaller projects will simply have fewer public milestones.

Once you've decided how you're going to define these milestones for your organization, keep track of projected and actual dates for a couple of releases.  Then, as you plan future releases, you've got some valuable data to help you: all things being equal, you'd expect to spend similar proportions of time in each stage of the release for similarly-sized features.  Use these ratios as a sanity check against your project plan; if your ratios are way off, you'd better be able to explain why.
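
As a sketch of what that tracking might look like (the table and column names here are invented), a simple query can turn a log of milestone dates into per-stage ratios:

-- hypothetical milestone log: one row per stage, per release
SELECT
    ReleaseName,
    StageName,
    DATEDIFF(DAY, StartDate, EndDate) AS StageDays,
    CAST(DATEDIFF(DAY, StartDate, EndDate) AS FLOAT)
        / SUM(DATEDIFF(DAY, StartDate, EndDate))
            OVER (PARTITION BY ReleaseName) AS StageRatio
FROM dbo.ReleaseMilestones
ORDER BY ReleaseName, StartDate

Compare StageRatio across a few releases, and you've got a baseline to sanity-check the next plan against.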

Note that if you change the meaning of these terms every time you use them, they instantly become useless for estimates and measurements, and soon after as a means of communication.  Pick a quantifiable definition you like, and stick to it!


What, exactly, is wrong with “Private Clouds”?


I recently saw an interesting post by Gordon Haff that claims that cloud computing concepts can't really be applied to enterprises smaller in scale than the Googles, Microsofts, and Amazons of the world.

Humbug, I say.  I certainly didn't have that impression when I learned about Azure.

I'll concede that there aren't many enterprises with the scale of Google or Microsoft, but there are quite a lot of enterprises that would see benefits from applying cloud computing concepts in their internal data centers.

You see, "infinite scalability" is only one of the benefits that cloud computing promises.  Most of the concepts we're seeing in cloud computing were already trends in big data centers before they were rebranded as cloud features.

Continue reading "What, exactly, is wrong with “Private Clouds”?"

Sysinternals Tools


A number of years ago, Microsoft purchased an excellent body of tools called Sysinternals.  They're now (officially) called Windows Sysinternals.

Every once in a while, I am reminded just how useful these tools are - today, it was Process Explorer.  My system was running dog-slow, and I wasn't really sure why.  I tried using Task Manager, but I didn't really understand what I was seeing.  There was an instance of setup.exe taking up CPU time, but I hadn't launched any installs.

So I grabbed Process Explorer (I keep it on a USB stick) and fired it up, and its process tree showed me immediately that the mystery setup.exe belonged to Windows Update, which was upgrading SQL Server Express.  I still had a slow system, but now I knew why.

If these tools aren't already on your go-to list, you owe it to yourself to check them out and keep them with you at all times.  You'll be glad you did.


Here’s one I got right


I just happened to stumble across a post I wrote two years ago, almost to the day:

Ford: Demise of an industry

Normally, I gain some satisfaction from having been right way ahead of the pack.  Not so much in this case, though.

In this case, I look back and I see an industry's incredible collective capacity to see a problem coming, look it in the eye, and keep pouring the coals to the boilers.

Since I wrote that post two years ago, the auto makers have hit the skids in a big way (though, paradoxically, Ford is faring the best of the US automakers), and we've also seen our banking industry implode.  It turns out that "too big to fail" is also too big to steer, doesn't it?


Is your network ready for cloud computing?


One of the most impactful things I saw at CodeMash wasn't on the schedule.  I'd dropped in to check out Jeff Blankenburg's presentation on Azure & Windows Live Mesh, but it was clear that the demo gods weren't about to smile on Jeff that day.

The CodeMashers had sucked up the hotel's WiFi bandwidth like it was free beer, and Jeff's demo was sucking wind.  Now, truth be told, I'd seen a similar demo under more favorable conditions, and the demo rocks.  I'm not trying to indict the demo, or even the technology.

What I do question, though, is whether we're collectively thinking about all the architectural implications of the cloud computing movement.

There's a giddy excitement about cloud computing, because scalability in the cloud is super-simple and all but free.  You just twist a knob, and you've got more capacity.  Really -- that's what all the marketing folks tell us, isn't it?

But most of us won't build enterprises entirely in the cloud.  Most of us (or our clients) have systems running on internal networks, and we're going to move portions of those systems to the cloud, or expand into the cloud for new applications.  That means integration.  Not only will we drive more traffic through our Internet gateways because we're accessing applications running on the 'Net; we'll also need to integrate our cloud apps with our enterprise, and our enterprise apps with the cloud -- again, all passing through the Internet gateway, and all using relatively fat HTTP requests.

Is your network ready for this extra stress?


I've seen enterprises operating with chronically poor network performance.  I've seen internal apps running slowly, and I've seen the same apps slow to a crawl when accessed from the Internet.  I've seen the baffled looks on IT managers' faces when they weren't sure how to attack the problem.  Networking problems are scary, because identifying and fixing them takes a completely different skill set than the one we've developed as application developers.

Do you have a network engineer that you trust as completely as you trust your DBAs?

Consider what you're going to do the next time your office experiences a "slow internet day".  Now you're no longer talking about slow Google searches and choppy webcasts -- you've got enterprise applications that aren't working.  Is your enterprise ready to handle that call?


Death of the Walled Garden

I was listening to NPR's Marketplace last night, and I heard them talking about a settlement that had just been reached in New York.  Under the settlement, UnitedHealth Group will shut down a proprietary database run by its Ingenix subsidiary and spend $50 million to start a new, open database in its place.

The database in question is a gigantic repository of claims and payments, and Ingenix made a tidy living over the years by selling subsets of this data to insurers for really large sums of money.  The data is crucial to insurers, and Ingenix was the only game in town -- the barriers to entry were simply too prohibitive for anyone else to break into the business.

So what's the problem?

The problem is that insurers made many of the decisions about how they were going to reimburse for services based on this database, and they were the only ones who could check the numbers.  They were able to mine and manipulate the data to their advantage, and providers and insureds had no choice but to trust that the insurance companies were treating them fairly.

Thus, a lawsuit was born.

But what does this have to do with software?  The Ingenix database was one of the biggest remaining walled data gardens out there, and now it's history.  One by one, businesses that existed only because of proprietary information are going the way of the dodo.

This is a really encouraging development for the health care industry - what's the next walled garden to fall?


CodeMash notes


I'm heading to CodeMash tomorrow, and I hope to come back with all sorts of great news, notes, and ideas.  I'll be scribbling notes throughout the day on my Adesso tablet, and I'll sync them with Evernote to a CodeMash public folder.  You can see all of my scribblings there as I sync - apologies in advance for my chicken scratchings.  As I have time to organize my thoughts, I'll try to get some blog posts out of my trip, too.