Here’s a real treat. I’m working on yanking some old .NET 1.1 code forward to 2.0, including a bunch of DataSets, so I was grabbing these old 1.1 DataSet XSD files, pasting them into the new project directory, doing an “Add Existing” in the new project, and letting VS2005 re-generate all the “accessory” files that accompany the XSD. This wasn’t really exciting, but it was easy, and it was working really well until I got to one that just wouldn’t import cleanly.
The first thing I noticed was a build error:
Custom tool error: Failed to generate code. Failed to generate code. Exception has been thrown by the target of an invocation. Input string was not in a correct format. Exception has been thrown by the target of an invocation. Input string was not in a correct format.
Nice. Thanks for all the detail here, cuz’ that just makes the problem obvious, right? Not so much. The next thing I noticed was that I wasn’t getting a .cs file generated for this XSD – clearly a related problem, so maybe I could get a better error message out of the designer. So I right-clicked the XSD and hit “Run Custom Tool”, and got a wealth of new information: the same vague error all over again.
I hadn’t really gotten my hopes up, but still…
As I walked away muttering product management suggestions for Microsoft under my breath, I ran into a co-worker who suggested breaking this beast down and adding one part of the XSD at a time until it barfed. Of course! This ended up being somewhat time-consuming, but it did, in fact, point out the place where the XSD was broken.
The 1.1 schema had a flag that could be used to indicate the value to use when null values turned up. I therefore had a couple of lines in my XSD that looked like this:
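It was something along these lines (the element name and type here are made up for illustration; the `msdata:nullValue` annotation is the flag in question):

```xml
<!-- Element name and type are illustrative; msdata:nullValue="" is the problem -->
<xs:element name="OrderCount" type="xs:int" minOccurs="0"
            msdata:nullValue=""
            xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" />
```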
Note the value: an empty string, which is not a default one typically associates with ints. The 1.1 designer seemed perfectly happy with this, but the 2.0 designer wasn’t. I’m still not real thrilled with the punt-on-error approach exhibited by the 2.0 designer – that’s a great way to blow a bunch of time chasing down inane syntax problems.
But lucky for you, you’ll know just where to look, won’t you?
I’m working with a new computer and a new setup where I’m running all development tasks in a VM. It’s been pretty interesting so far – I’ll blog about my experience with this setup later. Right now, though, I’m getting ready to chuck the PC out the window because I can’t get VS2005 to work right.
When I sat down to work on this new machine, VS2005 was already installed – Great! But when I opened an existing solution, it told me it couldn’t load a web project in that solution. No big deal, right? I looked at the project, and sure enough, it was a Web Application Project. Since this is a project type that was made available after VS2005’s release, I figured the project type just wasn’t installed, so I went to find the installer. No sooner had I started that hunt than I saw that Web Application Projects were folded into VS2005 SP1, so I went and grabbed that. In the meantime, I double-checked that I didn’t have Web Application Projects installed by trying to create a new project in VS2005 — no Web Application Project was available.
I started the VS2005 SP1 install, and I was surprised to see that my machine was pretty sure that SP1 was already installed. Since I was also pretty sure I didn’t have the right stuff on my machine, I let SP1 reinstall, and then went and tried to load my solution again. Boom. Ditto on creating a new project. Things were starting to get old.
I spent the better part of a day trying to find the problem, and I finally stumbled on this blog entry from Eric Hammersley. Alas, there were all sorts of people in the same boat. I started working down the list of fixes, repeatedly getting my hopes up and then having them dashed. I mumbled a lot of not-so-nice things about VS2005 under my breath.
My fix turned out to be permissions-related. After I set the permissions for my Visual Studio install directory to give the Creator-Owner full permissions, then re-ran the SP1 setup, I got my Web Application Project support back – just like magic.
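In case you hit the same wall, the change amounted to something like this from a command prompt (the install path shown is the default and may differ on your machine; syntax from memory, so verify against `cacls /?` before running it):

```
cacls "C:\Program Files\Microsoft Visual Studio 8" /T /E /G "CREATOR OWNER":F
```

Then re-run the SP1 setup.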
From the looks of this episode, I’d be wary of running any Visual Studio install that you didn’t install yourself. Not exactly the behavior you’d like to see out of a company that’s trying to erase a reputation of security ambivalence, but it is what it is.
Like many people, I know I am quick to slam products when they don’t work well, but fail to give kudos when they do. Today, I’m going to throw a tally on the good side of that scorecard for LogMeIn. I’ve used LogMeIn for around a year now, and I’ve yet to have it let me down. Not too bad, considering where some other software vendors set the bar.
If you haven’t heard of it, LogMeIn is a remote access / remote control product – in fact, a suite of products, really. When I initially set up my account, there were basically just “Free” and “Premium” options, but they’ve added new products at a dizzying pace. They’re up to something like ten products now, including one I didn’t even know about until I looked today. Most of these products seem to be specialized derivatives of their core technology, which is a really nice approach to see – sort of the opposite of the “throw everything in one basket” approach that many products take. Despite the breadth of products, all of my experience has been with the basic free product.
First, the bad news. You have to run an installer on each PC you wish to control, and you need to install an ActiveX control in the browser of any PC you wish to use as a “viewer”. I’d love to see the viewer work without an install (some administrators won’t permit one, of course), but even so, the need to install these pieces has never actually gotten in my way.
There’s a lot more good news than bad news, though. Here are just a few of the things I’ve noticed (and liked) so far:
The “host” component prompts you when there’s an update available, but it keeps running even if you haven’t upgraded yet. This can really save you when you’re logging in to a “headless” PC remotely, since the first chance you have to see that the host wants to run an update is when you’ve remoted into the host. It would be a real drag to discover that a whole bunch of PCs just became unreachable because they were waiting to be upgraded.
I’ve never had any problems traversing firewalls, proxies, etc. File this under “it just works”, and I’m grateful for it.
It works on my Windows Mobile phone! I’m using Windows Mobile 6 on a T-Mobile Wing, and it works like a charm. I certainly wouldn’t want to work for any extended time this way, but it’s really, really great to be able to get to the one stupid little button on my desktop that I desperately need to press when I’m sitting in a coffee shop on the other side of town.
Screen scaling. This is one of those little usability features that escapes attention at first glance. Many remote control apps will let you view the host’s screen at exactly the resolution of the host, meaning that if you’re looking at a screen larger than your client’s window, you’re doing a lot of scrolling to see the whole screen. LogMeIn, though, scales your host’s screen to fit in your client’s window – no matter how large the window is. That means that if you’re watching a remote PC to see if a job’s done, or something like that, you can do so in a small window. The resolution suffers a little, of course, when you scrunch the screen down, but it’s really pretty remarkable how readable the screen stays as you resize it. This is a really nice touch.
Did I mention that it just works?
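That screen-scaling behavior, by the way, boils down to simple uniform scaling: pick the smaller of the width and height ratios, so the whole host screen fits without distortion. A quick sketch (Python here, and the names are mine; nothing LogMeIn-specific):

```python
def fit_to_window(host_w, host_h, win_w, win_h):
    """Scale the host screen uniformly so it fits the client window.

    Uses the smaller of the two ratios so the entire screen stays
    visible and the aspect ratio is preserved (no scrolling needed).
    """
    scale = min(win_w / host_w, win_h / host_h)
    return round(host_w * scale), round(host_h * scale)

# A 1600x1200 host shown in an 800x500 client window:
# height is the limiting dimension here.
print(fit_to_window(1600, 1200, 800, 500))  # -> (667, 500)
```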
I’ve tried quite a number of other remote-access products over the years, including VNC and Windows Remote Desktop (running over various tunnels and VPNs), and LogMeIn beats them all when it comes to reliability, and when it comes right down to it, that’s the #1 feature for remote access, isn’t it? If you haven’t tried LogMeIn yet, go give them a shot. You won’t be disappointed.
I recently wrapped up development of a new application using Rocky Lhotka’s CSLA Framework, and I really liked it a lot. I think there’s a fair bit of misinformation and confusion circulating about this software, so while it’s all fresh in my memory, I threw together some notes on the things that worked well, and the ones that didn’t work so well, too.
Bear in mind that this material is offered at a high level – if you’re interested in really learning the framework, I’d encourage you to buy the book, download the framework, and dive in – the water’s fine.
One of the obvious benefits of this framework is that Rocky’s written a book to document the framework (multiple, if you count C# and VB variants, plus updates for new releases). The book does a pretty good job of walking you through the construction of the framework, explaining “why” as well as “how”. You can certainly learn the framework without reading the book, but if you like to do your tech learning out of a book, this one is a great read.
The CSLA framework takes a very OO approach to design. Where a lot of code-gen or “factory” approaches lay the entire model open and trust developers to do the right thing, CSLA enables and, in fact, encourages defensive programming with respect to the object model. You get to actually start using scoping modifiers to protect the inner workings of your classes, which is really nice. In my opinion, this encourages a much more robust object model because you can lock down ctors and setters where it makes sense.
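C# is the natural language for this, where a private constructor plus a static factory method does the job directly, but the shape of the pattern translates: callers can’t construct objects in invalid states because all creation funnels through factory methods. Here’s a loose Python sketch of the idea (my own illustration, not CSLA’s actual API; Python lacks access modifiers, so a guard flag stands in for a private ctor):

```python
class Customer:
    """Factory-only creation, in the spirit of CSLA's private constructors."""

    _in_factory = False  # stands in for C#'s "private" on the ctor

    def __init__(self, name):
        if not Customer._in_factory:
            raise TypeError("Use Customer.new_customer(); "
                            "direct construction is not allowed")
        self.name = name
        self.is_new = True

    @classmethod
    def new_customer(cls, name):
        # The only sanctioned way to build a Customer.
        cls._in_factory = True
        try:
            return cls(name)
        finally:
            cls._in_factory = False
```

In real CSLA code you get the same effect for free from the language itself, which is exactly the point about scoping modifiers above.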
This is feasible because you’re using the same objects everywhere you need them, regardless of physical tier, and this is actually one of the places where people’s eyes start to glaze over. Rocky actually does a great job of explaining this in his book, but a lot of people see examples with code from multiple logical layers in the same class file, and conclude that CSLA precludes the use of multiple physical tiers.
In fact, the framework actually makes n-tier development much easier and more flexible because it abstracts the protocol across layers. I can take the same code and configure it to run all on one server (without incurring network traffic across tiers), or as a business layer and data layer connected via HTTP web services, WCF, or remoting. It really is just a config change to switch among these modes, which is pretty nice.
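In CSLA 2.x that switch lives in appSettings; it looks something like the following to select the remoting proxy (key names from memory, so double-check them against your CSLA version, and the URL is a placeholder):

```xml
<appSettings>
  <!-- Omit CslaDataPortalProxy entirely to run everything in-process -->
  <add key="CslaDataPortalProxy"
       value="Csla.DataPortalClient.RemotingProxy, Csla" />
  <add key="CslaDataPortalUrl"
       value="http://appserver/DataPortalHost/RemotingPortal.rem" />
</appSettings>
```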
Some of this flexibility comes from the use of reflection in some spots, and that can put people off, but IMO, the benefits far outweigh the costs unless you’re trying to trade stocks or land the space shuttle. Comparing two applications developed for the same client and deployed into the same environment, the second app (using CSLA) is noticeably snappier, mainly because we’ve got fewer layers of junk to sift everything through, and because we’ve got a lot more control over how we optimize data access.
Data Binding is usually listed as one of the benefits of CSLA, but I’d broaden this to include support for all the MS interfaces that you’d want to support if you were starting from scratch. It does authentication and authorization, for instance, using the same MS interfaces you’d expect to find in use elsewhere. It does validation using the standard interface, and so on. The effect of all this is that stuff just *works*, instead of being a constant struggle to go build something whenever you want to interface with something new.
The CSLA objects have excellent support for “state”, including IsDirty-type infrastructure so the objects know when they need to be saved and what sort of CRUD operation that might be. They’re also capable of supporting n-level undo, though I haven’t needed that.
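The essence of that bookkeeping fits in a few lines. This is my own stripped-down sketch, not CSLA’s actual BusinessBase, but it shows how IsNew/IsDirty state picks the right CRUD operation at save time:

```python
class BusinessBase:
    """Tracks enough state to choose the right CRUD operation on save."""

    def __init__(self):
        self._is_new = True    # never persisted -> needs an INSERT
        self._is_dirty = True  # unsaved changes pending

    def mark_dirty(self):
        """Call from property setters when data changes."""
        self._is_dirty = True

    def mark_old(self):
        """Call after a successful fetch or save."""
        self._is_new = False
        self._is_dirty = False

    def save_operation(self):
        if self._is_new:
            return "INSERT"
        return "UPDATE" if self._is_dirty else "NONE"
```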
Finally, the objects have good support for validation, including the ability to declaratively write business rules that drive “IsValid” and, in turn, whether the object can be saved. These rules can be reasonably sophisticated, since you’re writing them in code.
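The rule mechanism can be sketched the same way: register plain rule functions per property, and “valid” just means no broken rules. Again, this is an illustrative simplification of the concept, not the real framework’s API:

```python
class RuleManager:
    """Declarative-ish validation: rules are plain functions per property."""

    def __init__(self):
        self._rules = {}      # property name -> list of (rule, message)
        self.broken = set()   # messages for currently-broken rules

    def add_rule(self, prop, is_broken, message):
        """is_broken(value) returns True when the rule is violated."""
        self._rules.setdefault(prop, []).append((is_broken, message))

    def check(self, prop, value):
        """Re-evaluate a property's rules; typically called from a setter."""
        for is_broken, message in self._rules.get(prop, []):
            if is_broken(value):
                self.broken.add(message)
            else:
                self.broken.discard(message)

    @property
    def is_valid(self):
        return not self.broken
```

Because the rules are just code, they can be as sophisticated as you like, which matches the point above.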
As for downsides, I’ve seen a few. First, it seems like I was constantly fighting FUD from people who didn’t know anything about the framework. As I mentioned earlier, there are a lot of misconceptions about this framework – maybe because of its “legacy” roots. I saw very little real merit in these arguments, however – they just don’t hold water when measured against facts.
Second, I saw a couple of places where I got caught building an object based on one base class, and it turned out later that I probably should have been using a different base. At that point, you don’t get any additional help with refactoring beyond what already comes in the box. There are modeling or code-generation tools that can spew out CSLA code, and my experience here might have been different had I been using one of these. In any event, I was still way ahead using the framework.
Finally, I saw some places where validation didn’t cascade very elegantly down to complex children of parent objects. This isn’t the end of the world, since you can override anything by hand, but it would have been nice to get just a little more help from the framework on this one. At moments like these, it’s hard to resist the urge to just grab the CSLA source code and fix it. That’s an absolute last resort, of course, since it would make upgrades much more difficult.
In case you’re looking for more info on CSLA, I’ve pasted a few representative “community” opinions below – feel free to take a look at any or all of them. FWIW, Rocky tends to do a great job of explaining himself in forum posts, but I’ve yet to see him put together an effective “elevator speech” for CSLA, which is unfortunate.