I'm a fan of simple hooks to summarize complex ideas, and one of my favorite go-to's is simple, memorable, and versatile:

expected == actual
Not much to look at by itself, but I've gotten a lot of mileage out of it in some useful contexts. You probably recognized the most obvious application immediately: testing / QA. We're well-trained in applying expected == actual in testing, and most bug-reporting contexts define a bug as a scenario in which actual behavior doesn't match expected behavior.
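To ground that in code, here's a minimal sketch of the equation at work in a unit test; apply_discount is a hypothetical function standing in for whatever you're actually testing:

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical function under test: reduce a price by a percentage.
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_discount(self):
        expected = 90.00                     # what the requirement says should happen
        actual = apply_discount(100.00, 10)  # what the code actually does
        self.assertEqual(expected, actual)   # a bug is any case where these diverge

if __name__ == "__main__":
    unittest.main()
```

Every assertion you've ever written is this equation in miniature; the interesting work is in deciding where "expected" comes from.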
Even in this simplest application, all the value is in how we exercise the framework, and I promise that if you really work this equation, you'll have a better handle on bugs immediately. Of the two sides, "expected" usually seems easier, but it's almost always more difficult, because most of our expectations are unspoken. If I had a dime for every bug report I've seen where "actual" was well-documented but "expected" existed only in the mind of the reporter, I'd be writing this from a beach in Hawaii.
Of course, that's where the exercise begins to get interesting. "What were you expecting?" is a great place to start. If you can trace expectations to real, documented requirements, it's a short conversation (perhaps followed by another conversation about why that scenario was missed in automated testing). It's not unheard of, though, for these bugs to occur in scenarios that weren't explicitly called out in requirements. If that's the case, be sure to examine "expected" to see whether it's reasonable for most users to expect the same thing under those conditions, which is a great segue to my next favorite application of expected == actual.
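One habit that helps here: when a bug surfaces a previously unspoken expectation, write it down as a test tied to the requirement, so "expected" stops living only in someone's head. A quick sketch, where sort_results and REQ-142 are hypothetical stand-ins:

```python
import unittest

def sort_results(results):
    # Hypothetical function under test; Python's sort is stable, so ties
    # keep their original relative order.
    return sorted(results, key=lambda r: r["score"], reverse=True)

class SortResultsTests(unittest.TestCase):
    def test_ties_preserve_original_order(self):
        """REQ-142 (hypothetical): equal scores keep their input order."""
        results = [{"id": "a", "score": 5}, {"id": "b", "score": 5}]
        actual = [r["id"] for r in sort_results(results)]
        # The expectation now lives in the test, not just in the reporter's mind.
        self.assertEqual(["a", "b"], actual, "REQ-142: ties must preserve input order")

if __name__ == "__main__":
    unittest.main()
```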
I'm of the opinion that expected == actual is also a pretty good foundation for understanding usability. I touched on this recently in a post about API usability, and I believe it holds true generally. In that post, I referenced Steve Krug's Don't Make Me Think, which is both an excellent book and another great oversimplification of UX. Both are built on the premise that whenever an application does what a user expects it to do, the user doesn't have to think about it, and they consider it usable.
Mountains have been written about how to accomplish this, of course, and I'm not going to improve on the books that cover those techniques, but as I pointed out with respect to API usability, consistent behavior goes a long way. Even devices and applications we consider highly usable have a short learning curve, but once a user has invested in that curve, it's tremendously helpful to leverage that learning consistently -- no surprises!
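Here's one way that plays out in an API, as a hypothetical sketch (UserStore and its methods are invented for illustration): every lookup honors the same contract, so learning one method teaches you all of them.

```python
from typing import Optional

class UserStore:
    """Hypothetical store illustrating "no surprises": every lookup behaves
    the same way on a miss, so one learned expectation transfers to all."""

    def __init__(self) -> None:
        self._by_id: dict = {}
        self._by_email: dict = {}

    def find_by_id(self, user_id: str) -> Optional[dict]:
        return self._by_id.get(user_id)    # miss -> None, never an exception

    def find_by_email(self, email: str) -> Optional[dict]:
        return self._by_email.get(email)   # same contract as find_by_id

# An inconsistent design -- say, one lookup raising KeyError while the other
# returns None -- forces callers to re-learn "expected" for every method.
```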
A final consideration that applies in both these areas -- when you find a case where "expected" really isn't well-defined yet and can't easily be derived from other use cases (consistency), this is a golden opportunity. Rather than just taking the reporter's version of "expected" at face value, ask why they expected that (when possible). You probably won't be able to invest in a full "nine-whys" exercise, but keep the idea in mind. The more you understand about how those expectations formed, the more closely you can emulate that thinking as you project other requirements.
Besides these examples, where else might you be able to apply "expected vs. actual"?