Wednesday, December 26, 2007

This Blog has Moved!

I've moved my blog to (a domain I've had for awhile, but never got around to using).

I've managed to transfer all my posts to the new blog (though I've lost all my comments in the process). I plan on disabling my Blogger account soon, but I will probably leave the posts up for now.

If you are subscribing to the RSS feed, the new feed is

Hopefully I will see you on my new site!

Tuesday, December 11, 2007

The Illustrated Catalog of Common Driving Patterns

Over the last 20 years I have had many opportunities to observe my fellow motorists (especially since I started commuting 35 miles or so to work every day). During this time, I have noticed some very common patterns on the road. These patterns are often annoying and sometimes dangerous (though I'm sure we're all occasionally guilty of some of them).


Driving Patterns

  • Bounce - This is perhaps the most commonly used pattern on the road. A "bounce" occurs when you pass a driver, they speed up and pass you, and then slow back down to their original speed within a short period of time.
  • Freeway Phobia - This driver enters the freeway at a significantly slower speed than traffic. Some of the scariest experiences behind the wheel are trying to enter the freeway behind one of these drivers.
  • Frogger - The name comes from the old arcade classic. In this pattern, the driver is attempting to beat traffic by seeking out the fastest lane. Of course, heavy traffic can be difficult to predict, and in the long run you usually get the satisfaction of passing them several times.
  • Pace Car - Drives slowly in the fast lane. Enough said.
  • Scout - Scouts go on ahead to make sure there are no speed traps waiting for the rest of us.
  • Stalker - The stalker can be identified by their uncanny ability to stay in your blind spot. Even when you speed up or slow down, they stay right where they are. This can be especially frustrating when you need to merge into their lane.
  • Tailgating - This is a well-known pattern where a driver follows very closely behind the car ahead. The general rule of thumb is to leave 2 seconds of space between you and the car you are following.
  • Road Lord - When another driver appears to be attempting to change lanes in front of them, this driver will speed up to prevent it.

So next time you are stuck in traffic, try to spot as many patterns as you can. If you know of any patterns that I missed or have a better name for one of my patterns, leave a comment.

The illustrations in this post are courtesy of the GT Challenge arcade game and my wicked Paint.Net skillz :).

Ever notice that anyone going slower than you is an idiot, but anyone going faster is a maniac? - George Carlin

Friday, November 30, 2007

Multi-threading just got a little bit easier

Microsoft has just announced that they have "released an early preview of the Parallel Extensions to the .NET Framework (ParallelFX) technology." I haven't had a chance to check this out yet, but it looks like it will provide a new API for managing threading. What looks especially interesting are the multi-threaded foreach loops.

I don't want to say too much because I really don't know much about it, but check out the announcement on Somasegar's blog, Parallel Extensions to the .NET FX CTP.
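To give a feel for what a multi-threaded foreach buys you, here is the concept sketched with Python's standard thread pool. This is only an analogue (ParallelFX itself is a .NET API, and the workload here is made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def process(item):
    # Stand-in for per-item work; in a parallel foreach this would be the loop body.
    return item * item

items = [1, 2, 3, 4, 5]

# Ordinary sequential foreach
sequential = [process(i) for i in items]

# "Parallel foreach": the pool runs process() on items concurrently,
# while map() still returns results in the original input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(process, items))

assert sequential == parallel
```

The appeal of an API like this is that the loop body stays the same; only the iteration strategy changes.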

Sunday, November 25, 2007

Qualitative vs Quantitative Results in Usability Testing

As regular readers might already know, I am taking a certificate program in User-Centered Design. This quarter I am taking a class on usability testing.

One of the things I've learned in the course is that there seems to be a debate over qualitative vs. quantitative results. For the sake of this post, I'll define them thus:

Qualitative - descriptive, subjective. Not easy to provide objective measurement.

Quantitative - numerical, objective. In sufficient quantity can provide statistically significant facts.

As you can probably imagine, most people would like to see quantitative results. They are easy to understand and make it easy to see just how important a particular issue might be (not to mention those cool graphs). However, due to budget and time constraints, it is extremely uncommon for a usability test to have a sufficient number of participants to qualify for statistical significance. Without statistical significance, are quantitative results useful?

I have to say emphatically no. Quantitative results are completely useless within a usability test. The goal of usability testing should be to find flaws in the design (from the user's perspective) of the software. These flaws should be tracked and prioritized the same as any other defect within the application (hopefully you use issue/bug tracking software).

Usability testing should focus on qualitative feedback. There are many techniques that you can use to elicit feedback from a user (for example, thinking out loud). Post-study questionnaires can also be useful, but the questions should be open-ended. You can ask a participant to rate the software, but only to lead them to the next question, which should ask why they rated it the way they did (the actual rating is useless without statistical significance).

Furthermore, when reporting the results of usability tests, it is important not to imply any kind of statistical significance. For example, don't say that 20% of the participants failed to complete a task when you only had five participants. For all you know, that participant is the only person on the planet who would fail, or perhaps the other four just got lucky. Even mentioning something like 1 out of 5 can be dangerous. Always make sure that people reading your report understand the limitations of the data.
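To see just how little a "1 out of 5" figure tells you, here is a rough sketch using a Wilson score confidence interval (Python; the numbers are illustrative, not from a real study):

```python
import math

def wilson_interval(failures, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = failures / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    spread = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - spread, center + spread

low, high = wilson_interval(failures=1, n=5)
# With 1 failure out of 5 participants, the true failure rate could
# plausibly be anywhere from roughly 4% to over 60%.
print(f"95% interval: {low:.1%} to {high:.1%}")
```

An interval that wide is exactly why a raw "20% failed" statistic from a five-person test shouldn't be reported as if it were a measurement.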

Monday, November 19, 2007

Visual Studio 2008 and .NET Framework 3.5 shipped!

This just in, Microsoft has shipped Visual Studio 2008! Read Somasegar's Weblog for the official announcement.

I can't wait to try it out. I'm looking forward to trying out all the new features such as XML literals in VB.Net, improved Intellisense, LINQ, integrated unit testing, and much more.

Hopefully I will get a chance to purchase a copy from the MS store soon. Are there any MS employees reading this that wouldn't mind if I bought this on their quota? I don't work too far from Redmond :).

Friday, November 09, 2007

.Net 3.5 Poster Available

The Commonly Used Types and Namespaces poster for version 3.5 of the .Net framework is now available.


Its main sections are Windows Presentation Foundation/Windows Forms (in green), ASP.NET (in yellow), Communications and Workflow (in orange), Data, XML and LINQ (in purple), and Fundamentals (in blue).

The poster lists new types as well as what will be available in the 3.5 version of the compact framework and Silverlight 1.1.

There is a lot of new stuff in here. Here is a quick list of what's new in each section:

  • WPF/WinForms
    • Nothing
  • ASP.Net
    • System.Web.ApplicationServices
      • AuthenticationService
      • ProfileService
      • RoleService
    • System.Web.ClientServices
      • ClientFormsIdentity
      • ClientRolePrincipal
      • ConnectivityStatus
    • System.Web.ClientServices.Providers
      • ClientFormsAuthenticationMembershipProvider
      • ClientRoleProvider
    • System.Web.Configuration
      • WebConfigurationManager
    • System.Web.UI
      • ScriptManager
      • UpdatePanel
      • UpdateProgress
  • WCF/WF
    • System.Net.PeerToPeer
      • Cloud
      • PeerName
      • PeerNameRecord
      • PeerNameResolver
    • System.Net.PeerToPeer.Collaboration
      • ContractManager
      • PeerApplication
      • PeerCollaboration
      • PeerContact
      • PeerNearMe
    • System.ServiceModel.Persistence
      • PersistenceProvider
      • PersistenceProviderFactory
      • SqlPersistenceProviderFactory
    • System.ServiceModel.Syndication
      • Atom10FeedFormatter
      • Rss20FeedFormatter
      • SyndicationFeed
      • SyndicationItem
    • System.ServiceModel.Web
      • WebGetAttribute
      • WebInvokeAttribute
      • WebOperationContext
      • WebServiceHost
  • Data/XML/LINQ
    • System.Data.Linq
      • DataContext
      • EntityRef<T>
      • EntitySet<T>
      • Table<T>
    • System.Data.Linq.Mapping
      • AttributeMappingSource
      • MetaModel
      • XmlMappingSource
    • System.Xml.Linq
      • XAttribute
      • XDocument
      • XElement
      • XName
      • XNamespace
      • XNode
      • XText
  • Fundamentals
    • System
      • TimeZoneInfo
    • System.AddIn.Contract
      • IContract
      • INativeHandleContract
    • System.AddIn.Hosting
      • AddInProcess
      • AddInStore
      • AddInSecurity
      • AddInToken
    • System.AddIn.Pipeline
      • ContractBase
      • ContractHandle
      • CollectionAdapters
      • FrameworkElementAdapters
    • System.Collections.Generic
      • HashSet<T>
    • System.Diagnostics
      • EventSchemaTraceListener
    • System.Diagnostics.Eventing
      • EventDescriptor
      • EventProvider
      • EventProviderTraceListener
    • System.Diagnostics.Eventing.Reader
      • EventLogInformation
      • EventLogReader
      • EventLogRecord
      • EventLogWatcher
      • EventRecord
      • ProviderMetadata
    • System.Diagnostics.PerformanceData
      • CounterData
      • CounterSet
    • System.IO.Pipes
      • AnonymousPipeClientStream
      • AnonymousPipeServerStream
      • NamedPipeClientStream
      • NamedPipeServerStream
      • PipeSecurity
      • PipeStream
    • System.Linq
      • IQueryable<T>
      • Queryable
    • System.Linq.Expressions
      • Expression<T>
      • Expression
    • System.Runtime.Serialization.Json
      • DataContractJsonSerializer
      • JsonReaderWriterFactory
    • System.Security.Cryptography
      • ECDsaCng
    • System.Threading
      • ReaderWriterLockSlim

I found the add-in support particularly interesting. I have been planning on implementing add-ins for the next major version of our product and am hoping to be able to use the add-in support in .Net 3.5. If you are interested in finding out more about this, check out the article .NET Application Extensibility and the CLR Add-In Team Blog.

Thursday, November 01, 2007

Creating A Build Process

Jeff Atwood had a great post today about automating your build process (The F5 Key Is Not a Build Process) and it inspired me to finally write this post (I promised it a while ago - Making the Build).

As mentioned in my post Making the Build, an automated build process is very important. However, most software teams have a tight schedule and budget and find it difficult to justify spending time creating a proper build process, especially when nobody on the team has previous experience creating an automated build process.

Before I get to the benefits of an automated build, let me first describe the build for our product.

Our build process is fully automated. We have it scheduled to run on a dedicated machine every night. The build includes many different things, including compiling the source code, building the database from the previous version of our product using scripts, generating some source code, updating assembly attributes (such as version, company, etc), running unit tests, and more (too project specific to bother mentioning).

The benefits that we have found for our team are as follows:

  1. Frequent builds -  Before we automated our build process, it was run manually (though we've always had scripts and utilities to help). Unfortunately it was sometimes weeks between builds. If there was a problem with the build it was very difficult at times to track it down.
  2. Standard builds - Back when we ran our build process manually, it was not always run the same way. Often, steps were skipped because they took too long and didn't seem important (sometimes they weren't, but when they were, it could take a lot of time to figure out what the problem was).
  3. Build status visibility - Every developer gets an email every day about the status of the build. If the build fails due to something they checked in, it is usually easy to find the problem because it is only a single day's worth of code and was checked in only yesterday - hopefully they still remember what they worked on yesterday :).
  4. Easily get the latest build - The build process places the completed build (at least the parts of it the developer cares about) into a shared directory that all developers have access to. To run the build, they can simply copy the build onto their machine and run a simple utility to attach the database. This makes it very easy to debug problems with the build.

If you decide to create an automated build process, here are a few suggestions (some of the advice is specific to building .Net projects, but I think much of it should hold true for other technologies as well):

  1. Fully automated - I can't seem to mention this enough :). The process should be able to run in the middle of the night on an unattended machine without any manual setup or teardown process.
  2. Run regularly - The sooner a problem is found, the easier it is to fix. We build nightly, but many people build even more often than that.
  3. Run in a clean environment - In order to prevent false positives with a build (the build succeeds but shouldn't have), the first task in your build should be to create a clean environment (create a new directory or clear out the directory that you are using). This is especially important in finding circular references.
  4. Use proper tools - We use FinalBuilder, which provides a visual interface for creating your build. I would not recommend using batch files, MSBuild (as your primary build tool, anyway), or NAnt; these can be very difficult to maintain. You don't want to get into a situation where every time you want to make a tweak you have to relearn the whole build process. If your team has a dedicated build engineer, then MSBuild or NAnt are great tools.
  5. K.I.S.S. - Keep It Simple, Stupid. The simpler you make the build process, the more likely it will be for people to use it correctly. For example, in our build process, the only thing developers need to do to integrate their project with the build is to check it into the correct location in VSS. We have a utility that will make sure all the projects get compiled in the correct order based on the references in the project file (Compiling Multiple Projects Without a Solution).
  6. Don't underestimate the effort - It took me a couple of weeks of full-time work to create our current build process. Of course, once it's set up, and assuming you used the proper tools, it should only take a few minutes to make adjustments when necessary.
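As an aside on item 5, "compiled in the correct order based on the references" boils down to a topological sort of the project dependency graph. A minimal sketch (the project names here are hypothetical):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each project mapped to the projects it references (made-up names).
references = {
    "App": {"BusinessLogic", "UI"},
    "UI": {"Core"},
    "BusinessLogic": {"Core", "Data"},
    "Data": {"Core"},
    "Core": set(),
}

# static_order() yields projects so that every dependency is built first;
# it raises CycleError if there are circular references.
build_order = list(TopologicalSorter(references).static_order())
print(build_order)  # "Core" comes first, "App" comes last
```

A nice side effect of doing this in the build is that circular references fail loudly instead of producing a mysteriously broken build.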

Many people also advocate that the build should be able to be run by any developer on their own machine. This hasn't been very realistic for our project due to the size of the project and the cost of the software we chose to use (FinalBuilder isn't exactly free). However, it is a goal that I approve of.

If your project doesn't currently have an automated build process, I hope this article has encouraged you to create one. Good luck.

Sunday, October 28, 2007

Blogging Schedule

Regular readers of my blog have probably noticed that my blogging schedule has been a bit erratic lately (not that I've ever stuck to a strict schedule :). Unfortunately I haven't had much free time lately.

I've started taking classes to obtain a certificate in usability from the University of Washington.  At work I recently gave a presentation at a user conference and I'm rewriting the UI framework that our company uses. And my family has taken up some time as well with football, birthdays, etc.

However, I do plan to keep my blog going and will try to fit some short posts into my schedule on occasion. Who knows, maybe someday my writing will improve enough where I have Jeff Atwood's (from Coding Horror) problems.

On another note, what types of articles would you like to see?

My most popular post is Data Binding Classes, Interfaces, and Attributes in Windows Forms 2.0. This post is even more popular than my main page. Perhaps I should write more articles focused on the .Net UI. This is my area of expertise. Perhaps I could write articles on design-time support for Visual Studio? I am certainly aware that this is an area that needs more documentation.

Of course I am also starting to focus more on usability at my work and I'm taking the certificate program. Perhaps I should write more articles that discuss this topic?

What about other topics? Business of software? Cool technology? Developer productivity? Others? I would love to hear from you and get your opinion!

I would especially like to hear from you if you read my blog in an RSS reader. I do not believe that the analytical software that I use (Google Analytics and StatCounter) tracks RSS feeds. So I have no idea if anybody is reading my blog through an RSS reader. Just a "Hi, I read your blog through an RSS feed" would do fine :).

Wednesday, October 24, 2007

Seam Carving

Have you heard of Seam Carving before? Seam Carving allows you to stretch or squish images without distorting them. It's an amazing process that you have to see to believe. The video is just a little over 4 minutes and well worth it (even my wife thought it was cool, though she might have just been pretending).



I learned of this through a blog post by Mike Swanson. He has created a .Net version of this algorithm. I'm really hoping that this makes its way into Paint.Net!

For more details about Seam Carving, check out Seam carving for content-aware image resizing [20 MB PDF].
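If you are curious how it works under the hood, the core of the algorithm is a dynamic-programming search for the connected vertical path of minimum total "energy" through the image. A toy sketch (a tiny hand-made energy grid stands in for a real image):

```python
def min_vertical_seam(energy):
    """Return one column index per row forming the minimum-energy seam."""
    rows, cols = len(energy), len(energy[0])
    # cost[r][c] = cheapest total energy of any connected seam ending at (r, c)
    cost = [row[:] for row in energy]
    for r in range(1, rows):
        for c in range(cols):
            # A seam may step at most one column left or right per row.
            above = cost[r - 1][max(c - 1, 0):min(c + 2, cols)]
            cost[r][c] += min(above)
    # Backtrack from the cheapest bottom cell.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        candidates = range(max(c - 1, 0), min(c + 2, cols))
        seam.append(min(candidates, key=lambda cc: cost[r][cc]))
    return seam[::-1]

# Low energy down the middle column, so the seam should follow it.
grid = [[9, 1, 9],
        [9, 1, 9],
        [9, 1, 9]]
print(min_vertical_seam(grid))  # [1, 1, 1]
```

Removing (or duplicating) the pixels along that seam is what lets the image shrink or stretch without visibly distorting the important content.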

Thursday, October 04, 2007

Microsoft is releasing the source code for the .Net FX

Microsoft has announced plans to release the source code for the .Net framework (comments and all!). This is awesome news for those of us that stress the framework fairly regularly.

Of course, Reflector gave us the ability to view the code already, but now you will be able to view the source code comments as well as step into the code with VS 2008. In fact, with VS 2008, if you don't already have the source code and you attempt to step into it, it will prompt you to download it automatically.

It's not quite open source, but it's still a big step for Microsoft. VS 2008 is certainly looking like it will be an interesting release! Hopefully it will be ready by the end of the year (please don't make me wait too long!:).

Monday, September 17, 2007

University of Washington User-Centered Design Certificate

I am now officially registered for the University of Washington User-Centered Design certificate program (UCD from here on). I'll be starting classes in October and should finish the program in the Spring (3 quarters). The company I work for offered to pay for training so I readily accepted. I was a bit worried that this would be too much for the budget ($1,800 a quarter for 3 quarters), but I don't think many others in the department are doing much.

The last certificate program I took at the UW (Object-Oriented Analysis and Design Using UML) took almost a year and a half to complete (I was definitely ready to be done with it by the end). It was all done online (except for a single trip to a testing facility for a written test). Being online was convenient, but I did miss the interaction with other students and the professors.

The UCD program, by contrast, is being held in classrooms, once a week, in the evening. Luckily I was able to arrange to take the first of the classes in Bellevue which isn't too far from Kirkland where I work (hopefully traffic won't be a major problem). Unfortunately the rest of the classes are going to be in Seattle, so that will suck trying to get over to the campus during rush-hour traffic. 

I was hoping I could find a program at the UW Bothell. It's only a few minutes from my work and is where I received my bachelor's degree (Computing and Software Systems). Unfortunately they don't seem to offer any certificate programs, though maybe I'll get an MBA from there once the kids are out of the house (though there might be a branch campus near where I live by then).

I am hoping that this usability program that I am taking will provide me with plenty of material for my blog. If you are interested in this subject and don't want to spend the time or money on the course, stay tuned.

Sunday, September 16, 2007

Usability Rule #3: Know Thy User

Before you can design an interface that is easy-to-use, you must first know something about the user and how they plan on using the interface. Unless you are building an interface for only yourself, it is unlikely that you will get it right without actually talking to potential users.

So what characteristics are important when designing an interface? Although this list might change based on the interface you are designing, for a line-of-business application, the important characteristics tend to be...

  • Frequency of use - How often will the user use this interface? If they will use it every day, you will probably want to provide a very efficient interface. Focus on keyboard entry, allow entering codes instead of selecting from a list, make sure that frequently used fields are grouped together, etc. If they will use it once a month, the interface should be very easy to learn since they will probably have to re-learn it every time they open the interface.
  • Duration of use - How long will the user use the interface? If they will only use the interface for a few minutes then you can get away with a less-attractive, less-responsive interface. However, if they use the interface all day long, it had better respond very well and be at least somewhat aesthetically pleasing (would you want to stare at an ugly screen all day?).
  • Technical Competence - What kind of computer experience does the user have? Are they a power user? If so, you can provide many advanced bells and whistles (in fact, you probably should). On the other hand, if the user is afraid of turning on the computer, then the interface should be extraordinarily simple or the user doesn't stand a chance.
  • Work Environment - What kind of environment will the user be using the interface in? Will they be sitting in a private office with air conditioning and sound protection, free of distractions? If so, then work environment probably won't have much impact on the design. However, if they work on the shop floor of a manufacturing plant with heavy machinery constantly grinding away and a conveyor belt that moves every minute, this could have a significant impact on your design. In this situation for example, you will want to provide an interface that will allow the user to complete the task in less than a minute and does not rely on audio cues.
  • Hardware - What kind of hardware will the user be using? Hardware includes the computer and any peripherals (such as keyboard, mouse, monitor, printer, scanner, barcode reader, etc). Many users suffer with older, slower computers (you might be surprised by what even some software companies use). The user might not have much memory on their computer, have their monitor set to a low resolution, use a barcode scanner, or many other hardware considerations.
  • Physical Limitations - Does the user have any physical conditions that might limit their use of the interface? This includes eyesight, hearing, and dexterity along with any other physical conditions that might be prevalent amongst your targeted users. Getting an age bracket is probably the most useful information you can gather for this characteristic because that can have a significant impact on their abilities, though work environment could very well play a role here as well (perhaps many users come from years on the shop floor and now have some hearing loss because of it, perhaps they have worked with chemicals and have difficulty controlling the mouse).

As you can imagine, if you are planning on your interface being used by many people, you will probably need to talk to many potential users to get a reasonable understanding of what is common (though if you only talk to a single individual that would still probably be better than having a software developer guess).

So how do you compile this information? Once you've interviewed the users, you can use the information to put together a user profile that includes all of the common and important user characteristics. Personas have become a common technique for building this profile. A persona is basically a fictional character that you create that typifies the target user. The reason many people use personas is that it makes it easy to discuss requirements by just mentioning the persona's name.

 Example conversation between two interface designers:

[John] Hey Sally, what do you think of this form I'm designing?

[Sally] Who's it for?

[John] Frank [persona] is going to be using it.

[Sally] It looks good except for that link. I don't think Frank will be able to use it. He's been working with those chemicals for years and would never be able to get the mouse to hover over it. You should make it larger.

As long as both designers agree on the persona, John can immediately see that Sally is right and they can avoid all the discussion usually accompanied by such criticism. Of course, the main power of personas is that John's interface is already mostly correct with no discussion necessary.

One of the best examples of personas I've found on the Internet is from Microsoft. The Dynamics team has put together an excellent list of personas for their product and has made a couple of them available in the Microsoft Dynamics RoleTailored Business Productivity whitepaper.

Sunday, September 02, 2007

I Got a Promotion!

Ok, not really, but I got my Evil Mastermind t-shirt from Source Gear. The payment was that I had to post a picture of myself wearing it. I don't really like getting my picture taken, so I created this picture instead (hopefully it's good enough).

If you want the full story for the picture, read the Evil Mastermind comic book.


The picture of me was created with the Simpsonizer. I added the Evil Mastermind image and the text using Paint.Net.

Source Gear (the company that sent me this t-shirt) is run by Eric Sink. Eric has also written a book called Eric Sink on the Business of Software, coined the term Micro-ISV, and has a great blog that I read regularly.

Wednesday, August 15, 2007

VB.Net Intellisense Enhancements

Microsoft is making some great improvements to Intellisense for Visual Basic .Net in Visual Studio 2008. Most of the new features are based around the language enhancements for VB 9 (such as LINQ and anonymous types), though there are many other improvements as well.

  • Filtering - Filters the list as you type so only items that start with the characters you have typed will be displayed. This is not as useful as Visual Assist X, which includes all items that contain the characters you typed in the order you typed them, without requiring them to be contiguous (this allows you to type the acronym for a member or find all members that contain a certain word).
  • Transparent mode - The Intellisense dropdown becomes semi-transparent when you press the Ctrl key. This allows you to see the code behind the dropdown without having to close Intellisense.
  • Smarter Intellisense - In many cases, only items that are valid are displayed in Intellisense. This includes keywords. This is a very handy mechanism for learning the language as well as remembering syntax.

There are many other improvements as well. If you are interested in learning more, check out this Ch 9 video: Visual Basic Intellisense Improvements in VS 2008.
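As an aside, the Visual Assist X-style matching mentioned above (the typed characters must appear in order, but not contiguously) is just a subsequence test. A quick sketch in Python (the member names are made up):

```python
def fuzzy_match(typed, member):
    """True if the typed characters appear in order (case-insensitively)
    anywhere within the member name, not necessarily contiguously."""
    it = iter(member.lower())
    # 'ch in it' advances the iterator, so each character must be found
    # after the previous one.
    return all(ch in it for ch in typed.lower())

members = ["AppendText", "BeginInvoke", "DataBindings", "TabIndex"]

# Prefix-only filtering for "bi" would match nothing here;
# subsequence matching finds all three.
print([m for m in members if fuzzy_match("bi", m)])
```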

Monday, July 30, 2007

Usability Rule #2: Stick with the K.I.S.S Principle

Keep It Simple, Stupid, or the way I like to phrase it: make the simple things simple and the complex things possible. This is a very common design mistake. Users ask for tons of features, and developers work hard at delivering those features, but often at the expense of a simple-to-use application.

Of course by simple, I mean "everything should be made as simple as possible, but no simpler" (quote attributed to Albert Einstein). If the interface doesn't make the most common way of performing a task as simple as possible, the interface has failed the user.

A good example of this principle is the Remote Desktop connection interface. When you start Remote Desktop, you see the absolute bare minimum interface for connecting to another computer (figure 1): a single textbox for entering the name of the computer you want to connect to, and some buttons to perform what should be easy-to-understand commands (at least, easy to understand if you know what the application is supposed to do in the first place).

Figure 1: Simple interface for Remote Desktop.

Of course, there are a lot of options that you might want to change when connecting to another computer. If you are an advanced user with a special need, you can select the Options >> button. This button opens another window (figure 2) that allows you to change the advanced settings of Remote Desktop; you can even save the settings so that you don't need to go through the options dialog again.

Figure 2: Complex interface for Remote Desktop.

The term for this type of interface is progressive disclosure which simply means removing less frequently used interface elements from the primary screen while still providing a means of accessing those elements. This can be in the form of a secondary screen, such as in Remote Desktop, collapsing regions, or providing an Advanced tab within a tabbed interface (this can be seen in the options dialog in the Remote Desktop screenshot, figure 2).

If you want to see how not to design a UI, take a look at Bulk Rename Utility (figure 3). It seems to provide every possible option on the main form, regardless of how often it is used. Bulk Rename Utility is a useful tool intended for computer savvy people who can probably overlook a bad UI if it provides the features they need, but you would never want to put a UI like this in front of a technophobe.

Figure 3: Overly complex interface for Bulk Rename Utility.

So how do you know what the simplest interface is for the user? Asking the users is certainly a good start, though not entirely reliable (the mind has a tendency to assign equal importance to exceptional cases and common ones). Other methods include watching the user actually perform the task, and logging feature usage and analyzing the results. Feature logging will probably result in the most comprehensive and accurate view of the current system, but it requires you to have a system in place already and for your users to be willing to share the information that is being logged.

By keeping the interface as simple as possible, the user is able to accomplish the task quicker with higher accuracy, less training, and fewer support calls. By providing a simple interface with the ability to perform more complex tasks you have taken nothing away from the user except a barrier to using the application.

Sunday, July 22, 2007

Usability Rule #1: Consistency is More Important Than Correctness

One of my favorite usability rules is "consistency is more important than correctness." Just to be clear, this is not intended to diminish the importance of correctness, it simply means that if you can't be correct everywhere, you should at least be consistent. Consistency can help lower training costs and reduce mistakes.

As an example of this rule, let's take a complex application with many forms. Dates are displayed on many of these forms using a hard coded format of Month/Day/Year. You have been tasked to create a new form. As a good, conscientious developer, you know that dates should be formatted differently based on the culture of the user and that there is a built in function for formatting the date that actually requires less effort on your part than formatting the date yourself.

What should you do?

  1. Format the date correctly using the built-in culture sensitive formatting.
  2. Use the same hard coded format as every other place in the application.

Of course, the best answer is a third option not listed above: fix all the other places in the application where the date is formatted incorrectly. Unfortunately, that's not an option for you. You don't have access to the other source code, there is business logic relying on the date format, nobody has reported it as a problem before, there is not enough time to fix it, there are no testing resources to test in different cultures, etc. (I'm sure you have heard plenty of reasons why people don't want to fix incorrect code).

The second-best answer is option 2: format the date consistently with all the other dates in the system. If all the dates are formatted wrong and the user is forced to use the software, they will eventually learn how the date is formatted and will expect dates to always be formatted that way. If their culture reverses the month and day (Day/Month/Year), a user who has learned the application's format may not realize that your one screen is formatted correctly and will read the date wrong, possibly causing costly mistakes (a customer doesn't get charged for 6 months, the company's domain name expires, a patient doesn't get scheduled for treatment soon enough and dies, etc.).
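To make the trade-off concrete, here is a minimal C# sketch (the scenario and values are made up) contrasting a hard-coded Month/Day/Year format with the built-in culture-sensitive formatting mentioned above:

```csharp
using System;
using System.Globalization;

class DateFormatDemo
{
    static void Main()
    {
        var date = new DateTime(2007, 12, 11);

        // Hard-coded Month/Day/Year - consistent with the rest of the (incorrect) application.
        string hardCoded = date.ToString("MM/dd/yyyy", CultureInfo.InvariantCulture);

        // Culture-sensitive short date pattern - correct for each user, but
        // inconsistent with the other forms in the application.
        string us = date.ToString("d", new CultureInfo("en-US")); // 12/11/2007
        string uk = date.ToString("d", new CultureInfo("en-GB")); // 11/12/2007 - day and month swapped!

        Console.WriteLine("{0} | {1} | {2}", hardCoded, us, uk);
    }
}
```

Note that the en-GB user reads "11/12/2007" as the 11th of December; if every other screen shows Month/Day first, that one correct screen is exactly where they will misread the date.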

Of course, data formatting is not the only thing that should be consistent. Here is a list of common things to look for...

  • Formatting of data - date, time, elapsed time, currency, numbers, etc.
  • Theme (colors and fonts)
  • Unit of measure - Be consistent where you can and always make sure the unit of measure is clearly marked.
  • Captions - If a field is called XXX on one screen, it should always be called that. The same goes for commands (such as buttons, links, menu items, etc.) and any other non-data text displayed to the user. This applies to icons as well.
  • Shortcuts - Common commands should have the same shortcut across the entire application.
  • Messages - Decide on a common style and use it consistently (for example, friendly or professional).
  • Margins - Space your controls consistently. Different controls may require different margins; decide up front what those should be and stick with them.
  • Alignment - Are captions aligned left? Right? Top? Bottom? Are numbers aligned to the left? Right?
  • Layout - How do you group controls (group boxes, horizontal rules, tabs)? Where do you place commands that are related to data fields?
  • General flow - How does a user open a record? Save it? Delete it? Create a new one? Access related data? View messages? How are errors handled?
  • Consistency with other applications the user may be familiar with - If a user is already familiar with another application, you can leverage their skills from that application in your own.

Monday, July 16, 2007

SlickRun Review

Summertime has really affected my blogging schedule. I've been spending my free time working on a website for my neighborhood and building a garden wall. I'm almost done with the wall, but we might end up extending it another 30 feet, so we'll see.

Just recently I've started using a tool called SlickRun. It's a very simple tool that is essentially just a textbox (no form, buttons, etc). It allows you to type in commands similar to the Run (Win+R) dialog built into Windows. The cool thing about SlickRun, however, is that you can also add "MagicWords" so that you can perform more complex commands with a single word.

There are plenty of configuration options for SlickRun to get it to look and work the way you want (see the screenshot to the right). You can even set up SlickRun to run instead of the Windows Run dialog. This is a little tricky, since you have to edit the config file (I'm not sure why this setting isn't in the SlickRun config dialog), but once you set it up, pressing Win+R gets you SlickRun!

Instructions to set up SlickRun to handle Win+R in Vista
  1. Install SlickRun
  2. Open Windows Explorer (Win+E)
  3. Navigate to your profile directory (C:\Users\<MyUserName>\AppData\Roaming\SlickRun) - AppData is a hidden folder; however, you should be able to type the path in directly.
  4. Open the SlickRun.ini file in your favorite text editor
  5. Locate the GrabWinR setting and change the value to 1 (GrabWinR=1)
  6. Save the changes
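For reference, after step 5 the relevant line of SlickRun.ini should look like the fragment below (only the one setting is shown; the surrounding contents of the file vary by installation and should be left as-is):

```ini
; SlickRun.ini - only the relevant setting is shown; leave the rest of the file untouched
GrabWinR=1
```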

Thursday, June 28, 2007

Orcas Workshop: Day Four

The last day of the workshop went well. I'll be sad to leave the prepared lunches, snacks, ice cream, etc. at Microsoft, but I won't be sad to leave those horrible chairs (my back is killing me).

Most of today was on Microsoft Office integration, with a short bit at the end for Team System. I left a few minutes early hoping to avoid the traffic on my way home; unfortunately, my plan didn't work out that well (2 hours!).

Office Integration

  • OBA (Office Business Applications) enable MS Office products to deliver business services in the form of add-ons. It is not a product, but a technical concept. OBA bridges the gap between unstructured processes (individuals making up their own process, such as storing notes in Word documents) and structured processes (line-of-business applications).
  • SharePoint is an important aspect of OBA.
  • VSTO (Visual Studio Tools for Office) is the bridge between Visual Studio and Office. It will ship with the Visual Studio 2008 Professional edition.
  • The visual designers that come with VSTO look great. If you are extending a Word document, you get a designer that looks like Microsoft Word. If you are extending just the ribbon control, you get a designer that looks like the ribbon. You can add controls and hook up event handlers the same way as with WinForms development. Very easy.
  • Debugging is easy too. Simply press F5 and the Office product that you are extending is started and your add-in is loaded and run. You can place breakpoints in your code in order to step through it just like any other VS project.
  • VSTO for Orcas has improved security and deployment. I'm not entirely sure what this means since I'm not a VSTO developer, but if you are, this might be interesting to you.
  • VSTA (Visual Studio Tools for Applications) is the underlying infrastructure for VSTO. It is essentially managed code macros.
  • The code that you write for VSTO is separate from the document. The assembly must be installed on the client in order to be used. You are also able to deploy it on a remote server, but the client will need to be able to access that server in order to use the macro (it is configured using a manifest file).
  • The Office ribbon control is extensible through XML.

Visual Studio Team System

  • There isn't much new stuff coming out in Orcas for Team System (it is considered a point release for VSTS).
  • Rosario is the next major version of VSTS. It will offer better project level management and test case management. I'm not sure what the release date for this is.
  • Rosario will support dependencies between work items.
  • The source control built into VSTS allows for cherry-picking changes. This allows you to make a hot fix to production code within the development branch without having to include all of the changes made since it was released (of course, a person will need to determine whether that is reasonable or not).
  • You can setup a proxy for remote offices that will cache source code so that when you request a version that is already cached, you get that instead. This is very useful when dealing with remote locations with slow connections.
  • If after setting up your VSTS project you need to add a field to your work items, Power Tools 1.2 provides the ability to do this graphically. (Apparently a video showing how to do this was posted somewhere recently, though I wasn't able to find it.)

PowerPoint Presentation

I downloaded the PowerPoint presentations from the workshop. If you would like me to email you one, just mention which one you want and leave your email address. I don't feel comfortable just putting them on the Internet, but I don't see why Microsoft would care if I shared them (they said it was ok to download them, and there wasn't any NDA that I remember anyway). If you want one, please note that they were built with PowerPoint 2007. If you don't have PowerPoint 2007, you will need to download a 28MB Office Compatibility Pack.

Well, this ends my Visual Studio 2008 and .Net Framework 3.5 Workshop journal. I hope you enjoyed it. I'm excited to see the final release of VS 2008.

Wednesday, June 27, 2007

Orcas Workshop: Day Three

Three days down, one to go. There was a great presentation on WCF (Windows Communication Foundation) today! I'm really impressed with some of the stuff they provide out of the box for free. They also talked about WF (Windows Workflow Foundation) and mobile devices.

WCF

  • Microsoft says that WCF is the next generation platform for building distributed systems (and I haven't even gotten the chance to use the last generation platform).
  • WCF provides the infrastructure to hook up any number of different protocols to a published service, such as SOAP or REST (there are a lot of protocols that WCF understands).
  • Endpoints are essentially the transport layer for calling a service. A single service can have multiple endpoints defined for it (so it can be called using SOAP, REST, etc). The developer does not have to write any code to hook up these endpoints; it's handled by WCF. The endpoints can be created in a post-deployment scenario using configuration files.
  • 3.5 offers a lot of new features, such as better tooling support for building and debugging services. There is a service host similar to the ASP.Net debugging host that will run so that you can test your services without having to set up your own server. There is also a WCF client that will allow you to call any of your services, including setting parameters and viewing return values. This will allow you to set breakpoints in your code and debug into it. These should be available in beta 2 (I don't believe they are in beta 1).
  • 3.5 includes first-class support for REST, syndication, JSON (JavaScript Object Notation), Oasis, and WF (I'm sure I'm missing a bunch!).
  • There is an SDK available for syndication that I believe will run using the 3.0 bits. The name of the SDK is BizTalk Services SDK. You should be able to just reference the assembly that exposes the syndication library in order to use it (sorry, I don't remember exactly which one it is).

WF

I didn't find this presentation as interesting because I already know something about it and it doesn't look like they are adding much for 3.5.

  • WF is the software implementation of the business process or business logic.
  • WF is a lightweight process (as compared to other workflow services available)
  • Provides the infrastructure for maintaining state over a long running process.
  • A workflow can be a flow chart, state diagram, or rules based.
  • WF can be built using either code or markup
  • 3rd party designers available for designing workflows and saving as XML
  • Context Exchange Protocol allows for long running workflows. This works on a similar concept as cookies so that the workflow instance can be re-constituted when new events arrive.
  • WF provides static role based authorization as well as dynamic (code based) authorization.

Mobile Apps

There seems to be a lot of cool new features for mobile devices within the Orcas time frame, especially for developers.

  • The designer surface in VS08 can be made to look like the mobile device that you are targeting.
  • The emulator starts quickly once you have saved the state (otherwise it has to load tons of stuff when you start it).
  • You can create unit tests the same way as you would any other project in VS08. The emulator contains the testing framework. When you run the unit tests, the emulator starts and the unit tests are loaded onto it and run.
  • The emulator can be easily configured using XML to test different deployment scenarios such as RAM, screen resolution, screen orientation, battery level, etc (I'm sure you get the picture).
  • There is a device configuration manager built into VS08 that allows you to change the security context of the device.
  • If you want to connect to devices from a desktop application, you can use the Microsoft.Smartdevice.Connectivity.dll. This will let you discover any devices connected to the computer and connect to them.

Compact Framework

  • In order to make the compact framework fit on the device, many features had to be removed. This includes many libraries that weren't that useful on the device (such as the ASP.Net assemblies) as well as removing many classes and class members from the ones that were left. The end result is around 6MB.
  • Version 3.5 of the compact framework will include WCF capabilities, however, in order to meet the size requirements, much of the WCF was stripped out. This makes it more difficult to use WCF on a mobile device, but at least it's still possible.
  • 3.5 will be supported by all devices that support 2.0. You will just need to deploy the 3.5 piece.
  • Due to the occasionally connected environment and the stripped down version of the WCF, the compact framework uses email to provide reliable transport of messages.
  • 3.5 will support LINQ, but not all implementations of it. LINQ for SQL and LINQ for Entities will not be supported. Expression trees are also not supported (the compact framework doesn't support Reflection.Emit, which is used by expression trees).
  • There will be a remote debugging tool made available that will make it easier to find memory leaks on a device.

Tuesday, June 26, 2007

Orcas Workshop: Day Two

Another busy day at the Visual Studio "Orcas" and .NET Framework 3.5 Training Workshop. The first topic was a very high-level roadmap of the different Microsoft technologies that are available or are coming available within the next year or so (ASP.Net 2.0, ASP.Net 3.5, Silverlight 1.0, Silverlight 1.1, WPF 3.0, WPF 3.5, etc). I'm not sure who the prime demographic was for this presentation, but I find it hard to believe it included many people in the room. All the information was already well known to everybody there (either because they knew it coming into the conference or because it was discussed yesterday).

The presentation I found the most interesting today was the one on ASP.Net 3.5. I've listed some of the interesting things I learned today about it below.

New Features for ASP.Net 3.5

Even though I don't work with ASP.Net professionally (at least, not much), I do work with it personally (I've volunteered to maintain my community's website, which I am switching to ASP.Net 2.0 tonight!). So I was pretty interested in this topic. It's also important to keep up with this stuff since the Internet is not going away; in fact, it's more likely that desktop development will go away than browser-based development (though I won't be holding my breath for that).

  • Support for multi-targeting. This allows you to use Visual Studio 2008 to build ASP.Net 2.0 websites.
  • Better support for AJAX.
  • Designer support for nested master pages.
  • Much improved CSS support in the designer. There is a Manage Styles window that allows you to view the styles that affect an element on a page.
  • Direct Style Application - This is a feature that gives you more control over how styles are applied to your HTML. There is the default automatic mode that will figure out how to set the style based on the context, and there is the manual mode that provides more fine-grained control over how the style is applied (you can set the target rule to determine which element gets the style).
  • The designer has a split view so you can view the HTML at the same time as the designer. This allows you to make changes to either the HTML or the designer and see the changes reflected immediately in the other.
  • There is eye dropper support for selecting colors (hopefully the eye dropper works outside of Visual Studio, but if not, I still have Color Cop).
  • CSS Property Grid window shows the CSS styles that affect a particular item. Very cool. Select an element in the designer and the window shows the styles that are being used (this is different from the Manage Styles window; however, I don't actually recall how :) ).
  • New ListView control - Provides more control over displaying lists of items. This can be hierarchical (a ListView can be put inside another ListView).
  • JavaScript Intellisense - Wow, this has been needed for a while, and they seem to have done a very good job with it. The Intellisense will show you which scripting elements (methods, properties, etc) you have available, even in included JavaScript libraries. If you decorate your methods with XML comments, the Intellisense will even provide your comments, including suggested data types (if a parameter should be a string, you can include that in the XML comments and it will display the data type in the Intellisense). If you include the data type of the return value, VS08 will give Intellisense on the returned value. Since variables can change types in JavaScript, Intellisense will provide the proper comments based on the context.
  • JavaScript debugging - You can now set breakpoints in JavaScript and debug into the script. While debugging, you have access to data visualizers similar (the same?) to what is available with VB.Net or C# code today (view as string, HTML, etc).

Silverlight

I'm not sure if I will be able to make use of this technology, but it's certainly interesting for its own sake, and who knows, this could become the future of browser-based development.

  • Supported on IE, Firefox, and Safari browsers on Win XP, Vista, Windows 2003, Longhorn, and Mac (but I'm not sure which Mac versions).
  • Expression suite can be used to create Silverlight applications.
  • RTM for 1.0 by end of summer. The 1.1 beta will be released sometime by the end of 2007, but there is no release date for it yet.
  • 1.0 only allows JavaScript for programming for the client. 1.1 will support a subset of .Net.
  • The download size for 1.0 is around 1MB. The size of 1.1 will be around 4MB.
  • Silverlight provides an HTTP downloader for applications that includes progress, ZIP packaging, and asynchronous HTTP GET.
  • Expression Blend will be available in February 2008.
  • Expression Media Encoder will allow media to be converted to types supported by Silverlight. You can also adjust the quality of the media to improve download performance. You definitely have to check it out. Learn More (I don't think the website does a very good job of showing how cool this really is). I believe a beta version is available for free download on the site as well.
  • They showed a video of Scott Guthrie doing a demo of a Silverlight 1.1 application that showed the power of compiled .Net code vs JavaScript. It's a chess game between .Net and JavaScript. .Net wins every time! There is a better video of it out there, but I couldn't find it.

WPF

Blah blah blah. This is a technology that seems very important, but I just haven't been able to get excited about it yet. It's such a huge paradigm shift, and there just isn't enough tool support, 3rd-party controls, or compelling features available to make this feasible for a large-scale, line-of-business application (which is what I write at work). As a developer I would love to dive into this technology, but as a professional, I've got to consider how this will affect the company I work for and the customers we serve. Unfortunately, this means I cannot commit the time it will take to learn it.

What this means for you is that I tuned out for a large portion of this talk and so my notes are fairly sparse (I've been through it all before anyway at a previous conference that went into way more depth). But what I paid attention to I will share :).

  • XBAP is a WPF application that can be deployed from a browser in partial trust mode.
  • Use Visual Studio for code: code editing, events, debugging, deployment, XAML editing (direct)
  • VS08 is still considered an early adopter tool. It doesn't contain all of the features that would be nice to have for building WPF applications, but it will get you there if needed.
  • VS08 beta 2 will provide designer support for automatically generating the default event handler when a control is double clicked (this is not available in beta 1). However, there will be no support for adding non-default events from the designer (such as the property grid in C#). However XAML will allow you to automatically create events using Intellisense (similar to C#).
  • Use Expression Blend for design: designing controls, templates, etc.
  • Expression Blend does not contain any source control integration.

Monday, June 25, 2007

Orcas Workshop: Day One

I have completed day one of the Visual Studio "Orcas" and .NET Framework 3.5 Training Workshop. Today included a great overview of many of the new language features coming in this release. Unfortunately there wasn't enough time to go over all of the new features, but the features that I saw, I like.

Perhaps the defining feature of .Net 3.5 is LINQ (I haven't figured out the "official" way of pronouncing this yet, but I heard it pronounced as "link" at the workshop today). LINQ stands for Language INtegrated Query and is basically just that. LINQ allows a developer to query any IEnumerable data source using a strongly typed, compiler-validated, and Intellisense-enabled syntax. If you are familiar with T-SQL, you will probably be comfortable with LINQ. The biggest difference between T-SQL and LINQ is that a LINQ query starts with the From keyword. There are also numerous other differences; however, given the Intellisense, they are easily discoverable (IMHO, one of the most important aspects of any API).
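Here is a minimal LINQ-to-Objects sketch in C# showing the from-first syntax described above (the data and names are made up for illustration):

```csharp
using System;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        var orders = new[]
        {
            new { Customer = "Boeing", Amount = 500m },
            new { Customer = "Boeing", Amount = 250m },
            new { Customer = "Initech", Amount = 100m }
        };

        // Unlike T-SQL, the query begins with "from", so the compiler (and
        // Intellisense) knows the data source's type before "select" is written.
        var bigOrders = from o in orders
                        where o.Amount >= 250m
                        orderby o.Amount descending
                        select o.Customer;

        foreach (var name in bigOrders)
            Console.WriteLine(name); // prints "Boeing" twice
    }
}
```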

[UPDATE 06/26/2007]I forgot to mention a couple of things yesterday. According to one of the speakers, the primary performance goal of LINQ is to be as good as a for loop. However, there was a mention of a team working on a divide-and-conquer algorithm that will make it far faster on multi-proc/core machines. At this point it's just research. There is no date for release and there is a possibility that it won't get released. I should also have mentioned that there appear to be many different implementations of LINQ, there is LINQ for SQL (DLinq), LINQ for Entities (Entity Data Model), LINQ for XML (XLinq), and LINQ for Objects. There might be others as well. I'm not entirely sure of all of the terminology or how they are all used.[/UPDATE]

It appears that C# and VB.Net have widely differing capabilities in regards to LINQ. They both support a basic set of LINQ features, but VB.Net has a lot of keywords that are not available with C#. Many of these differences are based on the philosophy of each language team, but I imagine that developers will eventually demand (and hopefully get) features that one language supports that the other does not.

I thought the C++ section was going to be more interesting than it actually was (if you read my previous post about this conference you would know that I didn't think it was going to be that interesting :). I pretty much surfed the Internet the whole time and didn't get anything interesting out of it (sorry for anybody that is interested in the new improvements for C++).

However, I did pay rapt attention to the new language features for C# 3.0 and VB.Net 9.0. Most of the new features either directly support LINQ or are necessary to enable LINQ, but can be used for other things as well. One of the cool things about the latter set of language enhancements is that they seem to be backwards compatible with .Net 2.0, so if you use Visual Studio 2008 to create .Net 2.0 applications using the built-in ability to target different versions of the .Net framework, you can use some of the new language features!

Below I have included a list of some of the features that I found interesting. Unfortunately, given how quickly the speakers went over these features, I do not fully understand some of them, so if you find something interesting, you might want to look into it further just to make sure I'm not completely wrong about it ;).

New Language Features for C# 3.0

  • var keyword - Declares a variable whose type is inferred from the value it is assigned. This is particularly useful for LINQ, but it can also be used in foreach...

foreach(var cus in customers) // cus is strongly typed as a Customer type.

  • Object initializers - Allows you to initialize an object by setting properties.

var cus = new Customer() { Name = "Boeing" };

  • Collection initializers - Similar to object initializers, but used to load a collection. Any Add method exposed by the collection can be used to define the initializer. This allows you to add elements to lists as well as dictionaries, or via any custom Add method that you create.
  • Auto-implemented properties - Allows you to define a property without having to define the backing field or the get and/or set bodies. The code is generated by the compiler.

public string Name { get; set;}  // this is expanded by the compiler to be a standard property with corresponding field

  • Anonymous types - Allows LINQ to automatically create a type when transforming data. The type is not directly available from code, but you still get Intellisense to help you use the object.
  • Lambda expressions (=> operator) - This is basically a lightweight version of anonymous methods. A lambda expression can be passed as a parameter to another method; just define a delegate that matches the lambda expression.
  • Extension methods - Allows you to "attach" methods to an existing type. To do this, add the this keyword in front of the first parameter of a static method.

public static string DoSomething(this string s) // This can be called like this - var something = "Hi".DoSomething()

  • Partial Methods - I didn't quite catch how this works, but it seems to allow designer-generated code to declare a method that you can implement.
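As a quick illustration (the names here are my own, not from the workshop), here is how a few of these features fit together in one small program:

```csharp
using System;
using System.Collections.Generic;

static class StringExtensions
{
    // Extension method: the "this" modifier on the first parameter
    // lets Shout be called as if it were an instance method of string.
    public static string Shout(this string s)
    {
        return s.ToUpper() + "!";
    }
}

class FeatureDemo
{
    static void Main()
    {
        // Collection initializer - each item is passed to List<T>.Add.
        var names = new List<string> { "Boeing", "Initech" };

        // Lambda expression assigned to a matching delegate type.
        Func<string, bool> startsWithB = n => n.StartsWith("B");

        foreach (var name in names)
            if (startsWithB(name))
                Console.WriteLine(name.Shout()); // BOEING!
    }
}
```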

New Language Features for VB.Net 9.0

  • XML Literals - XML is now a first-class feature within the language. You can embed XML directly in the source code (without using quotes) and get Intellisense to boot (if you import your XML namespace). You can also embed ASP.Net-like code in the XML. This gets really interesting when combined with LINQ.

Dim myXml = <People><Person><%= person.FirstName %></Person></People> ' Example of using XML as a native feature in VB

  • New XML API - The new API is much easier to use than the DOM, especially when creating a new XML document. (If you are looking for keywords to find it on Google, try XElement and/or XAttribute.) An XElement is automatically created when using an XML literal.
  • XML axis properties - .<element>, .@attribute, and ...<descendant> can be used like properties. This is a very cool way to get Intellisense support for XPath-like syntax.
  • XML namespaces - XML namespaces can be imported similar to regular .Net namespace imports.

Imports <xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">

  • Anonymous types - Similar to C# anonymous types above.
  • Into keyword - Calls LINQ aggregate methods; available when using Group By (I do not believe this is available in Beta 1 of VS 2008).
  • Group Join keyword - Another VB.Net LINQ feature that allows hierarchical results.
  • Aggregate keyword - Alternative starting keyword to LINQ query that will return a scalar value instead of collection. The scalar value can be a complex type, but you will only get one of them. This is typically used with the aggregation methods such as Sum, Average, etc.

Saturday, June 23, 2007

Compiling Multiple Projects Without a Solution

I've posted a new article on Code Project. It is titled Compiling Multiple Projects Without a Solution. The article shows how to compile a multi-project application without using solution files (yes, it includes source code).

The code creates a MSBuild project file that will allow MSBuild to compile a list of VB.Net or C# projects in the correct order. The list can be loaded based on a root directory that contains .vbproj and/or .csproj files or a text file that can contain a combination of directories, project files, and/or other text files.
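A minimal sketch of the kind of MSBuild project file this generates might look like the following (the project paths are hypothetical; the actual generated file may differ):

```xml
<!-- build.proj: compiles projects in order without a .sln file.
     Project paths are hypothetical examples. -->
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <Projects Include="Common\Common.vbproj" />
    <Projects Include="App\App.csproj" />
  </ItemGroup>
  <Target Name="Build">
    <MSBuild Projects="@(Projects)" Targets="Build" />
  </Target>
</Project>
```

Running `msbuild build.proj` then builds each project in the listed order using the standard MSBuild task.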

Visual Studio "Orcas" and .NET Framework 3.5 Training Workshop

I was invited to the Visual Studio "Orcas" and .NET Framework 3.5 Training Workshop at Microsoft next week (June 25th through June 28th). I'm pretty excited about going; it looks like a high-level overview of all the major new features of VS 2008 and .Net 3.5.

I included the agenda for the workshop below (hopefully that's ok with MS, it doesn't say anywhere that I can't share it). I'm most looking forward to .NET Language Integrated Query (LINQ), What’s new in Visual Basic 9.0?, Introduction to Silverlight, Integrating Office & line of business systems using VSTO, and the Company Store Visit, though all the sessions look interesting other than maybe the one on C++ (I am beginning to think I will never program in that language again). 

Monday, June 25

  • Registration & Breakfast
  • Welcome & Introductions
  • Lap around Visual Studio 2008 & .NET Framework 3.5
  • .NET Language Integrated Query (LINQ)
  • Using LINQ with relational data
  • Hands-on lab: LINQ
  • What’s new in C# 3.0?
  • What’s new in Visual Basic 9.0?
  • What’s new in C++ 9.0?

Tuesday, June 26

  • Introduction to the Microsoft Client Continuum
  • Building Web Applications with Visual Studio 2008 and the .NET Framework 3.5
  • Introduction to Silverlight
  • Essential Windows Presentation Foundation
  • Building Smart Client Applications with Visual Studio 2008 and the .NET Framework 3.5 using WPF and “Cider”
  • Hands-on lab
  • Evening Event: Barbeque at building 20

Wednesday, June 27

  • Building WCF and WF Applications with the .NET Framework 3.5
  • Web Programming with WCF
  • Workflow Services
  • Hands-on lab
  • Building Mobile Applications using Visual Studio 2008 and the .NET Compact Framework 3.5
  • Company Store Visit

Thursday, June 28

  • Overview of Office Business Applications and VSTO
  • Extending the Office Fluent UI using VSTO
  • Hands-on lab
  • Integrating Office & line of business systems using VSTO
  • Hands-on lab
  • What’s new in Visual Studio Team System 2008?

Thursday, June 14, 2007

Making the Build

I am of the belief that an automated build process is perhaps the most essential element in producing a quality software application. The build process can be used to enforce good practices within the development team and also to detect issues within the application in a timely manner (the sooner you know about a problem, the less code you have to look at to determine what caused it). Even a small team with a simple application can benefit from an automated build process.

Recently I've gotten the opportunity to recreate the build process for the company I work for. The product is over 5 years old, is fairly large and complex (well over 200 separate .Net projects as well as database and legacy code), and had grown a fairly complicated build process using a combination of batch files and custom executables.

The main reason for recreating the build process was the difficulty of maintaining it. There were quite a few batch files and it wasn't easy to run. We averaged a build about once a week (builds should be run several times a day, or nightly at a minimum) and the unit tests weren't run very often; when they were, the results often weren't published (it could be months before you find out a unit test no longer works, and then who knows what change broke it).

I started building my own build software using Windows Workflow (see my April 28th post, Windows Workflow and Your Next Build System). Although it probably could have worked, and it was a great way to learn Windows Workflow, I soon remembered one of my life rules: always use the right tool for the job.

I decided to try FinalBuilder instead and boy am I glad I did. FinalBuilder (FB from here on) is a great build tool. They have dozens (maybe hundreds) of different actions (that's what they call activities or tasks) available to use, such as MSBuild, get from VSS, run SQL, update AssemblyInfo, NUnit, send email, etc. They also allow you to write script (either VBScript or JavaScript) that is run during the build. If that isn't enough for you, you can also create your own custom actions using .Net (or other supported languages).

I've looked at several of the build tools available out there today (especially the free ones :), such as MSBuild and NAnt, and the thing that strikes me about these tools is that they seem to be designed for full-time build engineers. I don't have time to figure out how to set up these tools, let alone write a complete build process using them. Perhaps even more frustrating is that I actually did take the time to learn MSBuild at one point and had done some interesting things with it; however, I couldn't tell you the first thing about MSBuild anymore. I wouldn't use MSBuild to create an entire build process for the same reason I wouldn't use Perl to write an entire ERP system (write-only code doesn't work well in a complex, constantly evolving product).

I am not sure whether it would have taken longer to create the build process using another technology (such as MSBuild or batch files), but the big benefit of using FB is the ability to maintain the process months later and, if I'm lucky, to have it maintained by somebody other than myself (I definitely don't want to be the "build" guy).

I'm sure there are other great build tools out there. If you know of one, feel free to leave a comment (I always love to find out about new tools :). I plan on writing another article within the next week or two that has more concrete tips for creating a build process (perhaps even some source code), so stay tuned.