Dec 30
Continuous Delivery
Darrell Mozingo | Build Management | December 30th, 2011

I recently finished reading Continuous Delivery. It’s an excellent book that manages to straddle that “keep it broad to help lots of people yet specific enough to actually give value” line pretty well. It covers testing strategies, process management, deployment strategies, and more.

At my former job we had a PowerShell script that would handle our deployment and related tasks. Each type of build – commit, nightly, push, etc. – worked off its own artifacts that it created right then, duplicating any compilation, testing, or pre-compiling tasks. That eats up a lot of time. I covered how that script generally works in an earlier series of posts.

The book talks about creating a single set of artifacts from the first commit build, and passing those same artifacts through the pipeline of UI tests, acceptance tests, manual testing, and finally deployment. I really like that idea: it cuts down on unnecessary rework, and gives you more confidence that this one set of artifacts is truly ready to go live. Sure, our tasks could call the same function to compile the source or run unit tests, so the output was effectively the same, but there could have been slight differences where the assemblies produced from the commit build weren't quite the same as those in the push build.

I also like how they mention getting automation in your project from day one if you're lucky enough to work on a green-field app. I've worked on production deployment scripts for legacy apps and for apps that weren't in production yet but were already a year or so old. The newer an app is and the less baggage it has, the easier it is to get started, and getting started is the hardest part. Once you have a script that just compiles and copies files, you're 90% of the way there. You can tweak things and add rollback functionality later, but the meat of what's needed is there.

However you slice it, you have to automate your deployments. If you’re still copying files out by hand, you’re flat out doing it wrong. In the age of PowerShell, there’s really no excuse to not automate your line of business app deployment. The faster deliveries, more transparency, and increased confidence that automation gives you can only lead to one place: the pit of success, and that’s a good place to be.

Dec 3

The first post gave a quick overview of what our deployment script does and why you'd want one, the second post went over pre-deployment steps, and the third post in this series covered the actual site's deployment. This post will go over a few of the post-deployment steps we take after publishing our site. Like the last posts, most of this code will probably be pretty self-explanatory.


We make heavy use of StructureMap, NHibernate (w/Fluent NHibernate), and AutoMapper in our system, and those guys have some heavy reflection startup costs. Since it's all done when the app domain starts, that first visit takes a good 30-40 seconds, so we hit each server in our farm to "pre-load" the site for us.

Since the servers are in a farm, we can’t just go to the site’s URL as we’d only get one box – even multiple loads aren’t guaranteed to move you around to them all. To make sure we’re looking at each server, we fiddle with the build server’s hosts file and point it at each web server. We don’t do parallel builds on our build server, so we thankfully don’t have any issues with other build scripts getting tripped up, but you may want to consider that if it’s applicable to your situation.

properties {
	$hosts_file = "C:\Windows\System32\drivers\etc\hosts"
	$servers_production = @( "server1", "server2" )
	$servers_production_ip = @{ "server1" = ""; "server2" = "" }
}

function setup_hosts_file_for($server, $url) {
	$server_ip = $servers_production_ip[$server]
	echo "Setting hosts file to use $server_ip ($server) for $url."
	"$server_ip $url" | Out-File $hosts_file -Encoding Ascii
}

function remove_hosts_file_entries {
	echo "Removing all hosts file entries and reverting to a clean file."
	" localhost" | Out-File $hosts_file -Encoding Ascii
}

function make_sure_we_are_pointing_at($server, $url) {
	$expected_server_ip = $servers_production_ip[$server]
	$ping_output = & ping -n 1 $url
	$ip_pinged = ($ping_output | Select-String "\[(.*)\]" | Select -ExpandProperty Matches).Groups[1].Value
	if ($ip_pinged -ne $expected_server_ip) {
		throw "The site's IP is supposed to be $expected_server_ip, but it's $ip_pinged (for $url). Hosts file problem?"
	}
	echo "Correctly pointing at $ip_pinged for $url."
}

function stop_dns_caching {
	& net stop dnscache
}

function start_dns_caching {
	& net start dnscache
}

The hosts file lets you point any request for a given hostname on your machine to whatever IP you want. So if you want to preload the site on server1, you can add an entry mapping the site's hostname to server1's IP, and you'll always hit that machine. Your load balancing setup might not allow this, though. There's also a method that pings the given URL to make sure it resolves to the proper server, throwing if it doesn't. The last two methods stop/start the DNS caching service in Windows, just to help make sure we're looking at the correct IP for a given URL.
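For illustration, a hosts entry that pins the site to one box might look like this (the hostname and IP here are made up):

```
# C:\Windows\System32\drivers\etc\hosts
# All local requests for www.oursite.com now go to server1's IP:
192.168.1.101    www.oursite.com
```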

With that setup, we can easily manipulate IE through COM to pull up the site:

properties {
	$live_site_text_in_title = "Our cool site"
	$times_to_try_preloading_sites = 50
}

function fire_up_ie {
	return New-Object -Com InternetExplorer.Application
}

function preload_url_on_server($server, $url) {
	setup_hosts_file_for $server $url
	make_sure_we_are_pointing_at $server $url
	$current_attempt_count = 0
	$3_seconds = 3
	$ie = fire_up_ie
	echo "Pulling up $url in the browser."
	$ie.navigate($url)
	while ($current_attempt_count -lt $times_to_try_preloading_sites) {
		pause_for $3_seconds
		$document = $ie.document
		if ($document -ne $null -and $document.readyState -eq "Complete" -and $document.title -match $live_site_text_in_title) {
			$time_taken = ($current_attempt_count + 1) * $3_seconds
			echo "Preloaded $url on $server in about $time_taken seconds."
			break
		}
		$current_attempt_count++
	}
	$ie.quit()
	if ($current_attempt_count -ge $times_to_try_preloading_sites) {
		throw "$url (on $server) couldn't be preloaded after a pretty long wait. WTF?"
	}
}

Working with IE's COM interface is pretty painless in PowerShell. Dynamic languages FTW, aye? We just fire up IE, browse to the URL (which should be pointing at the given server only), and keep checking on IE's progress until the page is fully loaded and the title contains some piece of text we expect. Simple and to the point.

The first snippet in Part 3 of this series showed how we deployed the site. You can see there where we temporarily stop the DNS caching service, pre-load the site on each server as we deploy to it, then reset the hosts file and start the DNS caching service again.

Testing Error Email Generation

We have some basic code to email exceptions out when our app hits one. Nothing fancy. To test that our error emails are getting sent OK, I created an obscure URL in the application that'll just generate a TestErrorEmailException. When our error handler sees that exception, all it does is send the generated error email to a test address rather than the normal one. The build script then logs into its special GMail account and checks for the email. This is by far the chunkiest part of the build script:

properties {
	$email_url = ""
	$error_generation_path = "/SomeObscurePath/GenerateTestErrorEmail/?subject="
	$max_email_check_attempts = 100
}

function wait_for_browser_to_finish($ie) {
	while ($ie.busy -eq $true) {
		pause_for 1 #second
	}
}

function generate_test_error_emails_on($server, $base_url, $error_email_subject) {
	setup_hosts_file_for $server $base_url
	make_sure_we_are_pointing_at $server $base_url
	$error_url = $base_url + $error_generation_path
	$full_error_url = $error_url + $error_email_subject
	$ie = fire_up_ie
	echo "Generating test error email from $full_error_url."
	$ie.navigate($full_error_url)
	wait_for_browser_to_finish $ie
	$ie.quit()
}

function ensure_error_emails_are_working_on($server, $base_url) {
	echo "Ensuring error emails are getting sent out correctly on $server."
	$current_datetime = Get-Date -Format MM_dd_yyyy-hh_mm_tt
	$error_email_subject = "Error_" + $server + "_$current_datetime"
	generate_test_error_emails_on $server $base_url $error_email_subject
	check_email_was_sent $error_email_subject
}

function check_email_was_sent($expected_email_subject) {
	echo "Pulling up $email_url in the browser."
	$ie = fire_up_ie
	$ie.navigate($email_url)
	wait_for_browser_to_finish $ie
	logout_of_email $ie
	echo "Logging in to email."
	$ie.document.getElementById("email").value = $security_user
	$ie.document.getElementById("passwd").value = $security_password
	$ie.document.getElementById("passwd").form.submit()
	wait_for_browser_to_finish $ie
	echo "Looking for test error email."
	$test_error_email = $null
	for ($i = 1; $i -le $max_email_check_attempts; $i++) {
		echo "Attempt #$i checking for the test error email."
		$test_error_email = get_link_containing_text $ie $expected_email_subject
		if ($test_error_email -ne $null) {
			echo "Found the test error email."
			break
		}
		pause_for 10 #seconds
		echo "Refreshing the page after a pause."
		click_link_with_text $ie "Refresh"
	}
	if ($test_error_email -eq $null) {
		throw "Test error email was never received after $max_email_check_attempts attempts. Problem?"
	}
	echo "Pulling up the test error email."
	$test_error_email.click()
	wait_for_browser_to_finish $ie
	echo "Deleting test error email."
	click_link_with_text $ie "Delete"
	logout_of_email $ie
	$ie.quit()
}

function logout_of_email($ie) {
	$signout_link = get_link_with_text $ie "Sign out"
	if ($signout_link -ne $null) {
		echo "Signing out of email."
		$signout_link.click()
		wait_for_browser_to_finish $ie
	}
}

function click_link_with_text($ie, $text) {
	$link = get_link_with_text $ie $text
	$there_are_multiple_links_with_that_text = ($link.length -gt 1)
	if ($there_are_multiple_links_with_that_text) {
		$link[0].click()
	} else {
		$link.click()
	}
	wait_for_browser_to_finish $ie
}

function get_link_with_text($ie, $text) {
	return $ie.document.getElementsByTagName("a") | where { $_.innerText -eq $text }
}

function get_link_containing_text($ie, $text) {
	return $ie.document.getElementsByTagName("a") | where { $_.innerText -match $text }
}

It seriously looks worse than it really is, and most of it is due to navigating around GMail’s interface. So we hit the obscure URL in our app, pass it a subject line for the error email, wait a bit, then log into GMail and check for an email with that same subject line. If we don’t find the email after a waiting period, we blow up the script. Simple as that.

If you know an easier way to do this, I’m all ears!


The two biggest things we do after deploying our site are, for each individual server in the farm, loading it up so all the first-time reflection work gets taken care of, and making sure any errors on the site are getting emailed out correctly. While controlling IE through its COM interface is a lot cleaner and easier with PowerShell, there's still some code involved in navigating around GMail's site. Obviously if you use a different setup for your email, you'll either have to control a different app or hit your mail server directly.

Unfortunately, both of these steps are only helpful if you can navigate to each server in the farm. If your network setup prevents that, they won't do you much good unless you keep clearing your cookies and revisiting the site a bunch of times in hopes you'll hit each server, or something crazy like that.

So while most of this code is straightforward, I hope it'll give you a starting point for your deployment script. Like I said in the beginning: it's a bit painful to initially set up (both creating it and testing it), but we've found huge value in having it in place. It's obviously not as easy as Capistrano, but, meh, it works. Another option for .NET is Web Deploy, a relatively new tool from Microsoft. I haven't had time to get too deep into it, but it may help for your situation.

Good luck!

Nov 24

In the first post I gave a quick overview of what our deployment script does and why you'd want one, then the second post went over pre-deployment steps. This post will go over the actual deployment steps we take to publish our site. Like the last post, most of this code will probably be pretty self-explanatory.

function deploy_and_prime_site {
	stop_dns_caching
	foreach ($server in $servers_production) {
		deploy_site_to $server
		preload_site_on $server
	}
	foreach ($server in $servers_production) {
		ensure_error_emails_are_working_on $server $live_url
	}
	remove_hosts_file_entries
	start_dns_caching
}

This is the function the build target actually calls into. The part you'll care about here is where it loops through the known production servers and deploys the site to each one in turn. The "preloading" of the site, the check for functioning error emails, and the DNS caching stuff are some of the post-deployment steps we take, which I'll discuss in the next post.

IIS Remote Control

Here’s how we control IIS remotely (this is IIS7 on Windows 2008 R2 – not sure how much changes for different versions):

function execute_on_server($target_server, [scriptblock]$script_block) {
	$secure_password = ConvertTo-SecureString $security_password -AsPlainText -Force
	$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $security_full_user, $secure_password
	Invoke-Command -ComputerName $target_server -Credential $credential -ScriptBlock $script_block
}

function stop_iis_on($target_server) {
	echo "Stopping IIS service on $target_server..."
	execute_on_server $target_server { & iisreset /stop }
}

function start_iis_on($target_server) {
	echo "Starting IIS service on $target_server..."
	execute_on_server $target_server { & iisreset /start }
}

The secret sauce to getting this to work is the execute_on_server function. The actual stop & start methods just execute standard iisreset commands (a command line tool that ships with IIS). The top function converts the plain text server username & password in the build script into a SecureString and then a PSCredential object. Not the most secure way to do this, I'm sure (hence the -Force parameter), but it's working for us. After connecting to the remote machine, it executes the given script block with those credentials (like the execute_with_secure_share function from the last post). In order to make this work, though, you'll need to give some lovin' to your build server and web servers:

  • Make sure all boxes have at least PowerShell 2.0 with WinRM 2.0 (which is what allows the remote machine command execution)
  • On each web server, you'll need to run this one-time command from a PowerShell prompt: Enable-PSRemoting
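The credential construction can be tried out on its own. A minimal sketch (the user name and password here are placeholders, not real accounts):

```powershell
# Build a PSCredential from a plain-text password. As noted above, -Force
# acknowledges this isn't the most secure approach.
$plain_password = "pa55w0rd"
$secure_password = ConvertTo-SecureString $plain_password -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ("DOMAIN\deploy_user", $secure_password)

echo "Built credential for $($credential.UserName)."
```

You'd then hand `$credential` to Invoke-Command's -Credential parameter, as in execute_on_server above.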


With that out of the way, the actual deployment part is pretty easy – it’s just copying files after all:

properties {
	$siteWebFolder_name = $solution_name
	$ident_file = "Content\ident.txt"
}

function pause_for($seconds) {
	sleep -s $seconds
}

function deploy_site_to($server) {
	echo "*** Beginning site deployment to $server."
	$compiled_site = "$compiled_site\*"
	$web_share = "\\$server\$share_web"
	$live_site_path = "$web_share\$siteWebFolder_name"
	stop_iis_on $server
	pause_for 10 #seconds, to give IIS time to release file handles.
	execute_with_secure_share $web_share {
		echo "Deleting the existing site files on $server ($live_site_path)."
		delete_directory_with_errors "$live_site_path\*"
		echo "Copying the new site files (from $compiled_site) to $server."
		copy_directory $compiled_site $live_site_path
		echo "Creating ident file at $live_site_path."
		"$server" > "$live_site_path\$ident_file"
	}
	start_iis_on $server
}

Stop IIS, give it a few seconds, copy files, start IIS. Like I said – simple. If your situation can’t allow this for some reason (perhaps you have a more complicated load balancing scheme or whatever), you can expand as needed. We actually deploy several sites and a few console apps at the same time so everything’s in sync. The ident file is a simple way for us to find out which server a user’s on for troubleshooting purposes. We can navigate to the url + /Content/ident.txt and it’ll have the server’s name.


Other than the actual remote manipulation of the servers, which we keep to a minimum (just the IIS stop & start), there's not much to this part of the build either. This code provides a good jumping off point for customization to your setup, as well as some helper methods you can hopefully make use of. The next post will wrap up this series by showing some of the post-deployment steps we take.

Nov 12

In the first post I gave a quick overview of what our deployment script does and why you'd want one. This post will go over some of the pre-deployment steps we take. Most of this code will probably be pretty self-explanatory, but I know just having something to work off of is a huge boost to starting your own, so here ya go.

function modify_web_config_for_production($webConfig)
{
	echo "Modifying $webConfig for production deployment."
	$xml = [xml](Get-Content $webConfig)
	$root = $xml.get_DocumentElement()
	$root."system.web".compilation.debug = "false"
	$xml.Save($webConfig) # without this, the change never hits disk
}

Given the path to a web.config file, this function switches off the debug flag (and makes any other changes you need). Since PowerShell is a dynamic language, you can access XML keys quite easily. You'll need the quotes around system.web since there's a dot in the name, though. Also, if you need access to any of the appSettings keys, you can use something like: $xml.selectSingleNode('//appSettings/add[@key="WhateverYourKeyIs"]').value = "false".
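As a self-contained illustration of both techniques (the XML fragment and key name here are made up), you can try this in a PowerShell prompt:

```powershell
# A made-up web.config-style fragment, parsed straight into an [xml] object.
$config_xml = [xml]@"
<configuration>
  <system.web>
    <compilation debug="true" />
  </system.web>
  <appSettings>
    <add key="SomeFeature" value="true" />
  </appSettings>
</configuration>
"@

# Dotted property access works on element names; quotes are needed for system.web.
$config_xml.configuration."system.web".compilation.debug = "false"

# XPath works too, e.g. to flip a specific appSettings key.
$config_xml.SelectSingleNode('//appSettings/add[@key="SomeFeature"]').value = "false"
```

Against a real file you'd finish with `$config_xml.Save($path)` so the changes land on disk.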

function precompile_site($siteToPreCompile, $compiledSite)
{
	echo "Precompiling $siteToPreCompile."
	$virtual_directory = "/"
	exec { & $tools_aspnetCompiler -nologo -errorstack -fixednames -d -u -v $virtual_directory -p "$siteToPreCompile" $compiledSite }
}

This little beauty precompiles the site (located in the $siteToPreCompile directory, with the results output to the $compiledSite directory) using the ASP.NET compiler. I prefer to copy the actual compiler executable into the project folder even though it’s installed with the Framework. Not sure why. Anyway, $tools_aspnetCompiler can either point locally, or to C:\Windows\Microsoft.NET\Framework\vwhatever\aspnet_compiler.exe. You can also configure the options being passed into the compiler to suit your needs.

function execute_with_secure_share($share, [scriptblock]$command)
{
	try {
		echo "Mapping share $share"
		exec { & net use $share /user:$security_full_user $security_password }
		& $command
	}
	finally {
		echo "Unmapping share $share"
		exec { & net use $share /delete }
	}
}

This is more of a helper method that executes a given script block (think of it as an Action or anonymous code block in C#) while the given share is mapped with a known username and password. This is used to copy out the site, create backups, etc. I'll leave the $security_full_user & $security_password variable declarations out, if you don't mind! We just put them in plain text in the build script (I know, *gasp!*).
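If script blocks are new to you, they're easy to play with on their own. A toy example:

```powershell
# A script block is an anonymous, passable chunk of code - roughly a C# Action
# or lambda. Assign it to a variable, then invoke it later with &.
$greet = { param($name) "Hello, $name" }

$message = & $greet "build server"
echo $message   # prints "Hello, build server"
```

That's all execute_with_secure_share does with its $command parameter: it invokes the block with `& $command` while the share is mapped.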

properties {
	$share_web = "wwwroot"
	$servers_production = @("server1", "server2")
	$live_backup_share = "\\server\LiveSiteBackups"
	$number_of_live_backups_to_keep = 10
}

function archive_current_live_site
{
	$current_datetime = Get-Date -Format MM_dd_yyyy-hh_mm_tt
	$one_of_the_production_servers = $servers_production[0]
	$web_share_path = "\\$one_of_the_production_servers\$share_web"
	echo "Archiving $web_share_path"
	$full_backup_path = "$web_share_path\*"
	$full_archive_file = "$live_backup_share\$"
	execute_with_secure_share $web_share_path {
		execute_with_secure_share $live_backup_share {
			exec { & $tools_7zip a $full_archive_file $full_backup_path }
		}
	}
}

function delete_extra_live_site_backups
{
	execute_with_secure_share $live_backup_share {
		$current_backups = Get-ChildItem $live_backup_share -Filter "*.zip" | sort -Property LastWriteTime
		$current_backups_count = $current_backups.Count
		echo "Found $current_backups_count live backups out there, and we're aiming to keep only $number_of_live_backups_to_keep."
		$number_of_backups_to_kill = ($current_backups_count - $number_of_live_backups_to_keep)
		for ($i = 0; $i -lt $number_of_backups_to_kill; $i++) {
			$file_to_delete = $current_backups[$i]
			$extra_backup = "$live_backup_share\$file_to_delete"
			echo "Removing old backup file: $extra_backup"
			delete_file $extra_backup
		}
	}
}

This pair of methods creates a backup of the current live site and makes sure we're only keeping a set number of backups from previous runs, to keep storage and maintenance in check. Nothing too complicated. To create the backup, we just farm out to 7-Zip to compress the directory, which is run within nested execute_with_secure_share calls from above, mapping the web server file share and the backup file share. Likewise, the second method just gets a count of zip files in the storage directory and deletes the oldest ones until the total gets down to a specified count.


That’s the basics for what we do pre-deployment. Again, not really that complicated, but it can give you a starting point for your script. I’ll go over our actual deployment steps in the next post, then follow that up with some post-deployment goodness. I know, you can’t wait.

Sep 24

Pushing to production with a script? Crazy talk, right? Well, maybe not. Sure, there are lots of insane corporate setups out there where a script might not completely work, but for the vast majority of companies out there, this is totally within the realm of possibility. Maybe you want to save some steps when you’re deploying, or maybe you want to stop forgetting some of those crazy “best practices” people always talk about (building in release mode? turning off debugging in the web.config? pre-compiling sites?). Whatever the reason, a deployment script is a great solution.

What it does

Our current deployment script will:

  1. Get a completely clean copy of the code base from source control
  2. Build it in release mode
  3. Run all unit tests, slow/integration tests, and UI tests
  4. Switch off debugging in the web.config
  5. Pre-compile the site with the ASP.NET compiler
  6. Archive the current live site to a backup server, just in case (keeping a few previous versions as well)
  7. Deploy the latest copy of our 3rd party tools to each server
  8. XCopy deploy the site to each server in our cluster (taking down IIS first and letting our load balancer get users off that server)
  9. Visit the site with a browser to do all the first time pre-load reflection stuff (NHibernate, AutoMapper, StructureMap, etc)

    1. It’ll actually change its local DNS hosts file to make sure it’s looking at each server in the cluster too, so that each one is “primed”
  10. Make sure our error emails are working by visiting a test URL that throws an exception (therefore creating an error email), then logging into a special email account and making sure that email was received

OK, so this script takes a while to run (with all the tests taking up a majority of the time), but we gain a lot. A single click in TeamCity kicks the whole thing off, and we’re guaranteed little to nothing is broken in the system thanks to all the tests (unit, integration, and UI), that there’s backup copies if something does happen, and that everything is compiled/configured for production so we’re not missing any easy performance gains. I’d say that’s a win.

How it’s run

We don’t have this running in any automatic fashion; instead we run the target by hand from our build server whenever we’re ready. Our build server lets us easily schedule the “build” whenever we need to, though, so we can run it late at night and not disrupt access. Our test servers are also being set up right now, so we’ll probably continuously deploy to those when they’re ready (twice a day? every check-in?).

Fail safes

There honestly aren’t a whole lot. As we come across certain failures we’ll add checks to keep them from cropping back up, but I didn’t want to complicate the script with all sorts of edge case checking for situations it may never hit. You need to apply the KISS and YAGNI principles to your build scripts just like your code. We do a few operations in try/catch blocks to make sure servers are kept down if they’re not deployed to correctly, or our 3rd party tools get cleaned up properly, etc., but not many.

I’m sure that’ll unsettle many of you, but a script like this is going to be highly customized to your environment, and your environment might have a higher chance of certain parts of the script failing, so you’ll need to guard against that. I’d highly recommend starting simple and only building out as situations arise though.

Build server bonuses

We use TeamCity as our build server, so I can’t speak about the others (CC.NET, Hudson, etc) and how much or little of the following benefits they offer.

The two biggest benefits we get, virtually for free, with using TeamCity to run our deployment script include:

  • Auditing – you can see who’s run the script, and when
  • Tracking – it’ll show you, for each run, which changes were included in that deployment down to a diff of each file and source control comments
    • It’ll also show which changes are pending for the next run
    • We don’t use a bug tracker that’s supported by TeamCity, but theoretically it can link right to each bug fix that’s included in each deployment

What’s next?

I’m going to show off parts of our build script and how we handle different situations in the next blog post(s). I’m not sure how many it’ll take or how deep I’ll go since much of it is situation specific, but I’ll definitely get you started on the road with your own.

As a heads up, this will all be written in PowerShell. We’ve moved our build script setup to it and it’s what made this deployment scenario possible.


Manual deployment sucks. Automated deployment rocks.

If there’s any way you can script your deployment (or even parts of it), I’d recommend it in a heartbeat. It’s a bit bumpy getting it up and running, I won’t lie, but it’s a huge help once it’s stable and working. I’ll take a look at some of the basic pre-deployment steps we take in the next post.

Apr 27

I finally got around to implementing help screens on our site recently. We needed a system that would enable our domain peeps to update the help text directly with no intervention from us, along with being easy to implement and maintain on our end. I ended up using flat HTML files and a jQuery modal dialog (Colorbox), which has support for asynchronously loading those HTML files from disk when needed. The one thing we didn’t want to do with this solution was give our domain peeps production server access or the responsibility of keeping those HTML files up to date on the servers – I could only imagine the chaos that’d ensue from that.

Solution: use our build script & build server to handle it for us.

We gave our domain peeps commit access to the repository – thankfully we’re still on SVN, as I’m sure their heads will explode when we switch to a DVCS. This provides nice versioning and accountability features if someone messes up (imagine that), and gives us a hook for the build server. All help files are contained in a hierarchy under a folder that’s appropriately named HelpFiles. I checked out just that folder from the source tree on their machines and gave them a quick commit/update spiel. We created empty HTML files for them, and they went about their way filling them all in.

Now on to the more interesting part, our build script. As I’ve mentioned previously, we’re using psake. Here’s the relevant properties and task:

properties {
	$scm_hidden_dir = ".svn";
	$executing_directory = new-object System.IO.DirectoryInfo $pwd
	$base_dir = $executing_directory.Parent.FullName
	$source_dir = "$base_dir\src"
	$build_dir = "$base_dir\build"
	$build_tools_dir = "$build_dir\tools"
	$share_web = "wwwroot"
	$servers_production = @("server1", "server2")
	$security_user = "user_with_write_access"
	$security_password = "pa55w0rd"
	$tools_robocopy = "$build_tools_dir\robocopy\robocopy.exe"
	$help_folder = "HelpFiles"
	$help_local_dir = "$source_dir\$project_name.Web\$help_folder"
	$deployTarget_help = "$project_name\$help_folder"
}

task publish_help {
	foreach ($server in $servers_production) {
		$full_server_share = "\\$server\$share_web"
		exec { & net use $full_server_share /user:$security_user $security_password }
		& $tools_robocopy $help_local_dir $full_server_share\$deployTarget_help /xd $scm_hidden_dir /fp /r:2 /mir
		# See page 33 of the help file in the tool's folder for exit code explanation.
		if ($lastexitcode -gt 3) {
			Exit $lastexitcode
		}
		exec { & net use $full_server_share /delete }
	}
}

There’s an array of production server names, which we iterate over and use the net command built into Windows to map its wwwroot share using a different username & password than the current user (this allows the build server to run as an unprivileged user but still access needed resources).

Then we use the surprisingly awesome Robocopy tool from Microsoft, which is basically xcopy on steroids, to copy out the help files themselves. The /xd flag excludes the hidden .svn folders, /fp displays full path names in the output (for display in the build output from TeamCity later), /r tells it to only retry failed files twice (as opposed to the default of one million times!), and /mir tells it to mirror the source directory tree to the destination, including empty folders and removing dead files.

We can’t use psake’s built-in exec function to run Robocopy, as exec treats any non-zero return code as a failure. Of course, just to be different, Robocopy only fails if its return code is above 3 (the code is a bit field: 1 = one or more files copied successfully, 2 = extra files or folders detected, and 3 = both). So we check the return code ourselves and exit if Robocopy failed. We then delete the share, effectively making the machine forget the username/password associated with it.
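Since the exit code is a bit field, that check could also be pulled into a tiny helper (a sketch; the threshold of 3 matches the task above):

```powershell
# Robocopy exit codes are bit flags: 1 = files copied, 2 = extra files or
# directories detected, 4 = mismatches, 8 = some files failed to copy,
# 16 = fatal error. Codes 0-3 are success; anything above 3 means trouble.
function robocopy_succeeded($exit_code) {
	return $exit_code -le 3
}
```

Then the task body reads `if (-not (robocopy_succeeded $lastexitcode)) { Exit $lastexitcode }`, which makes the intent a bit clearer than a bare numeric comparison.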

With that done, we created a new build configuration in TeamCity and had it check the repository for changes only to the help file directory by adding +:src/Project.Web/HelpFiles/** to the Trigger Patterns field on the Build Triggers configuration step.

That’s pretty much it. Our domain peeps have been pretty receptive to it so far, and they love being able to edit the help files, commit, and see them live only a minute or two later. We loved not having to pull all that text from the database on each page load and not having to create editing/viewing/versioning/etc tools around such a process. It’s a win-win.

Apr 2

A while back I wrote a small series on creating a basic build script and setting up a build server (part 1, part 2, and part 3). I used NAnt and CruiseControl.NET in that series, but alluded to a few other options for each. I recently got around to switching our build script from NAnt to psake, which is written in PowerShell, and switching our build server from CruiseControl.NET to JetBrains’ TeamCity. I’ll give a quick overview of our new build script here, which I’ll build on in future posts showing a few of the more interesting things that suddenly became much easier with this setup, and in a few cases, possible at all.

To start with, you’ll want to make sure you have the latest versions of PowerShell (2.0) and psake. Here’s the basics of our build script:

$ErrorActionPreference = 'Stop'
Include ".\functions_general.ps1"

properties {
	$project_name = "MainApplication"
	$build_config = "Debug"
}

properties { # Directories
	$scm_hidden_dir = ".svn";
	$executing_directory = new-object System.IO.DirectoryInfo $pwd
	$base_dir = $executing_directory.Parent.FullName
	$source_dir = "$base_dir\src"
	$build_dir = "$base_dir\build"
	$tools_dir = "$base_dir\tools"
	$build_tools_dir = "$build_dir\tools"
	$build_artifacts_dir = "$build_dir\artifacts"
	$build_output_dir = "$build_artifacts_dir\output"
	$build_reports_dir = "$build_artifacts_dir\reports"
	$transient_folders = @($build_artifacts_dir, $build_output_dir, $build_reports_dir)
}

properties { # Executables
	$tools_nunit = "$tools_dir\nunit\nunit-console-x86.exe"
	$tools_gallio = "$tools_dir\Gallio\Gallio.Echo.exe"
	$tools_coverage = "$build_tools_dir\ncover\ncover.console.exe"
	$tools_coverageExplorer = "$build_tools_dir\ncover_explorer\NCoverExplorer.Console.exe"
}

properties { # Files
	$solution_file = "$source_dir\$project_name.sln"
	$output_unitTests_dll = "$build_output_dir\$project_name.UnitTests.dll"
	$output_unitTests_xml = "$build_reports_dir\UnitTestResults.xml"
	$output_coverage_xml = "$build_reports_dir\NCover.xml"
	$output_coverage_log = "$build_reports_dir\NCover.log"
	$output_coverageExplorer_xml = "$build_reports_dir\NCoverExplorer.xml"
	$output_coverageExplorer_html = "$build_reports_dir\NCover.html"
}

properties { # Skip coverage attributes
	$skipCoverage_general = "Testing.SkipTestCoverageAttribute"
}

task default -depends unit_test_coverage

task clean {
	$transient_folders | ForEach-Object { delete_directory $_ }
	$transient_folders | ForEach-Object { create_directory $_ }
}

task compile -depends clean {
	exec { msbuild $solution_file /p:Configuration=$build_config /p:OutDir=""$build_output_dir\\"" /consoleloggerparameters:ErrorsOnly }
}

task unit_test_coverage -depends compile {
	exec { & $tools_coverage $tools_nunit $output_unitTests_dll /nologo /xml=$output_unitTests_xml //reg //ea $skipCoverage_general //l $output_coverage_log //x "$output_coverage_xml" //a $project_name }
	exec { & $tools_coverageExplorer $output_coverage_xml /xml:$output_coverageExplorer_xml /html:$output_coverageExplorer_html /project:$project_name /report:ModuleClassFunctionSummary /failMinimum }
}

As the second line alludes to, you can break functions out into separate files and include them back into the main one. Here’s functions_general.ps1:

function delete_directory($directory_name) {
	Remove-Item -Force -Recurse $directory_name -ErrorAction SilentlyContinue
}

function create_directory($directory_name) {
	New-Item $directory_name -ItemType Directory | Out-Null
}

This script will build our project and run the unit tests, producing a coverage report we can display later inside TeamCity. Much of this maps loosely one-to-one against the NAnt version discussed in my past series, and there are plenty of articles/posts online explaining this stuff in much more detail than I can here. Note that all the pieces that can “fail” the script are wrapped in exec, which will execute the code block (i.e. lambda/anonymous delegate) and basically alert the build server if it fails. Not too difficult, at least for now 🙂
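For reference, psake’s exec helper boils down to something like the following – a simplified sketch, not psake’s exact source – invoking the script block and throwing when the external command sets a non-zero exit code, which is what fails the build:

```powershell
# Simplified sketch of psake's exec helper: run an external command
# and turn a non-zero exit code into a terminating PowerShell error.
function exec([scriptblock]$cmd, [string]$errorMessage = "Error executing command: $cmd") {
    & $cmd
    if ($lastexitcode -ne 0) {
        # psake catches this and reports the build (and thus the CI run) as failed
        throw $errorMessage
    }
}
```

That’s why plain cmdlet failures need $ErrorActionPreference = 'Stop' at the top of the script, while external tools like msbuild and nunit get wrapped in exec.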

As for getting this to work with TeamCity: specify the runner as a command line runner and point it at a batch file with these contents:

@echo off
powershell -Command "& { Set-ExecutionPolicy Unrestricted; Import-Module .\build\tools\psake\psake.psm1; $psake.use_exit_on_error = $true; Invoke-psake '.\build\build.ps1' %*; Remove-Module psake}"

You’ll be golden. This batch file allows the build server to run the script (perhaps setting unrestricted execution isn’t the smartest from a security standpoint, but oh well), sets up the psake environment, tells psake to surface failures in a way that TeamCity can pick up on, executes your build script, and tears down the psake environment. It looks a little complicated, but it’s just a bunch of smaller commands strung together on one line, and you shouldn’t have to look at it again.

Dec 31

In Part 2, I walked you through setting up a build script for your solution. Now we’ll go through setting up a continuous integration server using Cruise Control.NET.

After attending JP’s Nothin’ But .NET course, my outlook on build scripts, CI servers, and what each is capable of doing for a project has been completely altered. I’m going to finish this series for the sake of completeness, but I’ll be putting up a post about what I learned at some point in the near future (and I don’t want to spill too much, as I know JP is planning on releasing a lot of that stuff this year).

CC.NET Server Setup

Start by grabbing the latest version of Cruise Control .NET and installing it using all the defaults. Assuming everything goes OK, you should see an empty dashboard when browsing to http://localhost/ccnet.

CC.NET Config

I’ll go ahead and assume you’re using Subversion for source control, though switching this example to Visual Source Safe, CVS, SourceVault, or whatever you happen to be using, isn’t hard at all.

The ccnet.config file specifies details for all the projects your build server should be building. Each project gets a project tag, which specifies the name and URL for the project:

<project name="MyExtensions" webURL="http://localhost/ccnet"></project>

Inside the project tag you specify when/where/how the build server should get the source, how to label successful builds, what to do with the source once it’s retrieved, who to notify of successes and failures, and much more. A full list of possible tags can be found on the main CC.NET documentation site, but we’ll walk through a basic setup. One thing to note: you must restart the CC.NET process every time you update this config file, otherwise the changes won’t take effect.

Start by defining a working and artifact directory, where the actual source code and CC.NET reports will live, respectively. I prefer to keep them separated out in their own folders for clarity:
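The XML for this step didn’t survive the formatting, but in CC.NET these are just two elements directly inside the project tag. Assuming the C:\BuildServer layout referenced later in this post (the exact paths are my assumption; the element names are standard CC.NET configuration), it would look something like:

```xml
<workingDirectory>C:\BuildServer\Source\MyExtensions</workingDirectory>
<artifactDirectory>C:\BuildServer\Artifacts\MyExtensions</artifactDirectory>
```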


Next, in the sourcecontrol section, you’ll specify all the basic information Cruise Control needs to access and check out your repository. As I previously mentioned, there are lots of source control providers bundled with Cruise Control, and even more available on the net. The executable path below is pretty standard – it’s where the normal SVN installer puts it (and I usually check in the installer with the rest of the CC.NET files):

<sourcecontrol type="svn">
	<executable>C:\Program Files\Subversion\bin\svn.exe</executable>
</sourcecontrol>

The trigger section will define when Cruise Control should kick off the build process. I’ve defined two below, one every night at 10PM, and one that will poll Subversion every 2 minutes for a fresh commit and begin only if it finds one:

<triggers>
	<intervalTrigger name="continuous" seconds="120" />
	<scheduleTrigger time="22:00" buildCondition="ForceBuild" />
</triggers>

The tasks section tells Cruise Control what to do once it gets a copy of the source code. Here we’ll use the built-in NAnt task, which needs a base directory to execute in and a path to the NAnt executable (which we’ve conveniently committed right along with the source). With no target defined for the NAnt build, it’ll run the default one, which for us is build-server:
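The task XML itself was lost here; a minimal version of CC.NET’s built-in NAnt task, with paths assumed to match the checked-in tools layout from the earlier posts in this series, would look something like:

```xml
<tasks>
	<nant>
		<baseDirectory>MyExtensions</baseDirectory>
		<executable>MyExtensions\Internal\Tools\NAnt\NAnt.exe</executable>
	</nant>
</tasks>
```

With no targetList element, NAnt falls back to the build file’s default target – build-server in our case.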


The publishers section specifies, among other things, what to do with all the build script’s output, and who to notify of build successes and failures.

For our config, we’ll use the merge tag underneath the publishers section to tell Cruise Control to combine all of our xml output files, including the ones from NCover and NAnt itself:

	<!-- All file paths are relative to the WorkingDirectory node at the top of the script. -->
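The file list under that comment is missing above; a typical merge block, with report file names assumed to match what the NAnt script writes out, would be something like:

```xml
<merge>
	<files>
		<file>MyExtensions\bin\Reports\MbUnit-Report.xml</file>
		<file>MyExtensions\bin\Reports\NCoverExplorer-Report.xml</file>
	</files>
</merge>
```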

We’ll also tell Cruise Control where to output the complete build report from each build, which it uses for display on its web page (so we’ll store them in C:\BuildServer\Artifacts\MyExtensions\BuildReports):

<!-- Relative to the ArtifactDirectory node at the top of the script. -->
<xmllogger logDir="BuildReports" />

The last tag we need, again underneath the publishers section, is the email tag. It’s pretty self explanatory, defining an email server and address to mail from/to. One point of note: the user names defined in the users section must match the user names from Subversion:

<email from="" mailhost="" includeDetails="true">
	<users>
		<user name="svnUsername" address="" />
	</users>
	<groups />
</email>

Extra Files

There’s also a dashboard.cfg file, which specifies how the web site displays build information for all the projects on the server (an example of which is shown on the right). I customized this one to include only the needed report links and ignore others. This file, along with a few needed images, XSL formatting files, and instructions on where they should all be copied, is included in the download at the end of the post.


The previous two articles gave you an overview of setting up a build script and continuous integration server, and actually walked through setting up a very simplistic build script for your company’s possible extension/utility library. This article gives you a quick rundown of setting up Cruise Control .NET to run that build script after getting source updates, and emailing any needed developers about failures.

This is by no means complete, only an introduction to get you started. Windows and web based projects are totally different, and when you get into running nightly integration/smoke tests, production deployment, product packaging, etc., you can imagine how it gets pretty complicated. The best advice I can give for these situations is to look at popular open source products to get ideas. For example, Subtext has some awesome automation set up in both its build script and build server configuration. Definitely worth a gander.

The completed skeleton project setup with this build server configuration and everything else you’ll need, can be downloaded here.

Sep 26

In Part 1 I talked, quite generally, about what automated builds and continuous integration servers are. In this part I’ll walk you through setting up a simple automated build script for your company’s extension library.

Setting Up Your Project

For a while now I’ve been creating an Internal folder under the main project, which has folders for the tools (NAnt, MbUnit, etc.), documentation (if needed), libraries, and so on. This has been working out well, but I’m probably going to switch to the method used by many open source projects, where the top level directory has src (for your actual source code), lib (for reference assemblies), and bin (for tools) folders. See the image at the right for my current layout.

Notice that those tools, NAnt/MbUnit/NCover/etc, are actually checked into the project. They’re not sitting in my Program Files directory or on some network share. Each project has a copy of all the tools it needs (and everything those tools need to run), which enables not only the build server to pull down everything it needs from source control, but new developers as well. One checkout command and they’re good to build and run the project. This is definitely a time saver, and, if nothing else, I highly recommend implementing this practice or one similar.

For reference, I’ll be using NAnt 0.85 (available here), MbUnit 2.4.197 (available here), NCover 1.5.8 (one of the last free versions available before they became a commercial product – and while this version doesn’t support some of the newer stuff in C# 3 as their commercial version does, it’ll work for our purposes – available here), and NCoverExplorer (which is now also commercial and integrated into NCover, but I have the latest free copy available here).

The Build Script

Create a file in the root of your project to hold the actual NAnt configuration, which will build your project and run its unit tests. I usually name the file

A couple of quick pointers for working with NAnt:

  1. The word artifacts, as in most build systems, refers to anything produced by the build system itself, such as reports, executables, installation files, documentation, etc.
  2. Variable declaration & use:
    <property name="build.dir" value="bin" />
    <delete dir="${build.dir}" />
  3. Method declaration (normally called targets):
    <target name="compile" description="Compiles the project using MSBuild."></target>
  4. Outputting text to the screen:
    <echo message="Outputting this message to the screen." />

You start a NAnt build script with <project> tags, which specify the project name and the default target (method) to run when one isn’t specified by the calling application:

<project name="MyExtensions" default="build-server">

Now for the meat of the build script. Let’s start off with a few basic parameter declarations:

<property name="build.project" value="MyExtensions" />
<property name="build.dir" value="${build.project}\bin" />
<property name="build.config" value="Release" />
<property name="build.fullPath" value="${build.dir}\${build.config}" />
<property name="build.toolPath" value="C:\WINDOWS\Microsoft.NET\Framework\v3.5\msbuild.exe" />
<property name="tools.dir" value="${build.project}\Internal\Tools" />
<property name="build.testBuildDir" value="${build.project}.UnitTests\bin" />
<property name="reports.dir" value="${build.dir}\Reports" />
<property name="reports.ncover" value="${reports.dir}\NCover-Report.xml" />
<property name="build.outputPath" value="\\fileServer\Assemblies\MyExtensions" />

These specify the path to the build directory, where the MsBuild executable is on the machine (which we’ll use in a later section), where the tools are located, and where to output various artifacts. All of these paths are relative to where your build script is located, so if you placed it in your root folder with your Visual Studio solution file, these paths should work out.

Next we’ll specify four targets (methods): two that act as convenience targets calling out to other targets, one that cleans the current build artifacts, and a last one that compiles the project using the Visual Studio solution file:

<target name="build-server" depends="clean, compile, unitTests, ncoverexplorer-report, publishOutput"
	description="In addition to the normal full build, it copies the solution output to a specified network share." />
<target name="full-build" depends="clean, compile, unitTests, ncoverexplorer-report"
		description="Does a full build of the project and runs unit tests." />
<target name="clean" description="Destroys the directory where all assemblies/reports are generated.">
	<delete dir="${build.dir}"
		failonerror="false" />
	<delete dir="${build.testBuildDir}"
		failonerror="false" />
</target>
<target name="compile" description="Compiles the project using the MSBuild executable.">
	<echo message="Using MSBuild to build configuration: ${build.config}" />
	<exec program="${build.toolPath}"
		commandline="${build.project}.sln /p:Configuration=${build.config} /nologo /noconsolelogger /noautoresponse" />
</target>

Notice that the build-server and full-build targets use the depends attribute, which will call out to each of the specified targets, in order. The two targets are identical except for the last dependency, publishOutput. Discussed below, this target will copy the library’s build output to our file share for everyone to access. Since we only want that to happen on the build server, and not when the script is run locally, the build server gets its own entry target.

The clean target just deletes the bin directory if it exists, and the compile target will call out to the actual MsBuild.exe to compile the solution.

There are built-in NAnt tasks (or available in the NAnt Contrib project) that will compile solutions and do a few of the other tasks that I’m doing by hand, such as running NUnit and building installers. I prefer this method, though, for more control over what’s getting called and less breakage when upgrading various tools.

OK, so I go and say that, and now I’m showing you the unitTests target, which uses a custom NCover task. I made an exception for this step, since NCover normally requires a special COM object to be registered before it’s run, which I had no interest in doing through a script. The custom task takes care of all that:

<target name="unitTests" description="Runs all needed unit tests with MbUnit and checks coverage with NCover.">
	<mkdir dir="${reports.dir}"
		unless="${directory::exists(property::get-value('reports.dir'))}" />
	<!-- Call NCover, which will call MbUnit to run the tests.
		While MbUnit runs, NCover does its work.
			- To add additional unit test libraries, add the full path to the unit test DLL
			   at the end of the commandLineArgs attribute, separating it with a space
			   and being mindful of the ${build.config} variable.
			   Do NOT wrap this line, as NCover will fail.
			- To add a new assembly you want to check coverage on, add the assembly
			   name at the end of the assemblyList attribute, separating them with a
			   semi-colon.  -->
	<ncover program="${tools.dir}\NCover\NCover.Console.exe"
			commandLineArgs="/report-folder:${reports.dir} /report-name-format:MbUnit-Report /report-type:Xml ${build.testBuildDir}\${build.config}\${build.project}.UnitTests.dll" />
</target>

As the comment says, the NCover task will set itself up as needed, then call MbUnit to run through the unit tests, while it basically keeps an eye on what parts of your code are getting called. NCover then produces a report listing each function point (usually equal to a line in your code) that was hit while the unit tests ran. More function points being called == higher code coverage percentage.

This next target will call out to NCoverExplorer, which simply takes in the NCover report made in the previous target and generates a report of its own for use in its GUI app, along with a nice little HTML report for display in CruiseControl.NET’s interface later on:

<target name="ncoverexplorer-report" description="Produces a condensed report in XML format from NCover.">
	<exec program="NCoverExplorer.Console.exe" basedir="${tools.dir}\NCoverExplorer">
		<arg value="/xml:${reports.dir}\NCoverExplorer-Report.xml" />
		<arg value="/html:${reports.dir}\NCoverExplorer-Report.html" />
		<arg value="/project:&quot;${build.project}&quot;" />
		<!-- Minimum coverage for a "passed" test in %. -->
		<arg value="/minCoverage:95" />
		<!-- Show the highest level of detail in the report. -->
		<arg value="/report:5" />
		<arg value="${reports.ncover}" />
	</exec>
</target>

Now we simply copy the project’s output (or in our case, the .dll from the extension library) to an output directory. I usually have it copy it to a commonly accessible file share for easier access:

<target name="publishOutput" description="Publishes the solution's output by copying it to a specified directory.">
	<copy todir="${build.outputPath}" overwrite="true">
		<fileset basedir="${build.fullPath}">
			<include name="${build.project}.dll" />
			<include name="${build.project}.pdb" />
		</fileset>
	</copy>
</target>

An optional last step is to create a batch file in the root of the project which simply calls out to the NAnt executable, passing your new build file in as a parameter. This batch file can then be run to kick off the full build script by calling build.bat full-build:

@MyExtensions\Internal\Tools\NAnt\NAnt.exe %*

Which runs through and results in a nice little “BUILD SUCCEEDED” message:

Build Script

Gives me the warm and fuzzies every time.

Well, that’s pretty much it. A very basic build script, but it gets the job done. I’d recommend poking around the build scripts of some of the more popular open source projects to get a better idea of what these scripts are really capable of automating for you. Take a look at Ninject’s for building a public framework that targets different platforms, or Subtext’s for building a website solution.

A skeleton project setup with this build script, complete with the needed tools and everything, can be downloaded here.

In Part 3 I’ll go over setting up a basic build server using Cruise Control.NET. The build server basically just calls out to this build script, so, thankfully, the bulk of the work is already done.

Aug 28

Ah yes, automated build scripts and continuous integration servers. They form the foundation of any software project, or rather they should, but how would one go about setting them up? Before we get to that, let’s differentiate a little first.

Build Scripts

These are simply scripts that another program parses and executes to build your project, usually doing everything from wiping your build directory, to running unit and integration tests, to possibly creating and destroying test databases. Build scripts can range from simple batch files to more complex NAnt scripts.

Actually, you may not realize it, but you’re probably using build scripts already. Starting with Visual Studio 2005, MSBuild has been used behind the scenes to automatically build your solution when you hit Build -> Build Solution. MSBuild can also be used independently in much the same way as NAnt scripts, and in fact many people consider these two build systems to be the most mature/robust for the .NET environment. They both consist of a lot of XML, though, so put on your goggles before taking a gander at any examples. There’s also the Boo Build System (though I think it’s been renamed to Bake due to its original initials), which is based on the Boo language, psake, based on PowerShell, FinalBuilder for a graphical approach, and rake, built on Ruby, among many others.

So build scripts are read by a build system and executed. Complicated batch scripts, basically. They can be run locally (many people actually opt to run these scripts instead of using Visual Studio’s build command once they have a good script set up), or they can be run automatically by other programs. I haven’t gotten to the point of replacing Visual Studio’s build command yet, but I can see its benefits.

Continuous Integration Servers

These little beauties generally run on their own box, and can either poll your source code repository (in whatever form it may come, be it Visual Source Safe, Subversion, Git, etc.) or run on a schedule, basically kicking off your build script whenever it sees changes. For instance, if you check in an update, the build server will see that update, clean its local copy of source code, do a full update of the source code locally, then run the build script you normally run on your box, building the code and running all sorts of tests. It can then go a step further and start copying the output to a staging server for your customers or testing folks to take a sneak peek at.

Continuous Integration (CI) servers come in quite a few flavors. One of the more popular in the .NET world is Cruise Control .NET (CC.NET), though it too has a heavy reliance on XML. JetBrains (the guys that make ReSharper) have released TeamCity as a free download (for up to 20 user accounts and build configurations, and 3 build agents), which has an awesome web interface and lets you get a server setup in no time. It has built in support for quite a few features, and even comes with a plug-in for Visual Studio that lets you run a fake build locally on your machine before doing a code check-in. There are quite a few other CI servers out there, but these are the only two I’ve had time to play around with.

Setting Up a Build Script and CI Server

New with C# 3.0 come extension methods, which I’m sure everyone’s heard of, and I’m equally sure that everyone has a small collection of handy ones in some sort of extension library. This library is probably shared across projects, and any developer wishing to use it in their project needs to do a get-latest from the source code repository, build the solution, find the compiled assembly on their machine, and copy it into their project. This repeats whenever they want to update their project’s copy, too.

Seems like a lot of steps just to use or update the library, huh? Let’s tidy that up a little, by:

  1. Setting up a build script using NAnt, which will:
    • Clean the /bin folder
    • Do a full recompile of the source code
    • Run FxCop to check for out of place coding standards
    • Run MbUnit (my testing framework of choice)
    • Run NCover (using the latest freely available copy)
    • Run NCoverExplorer, which will generate a neat little XML file you can use to graphically see your code coverage (again, using the latest freely available copy)
    • Be able to run locally on each developer’s machine, if they so choose
  2. Setting up a continuous integration server using Cruise Control .NET, which will:
    • Both periodically poll the source control server for any new commits, along with just running at a set time every night
    • Clean its source code copy and run an update from the source control repository
    • Run the build script previously created
    • Email any developer that checked in code during this run with an update on the fail/pass status of the build
    • Allow any developer running a handy-dandy desktop app to instantly see the status of the build server (success, failure, building, etc)

Alright, alright, so this might not seem like it’ll really tidy up anything right now, just add a crapload of work to our plates, but trust me, it’s not as bad as it looks. A lot of this can be heavily templated across projects too, so once you cut your teeth on it, it’s tremendously easier to set up again going forward.

In Part 2, I’ll talk about setting up the build script using NAnt.