Dec 31

In Part 2, I walked you through setting up a build script for your solution. Now we’ll go through setting up a continuous integration server using Cruise Control.NET.

After attending JP’s Nothin’ But .NET course, my outlook on build scripts, CI servers, and what each is capable of doing for a project has been completely altered. I’m going to finish this series for the sake of completeness, but I’ll be putting up a post about what I learned at some point in the near future (and I don’t want to spill too much, as I know JP is planning on releasing a lot of that stuff this year).

CC.NET Server Setup

Start by grabbing the latest version of CruiseControl.NET and installing it using all the defaults. Assuming everything goes OK, you should see an empty dashboard when browsing to http://localhost/ccnet.

CC.NET Config

I’ll go ahead and assume you’re using Subversion for source control, though switching this example to Visual SourceSafe, CVS, SourceVault, or whatever you happen to be using isn’t hard at all.

The ccnet.config file specifies details for all the projects your build server should be building. Each project gets a project tag, which specifies the name and URL for the project:

<project name="MyExtensions" webURL="http://localhost/ccnet"></project>

Inside the project tag you specify when/where/how the build server should get the source, how to label successful builds, what it should do with the source once it has it, whom to notify of successes and failures, and much more. A full list of possible tags can be found on the main CC.NET documentation site, but we’ll walk through a basic setup. One thing to note: you must restart the CC.NET service every time you update this config file, otherwise the changes won’t take effect.
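Before diving into each piece, here’s a rough skeleton of the project block we’ll be filling in over the rest of this post, just so you can see how everything nests (the element names are standard CC.NET tags; the values are the ones we’ll use below):

```xml
<cruisecontrol>
	<project name="MyExtensions" webURL="http://localhost/ccnet">
		<!-- Where the checked-out source and the CC.NET reports live -->
		<workingDirectory>C:\BuildServer\Projects\MyExtensions</workingDirectory>
		<artifactDirectory>C:\BuildServer\Artifacts\MyExtensions</artifactDirectory>

		<sourcecontrol type="svn"><!-- covered below --></sourcecontrol>
		<triggers><!-- covered below --></triggers>
		<tasks><!-- covered below --></tasks>
		<publishers><!-- covered below --></publishers>
	</project>
</cruisecontrol>
```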

Start by defining a working and artifact directory, where the actual source code and CC.NET reports will live, respectively. I prefer to keep them separated out in their own folders for clarity:

<workingDirectory>C:\BuildServer\Projects\MyExtensions</workingDirectory>
<artifactDirectory>C:\BuildServer\Artifacts\MyExtensions</artifactDirectory>

Next you’ll specify all the basic information Cruise Control needs to access and check out your repository in the sourcecontrol section. As I previously mentioned, there are lots of source control providers bundled with Cruise Control, and even more available on the net. The executable path below is pretty standard — it’s where the normal SVN installer puts it (and I usually check the installer in with the rest of the CC.NET files):

<sourcecontrol type="svn">
	<executable>C:\Program Files\Subversion\bin\svn.exe</executable>
	<trunkUrl>svn://svnServer/MyExtensions/trunk</trunkUrl>
	<username>BuildServer</username>
	<password>password</password>
</sourcecontrol>

The triggers section defines when Cruise Control should kick off the build process. I’ve defined two below: one that polls Subversion every 2 minutes and begins a build only if it finds a fresh commit, and one that forces a build every night at 10PM:

<triggers>
	<intervalTrigger name="continuous" seconds="120" />
	<scheduleTrigger time="22:00" buildCondition="ForceBuild" />
</triggers>

The tasks section tells Cruise Control what to do once it gets a copy of the source code. Here we’ll use the built-in NAnt task, which needs a base directory to execute in and a path to the NAnt executable (which we’ve conveniently committed right along with the source). With no target defined for the NAnt build, it’ll run the default one, which for us is build-server:

<tasks>
	<nant>
		<baseDirectory>C:\BuildServer\Projects\MyExtensions</baseDirectory>
		<executable>MyExtensions\Internal\Tools\NAnt\NAnt.exe</executable>
	</nant>
</tasks>

The publishers section specifies, among other things, what to do with all the build script’s output, and who to notify of build successes and failures.

For our config, we’ll use the merge tag underneath the publishers section to tell Cruise Control to combine all of our XML output files, including the ones from NCover and NAnt itself:

<merge>
	<!-- All file paths are relative to the WorkingDirectory node at the top of the script. -->
	<files>
		<file>MyExtensions\bin\Reports\*.xml</file>
	</files>
</merge>

We’ll also tell Cruise Control where to output the complete build report from each build, which it uses for display on its web page (so we’ll store them in C:\BuildServer\Artifacts\MyExtensions\BuildReports):

<!-- Relative to the ArtifactDirectory node at the top of the script. -->
<xmllogger logDir="BuildReports" />

The last tag we need, again underneath the publishers section, is the email tag. It’s pretty self explanatory, defining an email server and addresses to mail from/to. One point of note: the user names defined in the users section must match the user names from Subversion:

<email from="build@yourcompany.com" mailhost="mail.yourcompany.com" includeDetails="true">
	<users>
		<user name="svnUsername" address="your@email.com" />
	</users>
	<groups />
</email>
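Putting those three pieces together, the complete publishers section looks like the block below. The ordering matters: the merge tag needs to come before the xmllogger so the merged NCover/NAnt output actually ends up in the build report that the dashboard displays:

```xml
<publishers>
	<merge>
		<files>
			<file>MyExtensions\bin\Reports\*.xml</file>
		</files>
	</merge>
	<xmllogger logDir="BuildReports" />
	<email from="build@yourcompany.com" mailhost="mail.yourcompany.com" includeDetails="true">
		<users>
			<user name="svnUsername" address="your@email.com" />
		</users>
		<groups />
	</email>
</publishers>
```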

Extra Files

There’s also a dashboard.cfg file, which specifies how the web site displays build information for all the projects on the server (an example of which is shown on the right). I customized this one to include only needed report links and ignore others. This file, along with a few needed images, XSL formatting files, and instructions on where they should all be copied, is included in the download at the end of this post.
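To give you a feel for what that customization looks like, here’s a sketch of the buildPlugins portion of dashboard.cfg. The element names come from the CC.NET dashboard documentation, but treat the specific XSL file names and attributes as illustrative — check the dashboard.cfg that ships with your install for the real defaults:

```xml
<buildPlugins>
	<!-- The main build report page; each xslFile adds a section to it -->
	<buildReportBuildPlugin>
		<xslFileNames>
			<xslFile>xsl\header.xsl</xslFile>
			<xslFile>xsl\compile.xsl</xslFile>
			<xslFile>xsl\unittests.xsl</xslFile>
		</xslFileNames>
	</buildReportBuildPlugin>
	<!-- A separate report link in the sidebar, e.g. for the NCover output
	     merged in by the publishers section -->
	<xslReportBuildPlugin description="NCover Report"
		actionName="NCoverReport" xslFileName="xsl\NCover.xsl" />
</buildPlugins>
```

Trimming unneeded xslFile and xslReportBuildPlugin entries here is how I cut the dashboard down to only the report links we actually use.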

Conclusion

The previous two articles gave you an overview of setting up a build script and continuous integration server, and actually walked through setting up a very simplistic build script for your company’s possible extension/utility library. This article gave you a quick run down of setting up CruiseControl.NET to run that build script after getting source updates, and emailing any needed developers about failures.

This is by no means complete, only an introduction to get you started. Windows and web based projects are totally different, and when you get into running nightly integration/smoke tests, production deployment, product packaging, etc, you can imagine how it gets pretty complicated. The best advice I can give for these situations is to look at popular open source products to get ideas. For example, Subtext has some awesome automation setup in both its build script and build server configuration. Definitely worth a gander.

The completed skeleton project setup with this build server configuration and everything else you’ll need, can be downloaded here.

Dec 3
Nothin’ But .NET Training
Darrell Mozingo | Reviews | December 3rd, 2008 | 3 Comments

When you think about corporate training courses, you pretty much picture a few 8-5 classes with a nice lunch break, generally easy workloads, and a somewhat low learning curve, right? At least that’s what I always knew them to be until last week. After taking J.P. Boodhoo’s Nothin’ But .NET boot camp the week of November 17th in Philadelphia, PA, though, that rosy picture of easy days with only a relatively low amount of learning has been replaced with one of intense training and pushing the boundaries of what I thought I was capable of.

Plenty of others have written mini reviews about this latest course, and mine isn’t going to be substantially different. It was an utterly amazing course, plain and simple.

He started us out by sending some prep material a week or so before the class began. Among other things, it included a project with about 18 failing specs (unit tests in behavior driven development terms) that we had to get passing as OO-ish as possible. Simple enough, or so it seemed. I got them all passing in what I thought was a pretty nice solution, at least until the class started.

  • Day 1: 8:30am ~ 11pm. After just about everyone’s machine was up and running, we talked about some basic design patterns (adapter, decorator, etc) and applied them to the prep exercise together. About midday JP wrote out a basic fluent interface he wanted for querying and sorting in the exercise and left it to us to implement, then walked us through it together. The end result for the exercise was nowhere near what I came up with before the class and really got me thinking about the way we design applications at work and what we consider “good”. Very cool intro to fluent interfaces though. We didn’t do any TDD that day, as he wanted to focus on some of the fundamentals.
  • Day 2: 9am ~ 11:30pm. We started working on the front controller architecture (a slight twist on the common MVC style you see in Monorail and ASP.NET MVC) for the web store we’d be working on the rest of the week. From here on out it was all TDD using JP’s custom testing wrapper framework, which he mentioned he’s working on releasing publicly soon. We also looked at a basic logging implementation. The way he was able to craft a lot of these tests in a TDD manner really opened my eyes to how to tackle a lot of tough problems I’d hit in the past.
  • Day 3: 9am ~ 1:30am. Pretty much finished up the front controller implementation, which was cool to see as it gave me a tremendously deep understanding into how the bigger .NET MVC frameworks out there work, more than any blog post or series really could. We’d been pairing on and off until this point, but we actually broke into teams later in the day to get the whole front controller working to the point where we could display a page, so team interaction was heavily stressed here.
  • Day 4: 9am ~ 3:30am. Broke off into larger teams to complete the bulk of the web store, including a full end-to-end (database to browser) experience and getting everything configured in a nice, fluent manner. I worked on a fluent interface for the routing configuration, among other things, and the other guys in my group worked on ones for the ORM, IoC container, and object mapping. There was just so much to do it was mind boggling. It was great running into a problem and having JP come over to explain it in a way that made absolute sense, too. His ability to walk through a problem and get to a solution is amazing.
  • Day 5: 9am ~ 2:30am. Went over domain driven design (DDD) using a standard shopping cart scenario in the morning, then finished working in our teams on the store front the rest of the day, tweaking our fluent interfaces and adding features. We didn’t get to where we wanted to, but I think we were all happy with what we came up with. The momentum from the rest of the week really took a beating after lunch, unfortunately, but we plowed through it.

The material we covered in the class alone makes the course worth it, and that’s not even counting the fantastic food we had free of charge all week (steak houses, Chinese, subs, full course Italian – all thanks to Brian Donahue), the awesome discussions, both technical and non, with everyone during meals, the screen casts from the week, and the laughs (of which there were plenty!).

You also have to factor in JP’s motivational ability. The passion he has for development was evident from the first day, and never once slowed down. You can tell this guy absolutely loves what he does, and his ability to share that with everyone and have some of it rub off is truly special. I came away from the class not only with a newfound technical outlook on my code base, but also a new personal & professional outlook on life. Seriously, it did that much. I tried talking JP into a motivational speech circuit, but it didn’t sound too likely 🙂

JP talked about how he really wanted everyone to stay in touch after the course ended, even to the point of having an alumni get-together later next year. It’s cool how much you get to know the other people in the class after such an intense week. I recommended to JP that he have future classes work on a social networking site instead of the web storefront during the course. Not only has the whole web storefront sorta been played to death in examples, but it’d give everyone an actual tangible tool to stay in contact after the course ends, knowing they built it.

So, if you have the chance to take this course, even if you have to beg and grovel on your knees for the funds, do it. It’s easily worth every penny and more.