Feb 24
Canton Software Craftsmanship
Darrell Mozingo | Events | February 24th, 2011 | No Comments »

Brandon Joyce and I are starting a Software Craftsmanship group in the Canton area. Appropriately titled Canton Software Craftsmanship, it’ll be the first Monday of every month starting at 6pm in Stark State’s auditorium. You can get more information at the group’s website, here.

If you’re interested in attending, please register so we have a head count for the provided pizza & drinks.

Hope to see you there!

Feb 1
Taskie – Lowering the entrance barrier
Darrell Mozingo | Taskie | February 1st, 2011 | No Comments »

One more step to lowering the entrance barrier for Taskie is complete:

Taskie NuGet Feed

Taskie is available on the official NuGet package feed. Build configurations are pending on CodeBetter’s TeamCity server as well.

Next up: ability to log when tasks run, and query that information for later use.

Jan 6
Darrell Mozingo | Goals | January 6th, 2011 | 1 Comment »

Man, that was a quick year.


  1. Code Complete – Steve McConnell – Not started.
  2. Patterns of Enterprise Application Architecture – Martin Fowler
  3. Applying Domain-Driven Design and Patterns – Jimmy Nilsson
  4. Working Effectively With Legacy Code – Michael Feathers

Tools/Techniques/Processes @ Work

  • Move from SVN to Git (and learn more about Git in the process). – I’ve been using Git for side projects, but not at work quite yet, so I’ll call this half complete.
  • Move from CC.NET to Team City for our build server.
  • Build a more robust build script and management process – including production deployment and database migration scenarios.


  • Develop an idea I was given for an open source project and get it live to see what happens. – Taskie
  • At least 24 blog posts (I’m not going to say 2 per month as I’m getting married this summer and I’m certain I won’t be able to maintain a schedule around it). – I got 18 out – not too bad.
  • At least 3 feature/patch submissions to open source projects. – Not started.


  • Get a version 1 out there on at least 1 of the 3 product ideas I have floating around. – Not started.
  • Keep working on a good working knowledge of Ruby & Ruby on Rails (and use it to build the product mentioned above). – More reading and playing, but no actual project.

So that’s an overall success rate of 58% assuming each goal has equal weight (which they really don’t, but it saves time). Not that impressive, but probably a bit better than average. My goals are pretty similar to these for 2011, just sub out some technologies and books. I’ve grown tired of updating them and I’m sure you’ve grown tired of ignoring the posts in your feed reader, so I’ll just call it quits with publicly posting these goals from here out.

You’re welcome 🙂

Jan 4
Taskie – now with NuGet goodness
Darrell Mozingo | Taskie | January 4th, 2011 | No Comments »


In addition to a few small bug fixes, Taskie is now filled with NuGet-y goodness (or should that be the other way around?). Anyway, you can grab the package to use locally right here (with instructions on how to host it locally). I’m working on getting it into the main NuGet package feed in the next week or two.

As part of this release, I’m merging StructureMap into the main assembly so you only have to worry about a single Taskie.dll assembly. I’m also going to see about getting this project on CodeBetter’s TeamCity setup for some continuous building lovin’. When that’s done, I’ll provide packages for both merged and unmerged flavors, just in case you need them separated out for whatever reason.

Next up on the list is adding task logging, along with the ability to see when a given task was last run (some of our accounting procedures depend upon the last date they were successfully run, so knowing this programmatically is a must for us, and will come in handy for future features as well). Not sure how I’m going to go about it – perhaps a pluggable interface for your own implementation, a connection string you provide, or maybe Taskie’s own internal Sqlite/Raven database. I’ll have to play around with the options a bit.
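Just to make the pluggable-interface option concrete, here’s a rough sketch of what I’m picturing – to be clear, ITaskLogger and everything on it is purely hypothetical at this point; nothing like it exists in Taskie yet:

```csharp
using System;

// Hypothetical logging seam - none of these names exist in Taskie today.
public interface ITaskLogger
{
	void TaskStarted(string taskName, DateTime startedAtUtc);
	void TaskCompleted(string taskName, DateTime completedAtUtc);

	// Returns null when the task has never completed successfully.
	DateTime? GetLastSuccessfulRunDate(string taskName);
}
```

You’d register an implementation with your container like any other dependency, and Taskie would resolve it through your ITaskieServiceLocator.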

Dec 27
Testing tips
Darrell Mozingo | Testing | December 27th, 2010 | No Comments »

Just some quick testing tips I’ve found helpful over the last few years.


Don’t name variables company1 or company2. There’s a reason you’re creating two of them – what is it? Names like companyMatchingCriteria or companyWithCorrectAddressAndWrongPhoneNumber make a lot more sense when reading the tests later. When it comes to testing, readability is paramount – perhaps even more so than in the production code.
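For example (CreateCompany and its arguments are made up here, but you get the idea):

```csharp
// Harder to follow later: why two companies, and what's different about them?
var company1 = CreateCompany("Acme", "330-555-1234");
var company2 = CreateCompany("Foo Inc", null);

// Intent-revealing names let the test read like the requirement it's checking:
var companyMatchingCriteria = CreateCompany("Acme", "330-555-1234");
var companyWithMissingPhoneNumber = CreateCompany("Foo Inc", null);
```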

Unreadable tests lead to developers ignoring them, which leads to false positives, which leads to deleting the tests, which leads to, um, purgatory, I suppose. The alternative is the good-hearted developer who spends a disproportionate amount of time understanding and fixing a handful of tests when they only changed a few lines of production code. Neither option is appealing, and both go against one of the reasons for testing in the first place.


When naming tests – whether the test names themselves, variable names, or whatever – always go after the business intent rather than the technical details. So rather than Should_break_out_of_the_loop_when_an_employee_address_is_null, for example, try something like Should_not_process_employees_that_have_not_entered_their_address. You can picture how these would mean the same thing in the production code (probably a simple null check), but one talks about breaking out of loops and null values (technical), while the other talks about not processing employees who haven’t entered an address (business). The differences often aren’t this obvious either, and I know we developers love talking technical, so it’s pretty easy to let that creep into our testing language.

This helps in a few ways:

  1. Writing the code – if you can’t pin a business reason to a certain bit of code, it probably shouldn’t exist. I know it’s always tempting to throw extra checks in here and there, but if the business doesn’t need it for a specific reason, it shouldn’t be there (exceptions obviously exist). Maybe you’re checking for null employee addresses, but when talking to the business folks, they want the user to enter an address when they create the employee. That means an employee can never exist without an address, which negates the need for the check in the first place. If you were just checking for a null, you’d never think to ask this and it’d always be there.
  2. Maintaining the code – I hate reading code that does a bunch of checks (null, certain values, empty strings, etc), and you come to figure out after working with it for a while that the checks aren’t even needed because of invariants in the system (i.e. those values can never fall into that state). It’s just extra code to read, mentally parse, consider in different situations, and perpetuate – “well, that method checks this for null, so I should too”.
  3. Talking with the business folks – when they come to you and ask what happens if the employee hasn’t entered an address yet, you can look through the tests and see they’re not processed at this location or that for whatever reason. This saves you from having to hunt down null checks and figure out what they mean in different situations. This is a bit of a contrived example for this point, but you get the idea. The tests correspond to how the business people think about things.

So, business intent in test naming = good, technical jargon = bad. Again, exceptions do exist, so this isn’t set in stone all the time.
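To put some code to the loop example from above (the employee bits here are invented for illustration):

```csharp
// Technical framing: the business rule is buried in a null check,
// matching Should_break_out_of_the_loop_when_an_employee_address_is_null.
foreach (var employee in employees)
{
	if (employee.Address == null)
		break;
	Process(employee);
}

// Business framing: the rule reads the way the business folks say it, and
// Should_not_process_employees_that_have_not_entered_their_address maps straight onto it.
foreach (var employee in employees.Where(e => e.HasEnteredAddress()))
{
	Process(employee);
}
```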

See a theme with all my recent tips? Naming. That’s why Phil Karlton famously said:

“There are only two hard things in Computer Science: cache invalidation and naming things”

Very true.

Dec 22
Are focused tests really worth it?
Darrell Mozingo | Testing | December 22nd, 2010 | No Comments »

We recently had the requirement to start filling in fillable PDFs. The fields in fillable PDFs are just string names, with text boxes that get string values, check boxes that have special values, etc. I decided to create model classes to represent each PDF, then mapping classes to map each of the model’s properties to a field in the PDF. I ended up with something like:

public class PdfModel
{
	public string Name { get; set; }
	public Money Amount { get; set; }
	public bool Sent { get; set; }
	public string StateAbbreviation { get; set; }
}

public class PdfModelMapping : PdfMappingBase<PdfModel>
{
	protected override void CreateMap()
	{
		Map(x => x.Name).To("name_field");
		Map(x => x.Amount, DollarsTo("dollar_field").CentsTo("cents_field"));
		Map(x => x.Sent).To("sent_field");
		Map(x => x.StateAbbreviation, m =>
			{
				m.Map(x => x.ToCharArray()[0]).To("state_first_letter_field");
				m.Map(x => x.ToCharArray()[1]).To("state_second_letter_field");
			});
	}
}

Any similarity to a popular open source tool is completely coincidental. Hah! Anyway, it’s working well so far. When I set out to write this, I started with a single fixture for the PdfMappingBase class above. I made a small mapping for a single property, then another one for a check box, then another one for a multiple field mapping, etc. I found that while I ended up with around 10 supporting classes, every line of code in them existed to fulfill one of those easy tests in the mapping base fixture.

So I test drove the overall thing, but not each piece. There are no tests for the individual classes that make up this mapping system, but there’s also not a single line not covered by a test (either technically by just hitting it, or meaningfully with a test to explain why it’s there). Is that wrong? I’m thinking no.

Developing this seemed very natural. I created a simple test that showed how I wanted the end API to look:

public class When_mapping_a_single_text_box_property : SpecBase
{
	IEnumerable<PdfField> _fieldsFromMapping;
	readonly TestPdfModel _model = new TestPdfModel { Name = "name_value" };

	protected override void because()
	{
		_fieldsFromMapping = new SinglePropertyPdfMapping().GetAllFieldsFrom(_model);
	}

	public void Should_only_have_one_field_mapping() { ... }
	public void Should_set_the_field_name_based_on_the_mapping_definition() { ... }
	public void Should_set_the_value_from_the_model() { ... }

	private class SinglePropertyPdfMapping : PdfMappingBase<TestPdfModel>
	{
		protected override void CreateMap()
		{
			Map(x => x.Name).To("field_name");
		}
	}
}

Then I just created the bare minimum to get it compiling & passing, refactored, and moved on to the next part of the API. Rinse & repeat. Again, I test drove the whole shebang in a top-down way, but not the individual classes themselves. This whole thing isn’t going out to any resources, so it runs fast and all that jive. The only drawback I can see is it being hard to pin down problems in the future – having to navigate through a dozen or so classes to find why a test is failing probably won’t be fun. On the upside, I’ve found refactoring on the whole much easier, as the tests only look at the entry point to the whole API. I can change how the classes interact through each of their own public interfaces pretty easily, without having to update tests that may be looking at that one specific class.

Thoughts? I know taken too far this is a bad idea, but what about this situation? Think I should add tests for each of the supporting classes?

Dec 16
Introducing Taskie
Darrell Mozingo | Taskie | December 16th, 2010 | 5 Comments »

A little over a year ago I offhandedly mentioned the scheduled task program I wrote for one of our products at work. Well, I’m finally releasing a stripped down version as open source.

Taskie is a super simple way to create and manage .NET scheduled task applications, built with dependency injection at its core.


You’re always going to need to do some kind of back-end processing in your apps, and you basically have two choices for them: a command line app, or a service. When it came time for that decision on our current project, we decided we never much cared for the deployment story with services (even with the awesomeness that Topshelf brings to the tableshelf), and our server geeks didn’t like them much either for whatever reason. We’re all used to console apps though, and they work, so we stuck with them.

But adding a new project to the solution for each scheduled task exe we needed? Parsing command line arguments? Not having dependency injection available? Having to deploy all that junk? No thanks!

So I whipped up Taskie. It handles all the boilerplate crud and eases deployment for us and the server geeks. Once you have it set up, whenever you need a new scheduled task you just slap in a class, implement a simple interface, and Taskie handles the rest.

Getting Started

UPDATE: The first version I published wouldn’t work if you were using StructureMap (as Taskie uses that internally). The assemblies linked below are now updated to work correctly in that situation.

  1. Download the Taskie assemblies
  2. Add a console application to your solution
  3. Add a reference to Taskie.dll in the console application project, and set it to build against the full .NET 4.0 Framework (not the default Client Profile)
  4. Implement Taskie.ITaskieServiceLocator in your application, using your dependency injection tool of choice. These methods should be single line implementations.
    public interface ITaskieServiceLocator
    {
    	INSTANCE GetInstance<INSTANCE>();
    	IEnumerable<INSTANCE> GetAllInstances<INSTANCE>();
    }
  5. Inside Program.cs, within your console application, initialize your dependency injection tool however you normally would and call TaskieRunner.RunWith(), passing the command line arguments and an instance of your implementation of ITaskieServiceLocator, like this:
    public static void Main(string[] args)
    {
    	TaskieRunner.RunWith(args, new ServiceLocator());
    }
  6. Add a class that implements Taskie.ITask somewhere in your main project, name it “FooTask” (where Foo is whatever you want, but it must end with Task), and make sure your dependency injection tool knows about it (either through auto discovery or explicitly registered):
    public interface ITask
    {
    	void Run();
    }

That’s it! Taskie is all set up and ready to roll. Running your console application with no command line arguments will show a usage screen listing any tasks that are ready to run. Run the executable with “/run Foo” and it’ll run whatever you have in the Run method on your FooTask class.
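For instance, a complete task might look something like this (NightlyBillingTask and its description are just an example I made up):

```csharp
using System;

// Taskie picks this class up because it implements ITask and the name ends in "Task".
[TaskDescription("Runs the nightly billing process")]
public class NightlyBillingTask : ITask
{
	public void Run()
	{
		// Any dependencies can be constructor-injected, since Taskie
		// resolves tasks through your ITaskieServiceLocator implementation.
		Console.WriteLine("Nightly billing complete.");
	}
}
```

Running the console app with “/run NightlyBilling” would then execute it.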

Taskie Usage Screen

A few optional things you can do:

  1. Tag your task class with the TaskDescription attribute, providing it a string description to display on the usage screen (as seen above)
  2. Implement Taskie.ITaskieApplication to run any code before and after Taskie does its thing (such as setting up your NHibernate session)
    public interface ITaskieApplication
    {
    	void Startup();
    	void Shutdown();
    }

Future Plans

A few of the things I’m thinking about for the future:

  • NuGet package!
  • ILMerge everything into one assembly file
  • Create a way to schedule tasks on the server (either a fluent interface or through XML/text files), then have an MSBuild/NAnt/PowerShell/rake package that’ll remotely set those scheduled tasks up
  • Ability to log when tasks run & finish, along with a built-in task to ensure tasks are running when they should and can report when they don’t (using the definitions mentioned above)
  • Error reporting from tasks – emails, a pluggable interface, etc.
  • An ITransactionalTask interface that provides a roll back method to cleanly implement that functionality when needed

Check out the source code on GitHub. A sample application is included.

This is my first open source project and first experience with Git, so please go easy on me 🙂

If you have any suggestions for anything (especially on how I can ease the getting started process), I’m all ears!

Dec 3

The first post gave a quick overview of what our deployment script does and why you’d want one, the second post went over pre-deployment steps, and the third post in this series covered the actual site’s deployment. This post will go over a few of the post-deployment steps we take after publishing our site. Like the last posts, most all of this code will probably be pretty self explanatory.


We make heavy use of StructureMap, NHibernate (w/Fluent NHibernate), and AutoMapper in our system, and those guys have some heavy reflection startup costs. Since it’s all done when the app domain starts, we hit each server in our farm to “pre-load” the site for us, as that first visit takes a good 30-40 seconds thanks to those tools.

Since the servers are in a farm, we can’t just go to the site’s URL as we’d only get one box – even multiple loads aren’t guaranteed to move you around to them all. To make sure we’re looking at each server, we fiddle with the build server’s hosts file and point it at each web server. We don’t do parallel builds on our build server, so we thankfully don’t have any issues with other build scripts getting tripped up, but you may want to consider that if it’s applicable to your situation.

properties {
	$hosts_file = "C:\Windows\System32\drivers\etc\hosts"
	$servers_production = @( "server1", "server2" )
	$servers_production_ip = @{ "server1" = ""; "server2" = "" }
}

function setup_hosts_file_for($server, $url) {
	$server_ip = $servers_production_ip[$server]
	echo "Setting hosts file to use $server_ip ($server) for $url."
	"$server_ip $url" | Out-File $hosts_file -Encoding Ascii
}

function remove_hosts_file_entries {
	echo "Removing all hosts file entries and reverting to a clean file."
	" localhost" | Out-File $hosts_file -Encoding Ascii
}

function make_sure_we_are_pointing_at($server, $url) {
	$expected_server_ip = $servers_production_ip[$server]
	$ping_output = & ping -n 1 $url
	$ip_pinged = ($ping_output | Select-String "\[(.*)\]" | Select -ExpandProperty Matches).Groups[1].Value
	if ($ip_pinged -ne $expected_server_ip) {
		throw "The site's IP is supposed to be $expected_server_ip, but it's $ip_pinged (for $url). Hosts file problem?"
	}
	echo "Correctly pointing at $ip_pinged for $url."
}

function stop_dns_caching {
	& net stop dnscache
}

function start_dns_caching {
	& net start dnscache
}

The hosts file allows you to point any request for, say, www.asdf.com on your machine to whatever IP you want. So if you wanted to preload www.asdf.com for server1, you can put “ www.asdf.com” in your hosts file, and you’ll always hit that machine. Your load balancing setup might not allow this though. There’s also a method that’ll ping the given URL to make sure it’s going to the proper server, throwing up if it isn’t. The last two methods start/stop the DNS Caching service in Windows, just to help make sure we’re looking at the correct IP for a given URL.

With that setup, we can easily manipulate IE through COM to pull up the site:

properties {
	$live_site_text_in_title = "Our cool site"
	$times_to_try_preloading_sites = 50
}

function fire_up_ie {
	return New-Object -Com InternetExplorer.Application
}

function preload_url_on_server($server, $url) {
	setup_hosts_file_for $server $url
	make_sure_we_are_pointing_at $server $url
	$current_attempt_count = 0
	$3_seconds = 3
	$ie = fire_up_ie
	$ie.navigate($url)
	echo "Pulling up $url in the browser."
	while ($current_attempt_count -lt $times_to_try_preloading_sites) {
		pause_for $3_seconds
		$document = $ie.document
		if ($document -ne $null -and $document.readyState -eq "Complete" -and $document.title -match $live_site_text_in_title) {
			$time_taken = ($current_attempt_count + 1) * $3_seconds
			echo "Preloaded $url on $server in about $time_taken seconds."
			break
		}
		$current_attempt_count++
	}
	if ($current_attempt_count -ge $times_to_try_preloading_sites) {
		throw "$url (on $server) couldn't be preloaded after a pretty long ass wait. WTF?"
	}
}

Working with IE’s COM interface is pretty painless in PowerShell. Dynamic languages FTW, aye? We just fire up IE, browse to the URL (which should be pointing to the given server only), and keep checking on IE’s progress until the page is fully loaded and the title contains some piece of text we expected it to. Simple and to the point.

The first snippet in Part 3 of this series showed how we deployed the site. You can see there where we temporarily stop the DNS Caching service, pre-load the site on each server we’re deploying to, then reset the hosts file and start the DNS Caching service again.

Testing Error Email Generation

We have some basic code to email exceptions out if our app hits one. Nothing fancy. To test that our error emails are getting sent OK, I created an obscure URL in the application that’ll just generate a TestErrorEmailException. When our error handler sees that exception, all it does is send the generated error email to a buildserver@domain.com address rather than the normal one. The build script then logs into its special GMail account and checks for the email. This is by far the chunkiest part of the build script:

properties {
	$email_url = "mail.ourdomain.com"
	$error_generation_path = "/SomeObscurePath/GenerateTestErrorEmail/?subject="
	$max_email_check_attemps = 100
}

function wait_for_browser_to_finish($ie) {
	while ($ie.busy -eq $true) {
		pause_for 1 #second
	}
}

function generate_test_error_emails_on($server, $base_url, $error_email_subject) {
	setup_hosts_file_for $server $base_url
	make_sure_we_are_pointing_at $server $base_url
	$error_url = $base_url + $error_generation_path
	$full_error_url = $error_url + $error_email_subject
	$ie = fire_up_ie
	$ie.navigate($full_error_url)
	echo "Generating test error email from $full_error_url."
	wait_for_browser_to_finish $ie
}

function ensure_error_emails_are_working_on($server, $base_url) {
	echo "Ensuring error emails are getting sent out correctly on $server."
	$current_datetime = Get-Date -Format MM_dd_yyyy-hh_mm_tt
	$error_email_subject = "Error_" + $server + "_$current_datetime"
	generate_test_error_emails_on $server $base_url $error_email_subject
	check_email_was_sent $error_email_subject
}

function check_email_was_sent($expected_email_subject) {
	echo "Pulling up $email_url in the browser."
	$ie = fire_up_ie
	$ie.navigate($email_url)
	wait_for_browser_to_finish $ie
	logout_of_email $ie
	echo "Logging in to email."
	$ie.document.getElementById("email").value = $security_user
	$ie.document.getElementById("passwd").value = $security_password
	$ie.document.getElementById("signIn").click() # the sign-in button on GMail's login page
	wait_for_browser_to_finish $ie
	echo "Looking for test error email."
	$test_error_email = $null
	for ($i = 1; $i -le $max_email_check_attemps; $i++) {
		echo "Attempt #$i checking for the test error email."
		$test_error_email = get_link_containing_text $ie $expected_email_subject
		if ($test_error_email -ne $null) {
			echo "Found the test error email."
			break
		}
		pause_for 10 #seconds
		echo "Refreshing the page after a pause."
		click_link_with_text $ie "Refresh"
	}
	if ($test_error_email -eq $null) {
		throw "Test error email was never received after $max_email_check_attemps attempts. Problem?"
	}
	echo "Pulling up the test error email."
	$test_error_email.click()
	wait_for_browser_to_finish $ie
	echo "Deleting test error email."
	click_link_with_text $ie "Delete"
	logout_of_email $ie
}

function logout_of_email($ie) {
	$signout_link = get_link_with_text $ie "Sign out"
	if ($signout_link -ne $null) {
		echo "Signing out of email."
		$signout_link.click()
		wait_for_browser_to_finish $ie
	}
}

function click_link_with_text($ie, $text) {
	$link = get_link_with_text $ie $text
	$there_are_multiple_links_with_that_text = ($link.length -gt 1)
	if ($there_are_multiple_links_with_that_text) {
		$link = $link[0]
	}
	$link.click()
	wait_for_browser_to_finish $ie
}

function get_link_with_text($ie, $text) {
	return $ie.document.getElementsByTagName("a") | where { $_.innerText -eq $text }
}

function get_link_containing_text($ie, $text) {
	return $ie.document.getElementsByTagName("a") | where { $_.innerText -match $text }
}

It seriously looks worse than it really is, and most of it is due to navigating around GMail’s interface. So we hit the obscure URL in our app, pass it a subject line for the error email, wait a bit, then log into GMail and check for an email with that same subject line. If we don’t find the email after a waiting period, we blow up the script. Simple as that.

If you know an easier way to do this, I’m all ears!


The two biggest things we do after deploying our site are, for each individual server in the farm, loading it up so all the first-time reflection stuff gets taken care of, and making sure any errors on the site are getting emailed out correctly. While controlling IE through its COM interface is a lot cleaner and easier with PowerShell, there’s still some code for navigating around GMail’s site. Obviously if you use a different setup for your email, you’ll either have to control a different app or access the SMTP server directly.

Unfortunately, both of these steps are only helpful if you can navigate to each server in the farm. If your network setup prevents that, this won’t do you much good unless you keep clearing your cookies and revisiting the site a bunch of times in hopes you’ll hit each server, or something crazy like that.

So while most of this code is straightforward, I hope it’ll give you a starting point for your deployment script. Like I said in the beginning: it’s a bit painful to initially set up (both creating it and testing it), but we’ve found huge value from having it in place. It’s obviously not as easy as Capistrano, but, meh, it works. Another option for .NET is Web Deploy, a relatively new tool from Microsoft. I haven’t had time to get too deep into it, but it may help for your situation.

Good luck!

Nov 24

In the first post I gave a quick overview of what our deployment script does and why you’d want one, then the second post went over pre-deployment steps. This post will go over the actual deployment steps we take to publish our site. Like the last post, most all of this code will probably be pretty self explanatory.

function deploy_and_prime_site {
	stop_dns_caching
	foreach ($server in $servers_production) {
		deploy_site_to $server
		preload_site_on $server
	}
	foreach ($server in $servers_production) {
		ensure_error_emails_are_working_on $server $live_url
	}
	remove_hosts_file_entries
	start_dns_caching
}

This is the function the build target actually calls into. The part you’ll care about here is where it loops through the known production servers and deploys the site to each one in turn. The “preloading” of the site, checking for functioning error emails, and DNS caching stuff are some of the post-deployment steps we take, which I’ll discuss in the next post.

IIS Remote Control

Here’s how we control IIS remotely (this is IIS7 on Windows 2008 R2 – not sure how much changes for different versions):

function execute_on_server($target_server, [scriptblock]$script_block) {
	$secure_password = ConvertTo-SecureString $security_password -AsPlainText -Force
	$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $security_full_user, $secure_password
	Invoke-Command -ComputerName $target_server -Credential $credential -ScriptBlock $script_block
}

function stop_iis_on($target_server) {
	echo "Stopping IIS service on $target_server..."
	execute_on_server $target_server { & iisreset /stop }
}

function start_iis_on($target_server) {
	echo "Starting IIS service on $target_server..."
	execute_on_server $target_server { & iisreset /start }
}

The secret sauce to getting this to work is the execute_on_server function. The actual stop & start methods just execute standard iisreset commands (a built-in command line tool w/IIS). The top function converts our plain text server username & password from the build script into a SecureString, then wraps it in a PSCredential object. Not the most secure way to do this, I’m sure (hence the -Force parameter), but it’s working for us. After connecting to the remote machine, it executes the given script block with those credentials (like the execute_with_secure_share function from the last post). In order to make this work though, you’ll need to give some lovin’ to your build server and web servers:

  • Make sure all boxes have at least PowerShell 2.0 with WinRM 2.0 (which is what allows the remote machine command execution)
  • On each web server, you’ll need to run this one time command from a PowerShell prompt: Enable-PSRemoting


With that out of the way, the actual deployment part is pretty easy – it’s just copying files after all:

properties {
	$siteWebFolder_name = $solution_name
	$ident_file = "Content\ident.txt"
}

function pause_for($seconds) {
	sleep -s $seconds
}

function deploy_site_to($server) {
	echo "*** Beginning site deployment to $server."
	$compiled_site = "$compiled_site\*"
	$web_share = "\\$server\$share_web"
	$live_site_path = "$web_share\$siteWebFolder_name"
	stop_iis_on $server
	pause_for 10 #seconds, to give IIS time to release file handles.
	execute_with_secure_share $web_share {
		echo "Deleting the existing site files on $server ($live_site_path)."
		delete_directory_with_errors "$live_site_path\*"
		echo "Copying the new site files (from $compiled_site) to $server."
		copy_directory $compiled_site $live_site_path
		echo "Creating ident file at $live_site_path."
		"$server" > "$live_site_path\$ident_file"
	}
	start_iis_on $server
}

Stop IIS, give it a few seconds, copy files, start IIS. Like I said – simple. If your situation can’t allow this for some reason (perhaps you have a more complicated load balancing scheme or whatever), you can expand as needed. We actually deploy several sites and a few console apps at the same time so everything’s in sync. The ident file is a simple way for us to find out which server a user’s on for troubleshooting purposes. We can navigate to the url + /Content/ident.txt and it’ll have the server’s name.


Other than the actual remote manipulation of the servers, which we keep to a pretty minimal IIS stop & start, there’s not much to this part of the build either. This code provides a good jumping off point for customization to your setup, as well as some helper methods you can hopefully make use of. The next post will wrap up this series by showing some of the post-deployment steps we take.

Nov 18
How *not* to hash passwords
Darrell Mozingo | Misc. | November 18th, 2010 | No Comments »

We were stupid back in the day (OK, a year or two ago, but who’s counting?). When we started our latest project it was a given that we’d be hashing passwords for storage. The most obvious and easiest way to do it was the good ol’ (password + salt).GetHashCode(). Done and done. We moved on to the next feature and never gave it a second thought.

As it turns out though, using GetHashCode() for password hashing purposes is, well, pretty stupid and irresponsible. GetHashCode() was never intended to be stable across .NET versions or even architectures (x86 vs x64), and the framework spec documents apparently call this out. In fact, its results changed slightly between .NET 3.5 and 4.0, which is what we were just upgrading to when I noticed this. Similar changes apparently occurred between 1.1 and 2.0 too.

For example, the GetHashCode() hash of the string “password” from .NET 3.5 is -733234769, while the hash from that exact same string in .NET 4.0 is -231203086. Scary, huh?

In light of that, we switched to using the SHA512Managed class to generate our hashes. Switching our code over wasn’t an issue (DRY for the win!), but having to email our customers asking them to enter new passwords and security questions, which we also hashed the same way, wasn’t exactly fun. Not knowing their passwords apparently does have a downside! Here’s how we’re generating our hash codes now:

private const string _passwordSalt = "some_long_random_string";

public static string CalculateSaltedHash(string text)
{
	var inputBytes = Encoding.UTF8.GetBytes(text + _passwordSalt);
	var hash = new SHA512Managed().ComputeHash(inputBytes);
	return Convert.ToBase64String(hash);
}
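For completeness, here’s roughly how the stored hash gets used at login time – VerifyPassword is a made-up helper for illustration; only CalculateSaltedHash above is real code:

```csharp
public static bool VerifyPassword(string enteredPassword, string storedHash)
{
	// The hash is one-way: we never recover the original password, we just
	// hash the candidate the same way and compare the two encoded strings.
	return CalculateSaltedHash(enteredPassword) == storedHash;
}
```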

Yay? Nay?

« Previous Entries Next Entries »