Getting started with TDD
Darrell Mozingo | Musings, Testing | September 15th, 2011 | No Comments »

When I first read about TDD and saw all the super simple examples that litter the inter-tubes, like the calculator that does nothing but add and subtract, I thought the whole thing was pretty stupid and its approach to development was too naive. Thankfully I didn’t write the practice off – I started trying it, plugging away here and there. One thing I eventually figured out was that TDD is a lot like math. You start out easy (addition/subtraction), and continue building on those fundamentals as you get used to it.

So my suggestion to those starting down the TDD path is: don’t brush it off. Start simple. Do the simple calculator, the stack, or the bowling game. Don’t start thinking about how to mix in databases, UIs, web servers, and all that other crud with the tests. Yes, these examples are easy, and yes, they ignore a lot of stuff you need to use in your daily job, but that’s sort of the point. They’ll seem weird and contrived at first, but that’s OK – it serves a very real purpose. TDD has been around for a good while now; it’s not some fad that’s going away. People use it and get real value out of it.

The basic practice examples get you used to the TDD flow – red, green, refactor. That’s the whole point of things like katas: convert that flow into muscle memory. Get it ingrained in your brain, so when you start learning the more advanced practices (DIP, IoC containers, mocking, etc), you’ll just be building on that same basic flow. Write a failing test, make it pass, clean up. You don’t want to abandon that once you start learning more and going faster.
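
To make that concrete, here’s one tiny red-green-refactor loop on the calculator with NUnit-style attributes (a sketch – the names are mine, not from any particular kata):

```csharp
// Red: write the test first and watch it fail (it won't even compile yet).
[TestFixture]
public class CalculatorTests
{
	[Test]
	public void Should_add_two_numbers()
	{
		Assert.AreEqual(5, new Calculator().Add(2, 3));
	}
}

// Green: the simplest code that makes it pass.
public class Calculator
{
	public int Add(int a, int b)
	{
		return a + b;
	}
}

// Refactor: clean up names and duplication while the test stays green,
// then write the next failing test (subtraction, say) and repeat.
```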

It seems everyone gets the red-green-refactor part down when they’re doing the simple examples, but forgets it once they start working on production code. Sure, you don’t always know what your code is going to do or look like, but that’s why we have the tests. If you can’t even begin to imagine how your tests will work, write some throwaway spike code. Get it working functionally, then delete it all and start again using TDD. You’ll be surprised how it changes.

Good luck with your journey. If you’re in the Canton area, don’t forget to check out the monthly Canton Software Craftsmanship meetup. There are experienced people there that are eager to help you out.

Testing tips
Darrell Mozingo | Testing | December 27th, 2010 | No Comments »

Just some quick testing tips I’ve found helpful over the last few years.

Naming

Don’t name variables company1 or company2. There’s a reason you’re creating two of them – what is it? Names like companyMatchingCriteria or companyWithCorrectAddressAndWrongPhoneNumber make a lot more sense when reading the tests later. When it comes to testing, readability is paramount – perhaps even more so than in the production code.

Unreadable tests lead to developers ignoring them, which leads to false positives, which leads to deleting the tests, which leads to, um, purgatory, I suppose. The alternative is the good-hearted developer who spends a disproportionate amount of time understanding and fixing a handful of tests when they only changed a few lines of production code. Neither option is appealing, and both go against one of the reasons for testing in the first place.
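
To put it in code (a contrived sketch – the Company type here is invented):

```csharp
// Reads like a puzzle six months from now – why do two of these exist?
var company1 = new Company { Address = "123 Main St" };
var company2 = new Company { Address = "123 Main St", Phone = "555-0000" };

// Reads like the reason the test exists:
var companyMatchingCriteria = new Company { Address = "123 Main St" };
var companyWithCorrectAddressAndWrongPhoneNumber =
	new Company { Address = "123 Main St", Phone = "555-0000" };
```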

Intent

When naming things in tests – the test names themselves, variable names, or whatever – always go after the business intent rather than the technical reasons. So rather than Should_break_out_of_the_loop_when_an_employee_address_is_null, for example, try something like Should_not_process_employees_that_have_not_entered_their_address. You can picture how these would mean the same thing in the production code (probably a simple null check), but one talks about breaking out of loops and null values (technical), while the other talks about not processing and non-entered values (business). The differences often aren’t this obvious, either, and I know we developers love talking technical, so it’s pretty easy to let that creep into our testing language.

This helps in a few ways:

  1. Writing the code – if you can’t pin a business reason on a certain bit of code, it probably shouldn’t exist. I know it’s always tempting to throw extra checks in here and there, but if the business doesn’t need it for a certain reason, it shouldn’t exist (exceptions obviously exist). Maybe you’re checking for null employee addresses, but when talking to the business folks, you find they want the user to enter an address when they create the employee. This means an employee can never exist without an address, which negates the need for the check in the first place. If you were just checking for a null, you’d never think to ask this and it’d always be there.
  2. Maintaining the code – I hate reading code that does a bunch of checks (null, certain values, empty strings, etc), and you come to figure out after working with it for a while that the checks aren’t even needed because of invariants in the system (i.e. those values can never fall into that state). It’s just extra code to read, mentally parse, consider in different situations, and perpetuate – “well, that method checks this for null, so I should too”.
  3. Talking with the business folks – when they come to you and ask what happens if the employee hasn’t entered an address yet, you can look through the tests and see that they’re not processed at this location or that for whatever reason. This saves you from having to find null checks in the test names and figure out what they mean in different situations. This is a bit of a contrived example for this point, but you get the idea. The tests correspond to how the business people think about things.

So, business intent in test naming = good, technical jargon = bad. Again, exceptions do exist, so this isn’t set in stone all the time.
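
As a sketch of the business-intent version (the Employee/PayrollProcessor types here are invented for illustration):

```csharp
[Test]
public void Should_not_process_employees_that_have_not_entered_their_address()
{
	var employeeWithoutAnAddress = new Employee { Address = null };
	var processor = new PayrollProcessor();

	processor.Process(new[] { employeeWithoutAnAddress });

	// the production code likely does a simple null check, but the test
	// talks about the business rule, not the null
	Assert.IsFalse(employeeWithoutAnAddress.WasProcessed);
}
```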

See a theme with all my recent tips? Naming. That’s why Phil Karlton famously said:

“There are only two hard things in Computer Science: cache invalidation and naming things”

Very true.

Are focused tests really worth it?
Darrell Mozingo | Testing | December 22nd, 2010 | No Comments »

We recently had the requirement to start filling in fillable PDFs. The fields in fillable PDFs are just string names, with text boxes that get string values, check boxes that have special values, etc. I decided to create model classes to represent each PDF, then mapping classes to map each model’s properties to fields in the PDF. I ended up with something like:

public class PdfModel
{
	public string Name { get; set; }
	public Money Amount { get; set; }
	public bool Sent { get; set; }
	public string StateAbbreviation { get; set; }
}
 
public class PdfModelMapping : PdfMappingBase<PdfModel>
{
	protected override void CreateMap()
	{
		Map(x => x.Name).To("name_field");
		Map(x => x.Amount, DollarsTo("dollar_field").CentsTo("cents_field"));
	Map(x => x.Sent).To("sent_field");
 
		Map(x => x.StateAbbreviation, m =>
						{
							m.Map(x => x.ToCharArray()[0]).To("state_first_letter_field");
							m.Map(x => x.ToCharArray()[1]).To("state_second_letter_field");
						});
	}
}

Any similarity to a popular open source tool is completely coincidental. Hah! Anyway, it’s working well so far. When I set out to write this, I started with a single fixture for the PdfMappingBase class above. I made a small mapping for a single property, then another one for a check box, then another one for a multiple field mapping, etc. I found that while I ended up with around 10 supporting classes, every line of code in them existed to fulfill one of those easy tests in the mapping base fixture.

So I test drove the overall thing, but not each piece. There are no tests for the individual classes that make up this mapping system, but there’s also not a single line not covered by a test (either technically, by just hitting it, or meaningfully, with a test to explain why it’s there). Is that wrong? I’m thinking no.

Developing this seemed very natural. I created a simple test that showed how I wanted the end API to look:

[TestFixture]
public class When_mapping_a_single_text_box_property : SpecBase
{
	IEnumerable<PdfField> _fieldsFromMapping;
	readonly TestPdfModel _model = new TestPdfModel { Name = "name_value" };
 
	protected override void because()
	{
		_fieldsFromMapping = new SinglePropertyPdfMapping().GetAllFieldsFrom(_model);
	}
 
	[Test]
	public void Should_only_have_one_field_mapping()
	{
		_fieldsFromMapping.Count().ShouldEqual(1);
	}
 
	[Test]
	public void Should_set_the_field_name_based_on_the_mapping_definition()
	{
		_fieldsFromMapping.First().FieldName.ShouldEqual("field_name");
	}
 
	[Test]
	public void Should_set_the_value_from_the_model()
	{
		_fieldsFromMapping.First().TextBoxValue.ShouldEqual("name_value");
	}
 
	private class SinglePropertyPdfMapping : PdfMappingBase<TestPdfModel>
	{
		protected override void CreateMap()
		{
			Map(x => x.Name).To("field_name");
		}
	}
}

Then I just created the bare minimum to get it compiling & passing, refactored, and moved on to the next part of the API. Rinse & repeat. Again, I test drove the whole shebang in a top-down way, but not the individual classes themselves. This whole thing isn’t going out to any resources, so it runs fast and all that jive. The only drawback I can see is it being hard to pin down problems in the future – having to navigate through a dozen or so classes to find why a test is failing probably won’t be fun. On the upside, I’ve found refactoring on the whole much easier, as the tests only look at the entry point to the whole API. I can change how the classes interact through their own public interfaces pretty easily, without having to update tests that may be looking at one specific class.

Thoughts? I know taken too far this is a bad idea, but what about this situation? Think I should add tests for each of the supporting classes?

When’s the best time to write tests?
Darrell Mozingo | Testing | August 19th, 2009 | 1 Comment »

I often hear people get apathetic about testing, especially on brownfield applications. “There’s already so much untested code,” they say, or “we’ll get to it when we start the next portion of the application, we swear.” Obviously this gets put off to the start of the next feature, and the next, and the next, ad infinitum. All the while, there’s more untested code piling up, bolstering their first argument, and the code base as a whole is becoming more rigid and jumbled up, almost guaranteeing they’ll never have the time to untangle it and add tests after the fact.

Fear not, though, I have the perfect answer! It’s actually a simple, definitive, mathematical proof. For a given application A that has x lines of code and has been in development for n months with a client base of C clients, we derive the best time to start writing tests, wt, as follows:

[Image: the right-time-to-test equation]

What’d I say? Simple.

Ok, ok, a little sarcastic. Seriously though, there’s never a better time than right now, this very moment. Even a huge UI-and-database-hitting integration test using something like WatiN is better than nothing. Like I said, the longer you wait, the more untested code you accumulate, the harder/scarier it is to change things, and the more code you’ll eventually need to test. Even if it’s only a portion of your app that you’ve put off testing (for us, it’d be the UI and controllers/services) – just do it. The tests might not look pretty at first, but you can refine that later. They’ll expose pain points that need refactoring in the classes you’re testing, and the tests themselves might even require some object builders, helper methods, or base test classes, but it’s all worth it.
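
To be concrete, even something as blunt as this beats having nothing (a sketch – the URL, element IDs, and page text are all invented):

```csharp
[Test]
public void Should_save_a_new_customer_through_the_UI()
{
	// drive a real browser against the running app with WatiN
	using (var browser = new IE("http://localhost/customers/new"))
	{
		browser.TextField(Find.ById("Name")).TypeText("Acme Corp");
		browser.Button(Find.ByValue("Save")).Click();

		Assert.IsTrue(browser.ContainsText("Customer saved"));
	}
}
```

It’s slow and coarse, but it pins down end-to-end behavior while you work toward finer-grained tests.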

Don’t wait for tomorrow, the next iteration, or the next big version. Do it right now!

Starting Down The BDD Path
Darrell Mozingo | Testing | July 31st, 2009 | 2 Comments »

Behavior Driven Development‘s (BDD) meaning has, until recently, skipped right past me. I’d read about it and used it heavily for a week during JP’s Nothin’ But .NET Boot Camp, but when it came down to really seeing the value in it over the normal bunch-of-tests-per-test-fixture TDD style, well, I simply didn’t. The assertion naming with extension methods (value.ShouldEqual(10) vs. Assert.AreEqual(10, value)) and sentence-like test names (Should_calculate_the_sum_of_all_hours_worked vs Method_Initial_Expected) were pretty neat, and we’ve been using them for a while now, but all the rest was lost on me. I mean, a whole class just for one or two assertions? Seemed like a lot.

That was, however, until I realized some of our test SetUp methods were literally several pages long. Sure, all of the tests after that were only a half dozen or so lines, and it all totally made sense to us when we wrote it, but I found that having to go back into these tests to add/modify behavior was proving difficult. I honestly feel the classes and the tests themselves were following the Single Responsibility Principle pretty well, but these few classes just needed a lot of context to set them up before checking the outputs. There wasn’t really an easy way around it – either we have the huge setup with shorter tests, or we have a number of fairly large tests.

Another example of the situation breaking down was in a few of our test fixtures, where we’d have the SetUp method set up a context (which, again, was a bit large), but each test would slightly modify the context for its own needs. The result was needing to look in two places to get the whole picture, while also accounting for the tests that overrode certain parts of the context. It wasn’t pretty.

While trying to figure out which pieces of the setup context applied to the specific test I was modifying, I knew there had to be a better way. While watching a presentation on InfoQ by Jeremy D. Miller, The Joys and Pains of a Long Lived Codebase, Jeremy talked a bit about how his testing strategy has evolved, and how he’d come to accept BDD after staying away from it. He talked about how important the context of a test is to understanding what it’s doing, and how he resorts to copy & paste for parts of the context if he has to in order to keep it easily readable. That part really clicked with me, and I decided to give BDD an honest shot in our current project.

There are plenty of existing BDD frameworks for .NET, including Machine.Specifications, NBehave, Develop With Passion.BDD, and xUnit BDD Extensions, but I wanted to keep it simple for now as we integrate it with our existing project. The other devs on my team had never used the syntax before (and I only had one intense week of exposure), so I didn’t want to clutter things up too much for the time being.

In light of that, I created a super simple specification base class:

public class SpecBase
{
	[TestFixtureSetUp]
	public void Once_before_any_specification_is_ran()
	{
		infastructure_setup();
		context();
		because();
	}
 
	protected virtual void infastructure_setup()
	{
	}
 
	protected virtual void context()
	{
	}
 
	protected virtual void because()
	{
	}
}

I wasn’t kidding – there’s not much to it at all. The infastructure_setup method allows me to create base classes for testing services/controllers/mappers, where I can set up our AutoMocking container and create the system under test as needed. For example, here’s the base spec class we use for testing our services:

public class ServiceSpecBase<INTERFACE, SERVICE> : SpecBase
	where SERVICE : class, INTERFACE
{
	protected RhinoAutoMocker<SERVICE> _serviceMocks;
	protected INTERFACE _service;
 
	protected override void infastructure_setup()
	{
		_serviceMocks = new RhinoAutoMocker<SERVICE>();
		_service = _serviceMocks.ClassUnderTest;
	}
}

The auto mocker (from StructureMap, in this case) just makes an empty dynamic mock for each argument of a given constructor. Our services generally take in a good half dozen objects, so this saves us from having to create them by hand (via something like var mockRepository = MockRepository.GenerateMock&lt;ISomeDependency&gt;()). The system under test is then created after the automocker is initialized. (I don’t generally like the generic “sut” variable name if I can avoid it – you’ll see I’m using _service for this class, as the service is always the system under test for anything using this base class.)

Here’s an example specification using this new SpecBase class:

[TestFixture]
public class When_hiring_an_unemployed_person : SpecBase
{
	private readonly Company _company = new Company();
	private readonly Person _person = new Person();
 
	protected override void context()
	{
		_person.IsEmployed = false;
	}
 
	protected override void because()
	{
		_company.Hire(_person);
	}
 
	[Test]
	public void Should_increase_the_number_of_employees_in_the_company_by_one()
	{
		_company.Employees.Count().ShouldEqual(1);
	}
 
	[Test]
	public void Should_mark_the_person_as_employed()
	{
		_person.IsEmployed.ShouldBeTrue();
	}
}

This example doesn’t really show how well BDD has started helping reduce the complexity of some of our tests, by explicitly naming the context they’re running in and making them easier to read. As with every other example on the Internet, this one isn’t quite complex enough to really show the benefits, but I hope you at least catch a glimpse of them. I also realize this might not be “correct” BDD styling, and that I should be leveraging shared contexts with a base class more (for that matter, I should be using an actual framework for this), but it’s serving the purpose well, and it’s a simple first step in introducing it to the code base and my team. It’ll evolve – it always does.

Another great resource I found helpful was Rob Conery’s Kona episode 3, where he explains BDD and converts some tests to using them.

KISS Is Hard
Darrell Mozingo | Testing | June 18th, 2008 | No Comments »

No, I’m not referring to the band, but the KISS principle (Keep It Simple, Stupid), and its close cousin, the idea of YAGNI (You Ain’t Gonna Need It).

They’re hard. Sure, they might seem easy at first glance, but they are both deceptively hard. This is especially true if you have any sort of background in your problem domain, and let’s face it, most of us do if we’re writing the usual problem tracking or invoice tracking applications. You know what I’m talking about: you start with an empty solution in front of you, tasked with creating your company’s next, say, customer management system, and you start walking through the new application in your mind.

“Well,” you say to yourself, “I’m going to need a Customer object, a customer repository to store the object, a Job object, maybe an Invoice object, tables and repositories for each of those, and I’m sure they’re going to ask for filtering next, so I might as well save myself the time and throw in a few specification and filtering classes.” This process goes on for a while, and before you know it you’ve pumped out a few dozen classes and added all sorts of neat functionality.

Odds are, though, that some, if not most, of those classes will end up either going unused or being heavily modified before the features are finished. Hence the above two principles (or ideas, or whatever you want to call them). Wait until the last reasonable moment to add additional complexity to your application, but do so with a bit of judgment. Sometimes you simply know you’re going to need something, like a database back-end, so don’t start with text files just for the sake of keeping it simple. For the vast majority of decisions, though, you should use the simplest implementation until you find justifiable evidence that you need something more complex.

KISS and YAGNI go hand-in-hand with test driven development. Write your test, then write the simplest code you can to make the test pass. The code can always be refactored later to extract classes, interfaces, patterns, et cetera. I understand that your training as a developer is hard to resist – that urge to create things you’re pretty sure you’ll need while you’re working in a particular area. Resist it.
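
A sketch of what that looks like in practice (the Invoice type here is invented for illustration):

```csharp
// First test drives out the simplest possible implementation:
[Test]
public void Should_total_a_single_line_item()
{
	Assert.AreEqual(10m, new Invoice(new[] { 10m }).Total);
}

// ...which can legitimately be "return 10m;" at that point. Only a second
// test forces the real logic (and any refactoring) out of you:
[Test]
public void Should_total_multiple_line_items()
{
	Assert.AreEqual(15m, new Invoice(new[] { 10m, 5m }).Total);
}

public class Invoice
{
	private readonly IEnumerable<decimal> _lineItems;

	public Invoice(IEnumerable<decimal> lineItems)
	{
		_lineItems = lineItems;
	}

	public decimal Total
	{
		get { return _lineItems.Sum(); }
	}
}
```

No filtering classes, no specifications – nothing until a test demands it.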

I’m starting on a new project at work, and it’s a particularly large project at that. It lends itself incredibly well to TDD, and so far the YAGNI ideal and the TDD practice have proven themselves very versatile and helpful. As we’re unsure of how to split up the workload this early in the project, we’re working together (3 developers) on a machine with a projector. We’re constantly reminding each other not to overcomplicate things and toss in hooks and features we might someday need. Trust me, with almost every feature we add, we’re reminding each other to go with the simpler implementation, because it really is that hard an urge to overcome. We’re trying to stick to today, not tomorrow, by making it simple, making it fast, and making it easy to understand. I think we’re doing a great job of it so far.

We’re also well assured that the dozens and dozens of unit tests we have, along with the multitude of user generated acceptance/integration tests, will give us the safety net we need to refactor and introduce new features as we move forward. I, for one, am quite looking forward to what comes next.

MbUnit’s ThreadedRepeat Attribute
Darrell Mozingo | Testing | May 30th, 2008 | No Comments »

I ran into some old code in a utility library the other day that would open an XML file on a network share, read a few settings, and close it. This particular piece of code is called quite often in many situations, and often in larger loops, as the calling developer just sees a string being returned and is oblivious to the fact that it’s pretty damn expensive to get that string.

So I figured I’d go ahead and implement some quick caching around this by storing the strings in a generic static Dictionary, especially as the method itself was static. As I’m still trudging up the steep hill that is the learning curve of Test Driven Development (TDD), I thought I’d get a failing test in there first, then make it pass. An interface for dependency injection and a mock or two later, and it worked. All was good and well in the world.
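
A rough sketch of the shape of that cache (the class and method names here are made up – the real code differed):

```csharp
public static class SettingsReader
{
	// cache the expensive network-share lookups by setting name
	private static readonly Dictionary<string, string> _cache =
		new Dictionary<string, string>();

	public static string GetSetting(string name)
	{
		string value;
		if (!_cache.TryGetValue(name, out value))
		{
			value = ReadSettingFromXmlOnShare(name);   // the expensive part
			_cache[name] = value;                      // note: Dictionary isn't safe for concurrent writes
		}
		return value;
	}

	private static string ReadSettingFromXmlOnShare(string name)
	{
		// opens the XML file on the network share, reads the setting, closes it
		throw new NotImplementedException();
	}
}
```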

Unfortunately, it didn’t take long for me to realize there were some serious threading issues going on. Whoops. So I started writing up a unit test to fail on the bug before I fixed it, and in the process of creating a bunch of worker threads to hit the method at the same time, I stumbled across a nifty feature in MbUnit: the ThreadedRepeat attribute. Behold, a fake example:

[Test]
[ThreadedRepeat(5)]
public void Should_handle_multithreaded_access()
{
	Assert.IsNotEmpty(MyClass.GetExpensiveString());
}

Just like the normal [Repeat(5)] attribute, which would simply call the test 5 consecutive times back to back, the [ThreadedRepeat(5)] attribute will call the test 5 times in parallel, firing off a separate thread for each one.

Pretty freakin’ nifty if you ask me, and a whole lot easier than having to write your own code to spin up a bunch of worker threads.