The Pit of Success
Darrell Mozingo | Design Principles | June 26th, 2011 | 1 Comment


I’m a huge believer in the Pit of Success. Quite a few have written about it before, though not always in development terms. Put simply, there are two pits you can create in your application through conventions, infrastructure, culture, tools, etc.: success and failure. I obviously choose the former.

The Pit of Success is when you and the other developers on your team have to think less about the mundane stuff and when there’s only one easy development path to follow. Less thinking about crap = more thinking about business problems = faster software with fewer bugs. In general, if I see something that’s going to be in a lot of classes/pages and it has a decent bit of setup and baggage for it, I instantly picture another developer forgetting to bring all that along when they start new features or refactor. If things break (visually or programmatically) when that happens, there’s a problem. Same goes for huge chunks of documentation explaining how to use a certain feature elsewhere in the system – time to make it easier to use! Here are a few examples of how we’ve dug out a Pit of Success on our current project:

  • Need to create a new scheduled task? Drop in a class and implement a simple interface. That same principle goes for a slew of other areas – missing information checks for users, HTTP handlers, sample data for geographic areas, etc. You don’t have to go hunting down a master class to add these new things to, just create the class and you’re golden.
  • Security in our system isn’t that complex yet, so we’re able to consolidate everything in a nice tidy ActionFilter. It’s applied to our custom controller, and we have a unit test that makes sure all Controllers in the system inherit from that custom one. So by following the rules (on your own or with the help of a broken test), you get security handled for you auto-magically.
  • We continuously deploy with our build server, so it takes care of not only making sure all our unit/integration tests pass, but that all the needed debug settings are flipped, sites are pre-compiled, everything still works once it’s live, etc. That saves us from remembering to do all that every time we push live, which is almost constantly these days.
  • We completely agree with Chad Myers, Jeremy Miller, et al: if we’re working with a static language, make the best of it. Everything in our system is strongly typed, from text box ids in HTML/Javascript/UI tests to URLs and help points. You shouldn’t have to remember to go hunting and pecking through the whole system when you want to rename something, just rename it with ReSharper and move on. Same with finding where something is being referenced. The harder it is to rename things, the less they get renamed, and the crustier the system gets.
  • We started creating one-off modal dialogs to present information to the user. They looked great, but needed a lot of baggage and duplication to do it, so we overrode the default alert and confirm dialogs with our modal ones. Now there’s not only nothing to add to your page to get this, but in most cases you don’t even have to remember we’re overriding it! There’s a forthcoming post that’ll cover what we did in more detail.
  • We have a unit test that’ll scan through all of our test files (which end with *Fixture), and make sure there’s a file name that matches (sans the Fixture part) in the corresponding directory structure in the main assembly. We constantly move files around when refactoring, and forgetting to move or rename their test files is a pain, so this test gently reminds us. Note we don’t always follow a one-class-per-fixture setup, but even when we don’t, we stick them in a matching fixture class for easy grouping and ReSharper discoverability.
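A couple of these conventions are backed by small reflection-based tests. Here's a rough, self-contained sketch of the controller-inheritance test from the security bullet — the class names are hypothetical, and the real version lives in a test fixture, scans the web assembly, and asserts the offender list is empty:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Stand-ins for the real MVC base controller and our secured custom one.
public abstract class Controller { }
public abstract class SecuredController : Controller { } // security ActionFilter applied here

public class AccountController : SecuredController { }
public class RogueController : Controller { } // forgot to inherit from the secured base

public static class Program
{
	public static void Main()
	{
		// Find every concrete controller that skipped the secured base class.
		var offenders = Assembly.GetExecutingAssembly().GetTypes()
			.Where(t => typeof(Controller).IsAssignableFrom(t)
			            && !t.IsAbstract
			            && !typeof(SecuredController).IsAssignableFrom(t))
			.Select(t => t.Name)
			.ToList();

		// In the real test this is an Assert.IsEmpty(offenders).
		foreach (var name in offenders)
		{
			Console.WriteLine("Not secured: " + name);
		}
	}
}
```

Any developer who adds a controller without inheriting from the secured base gets a broken build pointing at the exact class, instead of an unsecured page in production.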

It’s worth noting we didn’t set out from day one to build all this stuff. It’s all grown over time as the project and our stakeholders’ needs have changed. We always strive to keep KISS in mind (even if it is hard) and not build anything until it’s absolutely needed. Don’t try to create infrastructure to handle everything for you when a project’s in its infancy. Harvest it out later.

There are also exceptions to all of these rules. Is automatic security always the right thing to do? No – if you need highly configurable security, put it out in the open and remember to set it on each request. Don’t force things into the infrastructure if they’re fighting you and have lots of exception cases. Perhaps there’s another route of attack that’ll solve the problem and still keep you circling around the Pit of Success.

You’ll create your own Pit of Success on your project just by falling into the bigger Pit of Success that is the SOLID principles. The majority of the examples up there were arrived at by just adhering to the Open/Closed Principle or the Single Responsibility Principle. They create a sort of recursive pit, I suppose.

What have you done on your projects to help create a Pit of Success?

OCP in action
Darrell Mozingo | Design Principles | May 21st, 2010 | No Comments

What is OCP?

The Open/Closed Principle is the O in the nearly infamous SOLID set of design principles. The gist of the principle is that any given class should be open for extension, but closed for modification. In more practical terms, you should be able to make most classes do something new or different without actually changing a single line of code within the class itself.

That’s simple! Right? Ummm, sure. How, exactly, is a class supposed to do more stuff without being modified? Well, there are lots of ways, if you think about it. I mean, that’s sort of what inheritance and polymorphism were made for. There are other pretty neat ways, too. One of them shows the true beauty of inversion of control and dependency injection frameworks.

Let’s say you have a collection of customers. You need to loop through these people, running a few different checks on them (only preferred customers can carry a negative balance, credit cards on file aren’t expired, etc.) and adding any resulting errors from those checks to another collection for display later. Nothing too terribly complicated.

Solving without considering OCP

First we have to load up the customers and send them on through:

private static void Main(string[] args)
{
	var customers = get_customers_from_somewhere();
 
	var check_runner = new Check_runner();
	var warnings = check_runner.run_checks_on(customers);
 
	foreach (var warning in warnings)
	{
		Console.WriteLine(warning);
	}
 
	Console.ReadLine();
}
 
private static IEnumerable<Customer> get_customers_from_somewhere()
{
	// database, webservice, whatever.
	return new[]
	{
		new Customer
		{
			name = "Joe Smith",
			credit_card = new Credit_card { is_valid = true },
			balance = 100,
			is_preferred = true
		},
		new Customer
		{
			name = "Nathan Hawes",
			credit_card = new Credit_card { is_valid = false },
			balance = 0,
			is_preferred = true
		},
		new Customer
		{
			name = "Melinda Plunkett",
			credit_card = new Credit_card { is_valid = true },
			balance = -100,
			is_preferred = false
		}
	};
}

The class running all the checks:

public class Check_runner
{
	// An instance field: a static list would leak warnings across runners.
	private readonly IList<string> _warnings = new List<string>();
 
	public IEnumerable<string> run_checks_on(IEnumerable<Customer> customers)
	{
		foreach (var customer in customers)
		{
			check_that_only_preferred_customer_can_have_a_negative_balance(customer);
			check_that_on_file_credit_card_is_not_expired(customer);
			// additional checks in the future...
		}
 
		return _warnings;
	}
 
	private void check_that_on_file_credit_card_is_not_expired(Customer customer)
	{
		if (customer.credit_card.is_valid)
		{
			return;
		}
 
		_warnings.Add("Credit card expired for customer: " + customer.name);
	}
 
	private void check_that_only_preferred_customer_can_have_a_negative_balance(Customer customer)
	{
		if (customer.is_preferred || customer.balance >= 0)
		{
			return;
		}
 
		_warnings.Add("Negative balance for non preferred customer: " + customer.name);
	}
}

Pretty standard. Loop through the customers, calling a separate private method for each check you need to perform, adding a message for that check’s error to a collection that ultimately gets returned to the caller for display.

The problems with the first approach

At first blush, this way of running the checks might seem very simple and understandable, but it starts to break down for a few different reasons:

  • New checks could get pretty complicated, requiring access to other expensive objects (repositories, web services, file I/O, etc). Even if only one check needed a certain dependency, the whole Check_runner class is now burdened with that dependency.
  • Every new check requires you to open up the Check_runner class and make a modification. Opening a class and modifying it? That’s pretty much the definition of an OCP violation. Modifying a class always introduces the possibility of regression bugs. No matter how small the possibility or bug, they’re there.
  • Testing this thing as it gets larger and larger is going to be a pain in the rear, especially if outside dependencies are brought in (having to set up multiple dependencies when the one check you’re testing doesn’t even use them isn’t fun, or clear to read later).

One possible solution

There are a few different ways to go about fixing this. My suggestion would be to break each check into its own individual class, with the Check_runner taking them all in and looping through them, running each as it goes. It sounds a little more black-magicy than it really is. I’m going to show all the code first, then go over the benefits of the approach. Let’s start by defining an interface for these check classes:

public interface ICustomer_check
{
	string buildWarningFor(Customer customer);
	bool failsFor(Customer customer);
}

Now we can define a single check, which knows when it fails for a given customer and how to build a warning message for that failure. The check classes for the two checks run above are a simple conversion of the existing code:

public class Negative_balance_check : ICustomer_check
{
	public string buildWarningFor(Customer customer)
	{
		return "Negative balance for non preferred customer: " + customer.name;
	}
 
	public bool failsFor(Customer customer)
	{
		return (!customer.is_preferred && customer.balance < 0);
	}
}
 
public class Expired_credit_card_check : ICustomer_check
{
	public string buildWarningFor(Customer customer)
	{
		return "Credit card expired for customer: " + customer.name;
	}
 
	public bool failsFor(Customer customer)
	{
		return (customer.credit_card.is_valid == false);
	}
}

Now the Check_runner just has to loop through all of the ICustomer_check implementations and run them:

public class Check_runner
{
	private readonly IEnumerable<ICustomer_check> _customer_checks;
 
	public Check_runner(IEnumerable<ICustomer_check> customerChecks)
	{
		_customer_checks = customerChecks;
	}
 
	public IEnumerable<string> run_checks_on(IEnumerable<Customer> customers)
	{
		var warnings = new List<string>();
 
		foreach (var customer in customers)
		{
			foreach (var check in _customer_checks)
			{
				if (check.failsFor(customer))
				{
					warnings.Add(check.buildWarningFor(customer));
				}
			}
		}
 
		return warnings;
	}
}

Again, pretty simple and focused. Where does that enumeration of ICustomer_check implementations come from though? The missing key: our dependency injection framework. I’ll use StructureMap for this example. After downloading that and referencing the assembly, we’ll modify our main method to set it up:

private static void Main(string[] args)
{
	ObjectFactory.Initialize(y => y.Scan(x =>
	{
		x.TheCallingAssembly();
		x.AddAllTypesOf<ICustomer_check>();
	}));
 
	var customers = get_customers_from_somewhere();
 
	var check_runner = ObjectFactory.GetInstance<Check_runner>();
	var warnings = check_runner.run_checks_on(customers);
 
	foreach (var warning in warnings)
	{
		Console.WriteLine(warning);
	}
 
	Console.ReadLine();
}

We fire up StructureMap, telling it to scan the calling assembly and find all implementations of ICustomer_check. When we ask for an instance of Check_runner, StructureMap knows to provide all the ICustomer_check implementations it found to Check_runner's constructor argument in a list. Since this is the outermost edge of the application, we interact with the dependency injection framework here instead of inside Check_runner.

Benefits

So, perhaps other than the StructureMap-related code (if you don’t already know the basics of it), nothing I’ve done here has really complicated the system. It’s still a few primitive classes working together in a fairly obvious way. What benefits do we gain from these changes, though?

  • Each piece of the system now has a single, specific responsibility. You can look at each check and quickly figure out what its purpose is. The runner simply takes in all the checks and runs them (funny how its name now follows its responsibility too).
  • The check classes can now take in their own dependencies. Need an ICustomerRepository or ICustomerAccountService for something? List it in the constructor. Each check is pulled from the container, so its dependencies get filled as well. And each check only takes on what it needs, instead of the runner being burdened with dependencies that most checks don’t use.
  • With decreased responsibilities, each piece now becomes much easier to test. Supply a list of dummy checks and a dummy customer to make sure the runner is doing its job. Same for the checks themselves. In fact, needing too many tests for a class is a smell that the class is doing too much in the first place.
  • The point of the article: no more OCP violations! Future requirements for different kinds of checks now become almost mind numbingly easy. Slap in a new class and implement ICustomer_check. That’s it – the container will take care of the rest. Virtually no possibility of introducing a regression bug and messing up one of the other checks by adding a new one.

Conclusion

One thing to remember when looking for OCP violations in your code base is that “closed for modifications” should be taken within context. Fixing bugs, adding complete new features, etc., will obviously require modifications to something. You’re not going to create every class in your system and never touch it again. Within reason, you should apply the Open/Closed Principle to your code as much as possible. It makes the code simpler to understand on the micro and macro level once you’re familiar with some of the more common patterns, and it helps reduce the possibility of introducing bugs with future additions.

The code from this post is available here.

Dec 23

I think I can safely say I finally “get” dependency injection (DI) and the need for a framework (such as StructureMap, Ninject, Windsor, or any other). More importantly, I think I finally get the best way to use it in an application. It’s taken me a while to get to this point, and almost everything I’ve read and heard on the subject was very hand-wavy, at least to me. So here’s my attempt at demystifying the subject, along with a straightforward way to go about using it in your application – something I wish I could have heard a while ago.

What is it?

Many objects have an outside dependency of some sort. Instead of creating the dependency inside your class (by doing something like myDependency = new Dependency()), you want these dependencies to be “injected” in, usually by the constructor:

public class OrderProcessingService
{
	private readonly IRepository _repository;
 
	public OrderProcessingService(IRepository repository)
	{
		_repository = repository;
	}
}

That’s it. Seriously. It’s not hard to grasp, and you’re probably already doing it, but the trick for me was figuring out how to actually go about using this in any sort of sane and recommended way, as you’ll notice the requirement is now on the caller to provide an instance of IRepository. If you want more details on this pattern, there’s plenty out there.

Why should I bother using it?

  1. It takes handling the dependency’s life cycle out of your hands. Perhaps you want your database access class to stick around for a whole web request, another object to be a singleton, and another to be per-thread when used in a Windows app but per-request in a Web app? Using a proper DI framework/container, you don’t have to write anything to support that, and changing the lifespan of a given object is a one-line (often just an enumeration value) change.
  2. It loosens up your code. You’re no longer “newing” up your data access or web service classes right in the middle of your operation. Swapping out implementations of an interface is a simple operation that’s located in one place. Just stick the dependency in your constructor, and you’re good to go. I’ve actually used this in quite a few places beyond the academic and largely unused-in-the-real-world “switching from MS SQL to Oracle” examples, too.
  3. It greatly simplifies unit testing. In many cases, using dependency injection is the only way to unit test portions of your code base (unless you use a certain tool to do it for you). By taking your dependencies in your constructor, you’re giving your unit tests a seam to inject fake implementations of those dependencies. This lets you skip actually hitting the database, web service, hard drive, or anything else that would kill the running time of a unit test or be almost impossible to set up and control in a repeatable manner.
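To make point 1 concrete, here’s roughly what lifetime configuration looks like with StructureMap’s 2.x API. The interfaces and classes here are made up, and the exact method names vary by container and version, so treat this as a sketch rather than gospel:

```csharp
ObjectFactory.Initialize(x =>
{
	// One instance for the life of the application.
	x.For<ILoggingService>().Singleton().Use<LoggingService>();

	// A fresh instance per HTTP request.
	x.For<ISession>().HttpContextScoped().Use<NHibernateSession>();

	// The default: a new instance every time it's resolved.
	x.For<IRepository>().Use<SqlRepository>();
});
```

Changing an object’s lifespan really is a one-method-call change here; none of the consuming classes are touched.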

If you’re looking for more detailed reasons, you’ll again want to refer to the gobs of information already out there.

How can I use it?

Ah, now for the juicy part I know you’re all dying to hear: how the hell to actually use the pattern in conjunction with one of the tools I mentioned at the beginning of the post. Before giving away the answer, let’s quickly go over the three primary ways to use a DI container in your app:

  1. Service locator: this pattern is generally considered a no-no, as it still buries your dependencies deep inside your code. Sure, you can swap them out when needed for unit testing, but they’re still very opaque and will almost certainly get very hard to work with, very fast:

    public void ProcessOrder()
    {
    	var repository = IoC.Resolve<IRepository>();
     
    	// Do stuff with repository.
    }

    In the above example, IoC.Resolve is a simple static method that delegates to whatever DI framework you’re using. Callers won’t know about this dependency, though, and without fully bootstrapping your framework in your unit test (icky) or injecting a fake into your container, the call will either throw or return null, neither of which you want to be checking for everywhere.

  2. Poor-man’s dependency injection: This is a slight twist on normal constructor DI. You have one empty constructor for most of the program to use, which delegates to a “loaded” constructor that unit tests use. While making the dependencies clear, this removes lifetime management from the container’s hands, and also gets ugly when you start changing dependencies around. This is usually used in conjunction with the service locator pattern above:

    public class OrderProcessingService
    {
    	private readonly IRepository _repository;
     
    	public OrderProcessingService() : this(IoC.Resolve<IRepository>())
    	{
    	}
     
    	public OrderProcessingService(IRepository repository)
    	{
    		_repository = repository;
    	}
    }
  3. True dependency injection: Classes generally have only one constructor which takes in all the required dependencies (see the first code snippet at the top of the post).
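For what it’s worth, the IoC.Resolve wrapper used in the snippets above is nothing magical. It’s typically just a thin static shim over whatever container you’ve chosen — shown here delegating to StructureMap, though the class itself is our own, not part of any framework:

```csharp
using StructureMap;

public static class IoC
{
	public static T Resolve<T>()
	{
		// Delegate to the real container; swapping containers touches only this class.
		return ObjectFactory.GetInstance<T>();
	}
}
```

The shim keeps the rest of the code base ignorant of which container is in use — but as this post argues, it should only ever be called at the outer edges of the application.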

#3, true dependency injection, is the one I had no idea how to set up in my app. Everyone said not to use the service locator pattern or poor-man’s dependency injection, but how was I supposed to avoid them and still get everything injected? It seemed like I was never supposed to call my DI container’s Resolve method. So what gives? Every time someone got close to answering it, they seemed to blow off the question. Ugh.

After enough playing around, reading, and looking at other open source projects, though, it finally clicked: only call Resolve at the furthest edges of your application, and as few times as possible. So what does that mean and where should you be calling it in your app? Well… it depends.

Wait, isn’t that just another cop-out?

Well, yes and no. Yes in that I’m not giving a solid answer; no in the sense that it really does depend on your application: what frameworks you’re using, how its architecture is set up, etc.

Just so you can’t say I’m not providing anything solid, here’s how I’m using it in our current app:

  1. We’re using ASP.NET MVC & StructureMap, so we’re using a custom controller factory that creates controllers from the container. This means we can create an EmployeeService, extract an interface for it named IEmployeeService, put it as a requirement in the controller’s constructor, and it’s satisfied magically at run time. Even cooler, everything down the object graph from EmployeeService (say, an EmployeeRepository, or LoggingService, or EmailService, or anything else you need) gets their dependencies all satisfied automagically, too. We can stick a constructor argument in virtually anywhere and it’s taken care of for us, without giving it a second thought! Each web request, this all gets built out.
  2. We have a basic home brewed scheduled task framework that, given the name of a task you want to run (say /run Emailer runs the EmailerTask), instantiates the requested task class, and runs it. We use the container at the point of instantiation, effectively treating each “task” class as a controller from above, filling all the dependencies it needs down the object graph.
  3. We also fire off the scheduled task app by doing an IoC.Resolve<IApplication>().Run() in the console app’s Main method, giving the app everything it needs.
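The custom controller factory from point 1 is only a few lines. Here’s a sketch against ASP.NET MVC 2 and StructureMap — in MVC 1 the override takes only the controller Type, and a production version would handle a null controllerType for unmatched routes:

```csharp
using System;
using System.Web.Mvc;
using System.Web.Routing;
using StructureMap;

public class StructureMapControllerFactory : DefaultControllerFactory
{
	protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
	{
		// Let the container build the controller, satisfying its constructor
		// dependencies (and theirs, all the way down the object graph).
		return (IController)ObjectFactory.GetInstance(controllerType);
	}
}

// Registered once in Global.asax's Application_Start:
// ControllerBuilder.Current.SetControllerFactory(new StructureMapControllerFactory());
```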

In all, we call IoC.Resolve only 4 times in our app, and it handles everything for us. We usually forget it’s even there, and take its services for granted when working with legacy applications that don’t have it.

Now, what if you’re using WebForms? Well, you’re not *completely* out of luck. It’s a pain, to be sure, but still doable.
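Since the WebForms runtime constructs pages itself, constructor injection is out; the usual workaround is setter injection from a common base page. A sketch using StructureMap’s BuildUp, which only fills properties you’ve opted in via SetAllProperties or the [SetterProperty] attribute — check your container’s docs:

```csharp
using System;
using System.Web.UI;
using StructureMap;

public abstract class InjectedPage : Page
{
	protected override void OnInit(EventArgs e)
	{
		// Fill any setter-injected properties on the derived page.
		ObjectFactory.BuildUp(this);
		base.OnInit(e);
	}
}

// A page then exposes a property per dependency:
// public IEmployeeService EmployeeService { get; set; }
```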

Wrapping up

I hope this helped clear up dependency injection for you a bit. Just remember to use the actual Resolve call of your container in as few places as possible in your application, and only on the “outside edges” of the app. Look at where you do all your main object creation (your web forms, Windows forms, controllers, WCF factories, Silverlight pages, etc.). Stick the call in there, and forget about it.

Good luck.