Unit Testing – Thoughts from an opinionated developer

Getting started

Unit testing is one of those things that we know we should be doing. However, time and project pressures often mean that many of us don’t do it.

Having worked and consulted in a variety of teams, I often find myself dealing with situations like this by asking the following questions:

  1. How do you know your software is working?
  2. How do you know if your software is broken?
  3. How do you know if someone else’s change won’t break your hard work?

Generally, these are the three important questions that I try to answer when writing my own unit tests, whether for Sitecore or not, and the content of those tests should prove the answers. Initially, writing a quality unit test can be hard. A quick search on Google will tell you that a unit test should be small, repeatable, focused and relevant.

That’s a useful guideline, but we need a bit more information to get us started. To reinforce this, let’s take an aside and consider a project I worked on for a major UK supermarket retailer.

In this project, the customer mandated 95% code coverage across the entire codebase. So the developers were focused on reaching this percentage and would be testing constructors and the like. More importantly, they were jumping through hoops to test all their conditional branching, magic flags and so on. They were doing this because they had code colouring turned on and could see which lines weren’t covered.

So, what can we take away from this? Well, firstly, they weren’t writing their code to be testable. This should be our first rule:

Write your code to be testable

Next, let’s consider how they were getting code coverage. They were looking at how to test each individual line rather than considering what the code as a whole was attempting to do. If we stand back, we can see how the code is expected to operate under normal conditions. So that’s our second rule:

Test your code under normal conditions with expected parameters
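
As a rough sketch of what I mean (the calculator class and the xUnit framework below are purely illustrative, not from any real project):

using Xunit;

// Hypothetical class under test: applies a percentage discount to a basket total.
public class DiscountCalculator
{
    public decimal Apply(decimal total, decimal percentage)
    {
        return total - (total * percentage / 100m);
    }
}

public class DiscountCalculatorTests
{
    [Fact]
    public void Apply_WithTypicalValues_ReturnsDiscountedTotal()
    {
        var calculator = new DiscountCalculator();

        var result = calculator.Apply(100m, 10m);

        // 10% off 100 should be 90 under normal conditions.
        Assert.Equal(90m, result);
    }
}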

By now we should have an idea of what our third rule should be! Now we start to think about how our code can break. What if a parameter is null? What if we have an unexpected value somewhere? We need to test the code under unexpected conditions and understand where and how it can break.

Test your code under unexpected conditions and with worst-case scenarios
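
Again as an illustrative sketch, the same made-up calculator now gains a guard clause, and a test proves that bad input fails loudly (xUnit assumed):

using System;
using Xunit;

// The same illustrative calculator, now guarding against input that makes no sense.
public class DiscountCalculator
{
    public decimal Apply(decimal total, decimal percentage)
    {
        if (percentage < 0m || percentage > 100m)
            throw new ArgumentOutOfRangeException(nameof(percentage));

        return total - (total * percentage / 100m);
    }
}

public class DiscountCalculatorGuardTests
{
    [Fact]
    public void Apply_WithNegativePercentage_Throws()
    {
        var calculator = new DiscountCalculator();

        // The worst case here is bad input; we want a loud failure, not a silent one.
        Assert.Throws<ArgumentOutOfRangeException>(
            () => calculator.Apply(100m, -5m));
    }
}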

This is a very useful step. If nothing else, it’s something that you might have to cover at code review. It also means you can diagnose a production system more easily, because in handling your unexpected conditions you have the opportunity to log, throw custom exceptions, raise alarm bells (and so on).
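
For example, a hypothetical guard clause might look something like this; the logger contract and the domain exception are invented purely for the sake of the sketch:

using System;

// Illustrative only: a domain-specific exception plus a simple logging
// contract, so unexpected input is recorded and surfaced loudly.
public class PriceLookupException : Exception
{
    public PriceLookupException(string message) : base(message) { }
}

public interface ILogger
{
    void Warn(string message);
}

public class PriceService
{
    private readonly ILogger _logger;

    public PriceService(ILogger logger)
    {
        _logger = logger;
    }

    public decimal GetPrice(string sku)
    {
        if (string.IsNullOrWhiteSpace(sku))
        {
            // Log, then throw something the calling code (and the on-call
            // developer) can recognise instantly.
            _logger.Warn("Price requested without a SKU.");
            throw new PriceLookupException("A SKU must be supplied.");
        }

        // The real lookup is out of scope for this sketch.
        return 0m;
    }
}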

At this point you should be in a position to have quite a reasonable level of code coverage but, more importantly, the tests in place are relevant, focused and repeatable.

So let’s consider the next rule of unit testing:

What am I actually testing?

No code sits in isolation. We access databases, save to files on disk, send messages, create pictures and so on. The list is endless. Because of this, we are always going to be calling third party code and integrating with third party systems.

So should we test them? No. Should we test our integration with them? Possibly. Should we test our calls to the integration layer? Yes, provided we have abstracted our integration behind our own interfaces (such as a repository object or service object).

Encapsulate what ‘varies’ behind an interface, code to that interface and then write unit tests for the code that depends on that interface, according to the rules above. And if you only get 85% code coverage, well, the remaining 10% is a judgement call between you and your customer/employer!
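
To make that concrete, here is a rough sketch of the idea. The repository, order and service names are invented, and I’m assuming xUnit and Moq as the test and mocking frameworks; substitute whatever your project uses:

using System;
using Moq;
using Xunit;

// Illustrative abstraction: the real IOrderRepository might call a database
// or a web service, but the unit test never touches either.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    Order GetById(int id);
}

public class OrderService
{
    private readonly IOrderRepository _orders;

    public OrderService(IOrderRepository orders)
    {
        _orders = orders;
    }

    public decimal GetOrderTotal(int id)
    {
        var order = _orders.GetById(id);
        if (order == null)
            throw new InvalidOperationException("Order " + id + " was not found.");

        return order.Total;
    }
}

public class OrderServiceTests
{
    [Fact]
    public void GetOrderTotal_ReturnsTotalFromRepository()
    {
        // The mock stands in for the integration layer.
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.GetById(42))
                  .Returns(new Order { Id = 42, Total = 99.95m });

        var service = new OrderService(repository.Object);

        Assert.Equal(99.95m, service.GetOrderTotal(42));
    }
}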

So how do we write testable code?

Well-written code is something that takes time and experience to produce. To start with testable code we could go back and cover writing code according to SOLID principles. Let’s put that at the back of our minds for a moment and consider it in more abstract terms.

One method of writing testable code is to write the test first. This puts us into the mindset of thinking about how the contract will be used by calling code. Here we have an idea of our parameters, return types and the like. In fact, many unit test advocates would argue that this is the only way to do it. Of course, in reality it depends on where you work and how defined your designs/stories are.
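
As an illustration of that starting point, here is a test written before any implementation exists, against a made-up class; the stub merely compiles, and the test stays red until the behaviour is filled in:

using System;
using Xunit;

// Writing the test first pins down the contract: the method name, its
// parameter and its return type, before a line of implementation exists.
public class SlugGenerator
{
    public string ToSlug(string title)
    {
        throw new NotImplementedException();
    }
}

public class SlugGeneratorTests
{
    [Fact]
    public void ToSlug_LowercasesAndHyphenatesTheTitle()
    {
        var generator = new SlugGenerator();

        Assert.Equal("unit-testing-thoughts", generator.ToSlug("Unit Testing Thoughts"));
    }
}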

Another tool on our development shelf is to consider how we might build or refactor the code internally. One example is switching on magic strings or enum values. Can we replace this with polymorphic calls and remove the conditionals? If so, we can make our test cases easier to produce because we don’t have to set up our unit test with mocked data that handles both sides of the conditional.

If we don’t do that, our test is inherently harder to write. So it’s slower to produce and, as such, a barrier to producing a unit test at all.
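
Here is a hypothetical before-and-after to illustrate the point, with invented delivery-cost rules:

using System;

// Before: a conditional on an enum, where every test has to drive both branches.
public enum DeliveryType { Standard, Express }

public static class DeliveryCost
{
    public static decimal Calculate(DeliveryType type, decimal basketTotal)
    {
        switch (type)
        {
            case DeliveryType.Standard:
                return basketTotal > 50m ? 0m : 3.95m;
            case DeliveryType.Express:
                return 9.95m;
            default:
                throw new ArgumentOutOfRangeException(nameof(type));
        }
    }
}

// After: one small class per behaviour, each with its own focused test.
public interface IDeliveryCost
{
    decimal Calculate(decimal basketTotal);
}

public class StandardDelivery : IDeliveryCost
{
    public decimal Calculate(decimal basketTotal)
    {
        return basketTotal > 50m ? 0m : 3.95m;
    }
}

public class ExpressDelivery : IDeliveryCost
{
    public decimal Calculate(decimal basketTotal)
    {
        return 9.95m;
    }
}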

SOLID code

Part of SOLID design is to create classes that have single responsibilities. Then we create a contract for them and depend on that contract, letting us change the behaviour of our application as long as the contract is maintained. In a unit testing scenario, this lets us insert fakes/mocks more easily.
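
A tiny, illustrative example of that idea is a clock contract with a real implementation for production and a hand-rolled fake for tests (the names are my own, not from any particular codebase):

using System;

// A single-responsibility contract: production code depends on IClock, so a
// test can freeze time without touching anything else.
public interface IClock
{
    DateTime UtcNow { get; }
}

// The real implementation used by the application.
public class SystemClock : IClock
{
    public DateTime UtcNow
    {
        get { return DateTime.UtcNow; }
    }
}

// A hand-rolled fake for tests: different behaviour, same contract.
public class FixedClock : IClock
{
    public FixedClock(DateTime fixedTime)
    {
        UtcNow = fixedTime;
    }

    public DateTime UtcNow { get; private set; }
}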

In writing testable code it’s important to strike a balance between breaking the problem up into contracts you depend on and having an explosion of classes. One code smell is a constructor with many dependencies. If you are in this situation it’s not always bad, but for greenfield code it’s certainly something you should look at and review with colleagues.

Not only does this make your unit tests harder to set up, it can also mean your interface contracts are not fit for consumption. At the lowest level in your API this is fine, but as you go up towards the consumer I strongly recommend that the interfaces define key application ‘operations’ rather than lots of smaller methods.
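
As a rough, invented illustration, compare a chatty fine-grained contract with one that exposes a key application operation:

// A chatty, fine-grained contract: every consumer has to orchestrate the
// steps (and every test has to mock all three calls).
public interface IBasketInternals
{
    bool IsInStock(int productId, int quantity);
    decimal PriceLine(int productId, int quantity);
    void SaveLine(int productId, int quantity, decimal price);
}

// A consumer-facing contract exposing one key application 'operation'.
public interface IBasketService
{
    void AddToBasket(int productId, int quantity);
}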

This is obviously project dependent: in some projects you want a simple external interface and in others you don’t. But even so, it’s worth having at the back of your mind.

Summary

These are just some of my thoughts about unit testing and how it relates to software design. I’ll follow these up with some thoughts on designing your software and my take on bottom-up vs top-down design and how that affects your project scalability.
