Testing, testing... One, two, three


Today I would like to talk about tests and why they matter. In my short-ish career as a software engineer I have come across developers who wrote tests for their code religiously, as well as people who were extremely reluctant to write any, and, admittedly, I used to be one of the latter. It took a while, but I have changed my attitude towards tests.

Why Test Things?

I can sort of understand the logic behind that reluctance. The first product I worked on was twenty-something years old, with a pretty huge codebase and not a lot of tests; developers were focused on delivering new features requested by customers and fixing existing bugs. Writing tests for existing functionality was considered a waste of valuable resources: first you would have to familiarise yourself with the codebase well enough to understand what was happening, and then try to come up with a way to test sizeable chunks of code. For me personally, the lack of tests meant I was never quite sure that my addition to the code hadn't broken something. But I suppose it sort of worked, because we released something like once a year and had a team of testers ceaselessly poking the product with sticks to see if it broke.

So the moral here is that having tests gives you more confidence that your change didn’t just make the whole thing explode in the most violent of ways. And, quite frankly, tests allow you to see whether the functionality you added actually does what you envisioned it to do.

It’s easy to take a look at an isolated tiny chunk of code like this:

	public boolean needToExit() {
		boolean keyIsPressed = false;
		if (KeyListener.isEscapePressed()) {
			keyIsPressed = true;
		}
		return keyIsPressed;
	}

and decide that it’s pretty straightforward and you absolutely understand how it’s working, so why bother testing it. Well, because in any program you’re writing, no code is isolated. The interrelationships of conditional statements, loops and everything else very quickly make the code complex, and it is this complexity that makes it hard to anticipate where things might break.

Consider a simple platformer game. The main game loop will need to listen for input and update enemy, NPC and character positions based on it; draw the NPCs, enemies and the character in their new positions; run collision detection to see whether any of the characters are interacting; and so on. In isolation each of these things might seem straightforward and trivial; it’s their dependence on each other that gives rise to complexity.
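To make the ordering of those steps concrete, here is a minimal sketch of such a loop, boiled down to a single testable tick. Every class and method name here is invented for illustration; a real game would poll an input library and actually render things.

```java
// A minimal, hypothetical sketch of one pass of a platformer's game loop:
// read input, update state, then (in a real game) detect collisions and draw.
public class GameLoop {
    private boolean escapePressed = false;
    private int playerX = 0;

    // One tick of the loop, with the input state passed in so it can be tested.
    public void tick(boolean rightPressed, boolean escPressed) {
        escapePressed = escPressed;   // input listeners
        if (rightPressed) {
            playerX += 1;             // update character position
        }
        // collision detection and drawing would happen here
    }

    public boolean shouldExit() {
        return escapePressed;
    }

    public int getPlayerX() {
        return playerX;
    }

    public static void main(String[] args) {
        GameLoop loop = new GameLoop();
        loop.tick(true, false);   // player moves right, no exit requested
        loop.tick(false, true);   // escape pressed
        System.out.println(loop.getPlayerX() + " " + loop.shouldExit());
        // prints "1 true"
    }
}
```

Passing the input state into tick, rather than reading it from a global, is exactly what makes a loop like this easy to exercise from a test.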

Really, the best way to convince yourself to write tests is to start doing it early. And the earliest possible time to write a test is before you have written any actual code. So yeah, Test Driven Development is cool. Plus, you never get the chance to think that annoying thought: ‘Eh, I’ve already written the functionality, it’s self-explanatory, why bother testing it?’.

Unit Tests

As is obvious from the name, Unit Tests test units of code (best explanation… ever). To me a single unit of code is a method, which is why I prefer to write tiny methods; but it is not unusual (albeit a bit weird) to consider a whole class a unit.
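Taking the needToExit method from earlier as the unit, a test for it might look something like the sketch below. The tiny KeyListener stand-in exists only so the example is self-contained; in the real code it would be whatever input library the game uses, and the test class name is made up.

```java
// A hypothetical unit test for the needToExit method shown earlier.
// KeyListener is a stand-in whose state the test can flip directly.
class KeyListener {
    static boolean escapeDown = false;

    static boolean isEscapePressed() {
        return escapeDown;
    }
}

public class ExitCheckTest {
    // The unit under test, as in the snippet above.
    public static boolean needToExit() {
        boolean keyIsPressed = false;
        if (KeyListener.isEscapePressed()) {
            keyIsPressed = true;
        }
        return keyIsPressed;
    }

    public static void main(String[] args) {
        // Case 1: escape is not pressed, so we should not exit.
        KeyListener.escapeDown = false;
        if (needToExit()) throw new AssertionError("should not exit yet");

        // Case 2: escape is pressed, so we should exit.
        KeyListener.escapeDown = true;
        if (!needToExit()) throw new AssertionError("should exit now");

        System.out.println("both cases pass");
    }
}
```

Two cases, one per branch of the if, is all it takes to cover a unit this small.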

My main interest in unit tests stems from two key points:

Writing a unit test means solving a problem before it even becomes one. As a counter-example, consider bad code going into production: customers come across it and report a bug, the company finds a developer to fix it, and the developer spends a while reproducing it, then tracking it down, then fixing it. Not only could that time have been spent writing new features; the company is also risking losing credibility points with its customers. Because real-life examples are cool: remember the old Apple Maps bug?

Unit tests are also a great way to tell if your new code has broken anything. Regardless of whether you’ve added new functionality or decided to refactor the whole codebase for fun - running the tests afterwards gives you confidence that nothing’s horribly broken. It can also be quite comforting to look at the code coverage metrics and realise that your tests cover most of the functionality you’ve written. And that combination of confidence + comfort means that you’re more willing to work with the code, improving it when and as you wish. It is a relief to not be afraid of your code.

Admittedly, not everything can be, or needs to be, tested directly; when that’s the case, the awkward dependency can be stubbed or mocked.
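For instance, a real keyboard can’t sensibly be pressed from a test. One common way around that is to hide the untestable dependency behind an interface and hand the code a stub. All the names below are invented for this sketch:

```java
// Hypothetical example of stubbing: the real keyboard hides behind an
// interface, so a test can substitute a fake that answers however we like.
interface InputSource {
    boolean isEscapePressed();
}

class ExitChecker {
    private final InputSource input;

    ExitChecker(InputSource input) {
        this.input = input;
    }

    boolean needToExit() {
        return input.isEscapePressed();
    }
}

public class StubExample {
    public static void main(String[] args) {
        // The stub always reports escape as pressed - no real keyboard needed.
        ExitChecker checker = new ExitChecker(() -> true);
        if (!checker.needToExit()) throw new AssertionError("stub should force exit");
        System.out.println("stubbed input works");
    }
}
```

In production you would pass in the real input source; in tests, a lambda like the one above. Mocking libraries automate this pattern, but the idea is the same.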

I used to massively dislike tests; the book that changed my mind was Robert C. Martin’s ‘Clean Code’. Amazing stuff.

Integration Tests

Imagine you’re making a car. Because I don’t know much about the automotive industry, I picture making a real-life car sort of like you would make one out of non-branded, totally nondescript plastic building blocks. You test the individual parts, you make sure that the wheels spin and the engine works, then you put everything together and it turns out that half of the wheels spin in one direction and the other half in the opposite direction.

That’s why you need integration testing. What integration testing is all about is also pretty self-evident from the name, but what the heck. It’s about taking small units of code, integrating them into bigger pieces and making sure everything still works the way it should.

Amongst the popular approaches to integration testing are:

  • Risky/Hardest first: you go off and test the risky/hard modules first;
  • Top-down: you start off with the top-level integrated modules and go down the individual module branches;
  • Bottom-up: you start with low level components and gradually make your way up the chain to the top;
  • Sandwich: an attempt at marrying top-down and bottom-up.
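Sticking with the car analogy, here is a bottom-up flavoured sketch: two small units that would each pass their own unit tests, plus a check that they still agree once wired together. All the names are invented for this example.

```java
// Hypothetical bottom-up integration check: unit-tested parts can still
// disagree once assembled, which is exactly what integration tests catch.
class Engine {
    // +1 drives the wheels forward, -1 backwards.
    int direction() {
        return 1;
    }
}

class Wheel {
    private final Engine engine;

    Wheel(Engine engine) {
        this.engine = engine;
    }

    // The wheel spins whichever way the engine drives it.
    int spinDirection() {
        return engine.direction();
    }
}

public class IntegrationExample {
    public static void main(String[] args) {
        Engine engine = new Engine();
        Wheel left = new Wheel(engine);
        Wheel right = new Wheel(engine);

        // The integration check: every wheel must spin the same way -
        // the exact failure mode from the car example above.
        if (left.spinDirection() != right.spinDirection()) {
            throw new AssertionError("wheels disagree");
        }
        System.out.println("all wheels spin the same way");
    }
}
```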

This gentleman explains integration testing far better than I currently can; I strongly suggest reading his post.

Acceptance Testing

It’s crazy how self-descriptive the names of the testing paradigms are…

Way before you got to this point you must have sat down with the client and the client explained what they needed. Their needs might have been written down as a document, represented through personas or user journeys, or some guy called Reginald might’ve told you that he totally remembered everything the client said and for some reason you believed him. Classic Reginald.

Some companies tend to put a wall between the developers and the client; others make a regular dialogue between them possible. Either way, what happens is the same: the product gets tested against the customer’s requirements. In the first case acceptance testing can be the much-dreaded moment of truth; in the latter it’s less of a surprise, since the customer will generally be more or less aware of what’s happening to the product as it happens.

There are different types of acceptance testing. Some of them are:

  • Alpha and beta testing: the software is tried out manually, by internal users or by a selection of external users;
  • Contract and regulation acceptance testing: the software is tested against specified requirements, manually or through automated tests;
  • Operational acceptance testing: the software is tested for how it deals with failures, whether backups work and whether security checks are in place - in other words, everything that’s not directly related to the actual functionality;
  • Black box testing: the software is tested without providing any implementation details to the tester.

Here is a pretty good, more in-depth explanation of what acceptance tests are all about, albeit by a company that does those for a living, so the article does contain a sales pitch; alternatively you can try the Wikipedia articles for acceptance testing and operational acceptance testing.

The Triangle

This post would be incomplete without The Testing Triangle.

Its message is simple: unit tests are the base of software development and should be the most numerous, making up roughly 70% of all tests, with their code coverage getting as close to 100% as possible (which, in real life, never quite happens, by the way). Integration tests are expected to make up around 20% of the total (with coverage of up to 60%). And, finally, acceptance tests are there to bravely conquer the remaining 10%.

I have yet to come across a company that actually does this.

To Summarise

Delivering software without testing is like painting someone’s portrait in complete darkness with something that sort of feels like a brush, and then, when the client isn’t happy - throwing someone who’s never touched that portrait at fixing the painting one brush stroke at a time.

Tests are good. Different types of tests are even better. We generally expect the planes we fly on, the trains we take to work, the bridges we drive over, our cars - all to be tested before they go into service; how is software that handles our finances, runs on medical equipment and stores our personal data any different?

NB: Looking through last week’s post made me think about my unit tests enough to attend ‘The secret to getting your software right early, and then keeping it that way’, a talk by Paul Stringer at the excellent Skills Matter venue, and two days later I found myself discussing testing with a friend. It only seemed fair to write a blog post on the subject. It might look a bit muddled, but bear with me, we’ll get there.