Test driven development and goals
Nov. 3rd, 2011 11:25 am
Test driven development (TDD) is a concept in computer programming, and I'd like to take time to explain it for non-technical people, as I think it's a useful start for wider discussions about goals.
In an old-fashioned computer/IT project, to make a new web site or software program, you'd start by deciding what you wanted, writing some sort of design or requirements, and then the developers would write the code. After they'd written the code, testers would come along and test it before you used it for real.
Many IT projects still work this way, but there are a few disadvantages. If the testers find a bug at the last minute, you have to go back to the developers to fix it, and then they have to re-test a load of stuff to check nothing else got broken in the process of fixing it. And although it's great for a project that you do all at once (known as a "waterfall" or "big bang" approach), it gets messy if you then have a new version with new features, and have to figure out what to test again. The AO3, like many other modern web sites, works on an "agile" approach instead of waterfall, and has lots of little versions or releases with new features in each one.
So along comes automated testing, the solution to all our problems! Well, in theory - it means the poor overworked testers can sit back, watch the computer do all the work, and have a nice break. And it means you don't have to worry about what to retest - you can just make it retest ALL THE THINGS! But in practice, it's not quite that simple. You still need to tell the computer what to test, and sometimes that's nearly as much work as just testing it manually. And many automated systems need technical skills to set up, so it's more work for the developers to write the automated tests - and since they're testing their own work, they're more likely to miss bugs. Just like editing your own writing, there are some things you just don't catch. If you're writing the tests after you've already decided what the code is going to do, that's like writing the design after you've already coded it - writing the questions when you already know the answers.
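To make "automated test" a bit more concrete, here's a toy sketch in Python (a made-up tag-merging helper, not the AO3's actual code, which runs on Ruby on Rails): the test is just a check the computer can re-run for you after every change, instead of a person clicking through the site by hand.

```python
def add_tags(existing, new):
    """Combine a work's existing tags with newly added ones, without duplicates."""
    return sorted(set(existing) | set(new))

# An automated test: a check the computer can repeat in seconds,
# every single time the code changes.
def test_add_tags_removes_duplicates():
    assert add_tags(["fluff", "angst"], ["angst", "humor"]) == ["angst", "fluff", "humor"]

test_add_tags_removes_duplicates()
print("test passed")
```

Once a check like this exists, "retest ALL THE THINGS" just means running every such check again, which the computer is happy to do.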
Test driven development is meant to solve some of this. In the ideal case, the tests can be written in natural language instead of fancy code, so your "some sort of design or requirements" gets magically turned into your tests. Usually it needs a bit of work from a coder to do that magic, but it's a good start. And then the coder has some tests to start with, before they've even written the code. Of course, if they run the tests at this point, they'll all fail - "Error: shiny new feature doesn't exist yet". But now the coder can write their code knowing immediately what it's meant to do, and can check their own work by running the test once they've done it. And then you also have a lovely set of automated tests, which can be run on a regular basis by the computer.
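And here's the test-first order of events as a tiny sketch, again in Python with a made-up `word_count` feature (nothing from the AO3's actual codebase): the test is written before the code, fails while the feature doesn't exist, and passes once it's written.

```python
# Step 1: write the test first. It describes what the shiny new feature
# should do. Run it at this point and it fails - "word_count doesn't exist yet".
def test_word_count():
    assert word_count("stuck in a lift") == 4
    assert word_count("") == 0

# Step 2: write just enough code to make the test pass.
def word_count(text):
    """Count the words in a piece of text, splitting on whitespace."""
    return len(text.split())

# Step 3: run the test again - it passes, and the computer can keep
# re-running it automatically after every future change.
test_word_count()
print("word_count tests passed")
```

The coder never has to guess what "done" looks like: done is when the test they started with goes from failing to passing.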
But the reason I really wanted to talk about this is that a lot of my day job involves looking at what makes a good test, or a well-written requirement - how to have things a computer can check, that are black and white, pass or fail, while also making sure it actually does what you want it to do, which is all about shades of grey. I'll talk more about that in Part 2.