On Sunday I posted up an article stating that I aspire to be a software test engineer. Now, that doesn't mean that is what I want my business card to say, but I do want to have the traits that would make a good STE (Microsoft's term, not mine). One of my long-held beliefs is that testers MUST (in the IETF RFC sense) have a software development background and be actively involved in cutting code.

Going on what I have seen so far in my career, that requirement would easily wipe about 80% (a largish number pulled out of the air) of the people who call themselves testers off the playing field. Given that, I think I need to explain myself a little bit.

The purpose of testing is to increase quality and to ensure, as much as possible, that each new release of software meets end-user requirements. As the scope of applications increases, it's very difficult to completely test the surface area of any system without employing some kind of automation technique (unless you are willing to hire an army of testers - literally). There are lots of different forms of testing, and they all require different sets of skills - and it's these diverse skill requirements which I think make a card-carrying member of the code cutters' guild the only acceptable choice for a testing position.

For example, unit testing usually requires someone who is intimately familiar with the platform and tools that are being used to build the software. Some methodologies say that developers should be writing their own unit tests, others suggest that hat should be worn by another person, while another camp says "testing? we don't need no stinking testing!". Regardless of who does it, they are going to need to know how to cut code - and not only that, they are going to need to be experienced enough to know how other developers think and to find ways to trip them up when they make mistakes. I will also make the comment that many developers who think they are writing unit tests aren't - quite often they are writing drivers.
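To illustrate what I mean by that last point (this is a made-up example, not code from any project I've worked on): a driver just exercises the code and leaves a human to eyeball the output, whereas a unit test states the expected result and fails on its own when the behaviour regresses. Something like this, sketched in Python:

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical function under test: reduce price by the given percentage."""
    return price - (price * percent / 100.0)


# A "driver": it runs the code and prints the result, but a human has to
# eyeball the output to decide whether it is correct. Nothing fails
# automatically when the behaviour regresses.
def driver():
    print(apply_discount(100.0, 10))


# A unit test: it states the expected outcome and fails loudly on its own,
# so it can run unattended as part of a build.
class ApplyDiscountTests(unittest.TestCase):
    def test_ten_percent_off_one_hundred(self):
        self.assertAlmostEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertAlmostEqual(apply_discount(100.0, 0), 100.0)


if __name__ == "__main__":
    unittest.main()
```

The driver tells you nothing unless somebody is watching it run; the test can sit in the build and catch the mistake on the day it's made.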

What about performance testing? Funny story. I recently had a client tell me that the job of his testers was to run the automated stress and performance tests and report the statistics back to the developers and customers. It kind of reminded me of that guy from Office Space (you have seen Office Space, haven't you?) sitting in the office with the two Bobs, explaining how his job involved taking the specifications (physically) from the customers and giving them to the programmers. The truth of the matter is that performance testing is a highly technical task - not a mathematical one.

Even getting software installed into the performance testing environment in the early stages can be a task that only someone with some pretty serious systems experience can do. I've found that just this phase can be enough for some testers to throw up their hands in disgust. If you spend the first eight months of a one-year project getting a functioning setup package, then by the time you actually hit the system with some load it may be too late to do anything about the problems you discover, and the project ends up in crisis. Poor performance is generally the result of simple mistakes made in design and implementation; caught early, they aren't a major issue and can be fixed.

Assuming you have gotten into performance testing successfully, you are likely to find a whole bunch of low-hanging-fruit performance problems, such as database connections not being cleaned up and pools being exhausted - every application has them. A good tester with a developer background can provide very specific information about where these problems exist - often right down to the line number. That's very helpful to developers when it comes time to fix up the problems - especially when they know and can trust the tester's instincts because they share a similar background.
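The connection-leak case is the canonical one. Here's a minimal sketch of what I mean (using Python's built-in sqlite3 purely for illustration - a real pooled database is where the pain actually shows up) - the kind of thing a tester who reads code can point at by line number:

```python
from contextlib import closing
import sqlite3


def leaky_lookup(db_path, user_id):
    # The connection opened here is never closed; each call holds a handle
    # until the garbage collector eventually gets to it. Against a pooled
    # database server, enough of these under load exhausts the pool.
    conn = sqlite3.connect(db_path)
    cur = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
    return cur.fetchone()


def tidy_lookup(db_path, user_id):
    # The same query with deterministic cleanup: closing() guarantees the
    # connection is released as soon as the block exits, even on error.
    with closing(sqlite3.connect(db_path)) as conn:
        cur = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
        return cur.fetchone()
```

A tester who can read the first version and say "the connection on that line is never released" is worth a lot more than one who can only report that the pool ran dry after twenty minutes of load.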

UI and functionality testing is another big category. This is in fact more of a challenge, even with some of the robot tools that are out there. I've found that people with an appreciation of the Windows message pump architecture seem to handle this stuff better, especially when you need to write your own tools to handle special scenarios.
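As a taste of what I mean by rolling your own tools, here is a rough, Windows-only sketch in Python via ctypes (the Notepad window title is just a stand-in target). Once you know that every window lives or dies by the messages pumped at it, driving one from a test harness stops being mysterious:

```python
import ctypes

user32 = ctypes.windll.user32

WM_CLOSE = 0x0010


def close_window(title):
    """Find a top-level window by its exact title and ask it to close."""
    hwnd = user32.FindWindowW(None, title)
    if not hwnd:
        print("No window titled %r found" % title)
        return False
    # Post rather than send, so the harness doesn't block if the target's
    # message pump is busy or hung.
    user32.PostMessageW(hwnd, WM_CLOSE, 0, 0)
    return True


if __name__ == "__main__":
    close_window("Untitled - Notepad")
```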

I think I'm starting to rant now (maybe I started out ranting?) so I will sign off, but my final thought is that developers can become testers once they have reached a certain level of maturity, and any of the test-domain-specific knowledge can be acquired. It's much harder and more time consuming to go the other way.