June 29, 2015 - Jon A. Cruz
How to Make Software Testing Work for You
Testing is a vital aspect of the software development process. Most software developers should already know this, but if you find yourself working with people who disagree, it might be time to step back and take stock of the situation. Becoming familiar with common testing terminology is a good first step (if this topic is new to you, it's worth reviewing the basic terms before going further), but decisions about the details of testing implementation can often get tricky. The first of these is deciding when it's necessary to test.
Is it Time to Test?
For developers, this question has a simple answer: Yes. If you are coding, you should be testing. Between unit testing and system/integration testing, most development phases should be covered by some sort of test. Different phases might involve different types of testing, but some form of testing should be going on throughout. With that said, everything that can reasonably be automated should be automated, to reduce the amount of time taken away from development. In fact, well-designed, automated unit tests can quickly produce overall time savings because more bugs are caught early. This makes it easier to explore alternative development options and to fix bugs as soon as you spot them, among other benefits.
Code that will require more than 2-3 days to develop will most likely be finished sooner if unit tests are part of the workflow from the start, and unit testing should be carried out consistently for any new code that's written. It can even help during the design process: code that exercises an API under development can surface design issues early, helping the structure and intent come together more quickly.
Higher-level testing, such as integration and system testing, should be carried out periodically, before committing major features, and whenever schedule pressure starts to bear. Periodic tests, especially any that require a large amount of time to perform, seem to offer the best payoff when the interval is a week or less. A Monday-Wednesday-Friday cycle is often efficient because it lets you start each week knowing the code base is clean, and head off for the weekend with a clear conscience, knowing that things are still happy. Additionally, the Wednesday check provides enough time to enlist help and clean up problems before the weekend hits, thus helping avoid Friday afternoon crunches.
Finally, any acceptance test, either manual or automatic, should be completed before any builds are handed off from engineering to QA. The tests done by the engineering and QA groups can be the same, have some overlap, or differ entirely, depending on the organization and groups involved. However, it’s important for developers to have a clear understanding of the time required to ensure such testing is done.
What Should be Tested?
During your initial decisions about what should be tested, you can set acceptance testing aside. This is not to say that acceptance testing isn't important, but rather that it will normally be determined at a much higher level, through collaboration among all pertinent groups, including Engineering, QA, Marketing, and others. If not… well, that might be an opportunity to make improvements to your process.
Developers will gain the most by focusing on system testing and automated unit/integration testing. For system tests, the entire development group should identify the scenarios needed to cover the minimum requirements of smoke testing, in addition to anything else that can be included in the time allotted. The focus should be on the entire system from end to end; the team should be able to trust that unit testing will catch a different set of problems.
Unit tests provide the biggest payoff for developers. Once an automated framework is in place, each new bit of code should be tested. Inputs for each 'unit' (function, class, etc.) should be checked for edge conditions, scale, and known problem areas. Unit tests should be very fast; normally they should take less time to run than it takes to build the components in question.
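As a minimal sketch of what edge-condition checks look like (Python's built-in unittest here, though the same idea carries to any language; `clamp` is a hypothetical function invented for illustration):

```python
import unittest

def clamp(value, lo, hi):
    """Constrain value to the inclusive range [lo, hi]."""
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    return max(lo, min(value, hi))

class TestClamp(unittest.TestCase):
    def test_value_inside_range_is_unchanged(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_value_at_lower_edge(self):
        self.assertEqual(clamp(0, 0, 10), 0)

    def test_value_below_range_is_raised_to_lo(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_value_above_range_is_lowered_to_hi(self):
        self.assertEqual(clamp(99, 0, 10), 10)

    def test_inverted_range_is_rejected(self):
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Tests like these run in milliseconds, which is what makes it practical to execute them on every build.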
Here are some questions you should be asking while developing unit tests:
- Are there tests covering all functions/routines in the code?
- Do tests cover the edge conditions for variables?
- What happens when invalid inputs (Null pointers, negative numbers, non-ASCII strings, etc.) are passed in?
- Do tests have good names so that when they fail it is clear what went wrong?
- Are tests split up so that individual concepts live in different tests, even if they exercise the same function?
- Did everything that was just coded get a unit test?
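To make the checklist concrete, here is a small sketch using Python's unittest (`parse_port` is a hypothetical function made up for this example): each test holds exactly one concept, the names point straight at what went wrong, and invalid inputs get their own cases.

```python
import unittest

def parse_port(text):
    """Parse a TCP port number from a string; raise ValueError if invalid."""
    if text is None:
        raise ValueError("port is required")
    port = int(text)  # raises ValueError for non-numeric input
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return port

class TestParsePort(unittest.TestCase):
    # One concept per test, with names that identify the failure at a glance.
    def test_typical_port_parses(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_lowest_valid_port_is_accepted(self):
        self.assertEqual(parse_port("1"), 1)

    def test_highest_valid_port_is_accepted(self):
        self.assertEqual(parse_port("65535"), 65535)

    def test_zero_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("0")

    def test_negative_port_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("-1")

    def test_non_numeric_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("http")

    def test_missing_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_port(None)

if __name__ == "__main__":
    unittest.main(exit=False)
```

When a name like test_negative_port_is_rejected shows up in a failure report, no one has to read the test body to know what broke.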
Here’s How to Start Testing
A more polished system can incorporate various analysis tools to improve code coverage, but I'll end with a quick method to jump-start unit testing. A good practice that is simple to implement in nearly any development environment, and that I've found to work surprisingly well, is to start with any brand-new code or bug fix, then bring up the unit test program in your IDE of choice. Next, set breakpoints on every line of the new or fixed code and run the unit tests. Remove every breakpoint the debugger stops at; when the run ends, the remaining breakpoints highlight the untested code. It should then be a fairly simple matter to add tests covering the remaining aspects of the code.
Employ this method and you should see the coverage and helpfulness of your unit tests improve significantly in a very short time. Of course, doing this on a code base that has not been thoroughly tested before might be a daunting task. In those situations, I recommend starting by placing a breakpoint at the start of each function. Run initial passes to get a minimal amount of testing completed on each, then follow up with additional passes to improve the coverage of each function in detail.
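The same "which lines never ran" idea can also be scripted outside the debugger. Here is a rough sketch using Python's sys.settrace hook (`absolute` is a toy function invented for illustration; a real project would normally reach for a dedicated coverage tool):

```python
import sys

def absolute(n):   # offset 0: the def line
    if n < 0:      # offset 1
        return -n  # offset 2: the branch our "test" below forgets
    return n       # offset 3

hit_offsets = set()

def tracer(frame, event, arg):
    # Record each executed line of absolute(), relative to its def line.
    if event == "line" and frame.f_code is absolute.__code__:
        hit_offsets.add(frame.f_lineno - absolute.__code__.co_firstlineno)
    return tracer

sys.settrace(tracer)
absolute(5)          # the "unit test": only the non-negative path runs
sys.settrace(None)

body_offsets = {1, 2, 3}  # all executable lines in absolute()
untested = body_offsets - hit_offsets
# untested now names the negative branch, just as a leftover breakpoint would
```

Anything left in untested after the run is code the tests never touched, which mirrors the breakpoints that were never hit.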
All software projects should implement tests similar to what I’ve presented in this article because they will drastically improve the overall quality of the code base. The initial setup might require a relatively significant amount of time, but the improvements seen as a result of this effort should more than compensate for the expense. If your organization doesn’t take testing seriously, perhaps it’s time to change some people’s minds.
About Jon A. Cruz
Jon Cruz is no longer employed by Samsung. We thank him for all of his great work and wish him the best in his future endeavors. Jon Cruz was a Senior Engineer at Samsung's Open Source Group. He has been working in Open Source for many years, and shipped his first Linux-based appliance in 1996. He has collaborated on Wayland and is an Inkscape core developer and Board Member. He has also participated as a mentor in Google's Summer of Code since its inception. Finally, Jon is an international speaker, presenting at linux.conf.au, OSCON and elsewhere.