A comprehensive testing strategy checks that software does what it's supposed to do, from both a functional and a non-functional perspective. Testing aims to prevent bugs and contributes to the overall level of trust users have in a platform, reducing the risk of reputational damage and revenue loss.
Software engineers and QA analysts work both together and separately to plan, design, and validate results against a suite of test cases:
- Developers achieve correctness of the executing code by writing unit and integration tests, checking the outcome of different inputs and workflow steps. Working to capture and fix issues early in the software lifecycle, developers debug code, removing faults that can cause an application to fail to achieve its necessary function.
- QA analysts confirm everything works as expected with acceptance tests and that a version of the software is ready for deployment. They are responsible for checking the correctness of the workflow, design, and usability of an application and monitoring and reporting progress on test metrics.
In this article, I introduce the main concepts and vocabulary used in software testing. I will leave more detailed discussions about specific test categories and best practices to later articles.
There are five main kinds of testing:
Unit testing is the first phase of software testing. A unit is the smallest testable component of the code, typically having one or more inputs and a single output. A unit test verifies that one part of the program works as intended, usually exercising a single method against mock data.
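As a minimal sketch, here is what a unit test can look like in Python. The function and its behaviour are illustrative, not taken from any real codebase: one unit, known inputs, known outputs.

```python
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # A unit test checks one method in isolation:
    # each assertion pairs a concrete input with its expected output.
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

In a real project these tests would typically be collected and run by a test runner such as pytest rather than called by hand.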
The next testing phase is integration testing, which verifies that interdependent application modules work together correctly and attempts to find flaws in how these interconnected units interact. For example, think about two applications communicating over a middleware messaging queue. An integration test would check that a request raised in one application is processed correctly by the other.
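The queue example above can be sketched in miniature. Everything here is hypothetical: an in-memory `queue.Queue` stands in for the middleware, and the two classes stand in for the two applications; the test checks the interaction between them rather than either unit alone.

```python
import queue

class Producer:
    """Stands in for application A: raises requests onto the queue."""
    def __init__(self, q):
        self.q = q

    def raise_request(self, payload):
        self.q.put({"type": "request", "payload": payload})

class Consumer:
    """Stands in for application B: processes requests off the queue."""
    def __init__(self, q):
        self.q = q
        self.processed = []

    def process_one(self):
        message = self.q.get(timeout=1)
        self.processed.append(message["payload"].upper())

def test_request_flows_through_queue():
    q = queue.Queue()  # stand-in for the middleware messaging queue
    producer, consumer = Producer(q), Consumer(q)
    producer.raise_request("order-42")
    consumer.process_one()
    # The integration test asserts on the end-to-end outcome,
    # not on either module's internals.
    assert consumer.processed == ["ORDER-42"]

test_request_flows_through_queue()
```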
An acceptance (or functional) test tells us whether a usage scenario has either passed or failed.
These tests define:
- Steps required
- Input parameters
- Expected results
- Initial data
- Browsers / operating systems to test against
Acceptance tests bridge the gap between developers, testers, and users, ensuring a high level of understanding about what’s required. Of course, passing the acceptance test doesn’t necessarily mean that the software has no bugs.
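The elements listed above can be captured as structured data, so developers, testers, and users share one description of the scenario. The checkout scenario below is entirely hypothetical, and the pass/fail check reflects the binary outcome of an acceptance test.

```python
# A hypothetical acceptance test case, recorded as plain data.
checkout_acceptance_test = {
    "scenario": "Customer completes checkout",
    "initial_data": {"cart": ["concert-ticket"], "user": "registered"},
    "steps": [
        "Open the cart page",
        "Click 'Pay now'",
        "Enter valid card details",
    ],
    "input_parameters": {"card": "4242 4242 4242 4242", "quantity": 1},
    "expected_results": [
        "Order confirmation page shown",
        "Confirmation email queued",
    ],
    "environments": ["Chrome / Windows", "Safari / macOS"],
}

def passed(actual_results, test_case):
    """An acceptance test has a binary outcome: pass or fail."""
    return actual_results == test_case["expected_results"]
```

In practice this shape maps naturally onto tools like Gherkin/Cucumber feature files, where the same scenario is written in near-plain English.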
Regression testing checks that previously designed and validated functionality continues to work after a change, such as modifying a workflow or fixing a bug.
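A common pattern is to pin a bug fix down with a test that replays the exact input from the bug report. The function and the bug below are invented for illustration: imagine a quantity parser that originally crashed on surrounding whitespace.

```python
def parse_quantity(text):
    """Parse a quantity field from user input.

    Hypothetical history: this originally crashed on surrounding
    whitespace, and the .strip() below is the fix.
    """
    return int(text.strip())

def test_parse_quantity_regression():
    # Replays the exact input from the (invented) bug report,
    # so the bug cannot silently return in a later change.
    assert parse_quantity(" 3 ") == 3
    # And pins the behaviour that already worked,
    # to prove the fix broke nothing.
    assert parse_quantity("7") == 7

test_parse_quantity_regression()
```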
System tests assess the software from the perspective of the end-user:
- Stress testing evaluates the robustness of software by pushing it outside its usual operating limits, for example a concert ticket website handling the surge in customers when a band announces its tour dates.
- Security testing uncovers vulnerabilities in an application’s security structures, using a combination of scanning software and functional testing by a security consultant. Security tests also check for holes in authorization, e.g., that the application stops users who are not in the correct user groups from performing actions beyond their role limitations.
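The ticket-surge example can be sketched as a tiny stress test. The booking service is a toy stand-in, and a thread pool simulates a burst of concurrent buyers well above the available stock; the assertion checks the system never oversells under load.

```python
from concurrent.futures import ThreadPoolExecutor
import threading

class TicketCounter:
    """Toy booking service: sells tickets until stock runs out."""
    def __init__(self, stock):
        self.stock = stock
        self.sold = 0
        self._lock = threading.Lock()

    def buy(self):
        with self._lock:  # without the lock, the surge could oversell
            if self.stock > 0:
                self.stock -= 1
                self.sold += 1
                return True
            return False

def stress_test(counter, customers):
    """Simulate a surge of concurrent buyers, far above normal load."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(lambda _: counter.buy(), range(customers)))
    return results.count(True)

counter = TicketCounter(stock=100)
sold = stress_test(counter, customers=1000)  # 10x more demand than stock
assert sold == 100 and counter.stock == 0    # never oversold under load
```

Real stress tests run against a deployed system with tools such as JMeter or Locust rather than in-process threads, but the idea is the same: drive load past the design limits and assert the system degrades safely.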
A comprehensive testing approach includes both White Box and Black Box testing.
White Box testing (also called Open Box or Clear Box testing) checks what happens inside the software. This approach verifies that:
- The code handles inputs and produces the expected outputs.
- The code has the expected internal state.
- The code execution goes through the correct branches.
- Modules raise and handle exceptions correctly.
- Potential security flaws are handled correctly by defensive programming (see OWASP). Probing for such flaws from the outside, known as penetration (or pen) testing, is beyond the scope of this article.
This approach requires the tester to have extensive knowledge of the code, and the expertise of a developer, to prevent costly bugs from moving up through the QA, UAT, and production environments.
Unit and integration testing are examples of White Box testing. Such automated tests have many advantages, including having a computer run many tests quickly against all possible code branches. This kind of testing is ideal for checking that new features haven’t broken the existing code. However, unit tests are expensive at the outset because they can typically be done only by a developer. They will easily double the time it takes to complete a new feature.
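What "going through the correct branches" means in practice can be shown with a deliberately branchy function (invented for illustration): a white-box test is written with the branch structure in view, one assertion per path, including the exception path.

```python
def classify_age(age):
    """Branchy function used to illustrate white-box branch coverage."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    return "adult"

def test_all_branches():
    # One assertion per path through the code:
    assert classify_age(17) == "minor"   # first branch (boundary below 18)
    assert classify_age(18) == "adult"   # second branch (boundary at 18)
    try:
        classify_age(-1)                 # exception branch
        assert False, "expected ValueError"
    except ValueError:
        pass

test_all_branches()
```

Coverage tools such as `coverage.py` can report which branches a test suite actually exercised, which is how gaps like the missing exception path get spotted.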
Black Box means testing from a user’s perspective, i.e., somebody who can’t see into the internal workings of the code. Also known as Functionality or Behavioral testing, the analyst supplies inputs to a piece of software and checks the outputs to validate:
- Inputs are processed and stored correctly.
- Outputs are correct.
- The application handles human error and returns the appropriate validation messages.
A QA analyst can take a functional approach to Black Box testing, checking that the system meets its business requirements and acceptance criteria. The analyst can also check other aspects, like the application’s performance, and use regression tests to ensure that new functionality hasn’t broken anything, i.e., something that used to work but now doesn’t.
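A black-box test touches only the public interface: inputs go in, outputs or validation messages come out, and nothing is asserted about internals. The sign-up function below is a hypothetical stand-in for such an interface.

```python
def register_user(email):
    """Public entry point of a hypothetical sign-up module.

    A black-box tester knows only this interface, not the code behind it.
    """
    if "@" not in email:
        return {"ok": False,
                "message": "Please enter a valid email address."}
    return {"ok": True, "message": f"Welcome, {email}!"}

def test_black_box():
    # Valid input produces the expected output...
    assert register_user("ada@example.com")["ok"] is True
    # ...and human error produces an appropriate validation message.
    bad = register_user("not-an-email")
    assert bad == {"ok": False,
                   "message": "Please enter a valid email address."}

test_black_box()
```

Note the contrast with the white-box example: here the test asserts nothing about which branch ran, only about what a user would observe.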
A good testing strategy focuses on prioritizing what should work according to the system's users, and addresses the most significant risks. It’s important to understand that testing is a balancing act between the level of risk on one hand and time and money on the other. Testing should be seen as a means to an end, helping to discover problems as part of quality assurance. In almost all cases, however, it's simply too expensive and unrealistic to cover every possible scenario and be sure there are no bugs.
Thanks for reading! Let me know what you think in the comments section below, and don’t forget to subscribe. 👍