Let's review the four steps in detail.
So developers and other stakeholders of the project do the "planning and building," while testers do the check part of the cycle. Therefore, software testing is done in the check part of the PDCA cycle.
White box testing is a testing strategy based on internal paths, code structures, and implementation of the software being tested. White box testing generally requires detailed programming skills.
There is one more type of testing called gray box testing. In gray box testing we look into the "box" being tested just long enough to understand how it has been implemented. Then we close up the box and use our knowledge to choose more effective black box tests.
The above figure shows how both types of testers view an accounting application during testing. Black box testers view the basic accounting application, while in white box testing the tester knows the internal structure of the application. In most scenarios white box testing is done by developers, as they know the internals of the application. In black box testing we check the overall functionality of the application, while in white box testing we do code reviews, view the architecture, remove bad code practices, and do component-level testing.
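As a minimal sketch of the two viewpoints (the `calculate_tax` function and its tax rule are hypothetical, not taken from the accounting application above), the black box test is written purely against the stated requirement, while the white box test deliberately targets the boundary between the internal branches:

```python
# Hypothetical function under test: income up to 10,000 is tax free,
# and income above that is taxed at 20%.
def calculate_tax(income):
    if income <= 10000:                 # internal branch 1: tax-free slab
        return 0.0
    return (income - 10000) * 0.2       # internal branch 2: 20% above the slab

# Black box test: derived from the requirement document alone,
# with no knowledge of how calculate_tax is implemented.
def test_black_box_requirement():
    assert calculate_tax(5000) == 0.0
    assert calculate_tax(20000) == 2000.0

# White box test: derived from reading the code, deliberately hitting
# the boundary between the two internal branches.
def test_white_box_branch_boundary():
    assert calculate_tax(10000) == 0.0    # last value on branch 1
    assert calculate_tax(10001) == 0.2    # first value on branch 2
```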
- Definition: The first step for any organization is to define one unique definition of testing within the organization so that everyone is of the same mindset.
- How to achieve: How are we going to achieve our objective? Will there be a testing committee? Will there be compulsory test plans which need to be executed?
- Evaluate: After testing is implemented in a project, how do we evaluate it? Are we going to derive metrics of defects per phase, per programmer, etc.? Finally, it's important to let everyone know how testing has added value to the project.
- Standards: Finally, what are the standards we want to achieve by testing? For instance, we can say that more than 20 defects per KLOC will be considered below standard, and such code should go through a code review.
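As a small illustrative sketch (the defect and size numbers are hypothetical), the defects-per-KLOC standard above is simple arithmetic:

```python
def defects_per_kloc(defect_count, lines_of_code):
    """Defect density: defects per thousand lines of code (KLOC)."""
    return defect_count / (lines_of_code / 1000.0)

# Hypothetical module: 45 defects found in 1,500 lines of code.
density = defects_per_kloc(45, 1500)    # 30.0 defects per KLOC
if density > 20:                        # the standard defined above
    print(f"{density:.1f} defects/KLOC is below standard; schedule a code review")
```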
The following diagram shows the most common inputs used to prepare acceptance test plans.
When changes are made in an ad hoc and uncontrolled manner, chaotic situations can arise and more defects can be injected. So whenever changes are made, they should be made in a controlled fashion and with proper versioning. At any moment we should be able to revert to an old version. The main intention of configuration management is to track our changes so that, if we have issues with the current system, we can roll back. Configuration management is done using baselines.
But that's the wrong way of thinking, because the earlier we catch a defect, the more cost effective it is to fix. For instance, fixing a defect in maintenance is ten times more costly than fixing it during execution.
In the requirement phase we can verify that the requirements meet the customer's needs. During design we can check whether the design document covers all the requirements; at this stage we can also generate rough functional data and review the design document from the architecture and correctness perspectives. In the build and execution phase we can execute unit test cases and generate structural and functional data. Then comes the testing phase done in the traditional way, i.e., running the system test cases and seeing whether the system works according to the requirements. During installation we need to see that the system is compatible with the environment it is installed in. Finally, during the maintenance phase, when any fixes are made, we can retest the fixes and perform regression testing.
Therefore, testing should occur in conjunction with each phase of software development.
- Isolated test team
- Outsource - we can hire external testing resources to test our project.
- Inside test team
- Developers as testers
- QA/QC team.
- Minor: Very low impact; does not affect operations on a large scale.
- Major: Affects operations on a very large scale.
- Critical: Brings the system to a halt and stops the show.
- Central/Project test plan: This is the main test plan which outlines the complete test strategy of the software project. This document should be prepared before the start of the project and is used until the end of the software development lifecycle.
- Acceptance test plan: This test plan is normally prepared with the end customer. This document commences during the requirement phase and is completed at final delivery.
- System test plan: This test plan starts during the design phase and proceeds until the end of the project.
- Integration and unit test plan: Both of these test plans start during the execution phase and continue until the final delivery.
Black box test cases do not require an understanding of the system internals, but white box testing needs more structural understanding, and structural understanding is clearer in the later part of the project, i.e., while designing or executing. For black box testing you only need to analyze from the functional perspective, which is easily available from a simple requirement document.
Integration Tests - Testing performed on groups of modules to ensure that data and control are passed properly between modules.
System testing - Testing a predetermined combination of tests that, when executed successfully, meets requirements.
Acceptance testing - Testing to ensure that the system meets the needs of the organization and the end user or customer (i.e., validates that the right system was built).
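As a minimal sketch of the first two levels (the `parse_voucher` and `post_to_ledger` modules are hypothetical), a unit test exercises one module in isolation, while an integration test checks that data is passed properly between modules:

```python
# Hypothetical modules from an accounting application.
def parse_voucher(raw):
    """Parse an 'account,amount' string into a voucher record."""
    account, amount = raw.split(",")
    return {"account": account.strip(), "amount": float(amount)}

def post_to_ledger(ledger, record):
    """Accumulate a voucher record's amount into the ledger."""
    ledger[record["account"]] = ledger.get(record["account"], 0.0) + record["amount"]
    return ledger

# Unit test: one module tested in isolation.
def test_parse_voucher_unit():
    assert parse_voucher("cash, 250") == {"account": "cash", "amount": 250.0}

# Integration test: data produced by one module is passed to the next.
def test_voucher_to_ledger_integration():
    ledger = post_to_ledger({}, parse_voucher("cash, 250"))
    assert ledger == {"cash": 250.0}
```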
The following figure shows the format of a test log and is followed by a sample test log.
If the tester is involved right from the requirement phase, then requirement traceability is one of the important reports that can detail what kind of coverage the test cases provide.
For instance, you can define an entry criterion that the customer should provide the requirement document or acceptance plan. If this entry criterion is not met, then you will not start the project. On the other end, you can also define exit criteria for your project. For instance, one of the common exit criteria in projects is that the customer has successfully executed the acceptance test plan.
A masked defect is an existing defect that hasn't yet caused a failure just because another defect has prevented that part of the code from being executed.
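A small sketch may make this concrete (the function and both defects are hypothetical): defect A below masks defect B because it prevents that part of the code from being executed:

```python
EXPRESS_FEE = 10.0

def shipping_cost(order):
    # Defect A: the condition is inverted (it should be "not order['paid']"),
    # so every paid order is rejected here and never reaches the code below.
    if order["paid"]:
        raise ValueError("order not paid")

    # Defect B (masked): the express fee is charged twice instead of once.
    # It has never caused a failure because defect A prevents this part of
    # the code from being executed; fixing defect A will unmask defect B.
    if order["express"]:
        return 2 * EXPRESS_FEE    # should be: return EXPRESS_FEE
    return 0.0
```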
Alpha and beta testing have different meanings to different people. Alpha testing is the acceptance testing done at the development site. Some organizations have a different visualization of alpha testing: they consider alpha testing to be testing conducted on early, unstable versions of the software. Beta testing, on the contrary, is acceptance testing conducted at the customer's end.
In short, the difference between beta testing and alpha testing is the location where the tests are done.
- Statement coverage: This coverage ensures that each line of source code has been executed and tested.
- Decision coverage: This coverage ensures that every decision (true/false) in the source code has been executed and tested.
- Path coverage: In this coverage we ensure that every possible route through a given part of code is executed and tested.
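As a small sketch (the `classify` function is hypothetical), the three coverage levels demand progressively more test cases for the same piece of code:

```python
def classify(a, b):
    result = "none"
    if a > 0:        # decision 1
        result = "a"
    if b > 0:        # decision 2
        result = "b"
    return result

# Statement coverage: classify(1, 1) alone executes every line.
# Decision coverage: classify(1, 1) and classify(0, 0) make each decision
#   evaluate to both true and false.
# Path coverage: four calls are needed, one per route through the code:
#   classify(1, 1), classify(1, 0), classify(0, 1), classify(0, 0).
```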
For instance, if a defect is identified during the requirement and design phases, we only need to change the documentation, but if it is identified during the maintenance phase, we not only need to fix the defect, but also change our test plans, do regression testing, and change all the documentation. This is why a defect should be identified and removed in the earlier phases, and the testing department should be involved right from the requirement phase and not only after the execution phase.
From the user we need the following data:
- The first thing we need is the acceptance test plan from the end user. The acceptance test plan defines the entire set of tests which the product has to pass so that it can go into production.
- We also need the requirement document from the customer. In normal scenarios the customer never writes a formal document until he is really sure of his requirements. But at some point the customer should sign off, saying that yes, this is what he wants.
- The customer should also define the risky sections of the project. For instance, in a normal accounting project, if the voucher entry screen does not work, that will stop the accounting functionality completely. But if reports cannot be generated, the accounting department can still use the system for some time. The customer is the right person to say which sections will affect him the most. With this feedback the testers can prepare a proper test plan for those areas and test them thoroughly.
- The customer should also provide proper data for testing. Feeding proper data during testing is very important. In many scenarios testers key in wrong data and test for results which are of no interest to the customer.
There are five tasks for every workbench (a code sketch follows the list):
- Input: Every task needs some defined input and entrance criteria. So for every workbench we need defined inputs. Input forms the first step of the workbench.
- Execute: This is the main task of the workbench which will transform the input into the expected output.
- Check: Check steps assure that the output after execution meets the desired result.
- Production output: If the check passes, the production output forms the exit criteria of the workbench.
- Rework: During the check step, if the output is not as desired, we need to start again from the execute step.
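As a minimal sketch (all names are hypothetical), the five tasks fall naturally into a loop in which rework sends us back to the execute step until the check passes:

```python
def run_workbench(work_input, execute, check, rework):
    """Drive one workbench: Input -> Execute -> Check, with Rework
    looping back to the Execute step until the Check passes."""
    while True:
        output = execute(work_input)                # Execute: transform the input
        if check(output):                           # Check: meets the desired result?
            return output                           # Production output: exit criteria
        work_input = rework(work_input, output)     # Rework: back to the execute step

# Hypothetical usage: "execute" drafts a design document, "check" reviews
# it against the requirements, and "rework" revises the inputs for the
# next attempt.
```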
- Pilot: The actual production system is installed for a single user or a limited number of users. Pilot basically means that the product is rolled out to limited users for real work.
- Gradual Implementation: In this implementation we ship the entire product to a limited set of users, or to all users, at the customer end. Here the developers get instant feedback from the recipients, which allows them to make changes before the product is widely available. But the downside is that developers and testers must maintain more than one version at a time.
- Phased Implementation: In this implementation the product is rolled out to all users incrementally. That means each successive rollout has some added functionality. So as new functionality comes in, new installations occur and the customer tests them progressively. The benefit of this kind of rollout is that customers can start using the functionality and provide valuable feedback progressively. The only issue here is that with each rollout and added functionality the integration becomes more complicated.
- Parallel Implementation: In this type of rollout the existing application is run side by side with the new application. If there are any issues with the new application, we can move back to the old application. One of the biggest problems with parallel implementation is that we need extra hardware, software, and resources.
System testing checks that the system that was specified has been delivered. Acceptance testing checks that the system delivers what was requested. Acceptance testing should always be done by the customer, not the developer.
The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgement. This testing is more about ensuring that the software is delivered as defined by the customer. It's like getting a green light from the customer that the software meets expectations and is ready to be used.
The following figure shows the difference between regression and confirmation testing.
If we fix a defect in an existing application, we use confirmation testing to verify that the defect is removed. It's very possible that, because of this defect or the changes made to the application, other sections of the application are affected. So to ensure that no other section is affected, we use regression testing.
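As a minimal sketch (the `format_amount` function and its defect are hypothetical), the first test below confirms that a reported defect is fixed, while the remaining tests form a small regression suite re-checking behavior that already worked:

```python
def format_amount(value):
    """Fixed code: always show two decimal places (the reported defect
    was that values like 10.5 were displayed as '10.5')."""
    return f"{value:.2f}"

# Confirmation testing: re-run the exact scenario from the defect report
# to verify the defect is removed.
def test_defect_fix_two_decimals():
    assert format_amount(10.5) == "10.50"

# Regression testing: re-run existing cases to verify the fix did not
# affect other sections of the application that already worked.
def test_whole_amounts_unchanged():
    assert format_amount(10) == "10.00"

def test_negative_amounts_unchanged():
    assert format_amount(-3.25) == "-3.25"
```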