Software Inspection Guiding Principles


Testing Shows the Presence of Defects

Every application must pass through a series of testing phases, such as system integration testing, user acceptance testing, and beta testing, before it is released into production. Regardless of how much testing one conducts, some defects will always be found.

The core purpose of the testing team is to find defects in an application. The team must use different methods to discover as many errors as it can, which reduces the number of undiscovered errors remaining in the software. Even if the testing team fails to find any defects, that does not mean the software is 100% perfect.

Let's say an eCommerce application undergoes several testing phases and passes all of them with flying colors. Even though the app reaches the production environment, real end users are yet to exercise it. A customer may well use a rare feature that the testing team overlooked, assuming no one would use it.

Early Testing

Testing at an early stage of the SDLC enables testers to find defects as early as the requirement analysis or documentation phase. Testers should begin the testing process as soon as the requirements are finalized. Fixing a defect during the early stages is almost ten times cheaper than fixing it at a later stage.

The testing team must test new code before it is integrated into the existing code base, and then run further tests to verify that the modified code integrates properly. This is where the 1:10:100 rule applies: fixing a defect during user acceptance testing costs roughly ten times more than fixing it during development, and the cost grows roughly 100 times if the defect is only discovered after release.
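As a rough, back-of-the-envelope illustration of the 1:10:100 rule (the base cost below is purely hypothetical, not a figure from this article), the sketch scales the cost of fixing a single defect across the three stages:

```python
# A minimal sketch of the 1:10:100 rule using a purely hypothetical base cost.
BASE_COST = 100  # assumed cost of fixing a defect during development

stage_multipliers = {
    "development":              1,    # caught where it was introduced
    "user acceptance testing": 10,    # caught late in the cycle
    "post-release":           100,    # caught by customers in production
}

for stage, multiplier in stage_multipliers.items():
    print(f"Fix during {stage:24s}: {BASE_COST * multiplier}")
```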

For successful early testing, organizations can appoint a dedicated team to handle the requirements process, ideally with a representative for each testing phase. The testers then review each requirement and suggest modifications where needed. Organizations should hire experienced testers who can define acceptance criteria, specify intent, and prepare their test cases with accuracy.

Exhaustive Testing Is Not Possible

This principle states that testing every functionality with all possible combinations of valid and invalid inputs is not feasible. Not only does exhaustive testing require unlimited effort, it also fails to deliver proportionate results. Therefore, testers recommend covering a representative subset of combinations using techniques such as boundary value analysis and equivalence partitioning.

Why is exhaustive testing not feasible in most cases?

  • Creating every possible execution environment of a system is impossible, especially for software that depends on real-world factors such as temperature, weather, wind speed, and pressure.

  • Software built on implicit design decisions and assumptions is extremely complex to test.

  • The set of valid and invalid inputs can be far too large to cover when testing a system.

  • Programs with large input domains and input timing constraints make exhaustive testing infeasible.

Exhaustive testing demands unlimited effort, and most of that effort is wasted; project timelines simply do not allow so many combinations to be tested. Hence it is recommended to sample the input data using methods such as equivalence partitioning and boundary value analysis, as sketched below.
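As a rough illustration (the function name and the 18-60 age range are hypothetical, not taken from this article), the sketch below shows how equivalence partitioning and boundary value analysis reduce an effectively unbounded input space to a handful of representative test cases:

```python
# A minimal sketch of equivalence partitioning and boundary value analysis
# for a hypothetical "age must be between 18 and 60" validation rule.
import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical rule under test: accept ages from 18 to 60 inclusive."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
# Boundary value analysis: values just below, on, and just above each boundary.
@pytest.mark.parametrize("age, expected", [
    (5,  False),                           # "below range" partition
    (40, True),                            # "in range" partition
    (75, False),                           # "above range" partition
    (17, False), (18, True), (19, True),   # lower boundary and neighbours
    (59, True),  (60, True), (61, False),  # upper boundary and neighbours
])
def test_is_valid_age(age, expected):
    assert is_valid_age(age) == expected
```

Nine targeted cases stand in for the millions of integers the field could theoretically receive, which is the whole point of these techniques.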

Testing is Context-Dependent

The market spans several domains, such as medical, banking, travel, and advertising. Each application within a domain has unique functions and therefore demands different requirements, testing processes, risk analyses, and techniques. This diversity across domains makes testing a context-dependent process.

The likelihood of two applications sharing the same code is very slim, which means testers cannot reuse the testing process of a banking app to test an eCommerce application. Everything, including the approach, methodologies, and types of testing, differs from app to app.

Defect Clustering

Defect clustering is the phenomenon in which most of the defects or bugs are concentrated in a small number of modules, often because of the complexity of those modules.

The principle of defect clustering follows the Pareto Principle, which states that roughly 20 percent of the modules may contain 80 percent of the problems. It is most noticeable in large systems, where a particular module is affected by factors such as:

  • System size
  • Coding complexity
  • Modifications
  • Mistakes by developers

The phenomenon of defect clustering is well known among test designers, who use this information during risk assessment and test planning. Focusing on these defect-prone areas uncovers more defects and reduces the time and cost of finding them.
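As a small sketch of how a tester might spot such clusters during test planning (the module names and defect counts below are made up for illustration), a simple Pareto-style tally over bug-tracker data is often enough:

```python
# A minimal sketch of a Pareto-style defect-clustering analysis
# using hypothetical defect counts per module.
from collections import Counter

# Hypothetical defects reported per module, e.g. exported from a bug tracker.
defects_per_module = Counter({
    "payment":  46,
    "checkout": 31,
    "search":    6,
    "profile":   4,
    "catalog":   3,
})

total = sum(defects_per_module.values())
running = 0
print("Modules covering ~80% of reported defects:")
for module, count in defects_per_module.most_common():
    running += count
    print(f"  {module:10s} {count:3d}  ({running / total:.0%} cumulative)")
    if running / total >= 0.8:
        break
```

With these numbers, two of the five modules account for more than 80 percent of the defects, which is where extra test effort would pay off first.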

Pesticide Paradox

The pesticide paradox states that running the same set of tests repeatedly eventually stops revealing new defects, just as insects build up resistance to a pesticide that is used over and over.

There is no better way to explain the pesticide paradox than with an example. Let's say you are running a testing cycle on a module. You find some bugs and report them to the development team. They fix them and hand you the updated code.

You then execute another cycle using the same set of test cases and find fewer bugs than the last time, and again send a report to the team for fixing. But by running the same test cases over and over, you are missing something: you are so focused on the defects you already know about that you overlook the new bugs that may have crept into the system along with the recent changes.

Therefore, testers are advised to write new test cases that target different hotspots or modules, and to add them alongside the existing test cases. Regularly reviewing and extending the test suite in this way helps avoid the pesticide paradox, as sketched below.
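One possible way to keep a suite from going stale (the function and data below are hypothetical, shown only as a sketch) is to keep the original regression cases but append freshly generated inputs on every run, so that new code paths are exercised alongside the old ones:

```python
# A minimal sketch of refreshing test data each cycle to counter the pesticide paradox.
import random
import string

def normalize_username(name: str) -> str:
    """Hypothetical function under test: trim and lowercase a username."""
    return name.strip().lower()

# Existing regression cases that keep passing release after release.
EXISTING_CASES = ["Alice", "  bob  ", "CHARLIE"]

def fresh_cases(n: int = 5) -> list[str]:
    """Generate new inputs each cycle instead of reusing only the same ones."""
    alphabet = string.ascii_letters + "  _-"
    return ["".join(random.choices(alphabet, k=random.randint(1, 12))) for _ in range(n)]

def test_normalize_username():
    for name in EXISTING_CASES + fresh_cases():
        result = normalize_username(name)
        # Invariants that should hold for any input, old or new.
        assert result == result.strip()
        assert result == result.lower()
```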

Absence of Errors Fallacy

This principle states that no software is 100% free of defects. Even if an application tests 99 percent defect-free, the remaining 1 percent is still a matter of concern, and there is always a chance that the testers have verified the software against the wrong requirements.

For instance, suppose a testing team makes banking software 99% defect-free and submits it to management. Although the software is nearly defect-free, management is not satisfied because it wanted software with a simple UI and high user-load capacity. In this case, the testing team failed to meet the end requirement because it focused only on the absence of defects rather than on what the users actually needed.

Updated on: 06-Mar-2021
