
Definition of Testing

Below is a description of how testing is generally defined.

General Definition

Testing is the process of executing a program with the intent of finding errors, not (as is commonly misconceived) of demonstrating that the program functions correctly. The difference lies in the psychological effect of the two objectives: if the goal is to demonstrate that a program has no errors, we tend to select tests that have a low probability of causing the program to fail; if the goal is to demonstrate that a program has errors, we select test data with a higher probability of finding them.
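
To make the contrast concrete, here is a minimal sketch in Python (the `divide` function, the test names, and the use of pytest are assumptions for illustration, not from the source): the same function attracts very different tests under each goal. The first test is chosen because it is likely to pass; the others are chosen because they probe inputs with a high probability of exposing a defect.

```python
import pytest

def divide(a: float, b: float) -> float:
    return a / b

# Goal: "show the program works" -- a test selected because it is
# unlikely to fail. It passes, but reveals nothing about weak spots.
def test_divide_happy_path():
    assert divide(10, 2) == 5

# Goal: "find errors" -- tests selected because they have a high
# probability of exposing a defect (boundary and invalid inputs).
def test_divide_by_zero():
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)

def test_divide_negative_and_fractional_results():
    assert divide(-1, 4) == -0.25
```

Run with `pytest`; the point is not these specific cases but that the error-finding mindset deliberately seeks out inputs where failure is plausible.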

Objectives of Testing

The objectives of testing are:

  1. To verify and enhance the product's end-user experience and ensure that it is fit for its intended use.
  2. To test as early in the software development lifecycle as feasible (for example, at the requirements-definition stage), using a variety of methodologies (for example, testing unwritten requirements such as expected user behavior). The goal is to prevent defects from occurring in the first place, so merely finding defects is not sufficient; the sketch after this list illustrates testing a requirement before much implementation exists.
  3. To confirm that the product meets its primary requirements, validating that claim by repeatedly asking, "Is this what the end user requires?"
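
As a minimal illustration of objective 2, a written requirement can be turned into executable boundary tests as soon as it is stated. Everything below is a hypothetical sketch (the requirement, `is_valid_username`, and the test names are assumptions, not from the source):

```python
# Requirement (hypothetical): "A username must be 3-20 characters long."

def is_valid_username(name: str) -> bool:
    # Stand-in implementation so the sketch runs; the tests below were
    # derived from the requirement text, not from this code.
    return 3 <= len(name) <= 20

# Boundary tests written directly from the requirement, so a
# misreading of the requirement surfaces before code is built on it.
def test_username_length_boundaries():
    assert not is_valid_username("ab")       # one below the minimum
    assert is_valid_username("abc")          # exact minimum
    assert is_valid_username("a" * 20)       # exact maximum
    assert not is_valid_username("a" * 21)   # one above the maximum
```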