Software Testing and Quality Assurance | theteche.com

Introduction to Software Testing

People are not perfect. We make errors in design and code. Hence testing is an essential activity in the software life cycle. The goal of testing is to uncover as many errors as possible and thereby improve the quality of the software. To find as many errors as possible, testing must be conducted systematically and test cases must be designed using disciplined techniques.

Definition : Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. The purpose of software testing is to ensure that the software functions work according to the specifications and performance requirements.

Testing Objectives 

According to Glen Myers the testing objectives are

  1. Testing is a process of executing a program with the intent of finding an error.
  2. A good test case is one that has a high probability of finding an undiscovered error.
  3. A successful test is one that uncovers an as-yet undiscovered error.

The major testing objective is to design tests that systematically uncover types of errors with minimum time and effort.

Testing Principles

Every software engineer must apply the following testing principles while performing software testing.

  1. All tests should be traceable to customer requirements.
  2. Tests should be planned long before testing begins.
  3. The Pareto principle can be applied to software testing : 80 % of all errors uncovered during testing will likely be traceable to 20 % of all program modules.
  4. Testing should begin “in the small” and progress toward testing “in the large”.
  5. Exhaustive testing is not possible.
  6. To be most effective, testing should be conducted by an independent third party.

Why Testing is Important ?

Generally, testing is a process that requires more effort than any other software engineering activity. Testing is a set of activities that can be planned in advance and conducted systematically. If it is conducted haphazardly, time is wasted and, even worse, new errors may get introduced. This may leave many undetected errors in the system being developed.

Hence performing testing by adopting systematic strategies is essential during the development of software.

Taxonomy of Software Testing

There are two general approaches for the software testing.

  1. Black box testing

Black box testing is used to demonstrate that the software functions are operational. As the name suggests, in black box testing it is checked whether input is accepted properly and output is produced correctly.

The major focus of black box testing is on functions, operations, external interfaces, external data and information.

  2. White box testing

In white box testing the procedural details are closely examined. In this testing the internals of the software are tested to make sure that they operate according to the specifications and designs. Thus the major focus of white box testing is on internal structures, logic paths, control flows, data flows, internal data structures, conditions, loops, etc.

Levels

The testing can be typically carried out into two levels.

  1. Component testing

In component testing, individual components are tested. It is the responsibility of the component developer to carry out this kind of testing. These tests are derived from the developer’s experience.

  2. System testing

In system testing, groups of components integrated to create a system or sub-system are tested. It is the responsibility of an independent testing team to carry out this kind of testing. These tests are based on a system specification.

Levels of testing : Component testing → System testing

Activities

Various testing activities are

  1. Test planning

The test plan or test script is prepared. These are generated from the requirements analysis document (for black box testing) and the program code (for white box testing).

  2. Test case design

The goal of test case design is to create a set of tests that are effective in testing.

  3. Test execution

The test data is derived through various test cases in order to obtain the test result.

  4. Data collection

The test results are collected and verified.

  5. Effective evaluation

All the above test activities are performed on the software model so that the maximum number of errors are uncovered.

Black Box Test

  • Black box testing is also called behavioural testing.
  • Black box testing methods focus on the functional requirements of the software. Test sets are derived that fully exercise all functional requirements.
  • Black box testing is not an alternative to white box testing ; it uncovers a different class of errors than white box testing does.

Why to perform black box testing ?

Black box testing uncovers following types of errors.

  1. Incorrect or missing functions
  2. Interface errors
  3. Errors in data structures
  4. Performance errors
  5. Initialization or termination errors

Equivalence Partitioning – Software Testing

  • It is a black box technique that divides the input domain into classes of data from which test cases can be derived.
  • An ideal test case uncovers a class of errors that might otherwise require many arbitrary test cases to be executed before the general error is observed.
  • In equivalence partitioning, equivalence classes are evaluated for a given input condition. An equivalence class represents a set of valid or invalid states for input conditions.
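As a minimal sketch of the idea (the age field, its 18–60 range, and all names here are assumed for illustration, not taken from the original text), equivalence partitioning reduces a large input domain to one representative test value per class:

```python
# Equivalence partitioning sketch for a hypothetical "age" field
# whose (assumed) specification accepts integers in the range 18..60.

def classify_age(age):
    """Return the equivalence class an input value falls into."""
    if not isinstance(age, int):
        return "invalid: not an integer"
    if age < 18:
        return "invalid: below range"
    if age > 60:
        return "invalid: above range"
    return "valid"

# One representative value per equivalence class is enough; any other
# member of the same class would exercise the same behaviour.
representatives = {
    25: "valid",                     # any value in 18..60
    10: "invalid: below range",      # any value < 18
    75: "invalid: above range",      # any value > 60
    "x": "invalid: not an integer",  # any non-integer input
}

for value, expected in representatives.items():
    assert classify_age(value) == expected
```

Four test cases here stand in for the entire input domain, which is the point of the technique.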

Boundary Value Analysis (BVA)

  • Boundary value analysis is done to check boundary conditions.
  • A boundary value analysis is a testing technique in which the elements at the edge of the domain are selected and tested.
  • Using boundary value analysis, instead of focusing on input conditions only, test cases are also derived from the output domain. Boundary value analysis is a test case design technique that complements the equivalence partitioning technique.
  • Guidelines for boundary value analysis technique are
  1. If the input condition specifies a range bounded by values x and y, then test cases should be designed with values x and y, as well as with values just above and just below x and y.
  2. If the input condition specifies a number of values, then test cases should be designed with the minimum and maximum values, as well as with values just above and just below the minimum and maximum.
  3. If the output condition specifies a range bounded by values x and y, then test cases should be designed with values x and y, as well as with values just above and just below x and y.
  4. If the output condition specifies a number of values, then test cases should be designed with the minimum and maximum values, as well as with values just above and just below the minimum and maximum.
  5. If internal program data structures have prescribed boundaries, then test cases must be designed to exercise the values at the boundaries of the data structure.

For example :

Integer D with input condition [-2, 10] ;
Test values : -2, 10, 11, -1, 0

If the input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers, and values just above and below them should also be tested.

Enumerated data E with input condition : { 2, 7, 100, 102 } ;
Test values : 2, 102, -1, 200, 7
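Guideline 1 for ranges can be sketched as a small helper (a hypothetical function for illustration, not from the original text) that emits the two bounds together with the values just above and just below each:

```python
# Boundary value analysis sketch: for an integer range [x, y], generate
# the boundary values x and y plus their immediate neighbours.

def boundary_values(x, y):
    """Test values at and around the edges of an integer range [x, y]."""
    return [x - 1, x, x + 1, y - 1, y, y + 1]

# Applied to the input condition [-2, 10] from the example above:
print(boundary_values(-2, 10))  # [-3, -2, -1, 9, 10, 11]
```

The same helper works for the output domain, per guidelines 3 and 4, since only the bounds matter.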

Orthogonal Array Testing(OAT) 

There are many applications in which the number of inputs is very small and the values each input can take are bounded. In such a situation the number of test cases is relatively small and manageable. But as the number of inputs grows, the number of test cases grows rapidly, and testing may become impractical or impossible.

Orthogonal array testing is a testing method that can be applied to applications in which the input domain is relatively small but would otherwise require a large number of test cases. Using the orthogonal array testing method, only the fault-prone regions are tested, and thus the number of test cases can be reduced.

Orthogonal arrays are two-dimensional arrays of numbers with the interesting property that choosing any two columns in the array yields an even distribution of all the pair-wise combinations of values in the array.

Following are some important terminologies used in orthogonal array testing.

Runs

It denotes the number of rows in the array. These can be directly translated to the test cases.

Factors

It denotes the number of columns in the array. These can be directly translated to the maximum number of variables that can be handled by the array.

Level

This number denotes the maximum number of values that a single factor (column) can take.

L9 orthogonal array

This array is used to generate the test cases. The array has a balancing property ; that means the testing can be done uniformly by executing the test cases generated from the L9 orthogonal array.

Example

Consider that we want to develop an application which has three sections – top, bottom and middle.

  • Associated with each section we will consider one variable, so we have to analyse only three variables.
  • These variables can be assigned Boolean values, i.e. true or false.
  • If we decide to test it exhaustively, there would be 2³ = 8 test cases.
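The reduction an orthogonal array gives can be sketched as follows. Note one assumption: the L9 array mentioned earlier handles three-level factors; for the three two-level (Boolean) factors of this example, the standard 4-run L4(2³) array applies, and it covers every pair-wise value combination that the 8 exhaustive cases would:

```python
# Orthogonal array sketch: three Boolean factors (top, bottom, middle).
# Exhaustive testing needs 2^3 = 8 runs; the L4(2^3) orthogonal array
# covers every pair-wise combination of values in only 4 runs.
from itertools import product, combinations

exhaustive = list(product([0, 1], repeat=3))  # all 8 combinations

# L4 orthogonal array: 4 runs (rows), 3 factors (columns), 2 levels.
l4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Balancing property: any two columns contain each value pair exactly once.
for c1, c2 in combinations(range(3), 2):
    pairs = {(run[c1], run[c2]) for run in l4}
    assert pairs == {(0, 0), (0, 1), (1, 0), (1, 1)}

print(len(exhaustive), "exhaustive runs vs", len(l4), "orthogonal runs")
```

The assertion demonstrates the balancing property described above: every pair of factor values appears, evenly distributed, despite using half the test cases.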

Why to perform white box testing ?

There are three main reasons behind performing the white box testing.

  1. Programmers may have some incorrect assumptions while designing or implementing some functions. Due to this, there are chances of logical errors in the program. To detect and correct such logical errors, the procedural details need to be examined.
  2. Certain assumptions about the flow of control and data may lead the programmer to make design errors. To uncover the errors on logical paths, white box testing is a must.
  3. There may be certain typographical errors that remain undetected even after syntax and type checking mechanisms. Such errors can be uncovered during white box testing.

Cyclomatic Complexity                              

Definition

Cyclomatic complexity is a software metric that gives the quantitative measure of logical complexity of the program.
The cyclomatic complexity defines the number of independent paths in the basis set of the program and provides an upper bound on the number of tests that must be conducted to ensure that every statement has been executed at least once.

There are three methods of computing cyclomatic complexities.

Method 1 : The total number of regions in the flow graph gives the cyclomatic complexity.
Method 2 : The cyclomatic complexity, V(G) for a flow graph G can be defined as

V(G) = E – N + 2

where E is the total number of edges in the flow graph and

N is the total number of nodes in the flow graph.

Method 3: The cyclomatic complexity V(G) for a flow graph G can be defined as

V(G) = P + 1

where P is the total number of predicate nodes contained in the flow graph G.

Let us understand computation of cyclomatic complexity with the help of an example.
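As a sketch, consider an assumed flow graph (not one given in the original text) for a fragment containing one `if` statement and one `while` loop, represented as nodes and directed edges:

```python
# Assumed flow graph for a fragment with one `if` and one `while`.
nodes = ["entry", "if", "then", "while", "body", "exit"]
edges = [
    ("entry", "if"),
    ("if", "then"),     # condition true
    ("if", "while"),    # condition false
    ("then", "while"),
    ("while", "body"),  # loop continues
    ("body", "while"),
    ("while", "exit"),  # loop ends
]

E, N = len(edges), len(nodes)      # E = 7 edges, N = 6 nodes
predicates = ["if", "while"]       # nodes with two outgoing edges

v_g_edges = E - N + 2              # Method 2: V(G) = E - N + 2 = 3
v_g_preds = len(predicates) + 1    # Method 3: V(G) = P + 1 = 3
assert v_g_edges == v_g_preds == 3
```

Both methods agree that V(G) = 3 (counting the regions of the drawn flow graph, per Method 1, would also give 3), so at most three tests are needed to cover the basis set of independent paths.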

Verification and Validation

  • Verification refers to the set of activities that ensure that software correctly implements a specific function.
  • Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
  • According to Boehm

Verification : “Are we building the product right ?”
Validation : “Are we building the right product ?”

  • Software testing is only one element of Software Quality Assurance (SQA).
  • Quality must be built into the development process ; you cannot use testing to add quality after the fact.
  • Verification and validation involve large number of software quality assurance activities such as
    •  Formal technical reviews
    •  Quality and configuration audits
    •  Performance monitoring
    •  Feasibility study
    •  Documentation review
    •  Database review
    •  Algorithmic analysis
    •  Development testing
    •  Installation testing.

Testing can also be categorized as static testing and dynamic testing. The verification activities fall into the category of static testing. During static testing, you have a checklist to check whether the work you are doing is going as per the set standards of the organization.

Dynamic testing is a method in which the actual testing is done to uncover errors. Various testing strategies such as unit testing, integration testing, validation testing and system testing can be applied while performing dynamic testing.

Reliability techniques and verification and validation

Software reliability is the probability of failure-free operation over a specified time in a given environment. Basically, verification and validation involve two complementary approaches.

  1. Reviews and analysis – The formal technical reviews, database reviews, feasibility study, performance monitoring and algorithmic analysis are the activities performed to boost the level of confidence about the developed product. These activities help in achieving the ultimate goal of the verification and validation process : that the software system is fit for its purpose.

  2. Software testing – It involves development testing and installation testing. The results of these tests help in understanding whether the developed software will perform failure-free operation in a given environment. Thus testing is the dynamic verification and validation technique used to check the reliability of the software.

The Software Testing Strategy

  • We begin by ‘testing-in-the-small’ and move toward ‘testing-in-the-large’.
  • Various testing strategies for conventional software are
  1. Unit testing – In this type of testing, techniques are applied to detect the errors in each software component individually.
  2. Integration testing – It focuses on issues associated with verification and program construction as components begin interacting with one another.
  3. Validation testing – It provides assurance that the software meets all functional, behavioural and performance requirements, based on the validation criteria established during requirements analysis.
  4. System testing – In system testing, all the elements forming the system are tested as a whole.

Strategic Issues of Testing

  • Specify product requirements in a quantifiable manner before testing starts – Certain quality characteristics of the software such as maintainability, portability and usability should be specified in order to obtain the unambiguous test results.
  • Specify testing objectives explicitly – Testing objectives such as effectiveness, mean time to failure and cost of defects should be stated clearly in the test plan.
  • Identify categories of users for the software and develop a profile for each – Use cases describe the interactions among different classes of users, and thereby testing can focus on the actual use of the product.
  • Develop a test plan that emphasizes rapid cycle testing – Test plan is an important document which helps the tester to perform rapid cycle testing (2 percent of project effort).
  • Build robust software that is designed to test itself – The software should be capable of detecting certain classes of errors. Moreover, the software design should allow automated testing and regression testing.
  • Use effective formal reviews as a filter prior to testing– Formal technical reviews need to be conducted to uncover errors. The effective technical reviews conducted before testing, reduce significant amount of testing efforts.
  • Conduct formal technical reviews to assess the test strategy and test cases – The formal technical review helps to detect any lacunae in the testing approach. Hence it is necessary to have technical reviewers assess the test strategy and test cases to improve the quality of the software.

  • Develop a continuous improvement approach for the testing process – The measured test strategy should be used as part of statistical process control approach for software testing.

Unit Software Testing

  • In unit testing the individual components are tested independently to ensure their quality.
  • The focus is to uncover the errors in design and implementation.
  • The various tests that are conducted during the unit test are described as below.
  1. Module interfaces are tested for proper information flow in and out of the program.
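As a minimal sketch of a unit test (the component under test and all names here are assumed for illustration), checking both the module interface and an error-handling path:

```python
# Hypothetical component under test: a simple interest calculator.
def simple_interest(principal, rate, years):
    """Return simple interest; reject negative inputs."""
    if principal < 0 or rate < 0 or years < 0:
        raise ValueError("inputs must be non-negative")
    return principal * rate * years / 100

# Interface test: correct inputs flow in, the expected value flows out.
assert simple_interest(1000, 5, 2) == 100.0

# Error-handling test: invalid input is rejected, not silently accepted.
try:
    simple_interest(-1, 5, 2)
except ValueError:
    pass
else:
    raise AssertionError("negative principal should be rejected")
```

Because the component is tested in isolation from the rest of the system, a failure here points directly at this unit, which is what makes unit testing effective for localizing errors.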

Smoke Testing

  • Smoke testing is a kind of integration testing technique used for time-critical projects wherein the project needs to be assessed on a frequent basis.
  • The following activities are carried out in smoke testing –
  1. Software components already translated into code are integrated into a “build”. The “build” can include data files, libraries, reusable modules or program components.
  2. A series of tests is designed to expose errors in the build so that the “build” performs its functions correctly.
  3. The “build” is integrated with the other builds and the entire product is smoke tested daily.

Smoke testing benefits

  1. Integration risk is minimized.
  2. The quality of the end product is improved.
  3. Error diagnosis and correction are simplified.
  4. Assessment of progress is easy.

Validation Testing

  • The integrated software is tested based on requirements to ensure that the desired product is obtained.
  • In validation testing the main focus is to uncover errors in
  1. System input/output
  2. System functions and information data
  3. System interfaces with external parts
  4. User interfaces
  5. System behaviour and performance
  • Software validation can be performed through a series of black box tests.
  • After performing the validation tests, one of two possible conditions exists :

  1. The function or performance characteristics conform to the specifications and are accepted.
  2. A deviation from the specifications is uncovered and a deficiency list is created.
