2.3 Static Analysis and Dynamic Analysis
Based on whether actual execution of the software under evaluation is required, quality assurance activities fall into two major categories:
Static Analysis covers methods that determine or estimate software quality without reference to actual executions. Techniques in this area include code inspection, program analysis, symbolic analysis, and model checking.
Dynamic Analysis covers methods that ascertain or approximate software quality through actual executions, i.e., with real data and under real (or simulated) circumstances. Techniques in this area include synthesis of inputs, the use of structurally dictated testing procedures, and the automation of testing environment generation.
The static and dynamic methods are sometimes inseparable, but they can almost always be discussed separately. In this paper, "testing" means dynamic analysis, since most testing activities (and thus all the techniques studied in this paper) require the execution of the software.
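To make the contrast concrete, the following is a minimal sketch of a static check in Python: it inspects a program's syntax tree for a suspect pattern (comparing against `None` with `==`) without ever executing the program. The sample source and the checker class are illustrative, not from the paper.

```python
import ast

# Sample program to be analyzed; it is parsed, never run.
SOURCE = """
def lookup(table, key):
    value = table.get(key)
    if value == None:
        return -1
    return value
"""

class NoneComparisonChecker(ast.NodeVisitor):
    """Static check: flag `== None` / `!= None` comparisons."""

    def __init__(self):
        self.findings = []

    def visit_Compare(self, node):
        for op, right in zip(node.ops, node.comparators):
            if isinstance(op, (ast.Eq, ast.NotEq)) and \
               isinstance(right, ast.Constant) and right.value is None:
                self.findings.append(node.lineno)
        self.generic_visit(node)

checker = NoneComparisonChecker()
checker.visit(ast.parse(SOURCE))
print(checker.findings)  # line numbers where the pattern occurs
```

A dynamic-analysis counterpart would instead run `lookup` on chosen inputs and compare its outputs against expected results.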
2.4 Functional Technique and Structural Technique
The information flow of testing is shown in Figure 1. As the figure shows, testing involves configuring the proper inputs, executing the software over those inputs, and analyzing the output. The "Software Configuration" includes the requirements specification, design specification, source code, and so on. The "Test Configuration" includes test cases, the test plan and procedures, and testing tools.
Based on this testing information flow, a testing technique specifies the strategy used to select input test cases and analyze test results. Different techniques reveal different quality aspects of a software system, and they fall into two major categories: functional and structural.
Functional Testing: the software program or system under test is viewed as a "black box". The selection of test cases for functional testing is based on the requirement or design specification of the software entity under test. Examples of expected results, sometimes called test oracles, include requirement/design specifications, hand-calculated values, and simulated results. Functional testing emphasizes the external behavior of the software entity.
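A minimal black-box sketch in Python: the test cases below are derived purely from a stated specification, with hand-calculated values serving as the test oracle. The `clamp` function and its specification are illustrative assumptions, not from the paper.

```python
# Unit under test. Its specification: clamp(x, lo, hi) returns
# lo if x < lo, hi if x > hi, and x otherwise. A black-box
# tester never looks at this implementation.
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

# Test cases derived only from the specification, with
# hand-calculated expected values as the oracle.
cases = [
    (5, 0, 10, 5),    # inside the range
    (-3, 0, 10, 0),   # below the range -> lo
    (42, 0, 10, 10),  # above the range -> hi
    (0, 0, 10, 0),    # boundary value
]
for x, lo, hi, expected in cases:
    assert clamp(x, lo, hi) == expected, (x, lo, hi)
print("all functional test cases passed")
```

Note that the cases probe the externally visible behavior (in-range, below, above, boundary) without any knowledge of the internal structure.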
Structural Testing: the software entity is viewed as a "white box". The selection of test cases is based on the implementation of the software entity. The goal of selecting such test cases is to cause the execution of specific spots in the software entity, such as specific statements, program branches, or paths. Test adequacy is evaluated against a set of coverage criteria; examples include path coverage, branch coverage, and data-flow coverage. Structural testing emphasizes the internal structure of the software entity.
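A minimal white-box sketch in Python: the test inputs below are chosen by inspecting the implementation so that every branch executes, and adequacy is judged by branch coverage. The function and branch labels are illustrative assumptions, not from the paper.

```python
# Record which branches of the implementation have executed.
executed = set()

def classify(n):
    if n < 0:
        executed.add("negative")   # branch 1
        return "negative"
    elif n == 0:
        executed.add("zero")       # branch 2
        return "zero"
    else:
        executed.add("positive")   # branch 3
        return "positive"

# White-box test selection: one input per branch, chosen by
# reading the code rather than the specification.
for n in (-1, 0, 1):
    classify(n)

# Adequacy criterion: all branches covered.
assert executed == {"negative", "zero", "positive"}
print("branch coverage achieved:", sorted(executed))
```

In practice a coverage tool would instrument the code automatically; the hand-maintained `executed` set simply makes the coverage criterion visible.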
[Figure 1. Testing Information Flow. The diagram relates the Software Configuration and Test Configuration feeding into Testing; Test Results are compared against Expected Results under Evaluation, producing Errors and Error Rate Data; Errors feed Debug, which produces Corrections; Error Rate Data feeds a Reliability Model, which yields the Predicted Reliability.]
3 Scope of the Study
3.1 Technical Scope
In this paper, we focus on the technology maturation of testing techniques, including those functional and structural techniques that have been influential in the academic world and widely used in practice. We examine the growth and propagation of the most established strategies and methodologies used to select test cases and analyze test results. Research in software testing techniques can be roughly divided into two branches, theoretical and methodological, and growth in both branches pushes the growth of testing technology. Inhibitors of maturation, which explain why in-depth research has not brought revolutionary advantages to industry testing practice, are also within our scope of interest.
There are many other interesting areas in software testing. We limit the scope of our study to testing techniques, although some of these areas may be inseparable from our study. Specifically, we are not going to discuss:
· How testing is involved in the software development cycle
· How different levels of testing are performed
· Testing process models
· Testing policy and management responsibilities, and
· Stop criteria of testing and software testability
3.2 Goal and Standard of Progress
The ultimate goal of software testing is to help designers, developers, and managers construct systems with high quality. Thus research and development on testing aim at efficiently performing effective testing: finding more errors in requirements, design, and implementation, and increasing confidence that the software has various qualities. Testing technique research leads toward the destination of practical testing methods and tools. Progress toward this destination requires fundamental research as well as the creation, refinement, extension, and popularization of better methods.
The standards of progress for research on testing techniques include:
· Degree of acceptance of the technology inside and outside the research community
· Degree of dependence on other areas of software engineering
· Change of research paradigms in response to the maturation of software development technologies
· Feasibility of techniques being used in a widespread practical scope, and
· Spread of technology – classes, trainings, management attention
4. The History of Testing Techniques
4.1 Concept Evolution
Software has been tested for as long as software has been written. The concept of testing itself evolved with time, and the evolution of the definition and targets of software testing has directed research on testing techniques. Let us briefly review the concept evolution of testing using the testing process model proposed by Gelperin and Hetzel [6] before we begin studying the history of testing techniques.
Phase I. Before 1956: The Debugging-Oriented Period – Testing was not separated from debugging
In 1950, Turing wrote the famous article that is considered the first on program testing. The article addresses the question "How would we know that a program exhibits intelligence?" Stated another way, if the requirement is to build such a program, then this question is a special case of "How would we know that a program satisfies its requirements?" The operational test Turing defined required the behavior of the program and of a reference system (a human) to be indistinguishable to an interrogator (the tester). This could be considered the embryonic form of functional testing. At that time, the concepts of program checkout, debugging, and testing were not clearly differentiated.
Phase II. 1957~78: The Demonstration-Oriented Period – Testing to make sure that the software satisfies its specification
It was not until 1957 that testing, then called program checkout, was distinguished from debugging. In 1957, Charles Baker pointed out that "program checkout" was seen to have two goals: "Make sure the program runs" and "Make sure the program solves the problem." The latter goal was viewed as the focus of testing, since "make sure" was often translated into the testing goal of satisfying requirements. As we have seen in Figure 1, debugging and testing are actually two different phases. The distinction between testing and debugging rested on the definition of success. Definitions from this period stress that the purpose of testing is to demonstrate correctness: "An ideal test, therefore, succeeds only when a program contains no errors." [5]
The 1970s also saw the spread of the idea that software could be tested exhaustively, which led to a research emphasis on path-coverage testing. As Goodenough and Gerhart's 1975 paper put it, exhaustive testing is "defined either in terms of program paths or a program's input domain." [5]