Submitted by, 
C.PRIYANKA KARANCY, 
M.TECH,CCE, 
REG. NO. PR13CS2013.
INTRODUCTION TO TESTING AS 
AN ENGINEERING ACTIVITY
An engineering approach to software development implies the following:
 The development process is well understood; 
 Projects are planned. 
 Life cycle models are defined and adhered to. 
 Standards are in place for product and process. 
 Measurements are employed to evaluate product and 
process quality. 
 Components are reused. 
 Validation and verification processes play a key role in quality determination. 
 Engineers have proper education, training, and certification.
Type  Purposes            Activities                           Roles 
1     Test Products       Test development, test execution     Testers: extension of development 
2     Measure Products    Test oversight, reporting results    Measurers; quality hurdle 
3     Measure Processes   Metrics                              Information engineers 
4     Define Processes    Process and risk management          Quality and process engineers 
5     Guidance Resource   Quality reference                    Quality engineers
 Validation is the process of evaluating a software 
system or component during, or at the end of, the 
development cycle in order to determine whether it 
satisfies specified requirements. 
 Verification is the process of evaluating a software 
system or component to determine whether the 
products of a given development phase satisfy the 
conditions imposed at the start of that phase.
Testing is generally described as a group of procedures 
carried out to evaluate some aspect of a piece of 
software. 
Debugging, or fault localization, is the process of 
 locating the fault or defect, 
 repairing the code, 
 retesting the code.
Benefits of test process improvement are the following: 
 smarter testers 
 higher quality software 
 the ability to meet budget and scheduling goals 
 improved planning 
 the ability to meet quantifiable testing goals
Maturity goals 
Each maturity level, except 1, contains certain maturity goals. 
For an organization to reach a certain level, the corresponding 
maturity goals must be met by the organization. 
Maturity subgoals 
Maturity goals are supported by maturity subgoals 
ATRs 
Maturity subgoals are achieved by means of ATRs (activities, tasks, and responsibilities).
ATRs address issues concerning implementation of activities and 
tasks. 
ATRs also address how an organization can adapt its practices so 
that it can move in-line with the TMM model. 
ATRs are refined into “views” from the perspectives of three 
groups: 
Managers 
Developers and test engineers 
Customers (user/client).
Level 1 – Initial 
There are no maturity goals to be met at this level. 
Testing begins after code is written. 
An organization performs testing to demonstrate that 
the system works. 
No serious effort is made to track the progress of 
testing. 
Test cases are designed and executed in an ad hoc 
manner. 
In summary, testing is not viewed as a critical, distinct 
phase in software development.
Level 2 – Phase Definition: The maturity goals are as follows: 
Develop testing and debugging goals. 
Some concrete maturity subgoals that can support this goal are as follows: 
Organizations form committees on testing and debugging. 
The committees develop and document testing and debugging goals. 
Initiate a test planning process. (Identify test objectives. Analyze risks. Devise 
strategies. Develop test specifications. Allocate resources.) 
Some concrete maturity subgoals that can support this goal are as follows: 
Assign the task of test planning to a committee. 
The committee develops a test plan template. 
Proper tools are used to create and manage test plans. 
Provisions are put in place so that customer needs constitute a part of the 
test plan. 
Institutionalize basic testing techniques and methods. 
The following concrete subgoals support the above maturity goal. 
An expert group recommends a set of basic testing techniques and 
methods. 
The management establishes policies to execute the recommendations.
Level 3 – Integration: The maturity goals are as follows: 
Establish a software test group. 
Concrete subgoals to support the above are: 
An organization-wide test group is formed with leadership, 
support, and funding. 
The test group is involved in all stages of the software 
development. 
Trained and motivated test engineers are assigned to the 
group. 
The test group communicates with the customers. 
Establish a technical training program. 
Integrate testing into the software lifecycle. 
Concrete subgoals to support the above are: 
The test phase is partitioned into several activities: unit, 
integration, system, and acceptance testing. 
Follow the V-model. 
Control and monitor the testing process. 
Concrete subgoals to support the above are: 
Develop policies and mechanisms to monitor and control test 
projects. 
Define a set of metrics related to the test project. 
Be prepared with a contingency plan.
Level 4 – Management and Measurement: The maturity goals are: 
Establish an organization-wide review program. 
Maturity subgoals to support the above are as follows. 
The management develops review policies. 
The test group develops goals, plans, procedures, and recording 
mechanisms for carrying out reviews. 
Members of the test group are trained to be effective. 
Establish a test management program. 
Maturity subgoals to support the above are as follows. 
Test metrics should be identified along with their goals. 
A test measurement plan is developed for data collection and analysis. 
An action plan should be developed to achieve process improvement. 
Evaluate software quality. 
Maturity subgoals to support the above are as follows. 
The organization defines quality attributes and quality goals for products. 
The management develops policies and mechanisms to collect test 
metrics to support the quality goals.
Level 5 – Optimization, Defect Prevention, and Quality Control: The maturity 
goals are as follows: 
Application of process data for defect prevention 
Maturity subgoals to support the above are as follows. 
Establish a defect prevention team. 
Document defects that have been identified and removed. 
Each defect is analyzed to get to its root cause. 
Develop an action plan to eliminate recurrence of common defects. 
Statistical quality control 
Maturity subgoals to support the above are as follows. 
Establish high-level measurable quality goals. (Ex. Test case execution 
rate, defect arrival rate, …) 
Ensure that the new quality goals form a part of the test plan. 
The test group is trained in statistical testing and analysis methods.
 2.1 Basic Definition 
 Errors: An error is a mistake, misconception, or 
misunderstanding on the part of a software developer. 
 Faults (Defects) 
 A fault (defect) is introduced into the software as the result 
of an error. It is an anomaly in the software that may cause 
it to behave incorrectly, and not according to its 
specification. 
 Failures 
 A failure is the inability of a software system or component 
to perform its required functions within specified 
performance requirements.
 A test case in a practical sense is a test-related item which 
contains the following information: 
1. A set of test inputs. These are data items received from an 
external source by the code under test. The external source can 
be hardware, software, or human. 
2. Execution conditions. These are conditions required for 
running the test, for example, a certain state of a database, or a 
configuration of a hardware device. 
3. Expected outputs. These are the specified results to be 
produced by the code under test.
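To make these three parts concrete, a test case record could be captured as a small data structure. The following Python sketch is illustrative only; the field names and the withdraw example are assumptions, not part of the original material.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TestCase:
    """Illustrative test case record holding the three parts described above."""
    test_id: str                      # unique identifier for tracking
    inputs: dict[str, Any]            # data items received from an external source
    execution_conditions: list[str]   # e.g., required database state or device configuration
    expected_output: Any              # the specified result the code under test should produce

# Hypothetical example: a test case for a withdraw operation.
tc1 = TestCase(
    test_id="TC-001",
    inputs={"account_balance": 100, "withdraw_amount": 40},
    execution_conditions=["account exists in the test database", "account is not frozen"],
    expected_output=60,
)
print(tc1.test_id, tc1.expected_output)
```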
Test 
A test is a group of related test cases, or a group of related 
test cases and test procedures. 
Test Oracle 
A test oracle is a document or a piece of software that allows 
testers to determine whether a test has passed or failed. 
Test Bed 
A test bed is an environment that contains all the hardware 
and software needed to test a software component or a 
software system.
 1. Quality relates to the degree to which a system, 
system component, or process meets specified 
requirements. 
 2. Quality relates to the degree to which a system, 
system component, or process meets customer or user 
needs, or expectations. 
Metric: 
 A metric is a quantitative measure of the degree to 
which a system, system component, or process 
possesses a given attribute.
 The software quality assurance (SQA) group is a team 
of people with the necessary training and skills to 
ensure that all necessary actions are taken during the 
development process so that the resulting software 
conforms to established technical requirements. 
 REVIEW: 
A review is a group meeting whose purpose is to 
evaluate a software artifact or a set of software 
artifacts.
 Principle 1. Testing is the process of exercising a 
software component using a selected set of test cases, 
with the intent of (i) revealing defects, and (ii) 
evaluating quality. 
 Principle 2. When the test objective is to detect 
defects, then a good test case is one that has a high 
probability of revealing a yet undetected defect(s).
Principle 3. Test results should be inspected meticulously. 
Principle 4. A test case must contain the expected output or result. 
Principle 5. Test cases should be developed for both valid and 
invalid input conditions. 
Principle 6. The probability of the existence of additional defects in 
a software component is proportional to the number of defects already 
detected in that component. 
Principle 7. Testing should be carried out by a group that is 
independent of the development group. 
Principle 8. Tests must be repeatable and reusable. 
Principle 9. Testing should be planned. 
Principle 10. Testing activities should be integrated into the software 
life cycle. 
Principle 11. Testing is a creative and challenging task.
 The term defect is closely related to the terms 
error and failure in the context of the software 
development domain. 
 Sources of defects are Education, 
Communication, Oversight, Transcription, and 
Process.
Defect classes and the defect repository. 
1. Functional Description Defects 
2. Feature Defects 
Features may be described as distinguishing 
characteristics of a software component or system. 
3. Feature Interaction Defects 
4. Interface Description Defects
 Design defects occur when system components, 
interactions between system components, interactions 
between the components and outside 
software/hardware, or users are incorrectly designed. 
 This covers defects in the design of algorithms, control, 
logic, data elements, module interface descriptions, and 
external software/hardware/user interface descriptions.
 Algorithmic and Processing Defects 
 Control, Logic, and Sequence Defects 
 Data Defects 
 Module Interface Description Defects 
 Functional Description Defects 
 External Interface Description Defects.
 Coding defects are derived from errors in implementing the code. 
Coding defect classes are closely related to design defect classes, 
especially if pseudo code has been used for detailed design. 
 Algorithmic and Processing Defects 
 Control, Logic and Sequence Defects 
 Typographical Defects 
 Initialization Defects 
 Data-Flow Defects 
 Data Defects 
 Module Interface Defects 
 Code Documentation Defects 
 External Hardware, Software Interfaces Defects
 Test Harness Defects 
 Test Case Design and Test Procedure Defects
 The Smart Tester: 
Software components have defects, no matter how 
well our defect prevention activities are implemented. 
Developers cannot prevent/eliminate all defects during 
development. 
 Test Case Design Strategies 
A smart tester who wants to maximize use of time 
and resources knows that she needs to develop what we 
will call effective test cases for execution-based testing.
 The black box testing approach uses only the inputs and 
outputs of the software as the basis for designing test cases. 
 Infinite time and resources are not available to 
exhaustively test all possible inputs. 
 The goal for the smart tester is to effectively use the 
resources available by developing a set of test cases 
that gives the maximum yield of defects for the time 
and effort spent.
 Each software module or system has an input domain from 
which test input data is selected. If a tester randomly selects 
inputs from the domain, this is called random testing. 
 Issues in random testing: 
 Are the randomly selected values (say, three of them) adequate to show that 
the module meets its specification when the tests are run? 
 Should additional or fewer values be used to make the most 
effective use of resources? 
 Are there any input values, other than those selected, more 
likely to reveal defects? 
 Should any values outside the valid domain be used as test 
inputs?
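A minimal Python sketch of random testing, assuming a simple absolute-value module and a domain of small integers (both hypothetical), is shown below; the specification itself serves as the oracle.

```python
import random

def abs_value(x: int) -> int:
    """Hypothetical module under test: should return the absolute value of x."""
    return x if x >= 0 else -x

random.seed(42)                      # make the randomly selected inputs reproducible
domain = range(-1000, 1001)          # assumed valid input domain
for _ in range(3):                   # three randomly selected values, as discussed above
    x = random.choice(domain)
    assert abs_value(x) == abs(x), f"defect revealed for input {x}"
print("three randomly selected inputs executed")
```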
 If a tester is viewing the software-under-test as a black 
box with well-defined inputs and outputs, a good 
approach to selecting test inputs is to use a method 
called equivalence class partitioning. 
 Equivalence class partitioning results in a partitioning 
of the input domain of the software under test. The 
technique can also be used to partition the output 
domain, but this is not a common usage.
1. It eliminates the need for exhaustive testing, which is not 
feasible. 
2. It guides a tester in selecting a subset of test inputs with a 
high probability of detecting a defect. 
3. It allows a tester to cover a larger domain of 
inputs/outputs with a smaller subset selected from an 
equivalence class.
 1. ‘‘If an input condition for the software-under-test is specified 
as a range of values, select one valid equivalence class that 
covers the allowed range and two invalid equivalence classes, 
one outside each end of the range.’’ 
 2. ‘‘If an input condition for the software-under-test is specified 
as a number of values, then select one valid equivalence class 
that includes the allowed number of values and two invalid 
equivalence classes that are outside each end of the allowed 
number.’’
3. ‘‘If an input condition for the software-under-test is specified as a set 
of valid input values, then select one valid equivalence class that 
contains all the members of the set and one invalid equivalence class 
for any value outside the set.’’ 
4. ‘‘If an input condition for the software-under-test is specified as a 
“must be” condition, select one valid equivalence class to represent 
the “must be” condition and one invalid class that does not include 
the “must be” condition.’’ 
5. ‘‘If the input specification or any other information leads to the belief 
that an element in an equivalence class is not handled in an identical 
way by the software-under-test, then the class should be further 
partitioned into smaller equivalence classes.’’
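As an illustration of rule 1, consider a hypothetical input specification stating that age must lie in the range 18 to 65. The following minimal Python sketch (the specification and the representative values are assumptions) selects one representative from the valid class and one from each invalid class.

```python
def is_eligible(age: int) -> bool:
    """Hypothetical code under test: valid ages are 18..65 inclusive."""
    return 18 <= age <= 65

# One valid equivalence class covering the range, two invalid classes outside each end.
equivalence_classes = {
    "valid: 18 <= age <= 65": (30, True),    # representative value, expected result
    "invalid: age < 18":      (10, False),
    "invalid: age > 65":      (90, False),
}

for description, (age, expected) in equivalence_classes.items():
    assert is_eligible(age) == expected, f"defect revealed in class: {description}"
print("one representative per equivalence class executed")
```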
 Equivalence class partitioning gives the tester a 
useful tool with which to develop black box based-test 
cases for the software-under-test. The method 
requires that a tester has access to a specification of 
input/output behavior for the target software. 
 The test cases developed based on equivalence class 
partitioning can be strengthened by use of another 
technique called boundary value analysis.
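Continuing the hypothetical 18-to-65 example, boundary value analysis selects inputs at, just below, and just above each boundary of the valid class; a minimal sketch follows.

```python
def is_eligible(age: int) -> bool:
    """Hypothetical code under test (same 18..65 specification as before)."""
    return 18 <= age <= 65

# Boundary value analysis: values on and around each boundary of the valid range.
boundary_inputs = [17, 18, 19, 64, 65, 66]
for age in boundary_inputs:
    expected = 18 <= age <= 65       # oracle taken directly from the specification
    assert is_eligible(age) == expected, f"defect revealed at boundary input {age}"
print("boundary values executed:", boundary_inputs)
```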
 Cause-and-Effect Graphing 
Cause-and-effect graphing is a technique that can be used to 
combine conditions and derive an effective set of test cases that 
may disclose inconsistencies in a specification. 
 State Transition Testing 
State transition testing is useful for both procedural and 
object-oriented development. It is based on the concepts of states 
and finite-state machines, and allows the tester to view the 
developing software in terms of its states, transitions between 
states, and the inputs and events that trigger state changes.
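A minimal sketch of state transition testing for a hypothetical order workflow (the states, events, and Order class are assumptions): each specified (state, event) pair is exercised and the resulting state is checked against the expected machine.

```python
# Expected transitions written down from the (hypothetical) specification.
EXPECTED_TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "close"): "closed",
}

class Order:
    """Hypothetical implementation under test."""
    def __init__(self) -> None:
        self.state = "new"

    def handle(self, event: str) -> None:
        if self.state == "new" and event == "pay":
            self.state = "paid"
        elif self.state == "paid" and event == "ship":
            self.state = "shipped"
        elif self.state == "shipped" and event == "close":
            self.state = "closed"
        # any other (state, event) pair leaves the state unchanged

for (start, event), expected in EXPECTED_TRANSITIONS.items():
    order = Order()
    order.state = start              # drive the machine into the required starting state
    order.handle(event)
    assert order.state == expected, f"transition defect: {start} --{event}--> {order.state}"
print("all specified transitions exercised")
```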
 Designing test cases using the error guessing 
approach is based on the tester’s/developer’s past 
experience with code similar to the code-under-test, 
and their intuition as to where defects may 
lurk in the code. 
 Code similarities may extend to the structure of 
the code, its domain, the design approach used, its 
complexity, and other factors.
 As software development evolves into an 
engineering discipline, the reuse of software 
components will play an increasingly important 
role. 
 Reuse of components means that developers 
need not reinvent the wheel; instead they can 
reuse an existing software component with the 
required functionality.
 Black box methods have ties to the other maturity goals at 
TMM level 2. 
 The defect/problem fix report should contain the following 
information: 
• project identifier 
• the problem/defect identifier 
• testing phase that uncovered the defect 
• a classification for the defect found 
• a description of the repairs that were done 
• the identification number(s) of the associated tests 
• the date of repair 
• the name of the repairer.
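For illustration only, the fields listed above could be held in a simple record; the field names and the sample values in this Python sketch are assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DefectFixReport:
    """Illustrative defect/problem fix report record."""
    project_id: str
    defect_id: str
    testing_phase: str             # phase that uncovered the defect
    defect_class: str              # classification of the defect found
    repair_description: str        # description of the repairs that were done
    associated_test_ids: list[str]
    repair_date: date
    repairer: str

report = DefectFixReport(
    project_id="PRJ-001",
    defect_id="DEF-0137",
    testing_phase="integration test",
    defect_class="control/logic defect",
    repair_description="corrected loop termination condition",
    associated_test_ids=["TC-210", "TC-215"],
    repair_date=date(2014, 1, 15),
    repairer="J. Doe",
)
print(report.defect_id, report.repair_date)
```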
STRATEGIES AND METHODS 
FOR TEST CASE DESIGN - II
 Testers need a framework for deciding which structural 
elements to select as the focus of testing, for choosing 
the appropriate test data, and for deciding when the 
testing efforts are adequate to terminate the 
process with confidence that the software is working 
properly. 
 Such a framework exists in the form of test adequacy 
criteria.
The application scope of adequacy criteria also includes: 
(i) helping testers to select properties of a program to focus on during 
test; 
(ii) helping testers to select a test data set for a program based on the 
selected properties; 
(iii) supporting testers with the development of quantitative objectives for testing; 
(iv) indicating to testers whether or not testing can be stopped for that 
program. 
A program is said to be adequately tested with respect to a given 
criterion if all of the target structural elements have been exercised 
according to the selected criterion.
 The application of coverage analysis is typically 
associated with the use of control and data flow 
models to represent program structural elements 
and data. 
 The logic elements most commonly considered 
for coverage are based on the flow of control in a 
unit of code.
 Logic-based white box–based test design and use of 
test data adequacy/coverage concepts provide two 
major payoffs for the tester: (i) quantitative coverage 
goals can be proposed, and (ii) commercial tool support 
is readily available to facilitate the tester’s work. 
 The tester must decide, based on the type of code, 
reliability requirements, and resources available, which 
criterion to select, since the stronger the criterion 
selected, the more resources are usually required to 
satisfy it.
 The cyclomatic complexity attribute is very 
useful to a tester. 
 The complexity value is usually calculated from 
the control flow graph (G) by the formula 
 V(G) = E - N + 2 
 The value E is the number of edges in the control 
flow graph and N is the number of nodes.
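A small worked example (the control flow graph is hypothetical): a unit containing a single if/else decision has four nodes (decision, then-branch, else-branch, join) and four edges, so V(G) = 4 - 4 + 2 = 2, meaning two independent paths should be tested.

```python
# Worked example of V(G) = E - N + 2 for a single if/else decision.
# Nodes: 1 = decision, 2 = then-branch, 3 = else-branch, 4 = join/exit.
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
nodes = {n for edge in edges for n in edge}

def cyclomatic_complexity(num_edges: int, num_nodes: int) -> int:
    """V(G) = E - N + 2 for a connected, single-entry/single-exit flow graph."""
    return num_edges - num_nodes + 2

print(cyclomatic_complexity(len(edges), len(nodes)))   # 4 - 4 + 2 = 2 basis paths
```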
 In addition to the use of software logic and control 
structures to guide test data generation and to 
evaluate test completeness, there are alternative 
methods that focus on other characteristics of the 
code. 
 To satisfy the all def-use criterion the tester must 
identify and classify occurrences of all the 
variables in the software under test.
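A minimal sketch of the idea (the billing function and its variables are hypothetical): each definition of a variable must be paired with every use it can reach, and the test set must exercise each such def-use pair.

```python
def billing(amount: float, member: bool) -> float:
    """Hypothetical unit under test, annotated with def/use occurrences."""
    discount = 0.0                     # def of 'discount'
    if member:
        discount = 0.25                # second def of 'discount'
    total = amount * (1 - discount)    # use of 'discount' and 'amount', def of 'total'
    return total                       # use of 'total'

# All def-use coverage for 'discount' requires exercising both definitions
# together with the use in the computation of 'total'.
assert billing(100.0, member=False) == 100.0   # covers the (discount = 0.0, use) pair
assert billing(100.0, member=True) == 75.0     # covers the (discount = 0.25, use) pair
```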
 Loops are among the most frequently used 
control structures. Experienced software 
engineers realize that many defects are 
associated with loop constructs. 
 Loop testing strategies focus on detecting 
common defects associated with these 
structures.
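A minimal sketch of a simple-loop strategy for a hypothetical summing loop: exercise zero iterations, one iteration, two iterations, a typical count, and the maximum count.

```python
def sum_first(values: list[int], n: int) -> int:
    """Hypothetical unit under test: sum the first n elements of values."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]
# Loop-test cases: skip the loop, one pass, two passes, a typical count, the maximum.
loop_cases = {0: 0, 1: 1, 2: 3, 3: 6, len(data): 15}
for n, expected in loop_cases.items():
    assert sum_first(data, n) == expected, f"loop defect for n = {n}"
print("loop test cases exercised:", sorted(loop_cases))
```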
 Mutation testing is another approach to test data 
generation that requires knowledge of code 
structure, but it is classified as a fault-based 
testing approach. 
 It considers the possible faults that could occur in 
a software component as the basis for test data 
generation and evaluation of testing effectiveness.
Mutation testing makes two major assumptions: 
1. The competent programmer hypothesis. This states that a 
competent programmer writes programs that are nearly correct. 
Therefore assume that there are no major construction errors in the 
program; the code is correct except for a simple error(s). 
2. The coupling effect. This effect relates to questions a tester might 
have about how well mutation testing can detect complex errors 
since the changes made to the code are very simple.
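A minimal sketch of the idea (the function, mutant, and test values are hypothetical): a mutant is produced by one simple change to the code, and the test set is judged by whether at least one test case kills the mutant, i.e., distinguishes it from the original.

```python
def max_of_two(a: int, b: int) -> int:
    """Original program, assumed nearly correct (competent programmer hypothesis)."""
    return a if a > b else b

def max_of_two_mutant(a: int, b: int) -> int:
    """Mutant: a single simple change ('>' replaced by '<')."""
    return a if a < b else b

tests = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

# The mutant is "killed" if some test case produces a result different from the original.
killed = any(max_of_two_mutant(*inputs) != expected for inputs, expected in tests)
print("mutant killed" if killed else "mutant survived; the test set needs strengthening")
```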
 Testers are often faced with the decision of which criterion to apply 
to a given item under test given the nature of the item and the 
constraints of the test environment (time, costs, resources). 
 Testers can use the axioms to 
◦ Recognize both strong and weak adequacy criteria; a tester may 
decide to use a weak criterion, but should be aware of its 
weakness with respect to the properties described by the axioms; 
◦ Focus attention on the properties that an effective test data 
adequacy criterion should exhibit; 
◦ Select an appropriate criterion for the item under test; 
◦ Stimulate thought for the development of new criteria; the axioms 
are the framework with which to evaluate these new criteria.
 The axioms are based on the following set of 
assumptions: 
(i) programs are written in a structured 
programming language; 
(ii) programs are SESE (single entry/single exit); 
(iii) all input statements appear at the beginning of 
the program; 
(iv) all output statements appear at the end of the 
program.
 Applicability Property 
 Non-exhaustive applicability Property 
 Monotonicity Property 
 Inadequate Empty Set 
 Antiextensionality Property 
 General Multiple Change Property 
 Antidecomposition Property 
 Anticomposition Property 
 Complexity Property 
 Statement Coverage Property
Use of different testing strategies and methods has the following benefits: 
1. The tester is encouraged to view the developing software from several 
different views to generate the test data. 
2. The tester must interact with other development personnel such as 
requirements analysts and designers to review their representations of the 
software. 
3. The tester is better equipped to evaluate the quality of the testing effort (there 
are more tools and approaches available from the combination of strategies). 
4. The tester is better able to contribute to organizational test process 
improvement efforts based on his/her knowledge of a variety of testing 
strategies.
 Levels of testing include the different 
methodologies that can be used while conducting 
Software Testing. 
 They are: 
◦ Unit Test 
◦ Integration Test 
◦ System Test 
◦ Acceptance Test
 This type of testing is performed by the developers 
before the setup is handed over to the testing team to 
formally execute the test cases. 
 Unit testing is performed by the respective developers 
on the individual units of source code in their assigned areas. 
The developers use test data that is separate from the 
test data of the quality assurance team. 
 The goal of unit testing is to isolate each part of the 
program and show that individual parts are correct in 
terms of requirements and functionality.
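A minimal sketch of a developer-written unit test using Python's standard unittest module; the apply_discount function and its test data are assumptions used only for illustration.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main()
```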
 Testing cannot catch each and every bug in an 
application. It is impossible to evaluate every 
execution path in every software application. The 
same is the case with unit testing. 
 There is a limit to the number of scenarios and test 
data that the developer can use to verify the source 
code. So after he has exhausted all options there is 
no choice but to stop unit testing and merge the 
code segment with other units.
 The testing of combined parts of an application to 
determine whether they function correctly together is 
integration testing. There are two methods of doing 
integration testing: bottom-up integration testing and 
top-down integration testing. 
 The process concludes with multiple tests of the 
complete application, preferably in scenarios designed 
to mimic those it will encounter in customers' 
computers, systems and network.
S.No.  Integration Testing Method 
1      Bottom-up integration 
       This testing begins with unit testing, followed by tests of 
       progressively higher-level combinations of units called 
       modules or builds. 
2      Top-down integration 
       In this testing, the highest-level modules are tested first, and 
       progressively lower-level modules are tested after that.
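A minimal sketch contrasting the two orders (the tax and invoice_total modules, the stub, and the driver are all hypothetical): bottom-up exercises the low-level module through a test driver, while top-down exercises the high-level module with a stub standing in for the module that has not yet been integrated.

```python
# Low-level module (bottom-up: tested first, called by a simple test driver).
def tax(amount: float) -> float:
    return round(amount * 0.2, 2)

def driver_for_tax() -> None:
    """Test driver standing in for the eventual caller."""
    assert tax(100.0) == 20.0

# High-level module (top-down: tested first, with a stub replacing the lower module).
def tax_stub(amount: float) -> float:
    return 20.0                        # canned value returned by the stub

def invoice_total(amount: float, tax_fn=tax) -> float:
    return amount + tax_fn(amount)

driver_for_tax()                                          # bottom-up step
assert invoice_total(100.0, tax_fn=tax_stub) == 120.0     # top-down step using the stub
assert invoice_total(100.0) == 120.0                      # real modules combined after integration
print("integration steps exercised")
```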
 This is the next level in the testing and tests the system as a whole. 
Once all the components are integrated, the application as a whole is 
tested rigorously to see that it meets Quality Standards. This type of 
testing is performed by a specialized testing team. 
 System testing is so important because of the following reasons: 
 System testing is the first level of testing in the software development life 
cycle where the application is tested as a whole. 
 The application is tested thoroughly to verify that it meets the 
functional and technical specifications. 
 The application is tested in an environment which is very close to the 
production environment where the application will be deployed. 
 System testing enables us to test, verify, and validate both the business 
requirements as well as the application architecture.
 Functional testing 
 Performance testing 
 Stress testing 
 Configuration testing 
 Security testing 
 Recovery testing
 This is arguably the most important type of testing, as it is conducted by 
the Quality Assurance Team, who will gauge whether the application meets 
the intended specifications and satisfies the client's requirements. The QA 
team will have a set of pre-written scenarios and test cases that will be 
used to test the application. 
 More ideas will be shared about the application and more tests can be 
performed on it to gauge its accuracy and the reasons why the project was 
initiated. Acceptance tests are not only intended to point out simple 
spelling mistakes, cosmetic errors, or interface gaps, but also to point out 
any bugs in the application that will result in system crashes or major 
errors in the application. 
 By performing acceptance tests on an application the testing team will 
deduce how the application will perform in production. There are also legal 
and contractual requirements for acceptance of the system.
 This test is the first stage of testing and will be performed 
amongst the teams (developer and QA teams). Unit testing, 
integration testing and system testing when combined are 
known as alpha testing. During this phase, the following 
will be tested in the application: 
 Spelling Mistakes 
 Broken Links 
 Cloudy Directions 
 The Application will be tested on machines with the lowest 
specification to test loading times and any latency 
problems.
Beta test is performed after Alpha testing has been successfully 
performed. In beta testing a sample of the intended audience tests 
the application. Beta testing is also known as pre-release testing. 
In this phase the audience will be testing the following: 
 Users will install, run the application and send their feedback to 
the project team. 
 Typographical errors, confusing application flow, and even 
crashes. 
 With this feedback, the project team can fix the problems 
before releasing the software to the actual users. 
 The more issues you fix that solve real user problems, the higher 
the quality of your application will be. 
 Having a higher-quality application when you release to the 
general public will increase customer satisfaction.
 The development, documentation, and 
institutionalization of goals and related policies is 
important to an organization. 
 The goals/policies may be business-related, 
technical, or political in nature. 
 They are the basis for decision making; therefore 
setting goals and policies requires the 
participation and support of upper management.
 1.Business goal: to increase market share 10% in the 
next 2 years in the area of financial software. 
 2. Technical goal: to reduce defects by 2% per year 
over the next 3 years. 
 3. Business/technical goal: to reduce hotline calls by 
5% over the next 2 years. 
 4. Political goal: to increase the number of women and 
minorities in high management positions by 15% in the 
next 3 years.
 A plan is a document that provides a framework or 
approach for achieving a set of goals. 
 Test plans for software projects are very complex and 
detailed documents. 
 The planner usually includes the following essential high-level 
items. 
 Overall test objectives. 
 What to test (scope of the tests). 
 Who will test 
 How to test 
 When to test 
 When to stop testing
 Test Item Transmittal Report is not a component of the test plan, 
but is necessary to locate and track the items that are submitted 
for test. Each Test Item Transmittal Report has a unique 
identifier. 
 It should contain the following information for each item that is 
tracked. 
 Version/revision number of the item; 
 Location of the item; 
 Persons responsible for the item (e.g., the developer); 
 References to item documentation and the test plan it is related 
to; 
 Status of the item; 
 Approvals—space for signatures of staff who approve the 
transmittal.
 The test plan and its attachments are test-related 
documents that are prepared prior to test execution. 
There are additional documents related to testing that 
are prepared during and after execution of the tests. 
 The test log should be prepared by the person 
executing the tests. It is a diary of the events that take 
place during the test. 
 Test Log Identifier 
Each test log should have a unique identifier.
 Working with management to develop testing and debugging policy 
and goals. 
 Participating in the teams that oversee policy compliance and 
change management. 
 Familiarizing themselves with the approved set of testing/debugging 
goals and policies, keeping up-to-date with revisions, and making 
suggestions for changes when appropriate. 
 When developing test plans, setting testing goals for each project at 
each level of test that reflect organizational testing goals and 
policies. 
 Carrying out testing activities that are in compliance with 
organizational policies.
THANK YOU 

Mais conteúdo relacionado

Mais procurados

powerpoint template for testing training
powerpoint template for testing trainingpowerpoint template for testing training
powerpoint template for testing trainingJohn Roddy
 
Basic Guide to Manual Testing
Basic Guide to Manual TestingBasic Guide to Manual Testing
Basic Guide to Manual TestingHiral Gosani
 
SOFTWARE TESTING UNIT-4
SOFTWARE TESTING UNIT-4  SOFTWARE TESTING UNIT-4
SOFTWARE TESTING UNIT-4 Mohammad Faizan
 
Software Testing 101
Software Testing 101Software Testing 101
Software Testing 101QA Hannah
 
Software Testing Fundamentals
Software Testing FundamentalsSoftware Testing Fundamentals
Software Testing FundamentalsChankey Pathak
 
Types of software testing
Types of software testingTypes of software testing
Types of software testingPrachi Sasankar
 
Types of software testing
Types of software testingTypes of software testing
Types of software testingTestbytes
 
Software Testing - Part 1 (Techniques, Types, Levels, Methods, STLC, Bug Life...
Software Testing - Part 1 (Techniques, Types, Levels, Methods, STLC, Bug Life...Software Testing - Part 1 (Techniques, Types, Levels, Methods, STLC, Bug Life...
Software Testing - Part 1 (Techniques, Types, Levels, Methods, STLC, Bug Life...Ankit Prajapati
 
Software Testing Life Cycle (STLC) | Software Testing Tutorial | Edureka
Software Testing Life Cycle (STLC) | Software Testing Tutorial | EdurekaSoftware Testing Life Cycle (STLC) | Software Testing Tutorial | Edureka
Software Testing Life Cycle (STLC) | Software Testing Tutorial | EdurekaEdureka!
 
Introduction to software testing
Introduction to software testingIntroduction to software testing
Introduction to software testingHadi Fadlallah
 
Software testing and process
Software testing and processSoftware testing and process
Software testing and processgouravkalbalia
 
What is Integration Testing? | Edureka
What is Integration Testing? | EdurekaWhat is Integration Testing? | Edureka
What is Integration Testing? | EdurekaEdureka!
 

Mais procurados (20)

Software Testing or Quality Assurance
Software Testing or Quality AssuranceSoftware Testing or Quality Assurance
Software Testing or Quality Assurance
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Testing methodology
Testing methodologyTesting methodology
Testing methodology
 
powerpoint template for testing training
powerpoint template for testing trainingpowerpoint template for testing training
powerpoint template for testing training
 
Basic Guide to Manual Testing
Basic Guide to Manual TestingBasic Guide to Manual Testing
Basic Guide to Manual Testing
 
SOFTWARE TESTING UNIT-4
SOFTWARE TESTING UNIT-4  SOFTWARE TESTING UNIT-4
SOFTWARE TESTING UNIT-4
 
Software Testing 101
Software Testing 101Software Testing 101
Software Testing 101
 
Software Testing Fundamentals
Software Testing FundamentalsSoftware Testing Fundamentals
Software Testing Fundamentals
 
Manual testing ppt
Manual testing pptManual testing ppt
Manual testing ppt
 
Types of software testing
Types of software testingTypes of software testing
Types of software testing
 
Types of software testing
Types of software testingTypes of software testing
Types of software testing
 
Software Testing - Part 1 (Techniques, Types, Levels, Methods, STLC, Bug Life...
Software Testing - Part 1 (Techniques, Types, Levels, Methods, STLC, Bug Life...Software Testing - Part 1 (Techniques, Types, Levels, Methods, STLC, Bug Life...
Software Testing - Part 1 (Techniques, Types, Levels, Methods, STLC, Bug Life...
 
Software Testing Life Cycle (STLC) | Software Testing Tutorial | Edureka
Software Testing Life Cycle (STLC) | Software Testing Tutorial | EdurekaSoftware Testing Life Cycle (STLC) | Software Testing Tutorial | Edureka
Software Testing Life Cycle (STLC) | Software Testing Tutorial | Edureka
 
Introduction to software testing
Introduction to software testingIntroduction to software testing
Introduction to software testing
 
Software testing
Software testing Software testing
Software testing
 
Software testing
Software testingSoftware testing
Software testing
 
Types of testing
Types of testingTypes of testing
Types of testing
 
Software testing and process
Software testing and processSoftware testing and process
Software testing and process
 
What is Integration Testing? | Edureka
What is Integration Testing? | EdurekaWhat is Integration Testing? | Edureka
What is Integration Testing? | Edureka
 
Software testing
Software testingSoftware testing
Software testing
 

Semelhante a SOFTWARE TESTING

Software testing & Quality Assurance
Software testing & Quality Assurance Software testing & Quality Assurance
Software testing & Quality Assurance Webtech Learning
 
Lecture 08 (SQE, Testing, PM, RM, ME).pptx
Lecture 08 (SQE, Testing, PM, RM, ME).pptxLecture 08 (SQE, Testing, PM, RM, ME).pptx
Lecture 08 (SQE, Testing, PM, RM, ME).pptxSirRafiLectures
 
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTINGWelingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTINGSachin Pathania
 
38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answers38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answersMaria FutureThoughts
 
Module-4 PART-2&3.ppt
Module-4 PART-2&3.pptModule-4 PART-2&3.ppt
Module-4 PART-2&3.pptSharatNaik11
 
What is software quality management
What is software quality managementWhat is software quality management
What is software quality managementselinasimpson321
 
IT8076 – Software Testing Intro
IT8076 – Software Testing IntroIT8076 – Software Testing Intro
IT8076 – Software Testing IntroJohnSamuel280314
 
Building a software testing environment
Building a software testing environmentBuilding a software testing environment
Building a software testing environmentHimanshu
 
Software_Verification_and_Validation.ppt
Software_Verification_and_Validation.pptSoftware_Verification_and_Validation.ppt
Software_Verification_and_Validation.pptSaba651353
 
MIT521 software testing (2012) v2
MIT521   software testing  (2012) v2MIT521   software testing  (2012) v2
MIT521 software testing (2012) v2Yudep Apoi
 
What Is the Software Testing Life Cycle.pdf
What Is the Software Testing Life Cycle.pdfWhat Is the Software Testing Life Cycle.pdf
What Is the Software Testing Life Cycle.pdfAnanthReddy38
 
DISE - Software Testing and Quality Management
DISE - Software Testing and Quality ManagementDISE - Software Testing and Quality Management
DISE - Software Testing and Quality ManagementRasan Samarasinghe
 
Software testing sengu
Software testing  senguSoftware testing  sengu
Software testing senguSengu Msc
 

Semelhante a SOFTWARE TESTING (20)

Software testing & Quality Assurance
Software testing & Quality Assurance Software testing & Quality Assurance
Software testing & Quality Assurance
 
Lecture 08 (SQE, Testing, PM, RM, ME).pptx
Lecture 08 (SQE, Testing, PM, RM, ME).pptxLecture 08 (SQE, Testing, PM, RM, ME).pptx
Lecture 08 (SQE, Testing, PM, RM, ME).pptx
 
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTINGWelingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
 
38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answers38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answers
 
Module-4 PART-2&3.ppt
Module-4 PART-2&3.pptModule-4 PART-2&3.ppt
Module-4 PART-2&3.ppt
 
QACampus PPT (STLC)
QACampus PPT (STLC)QACampus PPT (STLC)
QACampus PPT (STLC)
 
What is software quality management
What is software quality managementWhat is software quality management
What is software quality management
 
IT8076 – Software Testing Intro
IT8076 – Software Testing IntroIT8076 – Software Testing Intro
IT8076 – Software Testing Intro
 
Softwaretesting
SoftwaretestingSoftwaretesting
Softwaretesting
 
T0 numtq0nje=
T0 numtq0nje=T0 numtq0nje=
T0 numtq0nje=
 
Software testing kn husainy
Software testing kn husainySoftware testing kn husainy
Software testing kn husainy
 
Building a software testing environment
Building a software testing environmentBuilding a software testing environment
Building a software testing environment
 
Software_Verification_and_Validation.ppt
Software_Verification_and_Validation.pptSoftware_Verification_and_Validation.ppt
Software_Verification_and_Validation.ppt
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
CTFL chapter 05
CTFL chapter 05CTFL chapter 05
CTFL chapter 05
 
MIT521 software testing (2012) v2
MIT521   software testing  (2012) v2MIT521   software testing  (2012) v2
MIT521 software testing (2012) v2
 
What Is the Software Testing Life Cycle.pdf
What Is the Software Testing Life Cycle.pdfWhat Is the Software Testing Life Cycle.pdf
What Is the Software Testing Life Cycle.pdf
 
DISE - Software Testing and Quality Management
DISE - Software Testing and Quality ManagementDISE - Software Testing and Quality Management
DISE - Software Testing and Quality Management
 
Software testing sengu
Software testing  senguSoftware testing  sengu
Software testing sengu
 
STLC-ppt-1.pptx
STLC-ppt-1.pptxSTLC-ppt-1.pptx
STLC-ppt-1.pptx
 

Último

DM Pillar Training Manual.ppt will be useful in deploying TPM in project
DM Pillar Training Manual.ppt will be useful in deploying TPM in projectDM Pillar Training Manual.ppt will be useful in deploying TPM in project
DM Pillar Training Manual.ppt will be useful in deploying TPM in projectssuserb6619e
 
Ch10-Global Supply Chain - Cadena de Suministro.pdf
Ch10-Global Supply Chain - Cadena de Suministro.pdfCh10-Global Supply Chain - Cadena de Suministro.pdf
Ch10-Global Supply Chain - Cadena de Suministro.pdfChristianCDAM
 
Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvLewisJB
 
Class 1 | NFPA 72 | Overview Fire Alarm System
Class 1 | NFPA 72 | Overview Fire Alarm SystemClass 1 | NFPA 72 | Overview Fire Alarm System
Class 1 | NFPA 72 | Overview Fire Alarm Systemirfanmechengr
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptSAURABHKUMAR892774
 
Engineering Drawing section of solid
Engineering Drawing     section of solidEngineering Drawing     section of solid
Engineering Drawing section of solidnamansinghjarodiya
 
Input Output Management in Operating System
Input Output Management in Operating SystemInput Output Management in Operating System
Input Output Management in Operating SystemRashmi Bhat
 
Virtual memory management in Operating System
Virtual memory management in Operating SystemVirtual memory management in Operating System
Virtual memory management in Operating SystemRashmi Bhat
 
"Exploring the Essential Functions and Design Considerations of Spillways in ...
"Exploring the Essential Functions and Design Considerations of Spillways in ..."Exploring the Essential Functions and Design Considerations of Spillways in ...
"Exploring the Essential Functions and Design Considerations of Spillways in ...Erbil Polytechnic University
 
home automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasadhome automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasadaditya806802
 
THE SENDAI FRAMEWORK FOR DISASTER RISK REDUCTION
THE SENDAI FRAMEWORK FOR DISASTER RISK REDUCTIONTHE SENDAI FRAMEWORK FOR DISASTER RISK REDUCTION
THE SENDAI FRAMEWORK FOR DISASTER RISK REDUCTIONjhunlian
 
Research Methodology for Engineering pdf
Research Methodology for Engineering pdfResearch Methodology for Engineering pdf
Research Methodology for Engineering pdfCaalaaAbdulkerim
 
Indian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.pptIndian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.pptMadan Karki
 
multiple access in wireless communication
multiple access in wireless communicationmultiple access in wireless communication
multiple access in wireless communicationpanditadesh123
 
Industrial Safety Unit-IV workplace health and safety.ppt
Industrial Safety Unit-IV workplace health and safety.pptIndustrial Safety Unit-IV workplace health and safety.ppt
Industrial Safety Unit-IV workplace health and safety.pptNarmatha D
 
Main Memory Management in Operating System
Main Memory Management in Operating SystemMain Memory Management in Operating System
Main Memory Management in Operating SystemRashmi Bhat
 
Crystal Structure analysis and detailed information pptx
Crystal Structure analysis and detailed information pptxCrystal Structure analysis and detailed information pptx
Crystal Structure analysis and detailed information pptxachiever3003
 
Katarzyna Lipka-Sidor - BIM School Course
Katarzyna Lipka-Sidor - BIM School CourseKatarzyna Lipka-Sidor - BIM School Course
Katarzyna Lipka-Sidor - BIM School Coursebim.edu.pl
 
Earthing details of Electrical Substation
Earthing details of Electrical SubstationEarthing details of Electrical Substation
Earthing details of Electrical Substationstephanwindworld
 

Último (20)

DM Pillar Training Manual.ppt will be useful in deploying TPM in project
DM Pillar Training Manual.ppt will be useful in deploying TPM in projectDM Pillar Training Manual.ppt will be useful in deploying TPM in project
DM Pillar Training Manual.ppt will be useful in deploying TPM in project
 
Ch10-Global Supply Chain - Cadena de Suministro.pdf
Ch10-Global Supply Chain - Cadena de Suministro.pdfCh10-Global Supply Chain - Cadena de Suministro.pdf
Ch10-Global Supply Chain - Cadena de Suministro.pdf
 
Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvv
 
Class 1 | NFPA 72 | Overview Fire Alarm System
Class 1 | NFPA 72 | Overview Fire Alarm SystemClass 1 | NFPA 72 | Overview Fire Alarm System
Class 1 | NFPA 72 | Overview Fire Alarm System
 
POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examples
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.ppt
 
Engineering Drawing section of solid
Engineering Drawing     section of solidEngineering Drawing     section of solid
Engineering Drawing section of solid
 
Input Output Management in Operating System
Input Output Management in Operating SystemInput Output Management in Operating System
Input Output Management in Operating System
 
Virtual memory management in Operating System
Virtual memory management in Operating SystemVirtual memory management in Operating System
Virtual memory management in Operating System
 
"Exploring the Essential Functions and Design Considerations of Spillways in ...
"Exploring the Essential Functions and Design Considerations of Spillways in ..."Exploring the Essential Functions and Design Considerations of Spillways in ...
"Exploring the Essential Functions and Design Considerations of Spillways in ...
 
home automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasadhome automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasad
 
THE SENDAI FRAMEWORK FOR DISASTER RISK REDUCTION
THE SENDAI FRAMEWORK FOR DISASTER RISK REDUCTIONTHE SENDAI FRAMEWORK FOR DISASTER RISK REDUCTION
THE SENDAI FRAMEWORK FOR DISASTER RISK REDUCTION
 
Research Methodology for Engineering pdf
Research Methodology for Engineering pdfResearch Methodology for Engineering pdf
Research Methodology for Engineering pdf
 
Indian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.pptIndian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.ppt
 
multiple access in wireless communication
multiple access in wireless communicationmultiple access in wireless communication
multiple access in wireless communication
 
Industrial Safety Unit-IV workplace health and safety.ppt
Industrial Safety Unit-IV workplace health and safety.pptIndustrial Safety Unit-IV workplace health and safety.ppt
Industrial Safety Unit-IV workplace health and safety.ppt
 
Main Memory Management in Operating System
Main Memory Management in Operating SystemMain Memory Management in Operating System
Main Memory Management in Operating System
 
Crystal Structure analysis and detailed information pptx
Crystal Structure analysis and detailed information pptxCrystal Structure analysis and detailed information pptx
Crystal Structure analysis and detailed information pptx
 
Katarzyna Lipka-Sidor - BIM School Course
Katarzyna Lipka-Sidor - BIM School CourseKatarzyna Lipka-Sidor - BIM School Course
Katarzyna Lipka-Sidor - BIM School Course
 
Earthing details of Electrical Substation
Earthing details of Electrical SubstationEarthing details of Electrical Substation
Earthing details of Electrical Substation
 

SOFTWARE TESTING

  • 1. Submitted by, C.PRIYANKA KARANCY, M.TECH,CCE, REG. NO. PR13CS2013.
  • 2. INTRODUCTION TO TESTING AS AN ENGINEERING ACTIVITY
  • 3. The software development implies that the following approaches.  The development process is well understood;  Projects are planned.  Life cycle models are defined and adhered to.  Standards are in place for product and process.  Measurements are employed to evaluate product and process quality.  Components are reused.  Validation and verification processes play a key role in quality  determination;  engineers have proper education, training, and certification.
  • 4. Type Purposes Activities Roles 1 Test Products Test Development, Test Execution. Testers: Extension of Development. 2 Measure Products Test oversight, reporting results. Measurers; Quality Hurdle. 3 Measure Processes Metrics. Information Engineers. 4 Define Processes Process and Risk Management. Quality and Process Engineers. 5 Guidance Resource Quality Reference. Quality Engineers.
  • 5.  Validation is the process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements.  Verification is the process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
  • 6.
  • 7. Testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software. Debugging, or fault localization is the process of  locating the fault or defect, repairing the code, retesting the code.
  • 8. Benefits of test process improvement are the following:  smarter testers  higher quality software  the ability to meet budget and scheduling goals  improved planning  the ability to meet quantifiable testing goals
  • 9. Maturity goals Each maturity level, except 1, contains certain maturity goals. For an organization to reach a certain level, the corresponding maturity goals must be met by the organization. Maturity subgoals Maturity goals are supported by maturity subgoals ATRs Maturity subgoals are achieved by means of ATRs. ATRs address issues concerning implementation of activities and tasks. ATRs also address how an organization can adapt its practices so that it can move in-line with the TMM model. ATRs are refined into “views” from the perspectives of three groups: Managers Developers and test engineers Customers (user/client).
  • 10.
  • 11. Level 1 – Initial There are no maturity goals to be met at this level. Testing begins after code is written. An organization performs testing to demonstrate that the system works. No serious effort is made to track the progress of testing. Test cases are designed and executed in an ad hoc manner. In summary, testing is not viewed as a critical, distinct phase in software development.
  • 12. Level 2 – Phase Definition: The maturity goals are as follows: Develop testing and debugging goals. Some concrete maturity subgoals that can support this goal are as follows: Organizations form committees on testing and debugging. The committees develop and document testing and debugging goals. Initiate a test planning process. (Identify test objectives. Analyze risks. Devise strategies. Develop test specifications. Allocate resources.) Some concrete maturity subgoals that can support this goal are as follows: Assign the task of test planning to a committee. The committee develops a test plan template. Proper tools are used to create and manage test plans. Provisions are put in place so that customer needs constitute a part of the test plan. Institutionalize basic testing techniques and methods. The following concrete subgoals support the above maturity subgoal. An expert group recommends a set of basic testing techniques and methods. The management establishes policies to execute the recommendations.
  • 13. Level 3 – Integration: The maturity goals are as follows: Establish a software test group. Concrete subgoals to support the above are: An organization-wide test group is formed with leadership, support, and $$. The test group is involved in all stages of the software development. Trained and motivated test engineers are assigned to the group. The test group communicates with the customers. Establish a technical training program. Integrate testing into the software lifecycle. Concrete subgoals to support the above are: The test phase is partitined into several activities: unit, integration, system, and acceptance testing. Follow the V-model. Control and monitor the testing process. Concrete subgoals to support the above are: Develop policies and mechanisms to monitor and control test projects. Define a set of metrics related to the test project. Be prepared with a contingency plan
  • 14. Level 4 – Management and Measurement: The maturity goals are: Establish an organization-wide review program. Maturity subgoals to support the above are as follows. The management develops review policies. The test group develops goals, plans, procedures, and recording mechanisms for carrying out reviews. Members of the test group are trained to be effective. Establish a test management program. Maturity subgoals to support the above are as follows. Test metrics should be identified along with their goals. A test measurement plan is developed for data collection and analysis. An action plan should be developed to achieve process improvement. Evaluate software quality. Maturity subgoals to support the above are as follows. The organization defines quality attributes and quality goals for products. The management develops policies and mechanisms to collect test metrics to support the quality goals.
  • 15. Level 5 –Optimization, Defect Prevention and Quality Control: The maturity goals are as follows: Application of process data for defect prevention Maturity subgoals to support the above are as follows. Establish a defect prevention team. Document defects that have been identified and removed. Each defect is analyzed to get to its root cause. Develop an action plan to eliminate recurrence of common defects. Statistical quality control Maturity subgoals to support the above are as follows. Establish high-level measurable quality goals. (Ex. Test case execution rate, defect arrival rate, …) Ensure that the new quality goals form a part of the test plan. The test group is trained in statistical testing and analysis methods.
  • 16.  2.1 Basic Definition  Errors: An error is a mistake, misconception, or misunderstanding on the part of a software developer.  Faults (Defects)  A fault (defect) is introduced into the software as the result of an error. It is an anomaly in the software that may cause it to behave incorrectly, and not according to its specification.  Failures  A failure is the inability of a software system or component to perform its require functions within specified performance requirements.
  • 17.  A test case in a practical sense is a test-related item which contains the following information: 1. A set of test inputs. These are data items received from an external source by the code under test. The external source can be hardware, software, or human. 2. Execution conditions. These are conditions required for running the test, for example, a certain state of a database, or a configuration of a hardware device. 3. Expected outputs. These are the specified results to be produced by the code under test.
  • 18. Test A test is a group of related test cases, or a group of related test cases and test Procedures. Test Oracle A test oracle is a document, or piece of software that allows testers to determine whether a test has been passed or failed. Test Bed A test bed is an environment that contains all the hardware and software needed to test a software component or a software system.
  • 19.  1. Quality relates to the degree to which a system, system component, or process meets specified requirements.  2. Quality relates to the degree to which a system, system component, or process meets customer or user needs, or expectations. Metric:  A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute.
  • 20.  The software quality assurance (SQA) group is a team of people with the necessary training and skills to ensure that all necessary actions are taken during the development process so hat the resulting software conforms to established technical requirements.  REVIEW: A review is a group meeting whose purpose is to evaluate a software artifact or a set of software artifacts.
  • 21.  Principle 1. Testing is the process of exercising a software component using a selected set of test cases, with the intent of (i) revealing defects, and (ii) evaluating quality.  Principle 2. When the test objective is to detect defects, then a good test case is one that has a high probability of revealing a yet undetected defect(s).
  • 22. Principle 3. Test results should be inspected meticulously.
Principle 4. A test case must contain the expected output or result.
Principle 5. Test cases should be developed for both valid and invalid input conditions.
Principle 6. The probability of the existence of additional defects in a software component is proportional to the number of defects already detected in that component.
Principle 7. Testing should be carried out by a group that is independent of the development group.
Principle 8. Tests must be repeatable and reusable.
Principle 9. Testing should be planned.
Principle 10. Testing activities should be integrated into the software life cycle.
Principle 11. Testing is a creative and challenging task.
  • 23.  The term defect is closely related to the terms error and failure in the context of the software development domain.  Sources of defects are Education, Communication, Oversight, Transcription, and Process.
  • 25. Defect classes and the defect repository:
1. Functional Description Defects.
2. Feature Defects. Features may be described as distinguishing characteristics of a software component or system.
3. Feature Interaction Defects.
4. Interface Description Defects.
  • 26.  Design defects occur when system components, interactions between system components, interactions between the components and outside software/hardware, or interactions with users are incorrectly designed.  This covers defects in the design of algorithms, control, logic, data elements, module interface descriptions, and external software/hardware/user interface descriptions.
  • 27.  Algorithmic and Processing Defects  Control, Logic, and Sequence Defects  Data Defects  Module Interface Description Defects  Functional Description Defects  External Interface Description Defects.
  • 28.  Coding defects are derived from errors in implementing the code. Coding defect classes are closely related to design defect classes, especially if pseudo code has been used for detailed design.  Algorithmic and Processing Defects  Control, Logic and Sequence Defects  Typographical Defects  Initialization Defects  Data-Flow Defects  Data Defects  Module Interface Defects  Code Documentation Defects  External Hardware, Software Interfaces Defects
  • 29.  Test Harness Defects  Test Case Design and Test Procedure Defects
  • 30.  The Smart Tester: Software components have defects, no matter how well our defect prevention activities are implemented. Developers cannot prevent/eliminate all defects during development.  Test Case Design Strategies A smart tester who wants to maximize use of time and resources knows that she needs to develop what we will call effective test cases for execution-based testing.
  • 31.  The black box testing approach uses only the inputs and outputs of the software as the basis for designing test cases.  Infinite time and resources are not available to exhaustively test all possible inputs.  The goal for the smart tester is to effectively use the resources available by developing a set of test cases that gives the maximum yield of defects for the time and effort spent.
  • 32.  Each software module or system has an input domain from which test input data is selected. If a tester randomly selects inputs from the domain, this is called random testing.  Issues in random testing:  Are the selected values adequate to show that the module meets its specification when the tests are run?  Should additional or fewer values be used to make the most effective use of resources?  Are there any input values, other than those selected, more likely to reveal defects?  Should any values outside the valid domain be used as test inputs?
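A small sketch of random testing is given below; the module under test, its input domain, and the number of test values are all assumed for illustration.

    # Sketch of random testing: inputs are drawn at random from an assumed valid
    # domain [-100, 100] and the module's output is checked against its specification.
    import random

    def abs_value(x):                   # hypothetical module under test
        return x if x >= 0 else -x

    random.seed(42)                     # fixed seed keeps the test repeatable
    for _ in range(3):                  # a handful of randomly selected inputs
        x = random.randint(-100, 100)
        assert abs_value(x) == abs(x), f"possible defect revealed for input {x}"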
  • 33.  If a tester is viewing the software-under-test as a black box with well-defined inputs and outputs, a good approach to selecting test inputs is to use a method called equivalence class partitioning.  Equivalence class partitioning results in a partitioning of the input domain of the software under test. The technique can also be used to partition the output domain, but this is not a common usage.
  • 34. 1. It eliminates the need for exhaustive testing, which is not feasible.
2. It guides a tester in selecting a subset of test inputs with a high probability of detecting a defect.
3. It allows a tester to cover a larger domain of inputs/outputs with a smaller subset selected from an equivalence class.
  • 35.  1. ‘‘If an input condition for the software-under-test is specified as a range of values, select one valid equivalence class that covers the allowed range and two invalid equivalence classes, one outside each end of the range.’’  2. ‘‘If an input condition for the software-under-test is specified as a number of values, then select one valid equivalence class that includes the allowed number of values and two invalid equivalence classes that are outside each end of the allowed number.’’
  • 36. 3. ‘‘If an input condition for the software-under-test is specified as a set of valid input values, then select one valid equivalence class that contains all the members of the set and one invalid equivalence class for any value outside the set.’’
4. ‘‘If an input condition for the software-under-test is specified as a “must be” condition, select one valid equivalence class to represent the “must be” condition and one invalid class that does not include the “must be” condition.’’
5. ‘‘If the input specification or any other information leads to the belief that an element in an equivalence class is not handled in an identical way by the software-under-test, then the class should be further partitioned into smaller equivalence classes.’’
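As a rough illustration of rule 1, the sketch below partitions an assumed input range of 1 to 100 into one valid and two invalid equivalence classes and selects one representative value from each; the module and values are hypothetical.

    # Hedged sketch of equivalence class partitioning (rule 1) for an input
    # specified as the range 1..100: one valid class and two invalid classes.
    def accept_quantity(q):             # hypothetical code under test
        return 1 <= q <= 100

    representatives = {
        "valid: 1 <= q <= 100": 50,     # inside the allowed range
        "invalid: q < 1": 0,            # below the lower end of the range
        "invalid: q > 100": 101,        # above the upper end of the range
    }

    for label, value in representatives.items():
        print(label, "->", accept_quantity(value))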
  • 37.  Equivalence class partitioning gives the tester a useful tool with which to develop black box based test cases for the software-under-test. The method requires that a tester has access to a specification of input/output behavior for the target software.  The test cases developed based on equivalence class partitioning can be strengthened by use of another technique called boundary value analysis.
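To show how boundary value analysis strengthens the partition-based cases, the sketch below reuses the same assumed 1..100 range and tests values on and immediately around each boundary; the function and range are illustrative.

    # Hedged sketch of boundary value analysis for the assumed range 1..100:
    # values on, just below, and just above each boundary are exercised.
    def accept_quantity(q):             # hypothetical code under test (as above)
        return 1 <= q <= 100

    for q in [0, 1, 2, 99, 100, 101]:
        expected = 1 <= q <= 100        # expected result taken from the specification
        assert accept_quantity(q) == expected, f"boundary defect suspected at q={q}"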
  • 38.  Cause-and-Effect Graphing: Cause-and-effect graphing is a technique that can be used to combine conditions and derive an effective set of test cases that may disclose inconsistencies in a specification.  State Transition Testing: State transition testing is useful for both procedural and object-oriented development. It is based on the concepts of states and finite-state machines, and allows the tester to view the developing software in terms of its states, transitions between states, and the inputs and events that trigger state changes.
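A small sketch of state transition testing follows; the two-state turnstile machine, its events, and the chosen transitions are assumptions made purely for illustration.

    # Hedged sketch of state transition testing: an assumed two-state machine
    # (LOCKED/UNLOCKED) and test cases that each exercise one transition.
    TRANSITIONS = {                     # (current state, event) -> next state
        ("LOCKED", "coin"): "UNLOCKED",
        ("LOCKED", "push"): "LOCKED",
        ("UNLOCKED", "push"): "LOCKED",
        ("UNLOCKED", "coin"): "UNLOCKED",
    }

    def next_state(state, event):
        return TRANSITIONS[(state, event)]

    # Each test case: (start state, triggering event, expected next state).
    for start, event, expected in [("LOCKED", "coin", "UNLOCKED"),
                                   ("UNLOCKED", "push", "LOCKED")]:
        assert next_state(start, event) == expected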
  • 39.  Designing test cases using the error guessing approach is based on the tester’s/developer’s past experience with code similar to the code-under-test, and their intuition as to where defects may lurk in the code.  Code similarities may extend to the structure of the code, its domain, the design approach used, its complexity, and other factors.
  • 40.  As software development evolves into an engineering discipline, the reuse of software components will play an increasingly important role.  Reuse of components means that developers need not reinvent the wheel; instead they can reuse an existing software component with the required functionality.
  • 41.  Black box methods have ties to the other maturity goals at TMM level 2.  The defect/problem fix report should contain the following information: • project identifier • the problem/defect identifier • testing phase that uncovered the defect • a classification for the defect found • a description of the repairs that were done • the identification number(s) of the associated tests • the date of repair • the name of the repairer.
  • 42. STRATEGIES AND METHODS FOR TEST CASE DESIGN - II
  • 43.  Testers need a framework for deciding which structural elements to select as the focus of testing, for choosing the appropriate test data, and for deciding when the testing effort is adequate to terminate the process with confidence that the software is working properly.  Such a framework exists in the form of test adequacy criteria.
  • 44. The application scope of adequacy criteria also includes: (i) helping testers to select properties of a program to focus on during test; (ii) helping testers to select a test data set for a program based on the selected properties; (iii) supporting testers with the development of quantitative objectives for testing; (iv) indicating to testers whether or not testing can be stopped for that program. A program is said to be adequately tested with respect to a given criterion if all of the target structural elements have been exercised according to the selected criterion.
  • 45.  The application of coverage analysis is typically associated with the use of control and data flow models to represent program structural elements and data.  The logic elements most commonly considered for coverage are based on the flow of control in a unit of code.
  • 46.  Logic-based white box test design and the use of test data adequacy/coverage concepts provide two major payoffs for the tester: (i) quantitative coverage goals can be proposed, and (ii) commercial tool support is readily available to facilitate the tester’s work.  The tester must decide, based on the type of code, reliability requirements, and resources available, which criterion to select, since the stronger the criterion selected, the more resources are usually required to satisfy it.
  • 47.  The cyclomatic complexity attribute is very useful to a tester.  The complexity value is usually calculated from the control flow graph (G) by the formula  V(G) = E - N + 2  The value E is the number of edges in the control flow graph and N is the number of nodes.
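A worked sketch of the formula is shown below for an assumed four-node control flow graph modelling a single if/else decision; the graph itself is an illustrative assumption.

    # Hedged sketch: computing V(G) = E - N + 2 for an assumed control flow graph.
    # Nodes 1..4 model an if/else unit: 1->2, 1->3, 2->4, 3->4.
    edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
    nodes = {n for edge in edges for n in edge}

    E, N = len(edges), len(nodes)
    complexity = E - N + 2              # 4 - 4 + 2 = 2 independent paths
    print("V(G) =", complexity)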
  • 48.  In addition to the use of software logic and control structures to guide test data generation and to evaluate test completeness, there are alternative methods that focus on other characteristics of the code.  To satisfy the all def-use criterion the tester must identify and classify occurrences of all the variables in the software under test.
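The sketch below suggests, for an assumed four-line fragment, how variable occurrences might be classified into definitions and uses and which def-use pairs a test set would need to cover; the fragment and classifications are illustrative, not the output of any real tool.

    # Hedged sketch of def-use identification for an assumed fragment:
    #   line 1: x = read()        -> def of x
    #   line 2: if x > 0:         -> predicate use (p-use) of x
    #   line 3:     y = x * 2     -> def of y, computation use (c-use) of x
    #   line 4: print(y)          -> c-use of y
    def_use_pairs = [
        ("x", "def at line 1", "p-use at line 2"),
        ("x", "def at line 1", "c-use at line 3"),
        ("y", "def at line 3", "c-use at line 4"),
    ]
    # To satisfy the all def-use criterion, at least one test must exercise each pair.
    for var, definition, use in def_use_pairs:
        print(f"cover pair for {var}: {definition} -> {use}")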
  • 49.  Loops are among the most frequently used control structures. Experienced software engineers realize that many defects are associated with loop constructs.  Loop testing strategies focus on detecting common defects associated with these structures.
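A simple loop testing sketch follows; the summing loop and the choice of iteration counts (zero, one, typical, many) are assumptions used only to illustrate the strategy.

    # Hedged sketch of loop testing: an assumed summing loop is exercised with
    # zero, one, a typical number, and a large number of iterations.
    def total(values):                  # hypothetical loop under test
        s = 0
        for v in values:
            s += v
        return s

    loop_cases = {
        "zero iterations": ([], 0),
        "one iteration": ([5], 5),
        "typical case": ([1, 2, 3], 6),
        "many iterations": (list(range(1000)), sum(range(1000))),
    }
    for name, (data, expected) in loop_cases.items():
        assert total(data) == expected, f"loop defect suspected in case: {name}"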
  • 50.  Mutation testing is another approach to test data generation that requires knowledge of code structure, but it is classified as a fault-based testing approach.  It considers the possible faults that could occur in a software component as the basis for test data generation and evaluation of testing effectiveness.
  • 51. Mutation testing makes two major assumptions: 1. The competent programmer hypothesis. This states that a competent programmer writes programs that are nearly correct. Therefore assume that there are no major construction errors in the program; the code is correct except for a simple error(s). 2. The coupling effect. This effect relates to questions a tester might have about how well mutation testing can detect complex errors since the changes made to the code are very simple.
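The sketch below gives a rough feel for mutation testing; the unit, the single-operator mutant, and the test values are assumed for illustration only.

    # Hedged sketch of mutation testing: the original unit and one mutant created
    # by a simple change (>= replaced by >). A test input that makes their outputs
    # differ is said to "kill" the mutant.
    def is_adult(age):                  # original (assumed) unit
        return age >= 18

    def is_adult_mutant(age):           # mutant: relational operator changed
        return age > 18

    tests = [17, 18, 19]                # the boundary input 18 kills this mutant
    killed = any(is_adult(a) != is_adult_mutant(a) for a in tests)
    print("mutant killed:", killed)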
  • 52.  Testers are often faced with the decision of which criterion to apply to a given item under test given the nature of the item and the constraints of the test environment (time, costs, resources).  Testers can use the axioms to ◦ Recognize both strong and weak adequacy criteria; a tester may decide to use a weak criterion, but should be aware of its weakness with respect to the properties described by the axioms; ◦ Focus attention on the properties that an effective test data adequacy criterion should exhibit; ◦ Select an appropriate criterion for the item under test; ◦ Stimulate thought for the development of new criteria; the axioms are the framework with which to evaluate these new criteria.
  • 53.  The axioms are based on the following set of assumptions: (i) programs are written in a structured programming language; (ii) programs are SESE (single entry/single exit); (iii) all input statements appear at the beginning of the program; (iv) all output statements appear at the end of the program.
  • 54.  Applicability Property  Non-exhaustive applicability Property  Monotonicity Property  Inadequate Empty Set  Antiextensionality Property  General Multiple Change Property  Antidecomposition Property  Anticomposition Property  Complexity Property  Statement Coverage Property
  • 55. Use of different testing strategies and methods has the following benefits:
1. The tester is encouraged to view the developing software from several different views to generate the test data.
2. The tester must interact with other development personnel such as requirements analysts and designers to review their representations of the software.
3. The tester is better equipped to evaluate the quality of the testing effort (there are more tools and approaches available from the combination of strategies).
4. The tester is better able to contribute to organizational test process improvement efforts based on his/her knowledge of a variety of testing strategies.
  • 56.  Levels of testing include the different methodologies that can be used while conducting Software Testing.  They are: ◦ Unit Test ◦ Integration Test ◦ System Test ◦ Acceptance Test
  • 58.  This type of testing is performed by the developers before the setup is handed over to the testing team to formally execute the test cases.  Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team.  The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality.
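A minimal developer-written unit test is sketched below using Python's standard unittest module; the divide() function and its expected behaviour are assumptions made for illustration.

    # Hedged sketch of a unit test: the unit is exercised in isolation with both
    # valid and invalid inputs (Principle 5).
    import unittest

    def divide(a, b):                   # hypothetical unit under test
        if b == 0:
            raise ValueError("division by zero")
        return a / b

    class DivideTests(unittest.TestCase):
        def test_valid_input(self):
            self.assertEqual(divide(10, 2), 5)

        def test_invalid_input(self):
            with self.assertRaises(ValueError):
                divide(10, 0)

    if __name__ == "__main__":
        unittest.main()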
  • 59.  Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in every software application. The same is the case with unit testing.  There is a limit to the number of scenarios and test data that the developer can use to verify the source code. So after the developer has exhausted all options, there is no choice but to stop unit testing and merge the code segment with other units.
  • 60.  The testing of combined parts of an application to determine if they function correctly together is Integration testing. There are two methods of doing Integration Testing: Bottom-up integration testing and Top-down integration testing.  The process concludes with multiple tests of the complete application, preferably in scenarios designed to mimic those it will encounter in customers' computers, systems, and networks.
  • 61. Integration testing methods:
1. Bottom-up integration: Testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.
2. Top-down integration: The highest-level modules are tested first, and progressively lower-level modules are tested after that.
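The sketch below suggests how a single top-down integration step might look, with a stub standing in for a lower-level module that has not yet been integrated; all module names and values are illustrative.

    # Hedged sketch of top-down integration: the high-level checkout() module is
    # tested first, driving a stub in place of the real (not yet integrated) tax module.
    def tax_service_stub(amount):       # stub for the lower-level module
        return 0.0                      # simplified, fixed response

    def checkout(amount, tax_service):  # high-level module under test
        return amount + tax_service(amount)

    assert checkout(100.0, tax_service_stub) == 100.0
    # In a later step the real tax module replaces the stub and the test is rerun;
    # bottom-up integration would instead test the tax module first, using a driver.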
  • 62.  This is the next level of testing; it tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets Quality Standards. This type of testing is performed by a specialized testing team.  System testing is important because of the following reasons:  System Testing is the first step in the Software Development Life Cycle where the application is tested as a whole.  The application is tested thoroughly to verify that it meets the functional and technical specifications.  The application is tested in an environment which is very close to the production environment where the application will be deployed.  System Testing enables us to test, verify, and validate both the business requirements as well as the Application Architecture.
  • 63.  Functional testing  Performance testing  Stress testing  Configuration testing  Security testing  Recovery testing
  • 64.  This is arguably the most important type of testing, as it is conducted by the Quality Assurance Team who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.  More ideas will be shared about the application and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are not only intended to point out simple spelling mistakes, cosmetic errors, or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors in the application.  By performing acceptance tests on an application, the testing team will deduce how the application will perform in production. There are also legal and contractual requirements for acceptance of the system.
  • 65.  This test is the first stage of testing and will be performed amongst the teams (developer and QA teams). Unit testing, integration testing and system testing when combined are known as alpha testing. During this phase, the following will be tested in the application:  Spelling Mistakes  Broken Links  Cloudy Directions  The Application will be tested on machines with the lowest specification to test loading times and any latency problems.
  • 66. Beta testing is performed after alpha testing has been successfully completed. In beta testing, a sample of the intended audience tests the application. Beta testing is also known as pre-release testing. In this phase the audience will be testing the following:  Users will install, run the application and send their feedback to the project team.  Typographical errors, confusing application flow, and even crashes.  Getting the feedback, the project team can fix the problems before releasing the software to the actual users.  The more issues you fix that solve real user problems, the higher the quality of your application will be.  Having a higher-quality application when you release it to the general public will increase customer satisfaction.
  • 67.  The development, documentation, and institutionalization of goals and related policies is important to an organization.  The goals/policies may be business-related, technical, or political in nature.  They are the basis for decision making; therefore setting goals and policies requires the participation and support of upper management.
  • 68.  1.Business goal: to increase market share 10% in the next 2 years in the area of financial software.  2. Technical goal: to reduce defects by 2% per year over the next 3 years.  3. Business/technical goal: to reduce hotline calls by 5% over the next 2 years.  4. Political goal: to increase the number of women and minorities in high management positions by 15% in the next 3 years.
  • 69.  A plan is a document that provides a framework or approach for achieving a set of goals.  Test plans for software projects are very complex and detailed documents.  The planner usually includes the following essential high-level items.  Overall test objectives.  What to test (scope of the tests).  Who will test  How to test  When to test  When to stop testing
  • 72.  Test Item Transmittal Report is not a component of the test plan, but is necessary to locate and track the items that are submitted for test. Each Test Item Transmittal Report has a unique identifier.  It should contain the following information for each item that is tracked.  Version/revision number of the item;  Location of the item;  Persons responsible for the item (e.g., the developer);  References to item documentation and the test plan it is related to;  Status of the item;  Approvals—space for signatures of staff who approve the transmittal.
  • 73.  The test plan and its attachments are test-related documents that are prepared prior to test execution. There are additional documents related to testing that are prepared during and after execution of the tests.  The test log should be prepared by the person executing the tests. It is a diary of the events that take place during the test.  Test Log Identifier Each test log should have a unique identifier.
  • 75.  Working with management to develop testing and debugging policy and goals.  Participating in the teams that oversee policy compliance and change management.  Familiarizing themselves with the approved set of testing/debugging goals and policies, keeping up-to-date with revisions, and making suggestions for changes when appropriate.  When developing test plans, setting testing goals for each project at each level of test that reflect organizational testing goals and policies.  Carrying out testing activities that are in compliance with organizational policies.