2. Expectations
How can quality be achieved? - Done
How do we maintain quality? - Done
Quality measurements - Overview provided
How can reliability be achieved? - Done
3. Software Quality
In simple terms, “Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.”
More formally, software quality measures how well the software is designed (quality of design) and how well the software conforms to that design (i.e., implementation - quality of conformance / quality assurance).
4. So, we’d have quality issues …
1. if the customer’s expectations are not met by the end result
2. if there is a lack of conformance to requirements
3. if the development criteria towards specified standards are not met
4. if implicit requirements are not captured and addressed
(e.g.:
- a change to the physician name in configuration should be reflected in the case selection & acquisition screens
- a remote service connection has to be secure
- a long text message or caption should be truncated with “…” and show a tooltip on mouse hover
- pressing the Escape key or the X button should close a pop-up and cancel the changes made)
5. Hence, both quality of design and quality assurance need to be ensured throughout the SDLC, across all phases.
The V-Model product development process (and agile) ensures better quality assurance by preparing for testing early in each stage of the SDLC.
6. The V-Model
The V-Model involves the testers early in the project lifecycle and thus provides avenues to correct before critical decisions are made.
Phase | Measures
Requirements | SSRS and URS cover quality attributes & implicit requirements (Performance, Security, Safety, Regulatory); Test Strategy & Plan; adoption of specific test design techniques based on a risk matrix for each unit; Critical to Quality use cases; requirement workshops; reviews or walkthroughs, checklists; traceability matrix for requirement conformance; prototypes
Design | Design guidelines, standards; design workshops; DAR; reviews & walkthroughs, checklists
Implementation | Unit testing, mocks & stubs; continuous integration; reviews & walkthroughs, checklists
Testing | Functional testing; integration testing; smoke testing
8. Reliability
Software reliability is defined as “the probability of failure-free software operation for a specified period of time in a specified environment”.
Software reliability is based on three primary concepts: fault, error, and failure. (A bug in a program is a fault. The possible incorrect values caused by this bug are an error. A possible crash of the operating system is a failure.)
A fault is the result of a mistake made in the development of the system. Faults are dormant, but they can become active due to some revealing mechanism. (e.g., a missing check for a free-disk-space threshold before acquisition starts; other examples: a null reference or an uninitialized variable leading to errors)
An error is the manifestation of what is wrong in the running system. Errors often lead to new errors (propagation), which may eventually lead to system failure. (e.g., a fault leading to full disk-space usage without a warning or validation)
An error becomes a failure when it is not corrected or masked, i.e., when the error becomes observable by the system’s user (that is, a failure is observable by the end user, while an error is not). (e.g., full disk space leads to acquisition failure, data loss, or a system crash)
Diagram: a person (developer) makes zero to many mistakes; a mistake leads to zero or many faults (each fault can be attributed to one or many mistakes); a fault leads to zero or many errors; an error leads to zero or many failures; failures lead to customer complaints and field calls.
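The disk-space example above can be sketched in code to show where the fault, error, and failure sit; the function names and the 5% threshold are illustrative assumptions:

```python
def check_free_space(free_bytes: int, total_bytes: int,
                     threshold: float = 0.05) -> bool:
    """Reveal the dormant fault before it becomes a failure: return True
    only if enough disk space is free to start an acquisition.
    The 5% threshold is an illustrative assumption."""
    return free_bytes / total_bytes >= threshold

def start_acquisition(free_bytes: int, total_bytes: int) -> str:
    # Omitting this check would be the fault: a full disk would produce
    # incorrect internal state (the error), which propagates until the
    # user observes an aborted acquisition or data loss (the failure).
    if not check_free_space(free_bytes, total_bytes):
        # Error corrected/masked: surfaced as a validation message
        # instead of becoming an observable failure.
        raise RuntimeError("Insufficient disk space for acquisition")
    return "acquisition started"
```

With the check in place the error is masked at the validation boundary, so the chain mistake → fault → error → failure is broken before it reaches the user.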
10. Reliability – Key Areas
Area | Description | Applicable Phases
Fault Prevention | focuses on avoidance of faults in SW products | Requirements, Design
Fault Detection | focuses on revealing reliability problems | Requirements, Design, Implementation, Testing
Fault Tolerance | ensures that the system keeps working properly in case of faults | Design, Implementation
Fault Forecasting | focuses on prediction of the future system reliability | Deployment, Support & Service
11. Reliability in practice…
Phase | Measures
Requirements | Reliability requirements (MTBF, MTBC, MTBE, etc.); safety & risk management requirements; operational profile; Critical to Quality use cases (which functionality is critical); requirement workshops; reviews, walkthroughs, checklists
Design | Design guidelines, standards; architecture/design for reliability (principles, practices, and patterns); emphasis on threading and execution architecture; whiteboard designs; design workshops; FMEA; graceful degradation; DAR; measuring and testing for reliability (measuring: MTBF, MTBC, …, tools; testing: load/stress/capacity testing, reliability growth testing, tools); reviews, walkthroughs
Implementation | Follow best practices & coding standards; error handling; TICS; static analysis, code coverage, memory profiling; reviews, reports attached to CQ activity; improved logging; POST, BIST
Testing | Performance testing; smoke testing based on CTQs & operational profiles; regression testing
Sidebar flow: Is the reliability requirement fulfilled? If NO, iterate; if YES, release the product.
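The graceful-degradation and improved-logging measures listed for the design and implementation phases can be sketched together; the overlay feature, function names, and failure mode below are illustrative assumptions:

```python
import logging

logger = logging.getLogger("acquisition")

def render_overlay(case_id: str) -> dict:
    """Non-critical feature used to illustrate graceful degradation;
    here it always fails, standing in for an unreachable service."""
    raise ConnectionError("overlay service unreachable")

def show_case(case_id: str) -> dict:
    view = {"case": case_id, "overlay": None, "degraded": False}
    try:
        view["overlay"] = render_overlay(case_id)
    except ConnectionError as exc:
        # Graceful degradation: the non-critical overlay is dropped and
        # the event is logged for later reliability analysis, but the
        # critical case-viewing workflow keeps working.
        logger.warning("overlay unavailable for %s: %s", case_id, exc)
        view["degraded"] = True
    return view
```

The design choice is that only non-critical functionality degrades; a failure in the critical path would instead follow the error-handling and controlled-fail measures.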
12. Reliability in practice…
Reliability Parameter | Targets & Measurement Criteria
Call Rate | Target: < 1.5 calls per system per year. Actions: implement I/O enhancements. Recommendations: start a study to analyze how to decrease the call rate to < 1.0.
Failure Rate (gives an indication of the number of non-recoverable failures in the field) | Target: the number of failures reported should be less than 10 per site; have explicit robustness designed into the product. Actions: execute FMEAs.
MTBF | Mean time between failures should be 200 days or 1000 studies.
MTTR | Mean time to repair should not exceed 2 days.
Usage of PII (Private Interfaces) | The number of private interfaces used should ideally be 0, with no increase in usage of new private interfaces.
TICS | Target: 0 violations for levels 1..6; no increase of level 7..10 violations. Actions: monitor and act.
Code Coverage (Method, Statement) | > 80% statement coverage & 100% method coverage.
Code reviews | 100%
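The MTBF and MTTR targets above can be checked mechanically from field data. A minimal sketch; the sample uptimes and repair durations are fabricated for illustration and labeled as such:

```python
def mtbf_days(uptime_days_between_failures: list[float]) -> float:
    """Mean time between failures, in days of operation."""
    return sum(uptime_days_between_failures) / len(uptime_days_between_failures)

def mttr_days(repair_days: list[float]) -> float:
    """Mean time to repair, in days."""
    return sum(repair_days) / len(repair_days)

# Illustrative field data (assumed, not from the slides):
uptimes = [250.0, 180.0, 220.0]   # days of operation between failures
repairs = [1.0, 2.0, 1.5]         # days to restore service

meets_mtbf = mtbf_days(uptimes) >= 200   # target: MTBF >= 200 days
meets_mttr = mttr_days(repairs) <= 2     # target: MTTR <= 2 days
```

In practice these figures would be extracted from call-rate and field-service records rather than hard-coded lists.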
13. Reliability – Design FMEA
Identify critical functionality and classify it for effective design towards handling faults.
Flow: system high-level specification → classify functionality according to importance/criticality → identify faults (e.g., FMEA) → classify faults (severity, frequency) into three classes:
- Class I: faults to be prevented by system architecture/design (fault-prevention design). The system/service does not experience faults of class I.
- Class II: faults to be handled by the system design (fault-tolerant design). The system/service keeps operating when it encounters a fault of class II.
- Class III: faults not to be handled by the system design (controlled-fail design). The system/service fails in a controlled way when it encounters a fault of class III.
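The classification step above can be sketched as a severity/frequency lookup; the numeric boundaries below are illustrative assumptions, since a real FMEA would use project-specific risk-priority thresholds:

```python
def classify_fault(severity: int, frequency: int) -> str:
    """Map a fault's severity and frequency (each rated 1-10, as on a
    typical FMEA scale) to a handling class. Boundaries are assumed.

    Class I   -> prevent by architecture/design (fault never occurs)
    Class II  -> tolerate by design (system keeps operating)
    Class III -> controlled fail (system fails safely)
    """
    risk = severity * frequency  # simplified risk priority number
    if risk >= 50:
        return "Class I"    # too risky to ever encounter: prevent
    if risk >= 15:
        return "Class II"   # handle and keep operating
    return "Class III"      # rare and mild enough to fail controlled
```

The output then drives the design measure chosen per fault: prevention, tolerance, or controlled failure.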
14. Reliability testing
Step | Possible inputs/tools | Hints
1. Derive test cases from operational profiles | Application specialist; logging from production; use cases | Operational profiles may differ per deployment. Test cases derived from operational profiles differ from stress testing.
2. Run tests | Manual testing on the system; mocks, stubs, drivers; simulators; QTP; Test Automation Framework | Test cases should be as repeatable as possible and executed under the same conditions.
3. Gather data | System logging; test logging | In case of automation, keep the time-compression factor in mind. The failure definition should be explicit for the tested system.
4. Plot data and extract failure intensity and failure rate | |
5. Predict reliability at the end of the current project phase | | Be aware that the predicted reliability will still be very vulnerable to variances.
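Step 4 above can be sketched as counting failures per window of test time; the window size and the sample failure log are assumptions for illustration:

```python
def failure_intensity(failure_times_h: list[float],
                      total_hours: float,
                      window_h: float = 10.0) -> list[float]:
    """Failures per hour in each consecutive window of test time.
    A decreasing series suggests reliability growth."""
    n_windows = int(total_hours // window_h)
    counts = [0] * n_windows
    for t in failure_times_h:
        idx = min(int(t // window_h), n_windows - 1)
        counts[idx] += 1
    return [c / window_h for c in counts]

# Assumed failure log (hours into the test run, from test logging):
log = [1.2, 3.5, 7.8, 9.9, 14.0, 18.2, 27.5]
intensity = failure_intensity(log, total_hours=30.0)  # 3 windows of 10 h
```

Plotting the resulting series over test time is the usual input to the reliability prediction in step 5.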
15. Quality vs. Reliability
Quality is a snapshot at the start of life (time zero).
All requirements are implemented as per the design.
All user expectations are met.
Time-zero defects are mistakes that escaped the final test.
“Quality is everything until put into operation (0 hours).”
Reliability is a motion picture of the day-by-day operation.
The additional defects that appear over time are “reliability defects” or reliability fallout.
“Reliability is everything happening after 0 hours.”
Editor's Notes
Why the picture?
Requirements vary; specifically, NFRs are never captured. Take the mobile example around the board. Ask the audience what quality & reliability are.