ICS 121: QA Plans
Overview
- Who, what, and why?
- QA plan contents
- Aspects to address
- QA Activities
- Reviews
- Testing approaches
- Testing scopes
- Testing specific quality goals
- QA strategy
Who, What, and Why?
- Who: Usually, the lead QA engineer authors the test plan, working
with the project manager, the development team, and the QA
team.
- What: A "QA Plan" describes how the project will assure quality.
- Sets goals
- Outlines a strategy
- Identifies actions to be done
- Identifies resources needed
- Schedules activities, allocates budget
- It is basically a subset of the overall project plan
- The test suite(s) and results are separate from the plan
- Why: coordinate development activities to achieve quality.
- There are many possible quality goals and ways to try to achieve them.
- A written plan is needed to make sure everyone is
working along the same path.
QA Plan Contents
- Identifying information: Project, release, etc.
- Introduction: Background, summary
- Quality goals
- List and describe quality goals
- Set measurable objectives: e.g., 100% statement coverage in
key classes (see the coverage sketch after this list)
- Reference materials: Links to requirements, design,
standards.
- Resources: People, machines, and tools to be used in testing.
- Strategy:
- Choose and describe QA activities to be done
- Try to assure that quality goals are met
- Matrix of goals vs. activities
- Risks: What quality goals will not be assured
- Plan of action:
- What will be done, by whom, and when
- Links to checklists, test suites, etc.
- ReadySET QA plan template
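As an illustration of making a coverage objective enforceable, here is a minimal sketch using pytest and coverage.py; the tests/ path and the billing package are hypothetical stand-ins for the key classes a real plan would name.

```python
"""Minimal sketch: turning a coverage objective into an enforceable check.

Assumes pytest and coverage.py are installed. The tests/ path and the
billing package are hypothetical stand-ins for project-specific names.
"""
import subprocess
import sys

# Run the test suite under coverage measurement.
subprocess.run(["coverage", "run", "-m", "pytest", "tests/"], check=True)

# Report statement coverage for the key package only; exit nonzero if it
# falls below the objective set in the QA plan.
result = subprocess.run(
    ["coverage", "report", "--include=billing/*", "--fail-under=100"]
)
sys.exit(result.returncode)
```

Wiring the objective to an exit code lets a nightly build flag the moment coverage slips below the plan's target.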
Aspects to address
- Are you doing everything needed to build in quality?
- Where are defects creeping in?
- What types of analysis will you do?
- What types of review will you do?
- What types of testing will you do?
- Are your requirements and design good enough to even worry about testing?
- What testing coverage will you need? Not all components need
the same level of testing.
- What testing tools/methods will you use?
- What resources (people, capital, schedule) will you use?
- Is your system being built in a way that supports that testing
method/tool? (testability)
- How will you defend against regressions?
QA Activities > Reviews
- Basically, perform reviews that focus on specific artifacts or quality goals:
- Requirements review
- Prototype review
- Design review
- Implementation review
- Documentation review
- UI review
- Security review
- Performance/Scalability review
- Daily or automated reviews:
- All changes to a release branch must be peer-reviewed
- All developer work must compile before check-in
- All developers must subscribe to the cvs@PROJECT mailing list
- Nightly run of an automated style checker (a minimal sketch follows)
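As a sketch of what a nightly automated review can look like, the script below runs a style checker and files a dated report. It assumes pycodestyle is installed; the src/ path is a hypothetical stand-in for the project's source tree.

```python
"""Minimal sketch of a nightly automated style check (run from cron or CI).

Assumes pycodestyle is installed; src/ is a hypothetical source path.
"""
import subprocess
from datetime import date

# Run the style checker over the source tree and capture its findings.
result = subprocess.run(
    ["pycodestyle", "src/"], capture_output=True, text=True
)

# Save a dated report so the team can track the trend over time.
report_name = f"style-report-{date.today()}.txt"
with open(report_name, "w") as f:
    f.write(result.stdout)

print(f"{len(result.stdout.splitlines())} findings written to {report_name}")
```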
QA Activities > Testing Approaches
- Exploratory testing
- Just get the product and poke around.
Useful when the QA team needs to gain familiarity before forming a
detailed plan, and for verifying that a product is ready for testing
by the QA team.
- Ad hoc testing
- Manually exercise the product just to see if you
can break it. Ideas from test case design can be applied "on the fly."
- Structured testing
- More systematic manual testing. Testers
work through a detailed manual test suite.
- Automated testing
- Record-and-playback: test through the GUI or web interface with a simulated user
- Programmatic testing: write test code to run against product code (see the sketch after this list)
- Black box / specification-based testing
- Test only the visible
UI or specified API; do not make use of any knowledge of the
implementation. Try to cover every specified requirement.
- White box / implementation-based testing
- Design the test suite
using knowledge of the product implementation. Try to cover every
part of the implementation.
- Regression testing
- Verify that solved problems remain solved,
and that fixes do not introduce unintended changes.
Usually automated.
- Smoke testing / Quick tests / Nightly tests
- Automated test of
selected features that can be run often. E.g., run by developers
before they do a commit.
- User acceptance testing
- Final, high-level test of the entire
system to see if it is acceptable to users. Think in open-ended,
real-world terms.
- Beta testing
- Usually ad hoc testing by a limited group of outside people.
Beta test programs must be actively managed to get results back from testers.
- Early access
- Actual usage by a broader group of outside
people
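As a concrete example of the programmatic approach, here is a minimal sketch using Python's built-in unittest; ShoppingCart is a hypothetical product class, defined inline so the sketch is self-contained.

```python
"""Minimal sketch of programmatic (automated) testing with unittest.

ShoppingCart is a hypothetical product class, defined inline here so the
sketch is self-contained; in practice it would be imported from the product.
"""
import unittest


class ShoppingCart:
    """Hypothetical product code under test."""

    def __init__(self):
        self.items = {}

    def add(self, sku, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_items(self):
        return sum(self.items.values())


class ShoppingCartTest(unittest.TestCase):
    def test_add_accumulates_quantity(self):
        cart = ShoppingCart()
        cart.add("SKU-1", 2)
        cart.add("SKU-1", 3)
        self.assertEqual(cart.total_items(), 5)

    def test_rejects_nonpositive_quantity(self):
        cart = ShoppingCart()
        with self.assertRaises(ValueError):
            cart.add("SKU-1", 0)


if __name__ == "__main__":
    unittest.main()
```

Because such a suite runs unattended and reports pass/fail, the same tests can double as the regression and smoke suites described above.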
QA Activities > Testing scopes
- Testing in small sections is useful because the requirements are
more narrowly scoped and because any observed failures must be due to
defects in that section.
- Unit testing
- Test one function, method, or class at a time.
Try to isolate that element from the rest of the product. Each
test is very simple, and assigning blame for a failure is obvious.
Can be started very early in development. (A sketch of unit
isolation follows this list.)
- Integration testing
- Test specific combinations of components.
Other components may need to be replaced with stubs or drivers.
Test cases focus on interactions between components. Can be started
during development.
- System testing
- Systematic testing of the entire system. Test
cases get larger and more complex. Harder to assign blame. Can
only be started after entire system is implemented.
- Staging
- System testing in something very close to the system's
intended operating environment. Usually done with actual user
data. Can only be done just before deployment.
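The sketch below illustrates the isolation idea from unit testing: the unit's collaborator is replaced with a stub, so a failing test can only blame the unit itself. Invoice and StubPriceService are hypothetical names.

```python
"""Minimal sketch of isolating a unit from its collaborators with a stub.

Invoice and StubPriceService are hypothetical names; the point is that
the test exercises Invoice alone, so a failure can only blame Invoice.
"""


class Invoice:
    """Unit under test: computes a total using a price-lookup collaborator."""

    def __init__(self, price_service):
        self.price_service = price_service

    def total(self, skus):
        return sum(self.price_service.price(sku) for sku in skus)


class StubPriceService:
    """Stub collaborator: returns canned prices, no database or network."""

    def price(self, sku):
        return {"A": 10, "B": 5}[sku]


def test_total_sums_item_prices():
    invoice = Invoice(StubPriceService())
    assert invoice.total(["A", "B", "A"]) == 25
```

The same stub technique reappears at integration scope, where real components replace the stubs one combination at a time.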
QA Activities > Testing specific quality goals
- Remember that the goal of QA is to assure that we meet our
quality goals. A good way to do that is to aim each activity at a
specific quality goal.
- Functional testing: Verify that the system produces the correct
result. Can be done with any strategy, at any scope.
- Correctness: gives the right result for valid input
- Robustness: gracefully handles invalid input
- Accuracy: correct results are mathematically precise
- Compatibility: file formats, network protocols, browsers,
operating system versions, etc.
- Performance and scalability testing: Verify that the system will
perform well under heavy usage. Usually done with automated,
system-level tests. (A load-test sketch follows this list.)
- Load testing: time to complete operations under heavy usage load
- Stress testing: gracefully handles excessive usage load
- Volume testing: performance with very large datasets
- Longevity testing: servers should continue to satisfy long sequences of requests
- Usability testing: Verify that the system will be usable by
humans.
- Understandability: users can understand how to use the system
- Learnability: the UI gives clues to explain unfamiliar items
- Efficiency of use: users can get their work done without too many steps
- UI safety: common human errors have limited negative impact
- Security testing
- Physical, network, operating system, application
- Encrypted communications
- Authentication: you are who you say you are
- Authorization: limited access, limited actions
- Malicious inputs
- Denial of service
- Operability testing: Verify that use cases for the system administrator work.
- Install/uninstall
- Upgrade software, migrate data to new formats
- Recovery for system crashes or other errors
- Auditability: system keeps records of events for later review
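As promised above, a minimal load-test sketch: it drives a hypothetical operation concurrently and asserts a measurable timing objective. handle_request, the request count, and the time limit are illustrative stand-ins; a real load test would drive the deployed system.

```python
"""Minimal sketch of a load test with a measurable objective.

handle_request is a hypothetical stand-in for a product operation; the
request count and time limit are illustrative numbers a plan would set.
"""
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request(payload):
    """Hypothetical operation under test; sleep simulates I/O latency."""
    time.sleep(0.01)
    return len(payload)


def test_completes_under_load():
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(handle_request, ["req"] * 500))
    elapsed = time.perf_counter() - start

    assert len(results) == 500   # every request completed
    assert elapsed < 5.0         # objective: 500 requests within 5 seconds
```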
QA strategy
- Don't just rely on system testing:
- It comes too late and allows too much risk to accumulate
- Without test results, it is difficult to assess development progress
- It is difficult to locate defects in the overall product
- It is difficult to understand individual defects when
multiple defects interact
- It is difficult to incrementally repair a product with
a large number of defects
- Build in quality. Don't expect to just test it in.
- Use a mix of QA activities where each gives the best returns
- Mix reviews, analysis, assertions, testing
- Assertions for data structures (see the sketch at the end of this section)
- Unit tests for business objects
- Reviews for difficult, central, or new code
- Set higher quality assurance for key components
- Components where defects could be very harmful
- Components that are central to transaction processing
- Focus on quality throughout the development cycle
- Think of QA early and often
- Advocate quality and be able to clearly justify continuous QA effort
- Make sure the team knows about QA plans and measurements
- Understand industry averages
- About 50 defects per 1000 lines of code
- Two or three release candidates for each release
- One to three maintenance releases after a minor release, e.g.,
v1.1.3 is OK, but v1.1.8 is not normal
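As a sketch of "assertions for data structures" (referenced above), the class below re-checks its ordering invariant after every mutation; SortedList is an illustrative structure, not a library API.

```python
"""Minimal sketch of assertions guarding a data-structure invariant.

SortedList is an illustrative structure; the pattern is to re-check the
invariant after each mutation so defects surface close to their cause.
"""
import bisect


class SortedList:
    def __init__(self):
        self.items = []

    def insert(self, value):
        bisect.insort(self.items, value)
        self._check_invariant()

    def _check_invariant(self):
        # Invariant: items remain in nondecreasing order.
        assert all(a <= b for a, b in zip(self.items, self.items[1:])), \
            "SortedList invariant violated"
```

Python drops assert statements under the -O flag, so the checks add no cost in production builds.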