ICS 125: Issue Tracking
- Why use issue trackers?
- What are trackers?
- Defect terminology
- Issue contents
- Common operations
- Issue states
- Management and metrics
- Management example
Issue tracking > Why?
- Software always has defects, far too many to keep track of
without a special tool.
- Software testing and repair is a team activity. The tracking
must be done with a collaborative tool.
- Each issue contains significant information
- There can be debate about what the exact problem really is,
and how it should be addressed.
- Evidence (e.g., error messages, debugging output, screen
shots) must be gathered to document the problem.
- Later, this information must be available to explain why
changes were made.
- Software engineering means multiple people working on the
product at the same time. There needs to be a clear indication of
who is responsible for each unit of work.
- Defects are not the only work items: enhancement requests and
other development tasks must also be tracked.
- Managers and developers need to understand everything that must
still be done to reach a release.
- Changes to a release branch must be motivated by documented issues
Issue tracking > What is it?
- An issue tracker is like a client-server database, plus collaborative features
- Central database server with all information about all issues
- Users use clients to access the database. Can be web-based or desktop applications.
- Users become stakeholders in a particular issue and are notified when changes occur
- It is where everyone enters issues and tasks
- It is where managers assess project status and set priorities on tasks
- It is where developers look to see what they should work on each day
Issue tracking > Defect terminology
- Error: n. The mistake in the developer's mind: the
mistaken idea they had when they did the design or code. Often
caused by miscommunication or bad assumptions.
- Defect: n. The result of the developer's error embodied in
the product source code or documents.
- Fault: n. The execution of defective code. This is
not directly visible.
- Failure: n. The user-visible result of the fault. E.g., an
error message. This is evidence that can be used in debugging.
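The error → defect → fault → failure chain can be illustrated with a toy example (the buggy function below is hypothetical, not from the lecture):

```python
def average(values):
    # Defect: the developer's error ("indexes start at 1") is embodied
    # in the source code as a slice that drops the first element.
    return sum(values[1:]) / len(values)

# Fault: executing the defective line computes the wrong sum.
# This is not directly visible to the user.
result = average([2, 4, 6])

# Failure: the user-visible wrong output, the evidence used in debugging.
print(result)  # prints 3.33..., but the expected average is 4.0
```

The distinction matters in practice: the tracker records failures (evidence), while the fix must remove the defect and, ideally, correct the underlying error in the developer's understanding.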
Issue tracking > Issue contents
- Describe the situation/failure
- Summary: One line brief description
- Description/attachments: Detailed description of the problem or task:
expected behavior, error messages, steps to reproduce
- Severity: How much does this impact users? (IZ does not have this)
- Votes: End user urging the development team to set higher priority
- Version: Which version of the product has the problem?
- Platform/OS: In what situation was the problem seen?
- Component: Which part of the product has the problem?
- Reported-by: The user who entered the issue
- Issue ID number: automatically generated
- Plan to work on the issue
- Comments: Managers direct the developer on what to do, including which branches to work on
- Priority: How important is this to the development team?
- Milestone/Deadline: A symbolic deadline for the work, e.g., "1.0.0 FC"
- Assigned-to: The user who needs to work on fixing the issue
- QA-contact: The QA engineer who oversees the issue to completion
- Dependencies: List issues that must be resolved before this one
- Work on the issue/understand the failure and find the defect
- Comments: Ongoing discussion of how to solve the issue
- Comments/status whiteboard: estimation of original/remaining effort
- Comments/status whiteboard: Indications of progress on long issues
- Status: One-word state of the issue in the issue tracking process
- Resolution: One-word indication of how the issue was resolved (set when it has status=resolved)
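The fields above amount to a record type; a minimal sketch in Python, with field names following the list above (real tracker schemas vary by product):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Issue:
    # Describe the situation/failure
    issue_id: int                     # automatically generated
    summary: str                      # one-line brief description
    description: str                  # expected behavior, steps to reproduce
    reported_by: str                  # the user who entered the issue
    version: str = ""
    platform: str = ""
    component: str = ""
    severity: str = ""                # impact on users
    votes: int = 0
    # Plan to work on the issue
    priority: str = ""
    milestone: str = ""               # e.g., "1.0.0 FC"
    assigned_to: Optional[str] = None
    qa_contact: Optional[str] = None
    dependencies: List[int] = field(default_factory=list)
    # Work on the issue
    comments: List[str] = field(default_factory=list)
    status: str = "UNCONFIRMED"       # one-word state in the tracking process
    resolution: Optional[str] = None  # set when the issue is resolved

bug = Issue(1234, "Crash on save", "Steps: open file, edit, save", "alice")
```

Note how the grouping in the class mirrors the issue's lifecycle: reporter-supplied fields first, then planning fields set at triage, then fields updated during the work itself.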
Issue tracking > Common operations
- Enter new issue
- Review/Validate issue (reproduce the problem)
- Check to see if this issue is a duplicate
- Triage: set milestone, priority, assign to developer, comment on plan of action
- Developer provides estimate of effort needed
- Developer marks issues as STARTED
- Developer comments on progress, requests more information from reporter, gathers evidence
- Developer thinks the problem is resolved
- QA team verifies that the issue is really solved
- QA team implements regression test to automate future verification
- Technical docs team summarizes verified issues in release notes, updates documentation
- Technical support advises users of known issues and work-arounds
- Management reviews lists of open issues for each release
- Management reviews workload/backlog for each developer
- Management reviews individual issues
- Management slips issues to a later release, or decides to drop them
Issue tracking > Issue states
- Pending issue states:
- Unconfirmed: Entered by outsider, not yet checked for validity
- Open issue states:
- New: Valid issue, but no work done yet
- Started: Developer has started work on the issue
- Reopened: The fix failed, work must be redone
- Closed issue states:
- Resolved: The developer thinks they have solved the problem
- Invalid: The reporter was mistaken. E.g., user error or misunderstanding.
- Duplicate: This issue has already been reported before
- Fixed: The defect was repaired
- Won't fix: Team agrees not to fix the defect
- Later: This is already fixed in a later release, user must upgrade
- Remind: Team cannot fit this into current roadmap, reconsider it later
- Verified: QA has verified that the developer's fix worked
- Closed: All work on this issue has been completed (including docs, release notes, etc.)
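The states above form a small state machine; a sketch of the legal transitions, assuming the simplified Bugzilla-style lifecycle the slide lists:

```python
# Allowed transitions between issue states (a simplified subset;
# real trackers allow more paths, e.g., closing an invalid issue early).
TRANSITIONS = {
    "UNCONFIRMED": {"NEW"},                   # validated at triage
    "NEW":         {"STARTED"},               # developer begins work
    "STARTED":     {"RESOLVED"},              # developer thinks it is solved
    "RESOLVED":    {"VERIFIED", "REOPENED"},  # QA verifies, or the fix failed
    "REOPENED":    {"STARTED"},               # work must be redone
    "VERIFIED":    {"CLOSED"},                # docs, release notes done
    "CLOSED":      set(),
}

def advance(state, new_state):
    """Move an issue to new_state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state} to {new_state}")
    return new_state

# The happy path from report to closure:
state = "UNCONFIRMED"
for step in ("NEW", "STARTED", "RESOLVED", "VERIFIED", "CLOSED"):
    state = advance(state, step)
```

Encoding the transitions explicitly is what lets a tracker enforce process, e.g., refusing to close an issue that QA has not verified.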
Issue tracking > Management and metrics
- Forecast development milestone dates:
- Number of requested changes in a release
- Estimated vs. actual effort
- Rate of defect introduction per requested change
- Defect detection rate
- Defect resolution rate
- "Bug charts": projection of date when release has zero know defects
- Risk analysis and SPI:
- Total issues / Total code size
- At-risk/high-defect components and/or developers
- Inspection defect detection rate
- Identify common root causes of defects
- Gather/adjust coefficients for future schedule estimates
- Highlighting missed areas:
- Some defects indicate poor requirement or design
- Shipped defects indicate holes in test suite
- Reopened issues indicate mis-communications/misunderstandings
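The "bug chart" projection above can be sketched as a naive linear model (it assumes constant per-day find and fix rates, which real projects rarely satisfy):

```python
def days_to_zero_open(open_now, find_rate, fix_rate):
    """Project days until zero known open defects, assuming constant
    rates: defects found per day vs. defects fixed per day."""
    net_burndown = fix_rate - find_rate   # net shrinkage of the backlog per day
    if net_burndown <= 0:
        return None                       # backlog is growing: no projected date
    return open_now / net_burndown

# E.g., 30 open defects, finding 2/day, fixing 5/day -> 10 days to zero.
print(days_to_zero_open(30, 2, 5))
```

The `None` branch is the important management signal: when the find rate exceeds the fix rate, no release date can be forecast until one of the rates changes.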
Issue tracking > Management example
- In this release we request 10 new features and 30 enhancements
- Total estimated effort on those 40 is 400 hours
- Work on those 40 changes introduces 60 defects
- 20 out of 60 defects were caught in review
- The defects are being found in testing at about 10 per day; only about half will ever be found
- The defects are being found by customers at about 1 per day
- The defects are being fixed at about 3 per day
- The 60 repairs introduce 12 new defects
- Of the 72 repairs, 4 fail and must be redone
- Actual effort on the 40 requested changes is 600 hours
- Actual effort on the 60 defects is 320 hours
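The figures above yield exactly the coefficients mentioned under "Management and metrics"; a quick derivation (all numbers taken from the example):

```python
# Inputs from the example above.
requested_changes  = 40          # 10 features + 30 enhancements
estimated_hours    = 400
actual_hours       = 600 + 320   # requested changes + defect repairs
defects_introduced = 60
caught_in_review   = 20
repairs            = 60 + 12     # original defects + defects injected by repairs
failed_repairs     = 4

# Coefficients a manager would carry into the next schedule estimate:
effort_overrun     = actual_hours / estimated_hours           # 2.3x estimate
defects_per_change = defects_introduced / requested_changes   # 1.5 per change
review_yield       = caught_in_review / defects_introduced    # ~33% caught in review
repair_injection   = 12 / 60                                  # 20% of fixes add a defect
reopen_rate        = failed_repairs / repairs                 # ~5.6% of fixes fail
```

A 2.3x effort overrun and a 20% repair-injection rate are the kind of numbers that, fed back into planning, make the next release's estimates and bug charts more realistic.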