Quality Control Plan
Yue Qi edited this page May 9, 2019 · 1 revision
The quality control and quality assurance activities should be carried out throughout the entire SDLC, including the following steps:
- Conduct technical reviews of the content/interface/design model to uncover errors.
- Conduct unit testing to ensure the correctness of each functional component.
- Conduct code reviews to ensure adherence to the coding standard and the correctness of the functionalities.
- Conduct integration testing to test the navigation throughout the architecture and uncover errors on interfaces.
- Conduct system functionality testing to ensure no obvious defects in the workflow.
- Conduct system quality testing to ensure the system can meet the specified quality attributes.
- Conduct acceptance testing on a controlled and monitored population of end users to evaluate the usability and reliability of the system.
Technical Review

| Artifacts | Architecture/interface design document. |
|---|---|
| Tools | NA |
| When | The design document has been completed. |
| Whom | The whole team. |
| Entry Criteria | A new version of the design document has been completed. |
| Exit Criteria | All team members agree on the consistency and completeness of the design for the architecture drivers, or a revision plan has been generated. |
| Effort Estimation | 8 hours (2 hours/person * 4 people) |
| Training | If possible, some training on the process-related issues and the human psychological side of reviews should be conducted. |
| Defect Reporting & Tracking | The defects found should be added to the project’s backlog to be fixed in the future. |
Code Review

| Artifacts | Code implementing the critical functionalities. |
|---|---|
| Tools | NA |
| When | Code implementing the critical functionalities has been developed but not yet delivered. |
| Whom | The whole team. |
| Entry Criteria | The critical functional requirements have been implemented. |
| Exit Criteria | The code has been reviewed in the following aspects: 1. Coding standard (Readability/formatting/commenting/consistency/naming/dead code removal) 2. Correctness and efficiency of the algorithm. 3. Correctness of syntax. 4. The code implements the design correctly. |
| Effort Estimation | 8 hours (2 hours/person * 4 people) |
| Training | Code review process & coding standard. |
| Defect Reporting & Tracking | 1. All defects and code improvement issues reported by the code review should be summarized and added to the issue tracker. 2. The defects should be marked with their severity, and all critical-level defects should be fixed before the functionality is delivered. |
Unit Testing

| Artifacts | Methods/Functions at the code level. |
|---|---|
| Tools | JUnit, Pytest, Jacoco, ACTS |
| When | Unit testing should be completed after implementation but before integration. |
| Whom | At least two developers test independently. |
| Entry Criteria | The method has been implemented. |
| Exit Criteria | 1. All critical functional requirements have been thoroughly tested. 2. All classes/methods written by the development team (rather than generated by the IDE) have been thoroughly tested. 3. All unit tests pass. 4. For critical methods that implement core functions, line coverage and branch coverage each reach 80%. |
| Effort Estimation | 22.5 hours (0.5 hour/test case * 5 test cases/functional requirement * 9 critical functional requirements) |
| Training | Unit testing concepts and tools |
| Defect Reporting & Tracking | 1. All defects reported by the unit testing should be tracked using the issue tracker and be fixed as soon as possible. 2. All defects have to be resolved before the development enters the next stage. |
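The plan names JUnit and Pytest as the unit-testing tools. As a minimal sketch of how the exit criteria above could be met in pytest, the following tests one hypothetical critical method (`is_valid_username` and its validation rules are illustrative assumptions, not project code), with one test per branch so the 80% branch-coverage target for critical code can be approached:

```python
# Hypothetical critical method under test (illustrative, not from the project).
def is_valid_username(name: str) -> bool:
    """A username is valid if it is 3-20 characters long and alphanumeric."""
    return 3 <= len(name) <= 20 and name.isalnum()


# pytest discovers plain `test_*` functions; each case below exercises a
# distinct branch of the method, so branch coverage (e.g. via Jacoco's
# Python-side analogue, coverage.py) can be measured against the 80% target.
def test_accepts_normal_name():
    assert is_valid_username("alice")

def test_rejects_too_short():
    assert not is_valid_username("ab")

def test_rejects_too_long():
    assert not is_valid_username("a" * 21)

def test_rejects_non_alphanumeric():
    assert not is_valid_username("bad name!")
```

Running `pytest` on a file containing these functions executes all four cases; a coverage tool can then confirm that both the accepting and rejecting branches were taken.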
Integration Testing

| Artifacts | Interfaces, components. |
|---|---|
| Tools | 1. Mockito for testing interfaces. 2. JUnit and Pytest for testing components. |
| When | Integration testing should be completed after all the components pass the unit test but before the system test. |
| Whom | At least two developers test independently. |
| Entry Criteria | Components have passed unit testing |
| Exit Criteria | 1. All interfaces (inner-system or cross-system) have been tested. 2. All critical components have been tested. 3. All tests pass. |
| Effort Estimation | 2 hours/interface |
| Training | Integration test concepts and tools |
| Defect Reporting & Tracking | 1. All defects reported by the integration test shall be tracked using the issue tracker, and they shall be fixed as soon as possible. 2. All defects have to be resolved before the development enters the next stage. |
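The plan names Mockito for mocking Java interfaces; a rough Python analogue of the same idea uses the standard library's `unittest.mock`. The `OrderService`/gateway names below are hypothetical, chosen only to show how the far side of an interface is stubbed so the calling component's interaction with it can be verified:

```python
from unittest.mock import Mock

# Hypothetical component that depends on an external payment interface.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway  # the interface being integration-tested

    def place_order(self, amount):
        # Delegates payment to the gateway and reports the outcome.
        return "confirmed" if self.gateway.charge(amount) else "declined"


# Integration-style test: stub the interface with a Mock so the interaction
# (arguments passed, number of calls) can be asserted, as Mockito does in Java.
gateway = Mock()
gateway.charge.return_value = True

service = OrderService(gateway)
result = service.place_order(25)

assert result == "confirmed"
gateway.charge.assert_called_once_with(25)
```

The same pattern covers cross-system interfaces: the external system is replaced by a mock so the test stays deterministic and fast.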
Coding Standard Check

| Artifacts | Code |
|---|---|
| Tools | IntelliJ IDEA, PyCharm, SonarQube |
| When | Every build. |
| Whom | The developer who is responsible for the piece of code. |
| Entry Criteria | Project is able to be built. |
| Exit Criteria | 1. All coding standard issues found by the IDE are fixed. 2. All bugs/vulnerabilities found by SonarQube are fixed. 3. All code smell issues found by SonarQube above the critical level are fixed. |
| Effort Estimation | 1 hour/release |
| Training | Coding standard check process & SonarQube. |
| Defect Reporting & Tracking | 1. All defects reported by the coding standard check shall be tracked using the issue tracker, and they shall be fixed as soon as possible. 2. All defects have to be resolved before the development enters the next stage. |
The activity diagram showing the system functionality is presented as follows:

Testing should be conducted in parallel with development:
- Testing tasks should be established under the given corresponding entry criteria.
- Testing tasks should be prioritized and assigned to individuals at everyday standup meetings.
- The testing of most user stories should be completed by the end of each sprint.
- For some stories, which are completed near the end of a sprint, the corresponding testing may spill into the beginning of the next sprint.
- All defects should be documented in defect tables (or an issue tracker such as JIRA) and classified based on their root cause.
- After a defect is detected, the person who introduced it and the person who reported it should provide the information necessary for root cause analysis and repair.
- For all issues, the product owner should determine whether they need to be resolved in the current sprint (which may require communicating with the client).
- All defects to be fixed should be prioritized based on their severity, impact, and effort to repair.
- Prioritized issues should be re-tested after they are fixed.
- The state of each issue (to be fixed, to be re-tested, closed, or won't be fixed) should be updated in the issue tracker promptly.
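The issue states listed above form a small lifecycle. A minimal sketch of that lifecycle in Python follows; the state names mirror the plan, but the allowed transitions are an assumption about the intended workflow (a fix sends an issue to re-test, a passing re-test closes it, a failing re-test sends it back to the fix queue):

```python
from enum import Enum

class IssueState(Enum):
    TO_BE_FIXED = "to be fixed"
    TO_BE_RETESTED = "to be re-tested"
    CLOSED = "closed"
    WONT_FIX = "won't be fixed"

# Assumed legal transitions of the defect workflow described above.
TRANSITIONS = {
    IssueState.TO_BE_FIXED: {IssueState.TO_BE_RETESTED, IssueState.WONT_FIX},
    IssueState.TO_BE_RETESTED: {IssueState.CLOSED, IssueState.TO_BE_FIXED},
}

def advance(current: IssueState, new: IssueState) -> IssueState:
    """Move an issue to `new`, rejecting transitions the workflow forbids."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new
```

Encoding the transitions this way makes "updated in the issue tracker promptly" checkable: an update that skips a state (e.g. closing a defect that was never re-tested) is rejected rather than silently recorded.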