Testplan
This document describes the testing strategy followed during the development of our app “AthleteView”. Its contents ensure that the developers know how their implemented features are supposed to be tested and when a feature is “accepted” as complete in this regard. Note, however, that these test requirements only become truly binding once code is to be pushed to the main branch, since it shall be ensured that the product on main is always in working condition. Even then they are not absolute: if necessary, contact the test manager Stephan Stöger to discuss skipping certain tests for various reasons.
The main objective of this plan is twofold: all developers actively working on the project can rely on the application functioning as intended at any time during the process, and they know precisely how features need to be tested, reducing time spent on test cases that are not needed.
Developers must be familiar with the contents of this document; otherwise they cannot rely on each other's work throughout the project, which might lead to fewer implemented features or delayed releases, both of which are undesired at any time.
The scope of the test plan encompasses all phases of development, ensuring that, at the latest when a new feature is merged into the main codebase, test cases verify that the application as a whole still functions as intended.
The backend has to be tested completely on all layers. The only exception to this might be the database/repository layer if it does not hold any logic. Unit tests shall be written in Kotlin using JUnit 5. In addition, Mockito shall be used to mock the layers not currently under test, so that each layer can be tested correctly and independently, even if the implementation of the layer above or below is faulty. For a feature to count as “tested” there must be at least two test cases: one verifying correct behavior and one verifying correct handling of invalid input. Every developer is of course welcome to test their feature more thoroughly if they see a need for it (for example, to ensure that certain edge cases work correctly).
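The required two-test pattern can be sketched as follows. This is a minimal, self-contained illustration: the names (`TrainingService`, `TrainingRepository`) are hypothetical, and a hand-rolled fake stands in for a Mockito mock so the example runs without any test framework; in the real project the same structure would be expressed with JUnit 5 `@Test` methods and Mockito mocks.

```kotlin
// Hypothetical service layer under test (names are illustrative only)
interface TrainingRepository {
    fun findPlan(id: Int): String?
}

class TrainingService(private val repo: TrainingRepository) {
    fun planFor(id: Int): String {
        require(id > 0) { "id must be positive" }
        return repo.findPlan(id) ?: throw NoSuchElementException("no plan $id")
    }
}

// Hand-rolled fake standing in for a Mockito mock of the repository layer,
// so the service can be tested even if the real repository were faulty
class FakeRepository : TrainingRepository {
    override fun findPlan(id: Int) = if (id == 1) "base plan" else null
}

fun main() {
    val service = TrainingService(FakeRepository())

    // Test 1: correct behavior on valid input
    check(service.planFor(1) == "base plan")

    // Test 2: correct handling of invalid input
    val rejected = runCatching { service.planFor(-5) }
        .exceptionOrNull() is IllegalArgumentException
    check(rejected)

    println("both required unit tests passed")
}
```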
As with unit testing, all features of the backend have to be covered by at least two integration tests: one ensuring correct behavior and one ensuring correct error handling. In this case, however, it is advisable to test features beyond the minimum requirements, given that multiple layers are involved and thus there could be a multitude of edge cases to be concerned with. The technology used is the same as for unit testing (see above).
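The difference from the unit tests above is that no layer is mocked: an integration test wires real implementations of all layers together and checks both the happy path and the error path through the whole stack. The following sketch uses hypothetical names and an in-memory repository as an assumption; the actual project layers will differ.

```kotlin
// Hypothetical layers wired together with real implementations (no mocks)
class InMemoryActivityRepository {
    private val store = mutableMapOf(1 to "morning run")
    fun find(id: Int): String? = store[id]
}

class ActivityService(private val repo: InMemoryActivityRepository) {
    fun activity(id: Int): String =
        repo.find(id) ?: throw NoSuchElementException("activity $id not found")
}

class ActivityController(private val service: ActivityService) {
    // Returns the body on success, or a 404-style message on failure
    fun get(id: Int): String =
        try { "200: " + service.activity(id) }
        catch (e: NoSuchElementException) { "404: " + e.message }
}

fun main() {
    val controller =
        ActivityController(ActivityService(InMemoryActivityRepository()))

    // Integration test 1: correct behavior across all layers
    check(controller.get(1) == "200: morning run")

    // Integration test 2: correct error handling across all layers
    check(controller.get(99).startsWith("404"))

    println("both required integration tests passed")
}
```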
Stress/performance tests per se are not planned. However, the CSP/COP algorithm should be tested with a sufficiently large sample group to ensure a run-time that does not degrade the user experience. It is not necessary to test whether the API server can handle, e.g., thousands of requests in a short timeframe.
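Such a run-time check can be expressed as an ordinary test that fails when a time budget is exceeded. In the sketch below the real CSP/COP algorithm is replaced by a placeholder, and both the sample size (1,000 athletes) and the 200 ms budget are illustrative assumptions, not project requirements.

```kotlin
import kotlin.system.measureTimeMillis

// Placeholder standing in for the real CSP/COP scheduling algorithm;
// it just does some bounded work so the timing harness has something to measure
fun solvePlaceholder(athletes: Int): Int {
    var assignments = 0
    repeat(athletes) { a ->
        repeat(50) { slot -> if ((a + slot) % 3 == 0) assignments++ }
    }
    return assignments
}

fun main() {
    val elapsed = measureTimeMillis {
        solvePlaceholder(athletes = 1_000) // "big enough" sample group (assumed size)
    }
    // Fail if the run-time would noticeably degrade the user experience
    // (200 ms is an assumed budget, to be agreed with the test manager)
    check(elapsed < 200) { "CSP/COP took $elapsed ms, budget is 200 ms" }
    println("performance check passed in $elapsed ms")
}
```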
Every feature's description should state what is necessary (from a user's point of view) to ensure that the feature is sufficiently well implemented and that regular users are satisfied with it. Generally, user acceptance tests do not have to be automated but can be performed manually (as many of them will involve the frontend). Correct functioning of the backend is already ensured by the unit and integration tests.
Batch testing will be performed via the automated unit and integration tests. These should be run locally before pushing to the shared repository, although all test cases will be run again on push. Aside from that, there will not be any further batch tests, such as manual testing before a release.
As already specified, tests will be executed automatically when pushing to the shared repository, and once again before merging branches. This ensures that newly introduced features do not break existing ones. While we do not follow a TDD approach in our development cycle, tests must be written before merging into the stable codebase (main branch).
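A minimal pipeline configuration for this could look as follows. This is a hypothetical `.gitlab-ci.yml` sketch, assuming the repository is hosted on Gitlab (the project wiki is) and built with Maven; the image and version choices are assumptions to be adapted to the actual project setup.

```yaml
# Hypothetical GitLab CI sketch: run all tests on every push
# and again for every merge request (assumed image/versions)
test:
  image: maven:3.9-eclipse-temurin-17
  stage: test
  script:
    - mvn --batch-mode verify   # runs unit and integration tests
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```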
We will not perform automated frontend tests, as the main aspect of our application cannot be properly tested this way. For example, it could potentially be verified that some graph exists on a webpage, but the actual content (i.e., how the line is drawn and what it shows) likely cannot be inferred properly. Therefore, manually testing the frontend for user acceptance is all that will be done.
There are no hardware requirements for running tests. They can be executed locally using the IDE or Maven. Additionally, a CI/CD pipeline will be set up in the shared repository which automatically verifies correct behavior according to the tests. The latter does not require any additional infrastructure (everything needed is already provided).
When a problem is found during testing, there are two options for a developer:
- It is a feature they implemented themselves: fix the bug yourself!
- It is someone else's feature: create a ticket and assign it to the feature's original developer.
Additionally, the person reviewing a Pull Request (PR) is responsible for ensuring that test cases for the implemented feature are also included. It is, however, not the reviewer's responsibility to ensure the tests also pass! If problems arise, contact the Test Manager.
- All functional requirements of AthleteView (as described in the Gitlab Wiki [WBS, Iceberglist] or the project contract)
- Non-functional requirements