Automated_front_end_testing (#1248) #1253

Open
Shasan634 wants to merge 13 commits into master from Automated_front_end_testing

Conversation

Contributor

@Shasan634 Shasan634 commented Nov 4, 2025

Background

Previously, all frontend tests had to be run manually before each release, which was time-consuming and error-prone. Automating these tests with Playwright saves significant time and ensures consistent validation. The setup is also extensible and can be used to validate other frontend components in the future.

Why did you take this approach?

I chose Playwright for its robust parallel processing capabilities and cross-browser support. Initially, I attempted to run all datasets fully in parallel to speed up test execution. However, this caused memory issues due to the large number of datasets and browser instances.

To balance speed and stability, I structured the tests so that all datasets within a single test run sequentially, but the tests themselves are run in parallel across workers. This approach maximizes throughput while avoiding memory overload and has proven to be the fastest stable configuration.
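As a rough sketch, the split described above (datasets run serially within each test, while test files run in parallel across workers) maps onto Playwright's project-level configuration. The worker count below is an assumption for illustration, not the PR's actual setting:

```typescript
// playwright.config.ts — hypothetical sketch, not the PR's actual config.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Run separate test files in parallel across this many workers.
  workers: 4,
  // Keep fullyParallel off so tests *within* a file run in order on a
  // single worker — the many datasets exercised by one test then share
  // one browser instance instead of spawning many at once.
  fullyParallel: false,
});
```

With `fullyParallel: false` (Playwright's default), parallelism comes from distributing files across workers, which bounds the number of concurrent browser contexts and avoids the memory blow-up seen when everything ran in parallel.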

Notes

- Area plots and NetCDF file downloads for subsets can be very memory-intensive. When one dataset fails, it can crash the local server and stop the tests for the remaining datasets. In these cases, it's better to identify and address the specific dataset causing the issue, then re-run the tests, so the remaining datasets can be processed reliably without triggering another crash.
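When isolating a problem dataset like this, Playwright's `--grep` filter can re-run only the tests whose titles match a pattern. The dataset name below is a hypothetical placeholder:

```shell
# Re-run only the tests matching a specific (hypothetical) dataset name,
# after fixing or excluding the dataset that crashed the server.
npx playwright test --grep "giops_daily"
```

This keeps the re-run cheap compared with repeating the full suite.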

Checks

  • I ran unit tests.
  • I've tested the relevant changes from a user POV.
  • I've formatted my Python code using `black .`.

_Hint_: To run all Python unit tests, run the following command from the root repository directory:

```sh
bash run_python_tests.sh
```
* add multi-map mouse position control

* combine zoom controls

@Shasan634 Shasan634 requested a review from JustinElms as a code owner November 4, 2025 13:15
@Shasan634 Shasan634 changed the title Combine map controls on compare datasets (#1248) Automated_front_end_testing(#1248) Nov 17, 2025
@Shasan634 Shasan634 self-assigned this Nov 30, 2025
@Shasan634 Shasan634 added enhancement Javascript Priority: Medium Code Quality Anything related to the quality of the code labels Nov 30, 2025