The Ningnan Semantic Segmentation Visualization Tool is a deep learning visualization application developed using PyQt5, aimed at simplifying the experimental workflow for semantic segmentation tasks related to landslide points in the Ningnan Loess Hilly Region. The tool provides an intuitive graphical user interface (GUI), allowing users to conduct deep learning experiments without modifying the source code, thereby enhancing research efficiency.
- Version: 1.0
- Author: Nyongwon
- Release Date: November 18, 2024
| Date | Version | Author | Description |
|---|---|---|---|
| 2024-11-18 | 1.0 | Nyongwon | Initial Release |
Recommended configuration:
- Operating System: Windows 10 or later, or Ubuntu 20.04 or later
- Processor: Intel Core i5 (10th generation) or newer (integrated graphics supported)
- Memory: 16GB or more
- Graphics Card: NVIDIA GeForce RTX 3060 or better (optional)
- Storage: 100GB or more of free space

The tool has also been run on the following configuration:
- Processor: AMD Ryzen 7 4800U
- Memory: 16GB DDR4 3200MHz
- Graphics Card: AMD Radeon Graphics (2GB)
- Storage: 512GB SSD
The main page includes 10 practical buttons, each corresponding to different functions. Users can proceed with operations step by step or use specific features based on their needs.
Clicking the "Image Segmentation" button on the main page will open a corresponding window where users can perform image segmentation tasks.
This feature allows users to randomly select segmented images and save them to a specified path for further processing and analysis.
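Under the hood, this step amounts to sampling image files and copying them to a target directory. The sketch below is illustrative only, not the tool's actual code; the function name, seed parameter, and accepted extensions are assumptions:

```python
import random
import shutil
from pathlib import Path

def pick_random_images(src_dir, dst_dir, count=10, seed=None):
    """Randomly pick up to `count` images from src_dir and copy them to dst_dir."""
    rng = random.Random(seed)  # a fixed seed makes the selection reproducible
    images = sorted(p for p in Path(src_dir).iterdir()
                    if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".tif"})
    chosen = rng.sample(images, min(count, len(images)))
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for p in chosen:
        shutil.copy2(p, Path(dst_dir) / p.name)  # copy2 preserves timestamps
    return [p.name for p in chosen]
```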
Users can open the labelme tool through this feature to create labels that facilitate the training of deep learning models.
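Launching labelme from a GUI typically means starting it as a separate process so the main window stays responsive. A minimal sketch, assuming labelme is installed and on the PATH (the function and its parameters are illustrative):

```python
import shutil
import subprocess

def launch_labelme(image_dir=None, exe_name="labelme"):
    """Launch the labelme annotation tool as a detached, non-blocking process."""
    exe = shutil.which(exe_name)
    if exe is None:
        raise FileNotFoundError(
            f"{exe_name} not found on PATH; install it with `pip install labelme`.")
    # Optionally open labelme directly on a folder of images to annotate.
    args = [exe] + ([str(image_dir)] if image_dir else [])
    return subprocess.Popen(args)
```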
This feature is used to convert created label files from JSON format to PNG format for visualization and storage purposes.
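labelme stores polygon annotations as JSON (it also ships a `labelme_json_to_dataset` command for this conversion). A rough sketch of the rasterization step, assuming the standard labelme JSON fields (`shapes`, `points`, `label`, `imageWidth`, `imageHeight`); the label-to-pixel-value mapping here is an assumption, not the tool's actual scheme:

```python
import json
from PIL import Image, ImageDraw

def json_to_png(json_path, png_path, label_values=None):
    """Rasterize labelme-style polygon annotations into a single-channel PNG mask."""
    with open(json_path) as f:
        data = json.load(f)
    mask = Image.new("L", (data["imageWidth"], data["imageHeight"]), 0)  # 0 = background
    draw = ImageDraw.Draw(mask)
    label_values = label_values or {}
    for shape in data.get("shapes", []):
        # Assign each label the next free integer pixel value, starting at 1.
        value = label_values.setdefault(shape["label"], len(label_values) + 1)
        draw.polygon([tuple(pt) for pt in shape["points"]], fill=value)
    mask.save(png_path)
    return label_values
```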
Users can utilize this feature to augment data to improve the model's generalization ability and robustness.
This feature automatically shuffles the order of the training data and writes the file names into two list files, partitioning the data according to a specified ratio.
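A minimal sketch of this shuffle-and-split step; the output file names `train.txt` and `val.txt` and the default 80/20 ratio are assumptions, not the tool's documented behavior:

```python
import random
from pathlib import Path

def split_dataset(image_dir, out_dir, train_ratio=0.8, seed=42):
    """Shuffle image file names and write them to train.txt / val.txt by ratio."""
    names = sorted(p.name for p in Path(image_dir).iterdir() if p.is_file())
    random.Random(seed).shuffle(names)  # seeded so the split is reproducible
    cut = int(len(names) * train_ratio)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "train.txt").write_text("\n".join(names[:cut]) + "\n")
    (out / "val.txt").write_text("\n".join(names[cut:]) + "\n")
    return names[:cut], names[cut:]
```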
Users can set training parameters in this interface, which includes a sub-interface for viewing the training progress.
This interface serves as a progress display window where users can view real-time training conditions and terminate training if necessary.
This feature is used to extract areas that the model has learned, helping users understand the model's generalization ability.
Users can split large images into tiles with a resolution of 2000 pixels for more detailed analysis and processing.
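Tiling a large image boils down to cropping it on a fixed grid; edge tiles are simply smaller when the image size is not a multiple of the tile size. A sketch under those assumptions (the `<stem>_<row>_<col>.png` naming is illustrative, not the tool's actual convention):

```python
from pathlib import Path
from PIL import Image

def tile_image(src_path, out_dir, tile=2000):
    """Split a large image into tile x tile pieces; edge tiles may be smaller."""
    img = Image.open(src_path)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stem = Path(src_path).stem
    paths = []
    for top in range(0, img.height, tile):
        for left in range(0, img.width, tile):
            # Clamp the crop box so edge tiles stop at the image border.
            box = (left, top, min(left + tile, img.width), min(top + tile, img.height))
            p = out / f"{stem}_{top // tile}_{left // tile}.png"
            img.crop(box).save(p)
            paths.append(p)
    return paths
```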
This feature allows users to merge segmented images into one, facilitating further processing and analysis.
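Merging is the inverse of tiling: paste each tile back at its grid position. The sketch below assumes a hypothetical `<stem>_<row>_<col>.png` naming scheme (matching the tiling sketch above) and that all interior tiles share one size; the tool's actual merge logic may differ:

```python
import re
from pathlib import Path
from PIL import Image

def merge_tiles(tile_dir, out_path):
    """Merge tiles named <stem>_<row>_<col>.png back into a single image."""
    tiles = {}
    for p in Path(tile_dir).glob("*.png"):
        m = re.match(r".+_(\d+)_(\d+)$", p.stem)
        if m:
            tiles[(int(m.group(1)), int(m.group(2)))] = Image.open(p)
    rows = max(r for r, _ in tiles) + 1
    cols = max(c for _, c in tiles) + 1
    tile_w, tile_h = tiles[(0, 0)].size
    # Total size = full tiles plus the (possibly smaller) last row and column.
    width = tile_w * (cols - 1) + tiles[(0, cols - 1)].width
    height = tile_h * (rows - 1) + tiles[(rows - 1, 0)].height
    merged = Image.new("RGB", (width, height))
    for (r, c), im in tiles.items():
        merged.paste(im, (c * tile_w, r * tile_h))
    merged.save(out_path)
    return merged
```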
This tool primarily targets the following user groups:
- Students: As a learning and experimental tool to assist students in conducting deep learning-related projects.
- Researchers: To provide researchers with a simplified experimental process, enhancing research efficiency.
- User-Friendly: The user-friendly interface makes operations intuitive and easy to master.
- Visual Interface: Utilizing a visual approach for various deep learning tasks enhances user experience.
- Compact Size: The tool is lightweight, making it easy to download and install.
- Strong Practicality: Optimally designed for semantic segmentation tasks, effectively improving experimental efficiency.
- Install the Tool: Download and install the latest version of the tool.
- Prepare the Runtime Environment: Ensure that the operating system and hardware configuration meet the requirements.
- Launch the Tool: Start the application and access the main page.
- Conduct Experiments: Select the appropriate function based on needs and gradually conduct the experiment.
Future versions will consider the following improvements:
- Cloud Deployment: Packaging the tool with Docker and deploying it to cloud platforms for easier accessibility.
- Function Expansion: Adding support for more deep learning models to enhance the tool's versatility.
- Interface Optimization: Further optimizing the layout and interaction design of the interface based on user feedback.
Thank you for using the Ningnan Semantic Segmentation Visualization Tool. If you encounter any issues or have suggestions during use, please provide feedback through GitHub. We are committed to continuously improving the tool to meet user needs.
For more information, please refer to the official documentation and GitHub page of the tool.