In recent years, the medical imaging community has increasingly adopted machine learning and deep learning, driven by their strong performance on computer vision tasks. Nevertheless, integrating such software into clinical practice remains difficult: neural networks are error-prone, particularly when confronted with the heterogeneity of real-world medical data. Progress is further hindered by the scarcity of fully annotated medical imaging datasets, which motivates the exploration of synthetic data generation. Generative models such as Generative Adversarial Networks (GANs) are promising but suffer from training instability and mode collapse. We therefore investigate alternative architectures, including Triple-GAN and diffusion models, to improve the stability and diversity of generated medical images, both of which are essential for meaningful clinical applications.
Implementation notes:
The code for the generative models is based on the MONAI framework. The dataset used is MedNIST.
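To make the diffusion-model idea concrete, the sketch below implements only the forward (noising) process in plain numpy: an image is progressively mixed with Gaussian noise according to a noise schedule, and the model's job (not shown) is to learn the reverse denoising steps. The linear beta schedule, step count, and 64x64 image shape are illustrative assumptions, not the project's actual MONAI configuration.

```python
import numpy as np

def make_alpha_bar(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    # Linear beta schedule (an assumption for illustration).
    # alpha_bar_t is the cumulative product of (1 - beta_s) up to step t.
    betas = np.linspace(beta_start, beta_end, num_steps)
    return np.cumprod(1.0 - betas)

def forward_diffuse(x0, t, alpha_bar, rng):
    # Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return x_t, noise

rng = np.random.default_rng(0)
alpha_bar = make_alpha_bar()
x0 = rng.random((64, 64))  # stand-in for a normalized 64x64 MedNIST-sized image
x_early, _ = forward_diffuse(x0, 10, alpha_bar, rng)    # still close to x0
x_late, _ = forward_diffuse(x0, 999, alpha_bar, rng)    # close to pure noise
```

Because alpha_bar decays toward zero, early timesteps barely perturb the image while late timesteps are dominated by noise; a diffusion model is trained to predict the added noise at each step so that sampling can run the process in reverse.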