Adapted nnU-Net: A Robust Baseline for Cross-Modality Synthesis and Medical Image Inpainting
Abstract
In medical image synthesis, the development of robust and reliable baseline methods is crucial given the complexity and variability of existing techniques. Despite advances with architectures such as GANs and diffusion models, a clear state of the art has yet to be established. This paper introduces a versatile adaptation of the nnU-Net framework as a robust baseline for both cross-modality synthesis and image inpainting tasks. Known for its superior performance in segmentation challenges, nnU-Net's automatic configuration and parameter optimization capabilities have been adapted for these new applications. We evaluate the method on two use cases: pelvis MR-to-CT translation using the SynthRAD2023 challenge dataset, and local synthesis using the BraTS 2023 inpainting challenge dataset. Standard synthesis metrics (MAE, MSE, SSIM, and PSNR) show that our adapted nnU-Net outperforms GAN-based methods such as pix2pixHD and ranks among the best methods for both challenges. We recommend this adapted nnU-Net as a new benchmark for medical image translation and inpainting tasks, and provide our implementations for public use on GitHub.
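For reference, the four reported metrics can be computed with standard tooling. The following is a minimal sketch, assuming NumPy arrays and scikit-image; the function name and data-range handling are illustrative and not taken from the paper's released code:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def synthesis_metrics(gt, pred, data_range=None):
    """Compute MAE, MSE, SSIM and PSNR between a ground-truth
    image/volume and a synthesized one (hypothetical helper)."""
    gt = gt.astype(np.float64)
    pred = pred.astype(np.float64)
    # Derive the intensity range from the ground truth if not given.
    if data_range is None:
        data_range = gt.max() - gt.min()
    mae = np.mean(np.abs(gt - pred))
    mse = np.mean((gt - pred) ** 2)
    ssim = structural_similarity(gt, pred, data_range=data_range)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    return {"MAE": mae, "MSE": mse, "SSIM": ssim, "PSNR": psnr}
```

Note that published challenge results often restrict these metrics to a body or brain mask; the sketch above evaluates the full array.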
Domains
Computer Science [cs]

Origin: Files produced by the author(s)