Szymon Mazurek, Monika Pytlarz, Sylwia Malec, Alessandro Crimi

Advancements across various industries have been significantly propelled by artificial intelligence. However, the rapid proliferation of these technologies also raises environmental concerns, particularly due to the substantial carbon footprints associated with training computational models. Segmenting the fetal brain in medical imaging is challenging because of the brain's small size and the limited quality of fast 2D sequences. Deep neural networks emerge as a promising solution to this problem. However, developing larger models in this context requires significant data and computing resources, leading to increased energy consumption. Our research explores model architectures and compression techniques that enhance energy efficiency. We aim to optimize the balance between accuracy and energy usage through strategies such as designing lightweight networks, conducting architecture searches, and utilizing optimized distributed training tools. We have identified several effective strategies: optimizing data loading, employing modern optimizers, implementing distributed training strategies, and reducing the precision of floating-point operations in lightweight model architectures while adjusting parameters to match the available computing resources. Our findings confirm that these methods ensure satisfactory model performance with minimal energy consumption during the training of deep neural networks for medical image segmentation.
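One of the levers named in the abstract, reducing floating-point precision, can be illustrated with a minimal sketch (not taken from the paper; the example values are arbitrary): storing weights at half precision (IEEE 754 binary16) uses half the bytes of single precision, cutting memory traffic at the cost of some accuracy.

```python
import struct

# Hypothetical weight values, purely illustrative.
weights = [0.1, -1.5, 3.14159, 0.0078125]

# 'f' = 32-bit float (4 bytes each), 'e' = 16-bit float (2 bytes each).
fp32 = struct.pack(f'{len(weights)}f', *weights)
fp16 = struct.pack(f'{len(weights)}e', *weights)

print(len(fp32), len(fp16))  # 16 8 -> half the storage and memory traffic

# Round-tripping through binary16 loses precision for values that do not
# fit its 11-bit significand:
restored = struct.unpack(f'{len(weights)}e', fp16)
print(restored[2])  # ~3.1406, no longer exactly 3.14159
```

In practice, frameworks apply this idea as mixed-precision training (keeping a higher-precision master copy of the weights) rather than storing everything in binary16, which is how the precision reduction stays compatible with stable optimization.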

DOI: 10.1007/978-3-031-63772-8_5
