Degree Name
Computer Science

First Advisor

B. Boufama

Second Advisor

D. Wu

Third Advisor

C. Ezeife


Keywords

Lesion segmentation, BUS images, image processing, neural networks




Abstract

Medical image segmentation is the process of detecting and delineating the boundaries of anatomical structures in various types of 2D and 3D medical images. The latter come from different modalities, such as Magnetic Resonance Imaging (MRI), X-rays, Positron Emission Tomography (PET)/Single-Photon Emission Computed Tomography (SPECT), Computed Tomography (CT), and Ultrasound (US). Segmentation is a key supporting technology for medical applications including diagnostics, planning, monitoring, and guidance; hence, a large number of segmentation methods have been published in past decades. This dissertation presents four contributions to the field of medical image segmentation in general and to Breast Ultrasound (BUS) image segmentation in particular. First, a comprehensive review of current medical segmentation techniques was conducted to help researchers choose a suitable segmentation methodology based on the type of image and organ. In particular, this dissertation reviewed the most important medical segmentation methods that have been applied to almost all types of medical images. The methods were grouped into categories, then compared and contrasted, and their main advantages and limitations were highlighted. We further proposed three neural network models for BUS tumor segmentation to overcome the limitations of BUS images and to provide better solutions that can help save people's lives. Breast-related diseases have grown significantly among women and have become a leading cause of death worldwide. An effective way to reduce the impact of breast cancer is to offer a proper diagnosis in the early stages of the disease using ultrasound images. However, this remains a challenging task due to intensity inhomogeneity and class imbalance. In this contribution, we proposed encoder-decoder-based convolutional neural networks, which have proven their efficiency in the segmentation of BUS tumors.
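The encoder-decoder idea underlying these models can be illustrated with a minimal sketch: an encoder that downsamples feature maps to capture context, a decoder that upsamples them back to full resolution, and a skip connection that reuses encoder features. This is a toy NumPy illustration of the general mechanism only, not the dissertation's actual networks; the function names and the 8x8 input are our own illustrative choices.

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling: halves spatial resolution (one encoder step)."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour upsampling: doubles resolution (one decoder step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# A single-channel 8x8 "feature map" standing in for a BUS image.
img = np.arange(64, dtype=float).reshape(8, 8)

enc = max_pool2(img)          # encoder: 8x8 -> 4x4, coarse context
dec = upsample2(enc)          # decoder: 4x4 -> 8x8, restored resolution
fused = np.stack([img, dec])  # skip connection: stack encoder features with decoder output
print(enc.shape, dec.shape, fused.shape)
```

In a real U-Net each step would also apply learned convolutions; the sketch only shows how resolution shrinks, grows back, and how skip connections reunite fine detail with coarse context.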
To address the above-mentioned downsides of ultrasound images, our work proposes a modified U-Net architecture equipped with pre-trained inception residual blocks as an encoder for BUS image segmentation. To enhance performance, we increased the depth of the network by adapting the inception blocks to the basic U-Net decoder. Our proposed approach shows promising results, demonstrating improved performance over the existing U-Net architecture, as well as more recent models, on two datasets named UDIAT and BUSIS, achieving a Dice coefficient and IOU (Intersection Over Union) of 0.94 and 0.90 on UDIAT, respectively, and 0.89 and 0.80 on BUSI, respectively.

We further proposed a new version of the U-Net architecture, which is commonly used for medical image segmentation. We adapted residual blocks to resolve the vanishing-gradient issue while downsampling the feature maps, and we replaced the skip-connection paths of the base model with convolutional paths. Our proposed model uses a simple decoder to reconstruct the extracted features. This model showed better performance than the basic U-Net and existing U-Net-based neural networks, achieving a Dice coefficient and IOU of 0.91 and 0.84 on the dataset, respectively.

In the last contribution of this dissertation, we presented a novel neural network in which an attention mechanism was proposed. We employed a contracting structure similar to that of our previous contribution, together with dense blocks in the expanding path, which provide full connectivity between the layers. Channels carry important information, and we showed significant improvement when combining these details with the spatial ones: our attention module combines both kinds of information to provide a good overall representation of the feature maps. The reported results show that our model outperforms the existing models, proving its feasibility for BUS segmentation and its ability to work very well even with a very limited dataset.
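The Dice coefficient and IOU quoted above are standard overlap metrics for binary segmentation masks. A minimal sketch of how they are computed (the function names, epsilon smoothing, and toy masks here are our own, not taken from the dissertation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; eps avoids 0/0."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IOU (Jaccard) = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy example: two overlapping 4x4 square "lesions" on a 10x10 mask.
pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1
gt = np.zeros((10, 10)); gt[3:7, 3:7] = 1
print(dice_coefficient(pred, gt), iou(pred, gt))
```

Note that Dice is always at least as large as IOU for the same pair of masks (Dice = 2·IOU/(1+IOU)), which is consistent with the paired scores reported above (e.g. 0.94 vs. 0.90, 0.89 vs. 0.80).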

Available for download on Friday, May 31, 2024