Improving Ultrasound Image Quality with Deep Learning
Ultrasound remains an invaluable tool for clinicians because it is real-time, cost-effective, and portable. However, poor image quality can make diagnostic and guidance tasks with ultrasound unreliable (e.g., tumor and gallbladder boundaries are difficult to see in the leftmost figure). Machine learning techniques applied to ultrasound data have had great success in improving image quality. Among these techniques, DSI affiliate Dr. Brett Byram's group from the Department of Biomedical Engineering pioneered the use of deep neural networks (DNNs) for ultrasound image reconstruction (i.e., beamforming). These DNN beamformers have been shown to substantially improve B-mode image quality compared to conventional methods.
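To make the beamforming task concrete, below is a minimal sketch of conventional delay-and-sum (DAS) beamforming, the standard baseline that DNN beamformers replace. The array geometry, signal parameters, and plane-wave transmit assumption are all illustrative choices, not details of the group's actual system: for each image pixel, the round-trip delay to every array element is computed, the channel signals are aligned accordingly, and summed.

```python
import numpy as np

# Hypothetical geometry and signal parameters for illustration only;
# this is the conventional DAS baseline, not the authors' DNN beamformer.
c, fs = 1540.0, 40e6          # speed of sound in tissue (m/s), sampling rate (Hz)
n_elem, pitch = 16, 0.3e-3    # number of array elements, element spacing (m)
elem_x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch  # element positions (m)

def roundtrip_samples(px, pz):
    """Round-trip delay in samples for a focal point (px, pz),
    assuming a plane-wave transmit plus per-element receive path."""
    dist = pz + np.sqrt((elem_x - px) ** 2 + pz ** 2)
    return np.round(dist / c * fs).astype(int)

def das_pixel(rf, px, pz):
    """Delay-and-sum: align each channel to the focal point and sum.
    rf is (n_elem, n_samples) channel data."""
    idx = np.clip(roundtrip_samples(px, pz), 0, rf.shape[1] - 1)
    return rf[np.arange(n_elem), idx].sum()

# Synthesize channel data for a single point scatterer at (0 m, 20 mm):
# each element records an impulse at its own round-trip delay.
rf = np.zeros((n_elem, 2000))
rf[np.arange(n_elem), roundtrip_samples(0.0, 0.02)] = 1.0

on_target = das_pixel(rf, 0.0, 0.02)    # focused at the scatterer: coherent sum
off_target = das_pixel(rf, 0.0, 0.025)  # focused 5 mm away: channels misaligned
print(on_target, off_target)
```

A DNN beamformer replaces the fixed sum in `das_pixel` with a learned mapping from the delayed channel data to the pixel value, which is where the training-data question in the next paragraph comes in.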
Despite the general effectiveness of DNN beamforming, several challenges remain that I have been working to address. The most prominent has been obtaining realistic ground truth training data. Since we do not know ground truth information in vivo, we rely on simulations. However, DNN beamformers trained on simulated data can fail to generalize to the in vivo clinical data that we care about (e.g., the dark "dropout" regions in the Sim Only DNN figure make it difficult to see relevant structures). To overcome this challenge, in collaboration with DSI affiliate Dr. Matthew Berger from the Department of Computer Science, we have been developing a domain adaptation (DA) scheme to incorporate unlabeled in vivo data during training. To do this, we leverage generative adversarial networks (GANs) to map between simulated and in vivo data and ultimately train an in vivo-specific beamformer (schematic shown on the right). We have demonstrated that the DA DNN beamformer generalizes better to in vivo data and substantially improves image quality compared to the DNN beamformer trained with simulated data only (e.g., the tumor boundary is easier to delineate in the DA DNN figure).
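The adversarial mapping idea above can be sketched in a toy form. In this sketch, which is only an assumption about the general scheme and not the authors' actual architecture, a linear generator `G` (weights `W`) pushes simulated data toward the in vivo distribution, while a logistic discriminator `D` (weights `v`) tries to tell mapped-simulated from real in vivo samples; the beamformer would then be trained on `G(sim)` paired with the simulated ground truth labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 64, 8   # toy sample count and feature dimension (illustrative)

# "Simulated" data with known labels, and unlabeled "in vivo" data
# drawn from a shifted distribution (stand-ins for channel data features).
sim = rng.normal(0.0, 1.0, (n, d))
vivo = rng.normal(0.5, 1.2, (n, d))

W = np.eye(d)                    # generator G: linear map sim -> in vivo domain
v = rng.normal(0.0, 0.1, d)      # discriminator D: logistic scorer, 1 = "in vivo"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for step in range(200):
    mapped = sim @ W                                  # G(sim)
    # Discriminator step: label real in vivo as 1, mapped simulated as 0
    # (gradient of binary cross-entropy w.r.t. v, derived by hand).
    p_real, p_fake = sigmoid(vivo @ v), sigmoid(mapped @ v)
    v -= lr * (vivo.T @ (p_real - 1.0) + mapped.T @ p_fake) / n
    # Generator step: fool D by pushing mapped samples toward label 1
    # (non-saturating GAN loss, gradient w.r.t. W by hand).
    p_fake = sigmoid((sim @ W) @ v)
    W -= lr * sim.T @ np.outer(p_fake - 1.0, v) / n

# After adaptation, the beamformer would train on (G(sim), simulated labels),
# seeing in vivo-like inputs while keeping simulated ground truth.
fooled = sigmoid((sim @ W) @ v).mean()
print(f"mean D score on mapped simulated data: {fooled:.2f}")
```

The key payoff mirrored here is the one in the paragraph above: the network that matters (the beamformer) never needs in vivo ground truth, because the adversarial mapping moves the labeled simulated inputs into the in vivo domain.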