VISE Summer Research In Progress (RiPs) 8.10.23

Posted on Tuesday, August 1, 2023 in News.

VISE Summer Seminar to be led by

Jumanh Atoum (CS)

and

Xing Yao (CS)

Date: Thursday, August 10, 2023
Time: 11:45 am for lunch, noon start
Location: Stevenson Center 532

RiP Speaker #1:
Jumanh Atoum, Computer Science Department
RiP Title #1:

“Ask me! I am the trainee.” Investigating Limitations of Augmented Reality-based Guidance in a Surgical Training Environment and User-Centered Improvements.

Abstract #1:
By superimposing computer-generated images on a user’s view of the physical world, Augmented Reality (AR) has changed how many tasks are performed, and AR-based applications are being explored in the surgical field for numerous purposes, such as improving the training process. In investigations of expert and trainee surgeons’ behavior during phantom procedures, eye-gaze patterns stand out as a key indicator of skill, and we aim to use this difference to improve trainees’ skill acquisition. We do so by conducting co-design focus groups to investigate the design ideas and features surgeons would need in a gaze-guided training experience. Our co-design user study consists of three parts. First, we build an understanding of the current training environment through semi-structured interviews. Second, we run a co-design session that encourages trainee surgeons to propose their own features and designs for incorporation into later user studies. We perform qualitative thematic analysis on the data generated from the interviews and the co-design sessions. Our preliminary results show that many improvements can be made to the current surgical training environment; in particular, they point to the importance of visual feedback and the necessity of proper deployment. We then examined the effect of visual guidance on trainee surgeons’ performance. Third, based on the results of the qualitative thematic analysis, we present an AR-based gaze-sharing application on the Microsoft HoloLens 2 headset. This application can help attending surgeons indicate specific regions, communicate with less verbal effort, and guide residents throughout an operation. We tested the utility of the application in a user study of endoscopic kidney stone localization completed by attending and resident urology surgeons. The trainee surgeons completed the NASA Task Load Index (NASA-TLX) survey at the end of every task. We observe improvements in NASA-TLX scores (up to 25.71%), in task success rate (a 6.9% increase in the percentage of localized stones), in completion time (a 5.37% decrease), and in gaze analyses (up to 27.93%).
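A note on the workload metric above: the NASA Task Load Index (NASA-TLX) averages six subscale ratings (mental, physical, and temporal demand, performance, effort, and frustration). The following minimal, hypothetical Python sketch (not the study’s analysis code) shows how a raw, unweighted TLX score and a percent improvement between conditions could be computed; all ratings below are invented for illustration.

def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    """Raw (unweighted) NASA-TLX: mean of the six 0-100 subscale ratings."""
    return (mental + physical + temporal + performance + effort + frustration) / 6.0

# Invented example ratings, one set per training condition.
baseline = raw_tlx(70, 40, 65, 55, 60, 50)  # without gaze guidance
guided = raw_tlx(52, 35, 48, 42, 46, 32)    # with AR gaze sharing
improvement = 100.0 * (baseline - guided) / baseline
print(f"Perceived workload reduced by {improvement:.2f}%")  # lower TLX = lower workload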
Bio #1:
Jumanh is a Ph.D. student in Computer Science. She is interested in surgical robotics, gesture recognition in robotic surgery, and human-computer interaction. Currently, she is working on eye-gaze tracking and sharing between expert and novice surgeons, and on multi-modal gesture estimation to improve surgical training efficacy.

RiP Speaker #2:
Xing Yao, Computer Science Department
RiP Title #2:
PLEASE: pay less effort to achieve coarse-to-fine segmentation on low-quality ultrasound images
Abstract #2:
Deep convolutional neural networks (CNNs) are powerful tools for medical image segmentation, but they typically require time-intensive pixel-level annotations. To address this, bounding-box-based coarse-to-fine segmentation approaches have been explored. The Segment Anything Model (SAM) has emerged as a strong model for generating fine-grained segmentation masks from sparse prompts such as bounding boxes, but it requires improvement for medical image segmentation tasks. In this study, we present a test-phase prompt augmentation method that combines multi-box prompt augmentation and aleatoric uncertainty thresholding. Our method is designed to enhance SAM’s performance on low-contrast, low-resolution, and noisy ultrasound images, without additional training or fine-tuning. The approach is assessed on three ultrasound image segmentation tasks. Our results suggest that our method substantially improves SAM’s performance, with notable robustness to changes in the prompt.
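For readers curious what test-phase prompt augmentation might look like in practice, here is a minimal, hypothetical Python sketch (not the speaker’s implementation), assuming the open-source segment-anything package and a downloaded checkpoint. The jitter size, number of augmented boxes, the threshold tau, and the per-pixel disagreement measure p(1 - p) are illustrative stand-ins for the method’s actual aleatoric uncertainty estimate.

import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # assumed local checkpoint path
predictor = SamPredictor(sam)
image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for an ultrasound frame (HxWx3 RGB)
predictor.set_image(image)

def segment_with_augmented_boxes(box, n_aug=10, jitter=5, tau=0.2):
    """Jitter one user-drawn box into several prompts, run SAM once per prompt,
    and keep only majority-vote pixels whose across-prompt disagreement is below tau."""
    rng = np.random.default_rng(0)
    masks = []
    for _ in range(n_aug):
        aug = box + rng.integers(-jitter, jitter + 1, size=4)  # perturb x1, y1, x2, y2
        m, _, _ = predictor.predict(box=aug.astype(np.float32), multimask_output=False)
        masks.append(m[0])
    p = np.stack(masks).astype(np.float32).mean(axis=0)  # per-pixel foreground frequency
    disagreement = p * (1.0 - p)  # peaks at 0.25 when the augmented prompts split evenly
    return (p > 0.5) & (disagreement < tau)

mask = segment_with_augmented_boxes(np.array([120, 80, 260, 210]))  # hypothetical box prompt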
Bio #2:
Xing Yao obtained his Bachelor’s and Master’s degrees in Biomedical Engineering and is a rising third-year Computer Science Ph.D. student in the MedICL lab. Advised by Dr. Ipek Oguz, his research focuses on medical image analysis and machine learning.
