Li, X., Cui, C., Deng, R., Tang, Y., Liu, Q., Yao, T., Bao, S., Chowdhury, N. I., Yang, H., & Huo, Y. (2025). Fine-grained multiclass nuclei segmentation with molecular empowered all-in-SAM model. Journal of Medical Imaging, 12(5), 057501. https://doi.org/10.1117/1.JMI.12.5.057501
Recent advances in computational pathology (the use of computers to analyze tissue images) have been driven by Vision Foundation Models (VFMs), particularly the Segment Anything Model (SAM). SAM can segment, or outline, cell nuclei either from prompts (zero-shot segmentation) or through specialized cell-focused variants, allowing it to work across many types of cells and nuclei. However, general VFMs often struggle with fine-grained tasks, such as distinguishing specific nuclei subtypes or identifying particular cell types.
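As a concrete illustration of the prompt-based (zero-shot) mode mentioned above, the sketch below runs SAM on a tissue patch with a single bounding-box prompt. This is a minimal example rather than the paper's pipeline; it assumes the official segment_anything package, a downloaded ViT-B checkpoint, and placeholder box coordinates.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM backbone (checkpoint path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

# `image` is an H x W x 3 uint8 RGB crop of a stained tissue patch.
image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for a real patch
predictor.set_image(image)

# A rough bounding box around one nucleus serves as the prompt (x0, y0, x1, y1).
box = np.array([100, 120, 160, 180])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
# `masks[0]` is a boolean H x W segmentation of the prompted nucleus.
```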
To address this, we developed the molecular-empowered all-in-SAM model, which adapts SAM and other VFMs for more precise pathology analysis. Our approach has three key components: (1) annotation, where molecular-informed guidance allows even non-expert annotators to label images without detailed pixel-level work; (2) learning, where a lightweight SAM adapter fine-tunes SAM to focus on specific cell types and biological features; and (3) refinement, where molecular-oriented corrective learning improves segmentation accuracy.
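To make the "learning" component above more tangible, here is a minimal, hypothetical sketch of the general adapter idea: small trainable bottleneck layers are attached to a frozen SAM image-encoder block so that only the adapter weights are updated during fine-tuning. The class names, bottleneck size, and wiring are illustrative assumptions, not the paper's exact pathological SAM adapter.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen backbone's features.
        return x + self.up(self.act(self.down(x)))

class AdaptedBlock(nn.Module):
    """Wraps one frozen encoder block with a trainable adapter."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False  # pretrained SAM weights stay frozen
        self.adapter = Adapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))

# Usage sketch: wrap each transformer block of a SAM image encoder
# (token features of size `dim`) and train only the adapter parameters.
```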
Testing on both in-house and public datasets showed that all-in-SAM substantially improves cell classification, even when annotation quality varies. The approach reduces the workload for human annotators and makes precise biomedical image analysis more accessible, especially in resource-limited settings, supporting advances in automated pathology and medical diagnostics built on VFMs.

Fig. 1
Overall idea of our work: this diagram illustrates how our approach (bottom panel) differs from existing methods. (1) Traditional: expert annotators manually label cells using only periodic acid-Schiff (PAS) images. (2) MOCL: lay annotators provide pixel-level labels under the guidance of immunofluorescence (IF) molecular images, followed by deep learning-based segmentation. (3) SAM-L: SAM is used to expedite the annotation process, requiring only minimal (box) annotations. (4) All-in-SAM (our method): we integrate SAM into the annotation phase and adaptively fine-tune it during model training.