Onco-Seg

Adapting SAM3 for Medical Image Segmentation

Multi-modal AI for tumor & organ delineation across CT, MRI, Ultrasound, and more

📝 Preprint Available: Our technical paper is available as a preprint (not yet peer-reviewed). In the spirit of open science, we are releasing this work early so others can build upon it, provide feedback, and accelerate progress in medical imaging AI.
Onco-Seg liver segmentation demo

🔬 What you're seeing

This animation shows a CT scan of a patient's abdomen. The AI automatically identifies and highlights the liver (shown in green). This task—which typically takes radiologists several minutes per scan—is completed by Onco-Seg in under a second. Accurate organ segmentation is critical for radiation therapy planning, surgical navigation, and treatment monitoring.

Brain Tumor Segmentation Demo

Onco-Seg brain tumor segmentation demo

🧠 Brain Tumor Segmentation

This demo shows Onco-Seg segmenting a glioblastoma from a BraTS MRI scan using the interactive viewer. The user types "tumor" as a text prompt, clicks "Find It" to segment the current slice, then "Segment Entire Scan" to propagate the mask across all slices—significantly speeding up segmentation and contouring workflows that are still largely manual today. In the custom interactive DICOM viewer shown in the video, yellow marks the tumor core and red marks the edema.

Why Onco-Seg?

Manual tumor and organ contouring remains a major bottleneck in cancer care. Radiation oncologists spend 2-4 hours per patient drawing boundaries around 30+ organs-at-risk. Radiologists manually measure tumors to assess treatment response. This process is time-consuming, variable between experts, and difficult to scale.

Onco-Seg adapts Meta's SAM3 (Segment Anything Model 3)—trained on billions of natural images—for medical imaging. Using parameter-efficient fine-tuning (LoRA), we transfer SAM3's visual understanding to medical scans while training on just a fraction of the parameters.
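As a rough illustration (not the exact training code), LoRA adapters can be attached to a backbone's attention projections with Hugging Face's `peft`; the module names below are hypothetical stand-ins for SAM3's actual layer names:

```python
# Minimal sketch: wrap a PyTorch backbone with LoRA adapters via `peft`.
# Assumes a generic nn.Module; "qkv"/"proj" are hypothetical module names.
import torch.nn as nn
from peft import LoraConfig, get_peft_model

def add_lora_adapters(model: nn.Module) -> nn.Module:
    config = LoraConfig(
        r=16,                             # rank of the low-rank update matrices
        lora_alpha=32,                    # scaling factor applied to the update
        target_modules=["qkv", "proj"],   # hypothetical attention projections
        lora_dropout=0.1,
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()    # typically only a few percent of weights
    return model
```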

35 Training Datasets · 98K+ Training Cases · 12 Eval Benchmarks · 8 Imaging Modalities

Supported Modalities

🫁 CT Scan 🧠 MRI 📷 Ultrasound 🔬 Dermoscopy 🎥 Endoscopy ☢️ PET-CT 🩻 X-Ray 🔬 Histology (SRH)

Evaluation Results

Onco-Seg was evaluated on 12 public benchmark datasets spanning 6 anatomical regions. Results show strong performance on breast ultrasound, polyp detection, and liver CT, with room for improvement on challenging targets like lung nodules and pancreatic tumors.

Onco-Seg evaluation results across 12 datasets
📊 Clinical Interpretation: Dice scores above 0.65 (green) indicate strong overlap with expert annotations—suitable for clinical decision support. Scores 0.40-0.65 (yellow) may benefit from human review. Lower scores reflect inherently difficult targets (small nodules, low-contrast tumors) where even expert agreement is limited.
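For reference, the Dice score measures overlap between the predicted and expert masks; a minimal NumPy sketch for binary masks:

```python
# Minimal sketch of the Dice score for binary masks.
# `pred` and `gt` are assumed to be arrays of the same shape.
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))
```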

Key Features

🎯 One Model, Many Tasks

A single unified model handles CT, MRI, ultrasound, dermoscopy, and endoscopy—no need for separate specialized networks for each modality.

⚡ Fast Inference

Sub-second segmentation on a single GPU enables real-time clinical use, from interactive radiology assistants to automated contouring pipelines.

🔧 Parameter Efficient

LoRA fine-tuning updates <5% of SAM3's 848M parameters, preserving pretrained knowledge while adapting to medical imaging.

🏥 Clinical Deployment Ready

Two deployment patterns: interactive "sidecar" for diagnostic radiology (OHIF/PACS integration) and "silent assistant" for automated radiation oncology contouring.

Clinical Use Cases

📍 Diagnostic Radiology

Interactive Sidecar: Radiologist clicks on a lesion → instant segmentation with auto-computed volume and diameter for structured reporting.
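A minimal sketch (not the plugin's exact code) of how volume and an approximate long-axis diameter could be derived from a binary mask and voxel spacing:

```python
# Sketch: lesion volume and a rough long-axis diameter from a 3D binary mask.
# `spacing` gives voxel size in mm (z, y, x); the diameter is approximated by
# the largest physical extent of the mask's bounding box.
import numpy as np

def lesion_measurements(mask: np.ndarray, spacing):
    if not mask.any():
        return 0.0, 0.0
    voxel_volume_mm3 = float(np.prod(spacing))
    volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0   # 1 mL = 1000 mm^3

    coords = np.argwhere(mask)
    extent_mm = (coords.max(axis=0) - coords.min(axis=0) + 1) * np.array(spacing)
    diameter_mm = float(extent_mm.max())
    return volume_ml, diameter_mm
```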

☢️ Radiation Oncology

Silent Assistant: CT scan triggers automatic segmentation of 30+ organs-at-risk → DICOM-RT Structure Set → ready for treatment planning. 80-90% time savings.
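One possible export route (illustrative, not necessarily the project's pipeline) uses the third-party `rt-utils` package to build the DICOM-RT Structure Set from a mask aligned to the planning CT:

```python
# Sketch: write a DICOM-RT Structure Set with rt-utils.
# `mask` is assumed to be a boolean array aligned with the CT series on disk,
# with shape (rows, cols, num_slices) as rt-utils expects.
from rt_utils import RTStructBuilder

def export_rtstruct(dicom_series_dir: str, mask, roi_name: str, out_path: str):
    rtstruct = RTStructBuilder.create_new(dicom_series_path=dicom_series_dir)
    rtstruct.add_roi(mask=mask, name=roi_name)
    rtstruct.save(out_path)
```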

Technical Details

Architecture: SAM3 (848M parameters) with LoRA adapters (rank=16, alpha=32) on attention layers

Training: Sequential checkpoint chaining across 8 phases, starting from Medical Segmentation Decathlon (MSD) and expanding to BraTS brain tumors, breast imaging, chest X-rays, and specialized oncology datasets

Loss: Combined Dice + Focal loss with modality-specific weighting for class imbalance

Infrastructure: 4× NVIDIA RTX 4090 GPUs, PyTorch Lightning, Weights & Biases tracking
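For illustration, a minimal PyTorch sketch of the combined Dice + Focal loss described above; the weights and focal gamma below are placeholders, not the project's tuned values:

```python
# Sketch: combined Dice + Focal loss for binary segmentation.
# `logits` are raw model outputs; `targets` are float tensors of 0/1.
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, targets, dice_weight=0.5, focal_weight=0.5,
                    gamma=2.0, eps=1e-6):
    probs = torch.sigmoid(logits)

    # Soft Dice loss over the batch.
    intersection = (probs * targets).sum()
    dice = (2 * intersection + eps) / (probs.sum() + targets.sum() + eps)
    dice_loss = 1 - dice

    # Focal loss: down-weight easy examples via (1 - p_t)^gamma.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = probs * targets + (1 - probs) * (1 - targets)
    focal_loss = ((1 - p_t) ** gamma * bce).mean()

    return dice_weight * dice_loss + focal_weight * focal_loss
```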

napari Plugin

Onco-Seg is available as a napari plugin for interactive medical image segmentation. The plugin provides a graphical interface for clinicians and researchers who prefer point-and-click workflows over command-line tools.

🖱️ Interactive Segmentation

Click-to-segment with point prompts or draw bounding boxes. See results instantly overlaid on your medical image.

📝 Multiple Prompt Types

Point prompts (click on target) and box prompts (draw rectangle)—text-based prompting planned for future releases.

📦 Multi-Format Export

Save segmentation masks as NIfTI (.nii.gz) for research or DICOM-RT Structure Sets for radiation oncology treatment planning systems.
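As an illustration of the NIfTI path, a mask can be written with `nibabel`, reusing the affine of the reference image (a sketch, not the plugin's exact code):

```python
# Sketch: save a segmentation mask as NIfTI, copying geometry from the
# reference image it was computed on.
import nibabel as nib
import numpy as np

def save_mask_nifti(mask: np.ndarray, reference_path: str, out_path: str):
    ref = nib.load(reference_path)
    nib.save(nib.Nifti1Image(mask.astype(np.uint8), ref.affine, ref.header), out_path)
```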

🔄 3D Propagation

Segment one slice, propagate to the entire volume. The plugin handles slice-by-slice inference with centroid tracking.
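The propagation idea, sketched with a hypothetical `segment_slice` prompt call (the plugin's real inference function may differ):

```python
# Sketch: slice-by-slice propagation with centroid tracking.
# `segment_slice(image_2d, point)` is a hypothetical stand-in for the plugin's
# point-prompt inference; each slice is re-prompted with the previous centroid.
import numpy as np

def propagate(volume: np.ndarray, start_idx: int, start_point, segment_slice):
    masks = np.zeros(volume.shape, dtype=bool)
    point = start_point
    for z in range(start_idx, volume.shape[0]):
        mask = segment_slice(volume[z], point)
        if not mask.any():
            break                                   # structure no longer found
        masks[z] = mask
        point = np.argwhere(mask).mean(axis=0)      # centroid seeds the next slice
    return masks
```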

Installation

The napari plugin is included in the main repository:

```bash
# Clone the repository
git clone https://github.com/inventcures/onco-segment.git
cd onco-segment/napari_plugin

# Install in development mode
pip install -e ".[dev]"

# Launch napari
napari
# Then go to Plugins > OncoSeg
```

The plugin supports automatic checkpoint download from HuggingFace, with pre-trained models for general-purpose segmentation as well as specialized checkpoints for breast, liver, and brain imaging.
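For example, a checkpoint can be fetched with `huggingface_hub`; the repository and file names below are placeholders, not the project's published IDs:

```python
# Sketch: download a pretrained checkpoint from the Hugging Face Hub.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="your-org/onco-seg",        # placeholder repo ID
    filename="oncoseg_general.ckpt",    # placeholder checkpoint name
)
print(f"Checkpoint cached at {ckpt_path}")
```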

Recording a Demo

Want to create your own demo video? See our detailed step-by-step guide for recording brain tumor segmentation demos using the napari plugin.

Get Started

Citation

```bibtex
@article{makani2026oncoseg,
  title={Onco-Seg: Adapting Promptable Concept Segmentation for Multi-Modal Medical Imaging},
  author={Makani, Ashish and Agrawal, Anjali and Agrawal, Anurag},
  journal={arXiv preprint},
  year={2026}
}
```

Acknowledgments

This work was supported by the Koita Centre for Digital Health at Ashoka University (KCDH-A). We thank RunPod for GPU infrastructure and Weights & Biases for experiment tracking.

Deep thanks & gratitude to:

1. Meta AI & the SAM Team: Special thanks to Meta and the entire SAM team, led by Nikhila Ravi (Meta AI), for being torchbearers of research and innovation in this field with their prolific releases of SAM, SAM2, and SAM3. More importantly, we thank them for making a conscious choice to embrace open source and releasing detailed technical reports and open weights for all SAM releases. We believe innovation in ML & AI at large, and in biomedical & cancer informatics specifically, can truly be accelerated by standing on the shoulders of giants.

2. Bo Wang Lab: The brilliant Bo Wang (@BoWang87) and his prolific lab have been an inspiration. Their pioneering MedSAM work demonstrated the potential of adapting foundation models for medical imaging and paved the way for projects like Onco-Seg.

3. NCI, CBIIT & TCIA: We are deeply grateful to the National Cancer Institute (NCI), its Center for Biomedical Informatics and Information Technology (CBIIT), and The Cancer Imaging Archive (TCIA) for creating such a wonderful open-access resource that has enabled countless research innovations in medical imaging AI. Special thanks to Justin Kirby at TCIA for helping debug minor data access issues and for consistently encouraging innovation built on top of TCIA's datasets.

The availability of open-source datasets greatly accelerated our progress on this project. We hope that as research in biomedical machine learning and AI progresses, there is an even greater emphasis on building and releasing open datasets—as the success of AlphaFold has so aptly demonstrated—for the greater public good.

We also thank the creators of benchmark datasets (Medical Segmentation Decathlon, BraTS, LiTS, ISIC, Kvasir-SEG, PROMISE12, BUSI, and others).


Contact: Ashish Makani — ashish.makani@ashoka.edu.in
Last updated: January 2026
