seg2med: a bridge from artificial anatomy to multimodal medical images
| Main authors: | Zeyu Yang, Zhilin Chen, Yipeng Sun, Anika Strittmatter, Anish Raj, Ahmad Allababidi, Johann S Rink, Frank G Zöllner |
|---|---|
| Document type: | Article (Journal) |
| Language: | English |
| Published: | 15 December 2025 |
| In: | Physics in medicine and biology, Year: 2025, Volume: 70, Issue: 24, Pages: 1-23 |
| ISSN: | 1361-6560 |
| DOI: | 10.1088/1361-6560/ae2738 |
| Online access: | Publisher, licensed access, full text: https://doi.org/10.1088/1361-6560/ae2738 |
| Authors: | Zeyu Yang, Zhilin Chen, Yipeng Sun, Anika Strittmatter, Anish Raj, Ahmad Allababidi, Johann S Rink and Frank G Zöllner |
| Abstract: | Objective. We present seg2med (segmentation-to-medical images), a modular framework for anatomy-driven multimodal medical image synthesis. The system integrates three components to enable high-fidelity, cross-modality generation of computed tomography (CT) and magnetic resonance (MR) images based on structured anatomical priors. Approach. First, anatomical maps are independently derived from three sources: real patient data, extended cardiac-torso (XCAT) digital phantoms, and synthetic subjects created by combining organs from multiple patients. Second, we introduce PhysioSynth, a modality-specific simulator that converts anatomical masks into imaging-like prior volumes using tissue-dependent parameters (e.g. HU, T1, T2, ρ) and modality-specific signal models. It supports simulation of CT and multiple MR sequences, including gradient-echo, SPACE, and volumetric interpolated breath-hold examination. Third, the synthesized anatomical priors are used to train 2-channel conditional denoising diffusion probabilistic models, which take the anatomical prior as a structural condition alongside the noisy image, enabling them to generate high-quality, structurally aligned images within their modality. Main results. The framework achieves a structural similarity index measure (SSIM) of for CT and for MR images compared to real patient data, and FSIM for simulated CT from XCAT. The generative quality is further supported by a Fréchet inception distance of 20.20 for CT synthesis. In modality conversion tasks, seg2med attains SSIM scores of (MR → CT) and (CT → MR). Significance. In anatomical fidelity evaluation, synthetic CT images achieve a mean Dice coefficient exceeding 0.90 for 11 key abdominal organs, and over 0.80 for 34 of 59 total organs. These results underscore seg2med's utility in cross-modality image synthesis, dataset augmentation, and anatomy-aware AI development in medical imaging. |
| Description: | Published: 15 December 2025. Viewed on 09.02.2026 |
| Description: | Online resource |
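The abstract's 2-channel conditioning scheme, in which the denoising diffusion model receives the anatomical prior stacked alongside the noisy image, can be sketched as follows. This is a minimal illustration assuming a standard linear DDPM noise schedule; the function names and parameter values are hypothetical and not taken from the authors' implementation.

```python
import numpy as np

def make_beta_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule, a common DDPM default (an assumption here,
    not a detail reported in the abstract)."""
    return np.linspace(beta_start, beta_end, T)

def q_sample(x0, t, alphas_cumprod, noise):
    """Forward diffusion step: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    abar = alphas_cumprod[t]
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * noise

def build_denoiser_input(noisy_image, anatomical_prior):
    """2-channel conditioning: stack the noisy image and the simulated
    anatomical prior along a channel axis, so the denoising network sees
    both the corrupted target and the structural condition."""
    return np.stack([noisy_image, anatomical_prior], axis=0)

# Toy example with a single 64x64 slice (random stand-ins for real data)
rng = np.random.default_rng(0)
betas = make_beta_schedule()
alphas_cumprod = np.cumprod(1.0 - betas)

x0 = rng.standard_normal((64, 64))       # stand-in for a real CT/MR slice
prior = rng.standard_normal((64, 64))    # stand-in for a PhysioSynth-style prior slice
noise = rng.standard_normal((64, 64))

x_t = q_sample(x0, t=500, alphas_cumprod=alphas_cumprod, noise=noise)
net_in = build_denoiser_input(x_t, prior)  # shape (2, 64, 64)
```

In this scheme the network is trained to predict the noise from `net_in`, and at sampling time the same prior channel is held fixed across all reverse-diffusion steps, which is what keeps the generated image structurally aligned with the anatomy.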