Continual Hippocampus Segmentation with Transformers

Ranem, Amin ; Gonzalez, Camila ; Mukhopadhyay, Anirban (2022)
Continual Hippocampus Segmentation with Transformers.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA (19.06.2022-24.06.2022)
doi: 10.1109/CVPRW56347.2022.00415
Conference publication, Bibliography

Abstract

In clinical settings, where acquisition conditions and patient populations change over time, continual learning is key for ensuring the safe use of deep neural networks. Yet most existing work focuses on convolutional architectures and image classification. Instead, radiologists prefer to work with segmentation models that outline specific regions of interest, for which Transformer-based architectures are gaining traction. The self-attention mechanism of Transformers could potentially mitigate catastrophic forgetting, opening the way for more robust medical image segmentation. In this work, we explore how recently proposed Transformer mechanisms for semantic segmentation behave in sequential learning scenarios, and analyse how best to adapt continual learning strategies for this setting. Our evaluation on hippocampus segmentation shows that Transformer mechanisms mitigate catastrophic forgetting for medical image segmentation compared to purely convolutional architectures, and demonstrates that regularising ViT modules should be done with caution.
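
The abstract cautions against regularising ViT modules too strongly. As a rough illustration only (not the authors' published code), the sketch below shows how an EWC-style quadratic penalty, a common regularisation-based continual learning strategy, could be restricted so that self-attention parameters are left unconstrained. The function name, the `fisher`/`old_params` dictionaries and the `vit_keyword` filter are illustrative assumptions.

```python
# Hedged sketch: EWC-style penalty that can exclude ViT/attention parameters.
# `fisher`, `old_params`, `skip_vit` and `vit_keyword` are illustrative assumptions,
# not the paper's published implementation.
import torch
import torch.nn as nn


def ewc_penalty(model: nn.Module,
                fisher: dict,
                old_params: dict,
                skip_vit: bool = True,
                vit_keyword: str = "attn") -> torch.Tensor:
    """Quadratic penalty pulling parameters toward values learned on earlier tasks.

    fisher:     per-parameter importance estimates (e.g. diagonal Fisher information)
    old_params: parameter values snapshotted after training on the previous task
    skip_vit:   if True, leave attention/ViT parameters unregularised, reflecting
                the paper's caution about constraining ViT modules
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if name not in fisher:
            continue
        if skip_vit and vit_keyword in name:
            continue  # do not anchor self-attention weights to the old task
        penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return penalty
```

In use, this penalty would be added to the segmentation loss with a weighting factor when training on a new domain, so that only the non-attention parameters are pulled back toward their previous values.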

Item type: Conference publication
Published: 2022
Author(s): Ranem, Amin ; Gonzalez, Camila ; Mukhopadhyay, Anirban
Entry type: Bibliography
Title: Continual Hippocampus Segmentation with Transformers
Language: English
Year of publication: 2022
Publisher: IEEE
Book title: Proceedings: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Event title: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Event location: New Orleans, USA
Event dates: 19.06.2022-24.06.2022
DOI: 10.1109/CVPRW56347.2022.00415
Division(s)/department(s): 20 Department of Computer Science
20 Department of Computer Science > Graphisch-Interaktive Systeme
Date deposited: 27 Feb 2023 13:51
Last modified: 20 Jun 2023 13:59
PPN: 508937752