Kügler, David ; Distergoft, Alexander ; Kuijper, Arjan ; Mukhopadhyay, Anirban (2018)
Exploring Adversarial Examples.
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Granada, Spain (16.09.2018-20.09.2018)
doi: 10.1007/978-3-030-02628-8_8
Conference publication, Bibliography
Abstract
Failure cases of black-box deep learning, e.g., adversarial examples, might have severe consequences in healthcare. Yet such failures are mostly studied in the context of real-world images with calibrated attacks. Demystifying adversarial examples calls for rigorously designed studies, but the complexity of medical images makes it hard to design such studies on medical images directly. We hypothesize that adversarial examples might result from deep networks incorrectly mapping the image space to a low-dimensional generation manifold. To test this hypothesis, we simplify a complex medical problem, the pose estimation of surgical tools, into its barest form. An analytical decision boundary and an exhaustive search of the one-pixel attack across multiple image dimensions let us localize the regions of image space where one-pixel attacks frequently succeed.
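The abstract's key experimental tool, an exhaustive search for successful one-pixel attacks, can be pictured with a short sketch. This is a minimal illustration under stated assumptions, not the authors' code: `model` stands for any classifier callable, and the function name, image shape, and candidate intensity values are hypothetical.

```python
# Minimal sketch of an exhaustive one-pixel attack search
# (illustrative assumption, not the paper's implementation).
import numpy as np

def one_pixel_attack_map(model, image, true_label, values=(0.0, 1.0)):
    """Mark every pixel where replacing its intensity flips the model's decision.

    model: callable mapping an (H, W) float image to a 1-D array of class scores.
    image: float array of shape (H, W), values in [0, 1]; modified in place
           during the search but restored before returning.
    values: candidate replacement intensities tried at each pixel.
    """
    h, w = image.shape
    success = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            original = image[y, x]
            for v in values:
                image[y, x] = v  # apply the one-pixel perturbation
                if np.argmax(model(image)) != true_label:
                    success[y, x] = True  # this pixel admits a successful attack
                    break
            image[y, x] = original  # undo the perturbation
    return success

# Toy usage with a hypothetical mean-intensity "classifier":
toy_model = lambda img: np.array([1.0 - img.mean(), img.mean()])
attack_map = one_pixel_attack_map(toy_model, np.full((2, 2), 0.49), true_label=0)
```

Aggregating such success maps over many images and image dimensions, with the learned classifier replaced by an analytical decision boundary as in the paper, is what localizes the regions where one-pixel attacks cluster.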
| Type of entry: | Conference publication |
|---|---|
| Published: | 2018 |
| Author(s): | Kügler, David ; Distergoft, Alexander ; Kuijper, Arjan ; Mukhopadhyay, Anirban |
| Type of record: | Bibliography |
| Title: | Exploring Adversarial Examples |
| Language: | English |
| Year of publication: | 2018 |
| Place of publication: | Cham |
| Publisher: | Springer |
| Book title: | Understanding and Interpreting Machine Learning in Medical Image Computing Applications |
| Series: | Lecture Notes in Computer Science (LNCS) |
| Series volume: | 11038 |
| Event title: | International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) |
| Event location: | Granada, Spain |
| Event dates: | 16.09.2018-20.09.2018 |
| DOI: | 10.1007/978-3-030-02628-8_8 |
| URL / URN: | https://doi.org/10.1007/978-3-030-02628-8_8 |
| Free keywords: | Convolutional Neural Networks (CNN), Deep learning, Pattern recognition, Feature recognition, Attack mechanisms |
| Division(s): | 20 Fachbereich Informatik; 20 Fachbereich Informatik > Graphisch-Interaktive Systeme; 20 Fachbereich Informatik > Mathematisches und angewandtes Visual Computing |
| Date deposited: | 26 Jun 2019 11:45 |
| Last modified: | 03 Jul 2024 10:40 |