
Exploring Adversarial Examples

Kügler, David and Distergoft, Alexander and Kuijper, Arjan and Mukhopadhyay, Anirban (2018):
Exploring Adversarial Examples.
In: Understanding and Interpreting Machine Learning in Medical Image Computing Applications (held in conjunction with the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Granada, Spain, 2018), Lecture Notes in Computer Science (LNCS), vol. 11038, Cham: Springer, ISSN 0302-9743,
ISBN 978-3-030-02627-1,
DOI: 10.1007/978-3-030-02628-8_8,
[Online-Edition: https://doi.org/10.1007/978-3-030-02628-8_8],
[Conference or Workshop Item]

Abstract

Failure cases of black-box deep learning, such as adversarial examples, might have severe consequences in healthcare. Yet such failures are mostly studied in the context of real-world images with calibrated attacks. Demystifying adversarial examples requires rigorously designed studies. Unfortunately, the complexity of medical images makes it difficult to design such studies on the images themselves. We hypothesize that adversarial examples might result from deep networks incorrectly mapping the image space to the low-dimensional generation manifold. To test this hypothesis, we reduce a complex medical problem, namely pose estimation of surgical tools, to its barest form. An analytical decision boundary and an exhaustive search of the one-pixel attack across multiple image dimensions allow us to localize the regions of the image space where one-pixel attacks frequently succeed.

Item Type: Conference or Workshop Item
Published: 2018
Creators: Kügler, David and Distergoft, Alexander and Kuijper, Arjan and Mukhopadhyay, Anirban
Title: Exploring Adversarial Examples
Language: English

Title of Book: Understanding and Interpreting Machine Learning in Medical Image Computing Applications
Series Name: Lecture Notes in Computer Science (LNCS)
Volume: 11038
Place of Publication: Cham
Publisher: Springer
ISBN: 978-3-030-02627-1
Uncontrolled Keywords: Convolutional Neural Networks (CNN), Deep learning, Pattern recognition, Feature recognition, Attack mechanisms
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Interactive Graphics Systems
20 Department of Computer Science > Mathematical and Applied Visual Computing
Event Title: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)
Event Location: Granada, Spain
Event Dates: 2018
Date Deposited: 26 Jun 2019 11:45
DOI: 10.1007/978-3-030-02628-8_8
Official URL: https://doi.org/10.1007/978-3-030-02628-8_8