Visualization Of Class Activation Maps To Explain AI Classification Of Network Packet Captures

Cherepanov, Igor ; Ulmer, Alex ; Joewono, Jonathan Geraldi ; Kohlhammer, Jörn (2022)
Visualization Of Class Activation Maps To Explain AI Classification Of Network Packet Captures.
19th IEEE Symposium on Visualization for Cyber Security. Oklahoma City, USA (19.10.2022)
doi: 10.1109/VizSec56996.2022.9941392
Conference publication, Bibliography

Abstract

The classification of internet traffic has become increasingly important due to the rapid growth of today’s networks and application variety. The number of connections and the addition of new applications in our networks cause a vast amount of log data and complicate the search for common patterns by experts. Finding such patterns among specific classes of applications is necessary to fulfill various requirements in network analytics. Supervised deep learning methods learn features from raw data and achieve high accuracy in classification. However, these methods are very complex and are used as black-box models, which weakens the experts’ trust in these classifications. Moreover, by using them as a black box, new knowledge cannot be obtained from the model predictions despite their excellent performance. Therefore, the explainability of the classifications is crucial. Besides increasing trust, the explanation can be used for model evaluation, to gain new insights from the data, and to improve the model. In this paper, we present a visual and interactive tool that combines the classification of network data with an explanation technique to form an interface between experts, algorithms, and data.
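The explanation technique named in the title is the class activation map (CAM). As a rough illustration of the general idea only (not the authors' implementation; all array shapes and variable names here are hypothetical), a CAM for a 1-D convolutional packet classifier with global average pooling weights the final feature maps by the classifier weights of the target class, yielding one relevance score per byte position:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a class activation map (CAM) for one class.

    feature_maps: (C, L) array -- activations of the last conv layer over a
                  1-D packet byte sequence (C channels, L byte positions).
    fc_weights:   (num_classes, C) array -- weights of the final classifier
                  applied after global average pooling.
    class_idx:    index of the class whose evidence should be localized.
    """
    cam = fc_weights[class_idx] @ feature_maps  # weighted channel sum -> (L,)
    cam = np.maximum(cam, 0)                    # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                   # normalize to [0, 1] for display
    return cam

# Toy example: 4 channels over 16 byte positions, 3 traffic classes.
rng = np.random.default_rng(0)
fmaps = rng.random((4, 16))
weights = rng.random((3, 4))
cam = class_activation_map(fmaps, weights, class_idx=1)
print(cam.shape)  # (16,) -- one relevance score per byte position
```

The resulting per-position scores can then be rendered as a heatmap over the raw packet bytes, which is the kind of visual explanation a tool like the one described can present to the expert.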

Type of entry: Conference publication
Published: 2022
Author(s): Cherepanov, Igor ; Ulmer, Alex ; Joewono, Jonathan Geraldi ; Kohlhammer, Jörn
Kind of entry: Bibliography
Title: Visualization Of Class Activation Maps To Explain AI Classification Of Network Packet Captures
Language: English
Date of publication: 10 November 2022
Publisher: IEEE
Book title: 2022 IEEE Symposium on Visualization for Cyber Security (VizSec)
Event title: 19th IEEE Symposium on Visualization for Cyber Security
Event location: Oklahoma City, USA
Event date: 19.10.2022
DOI: 10.1109/VizSec56996.2022.9941392
Free keywords: Human-centered computing, Visualization, User interface design, Explainability, Network classification, Convolutional neural networks (CNN)
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Graphisch-Interaktive Systeme
Date deposited: 09 Dec 2022 08:42
Last modified: 19 Jan 2023 15:11
PPN: 503919969