TU Darmstadt / ULB / TUbiblio

Plausibility Assessment and Validation of Deep Learning Algorithms in Automotive Software Development

Korthals, Felix ; Stöcker, Marcel ; Rinderknecht, Stephan (2021)
Plausibility Assessment and Validation of Deep Learning Algorithms in Automotive Software Development.
doi: 10.1007/978-3-658-33466-6_7
Conference publication, Bibliography

Abstract

The implementation of artificial intelligence (AI) systems in automotive software development remains an obstacle. Despite accelerating scientific research and major advances in this field, practical application is only possible in restricted environments or in non-safety-critical components. There is a need for methods to verify the robustness and safety of AI software modules. The data-based generation of deep learning (DL) algorithms creates black-box models whose properties inhibit validation as it is done for deterministic algorithms following ISO 26262. This paper introduces methods to assess the plausibility of AI model outputs. A description of the training data domain for robust training is accomplished by means of one-class support vector machines (OCSVMs). This anomaly detection process encloses valid data within a DB, so that model outputs can be verified during operation. A further categorization of the training data domain into 20 equally spaced sub-domains led to the best results in detecting implausible model calculations.
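The core idea summarized in the abstract, using a one-class SVM to enclose the valid training-data domain and flag inputs outside it as implausible, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, the `nu` and `gamma` settings, and the helper name `is_plausible` are assumptions for demonstration.

```python
# Hedged sketch: a one-class SVM learns the boundary of the training-data
# domain; inputs outside that boundary are flagged as implausible.
# Data and hyperparameters are illustrative, not from the paper.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in for the DL model's training inputs (e.g. sensor features).
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Fit the OCSVM so that it encloses the valid data domain.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)

def is_plausible(x):
    """Return True if x lies inside the learned training-data domain."""
    # OneClassSVM.predict returns +1 for inliers and -1 for outliers.
    return bool(ocsvm.predict(np.atleast_2d(x))[0] == 1)

print(is_plausible(np.array([0.1, -0.2])))   # near the training data
print(is_plausible(np.array([8.0, 8.0])))    # far outside the domain
```

At inference time, such a check could run alongside the DL model: outputs produced for inputs that fall outside the enclosed domain would be treated as unverified rather than trusted.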

Type of entry: Conference publication
Published: 2021
Author(s): Korthals, Felix ; Stöcker, Marcel ; Rinderknecht, Stephan
Kind of entry: Bibliography
Title: Plausibility Assessment and Validation of Deep Learning Algorithms in Automotive Software Development
Language: English
Year of publication: 14 May 2021
Place of publication: Wiesbaden
Publisher: Springer Vieweg
Book title: 21. Internationales Stuttgarter Symposium : Automobil- und Motorentechnik : Stuttgart 14.05.2021
DOI: 10.1007/978-3-658-33466-6_7
URL / URN: https://link.springer.com/chapter/10.1007/978-3-658-33466-6_...
Keywords:
Machine Learning, Plausibility Assessment, Data Domain, One-Class Support Vector Machine (language: not known)
Department(s)/field(s): 16 Department of Mechanical Engineering
16 Department of Mechanical Engineering > Institute for Mechatronic Systems in Mechanical Engineering (IMS)
Date deposited: 23 Jun 2021 05:17
Last modified: 23 Jun 2021 05:17
