Khalil, Ahmad (2024)
Vehicular Communication for Collective Perception: Adaptive and Communication-Efficient Vehicular Perception Mechanisms.
Technische Universität Darmstadt
Dissertation, primary publication, publisher's version
Abstract
The global road safety report for 2023 highlights a worryingly high 1.19 million road traffic fatalities per year. European road accidents in particular, predominantly caused by human error, underline the urgent need for advanced vehicle automation to supplement or replace human driver control. Effective autonomous driving systems rely on reliable vehicular perception, which vehicles need to accurately perceive their surroundings, and this perception must deliver consistently reliable results across diverse scenarios and conditions. Vehicular perception is realized by various methods, such as object detection models. Presently, deep neural networks dominate object detection in vehicular contexts due to their superior performance. However, the conventional approach of training object detection models centrally requires transferring vehicle data to central servers, which raises concerns about privacy and bandwidth usage. Furthermore, this approach yields situation-agnostic models that perform poorly in specific conditions such as adverse weather or complex traffic scenarios.
To address these challenges, the first contribution of this thesis introduces the Situation-aware Collective Perception (SCP) framework, which provides a two-stage perception pipeline: the environmental situation is detected first, and a situation-aware object detection model is then selected based on the detected situation. This framework enables the transition from conventional situation-agnostic to situation-aware object detection. When integrated with V2X communication, the framework supports collaborative and online model training directly in vehicles. Evaluations under diverse environmental conditions demonstrate that situation-aware models notably improve detection performance.
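The two-stage idea can be sketched as a simple dispatch: classify the situation, then run the detector trained for it. This is only an illustrative sketch, not the thesis's actual SCP implementation; the situation labels, the brightness heuristic, and the per-situation thresholds below are all invented for illustration (a real system would use learned models for both stages).

```python
# Minimal sketch of two-stage situation-aware perception (illustrative only).
# Stage 1 labels the environmental situation; stage 2 dispatches the
# situation-specific detector on the same frame.

SITUATIONS = ("clear", "rain", "fog")  # hypothetical situation labels

def classify_situation(frame):
    """Stage 1: map a sensor frame to an environmental situation label."""
    # Placeholder heuristic on mean pixel intensity; a real system would
    # use a trained situation classifier here.
    brightness = sum(frame) / len(frame)
    if brightness < 0.3:
        return "fog"
    if brightness < 0.6:
        return "rain"
    return "clear"

def make_detector(situation):
    """Build a detector parameterized per situation (trivially, by threshold)."""
    threshold = {"clear": 0.5, "rain": 0.4, "fog": 0.3}[situation]
    def detect(frame):
        # Placeholder: report indices of pixels above the situation threshold.
        return [i for i, v in enumerate(frame) if v > threshold]
    return detect

DETECTORS = {s: make_detector(s) for s in SITUATIONS}

def perceive(frame):
    """Stage 2: run the situation-aware detector selected for this frame."""
    situation = classify_situation(frame)
    return situation, DETECTORS[situation](frame)
```

The point of the structure is that the detector registry can be retrained per situation (e.g. collaboratively over V2X) without touching the dispatch logic.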
Moreover, a notable finding from the first contribution is the negative impact of increased data heterogeneity on the performance of object detection models. To mitigate this adverse influence, the second contribution introduces two novel methods that leverage both data characteristics and model parameters to optimize the training process. Evaluating these methods on camera data from the vehicular field demonstrates a remarkable improvement in training convergence and detection performance compared with state-of-the-art approaches.
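The abstract does not detail the two mitigation methods, but the setting they operate in is federated aggregation over clients with heterogeneous data. As a baseline for intuition, here is plain weighted federated averaging (FedAvg-style), where clients holding more data contribute proportionally more to the global model; this is a generic sketch, not one of the thesis's proposed methods.

```python
# Illustrative weighted federated averaging over flat parameter vectors.
# Each client's contribution is weighted by its local dataset size, the
# simplest way to account for heterogeneous data volumes across clients.

def fedavg(client_params, client_sizes):
    """Average parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    global_params = [0.0] * dim
    for params, size in zip(client_params, client_sizes):
        weight = size / total
        for i, p in enumerate(params):
            global_params[i] += weight * p
    return global_params
```

Heterogeneity-aware schemes refine exactly this step, e.g. by shaping the weights from data characteristics or from the divergence between client and global parameters.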
Lastly, the third contribution introduces the Adaptive Resource-aware Clustered Federated Learning (AR-CFL) framework. Integrated with the collaborative, online perception model training paradigm of the SCP framework, AR-CFL optimizes communication efficiency by forming efficient vehicle clusters for localized model training.
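The clustered-training idea can be sketched as per-cluster aggregation: vehicles are grouped, and each group averages only among its members, so model updates stay local to the cluster instead of crossing the whole network. The cluster assignment below is an illustrative input; AR-CFL's actual clustering criteria (resource awareness, adaptivity) are not specified in the abstract.

```python
# Sketch of clustered federated aggregation: each cluster maintains its own
# model, built by size-weighted averaging over its members only. The
# `assignment` list maps each client to a cluster id and stands in for
# whatever clustering policy the framework applies.

def cluster_aggregate(client_params, client_sizes, assignment):
    """Return one size-weighted average parameter vector per cluster id."""
    clusters = {}
    for params, size, cid in zip(client_params, client_sizes, assignment):
        clusters.setdefault(cid, []).append((params, size))
    models = {}
    for cid, members in clusters.items():
        total = sum(size for _, size in members)
        dim = len(members[0][0])
        aggregate = [0.0] * dim
        for params, size in members:
            for i, p in enumerate(params):
                aggregate[i] += (size / total) * p
        models[cid] = aggregate
    return models
```

Communication savings come from the smaller fan-in per aggregation round: each vehicle exchanges updates within its cluster rather than with a single global server.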
Overall, the proposed approaches are promising for improving vehicular perception by enabling adaptive perception while considering communication efficiency.
Item type: Dissertation
Published: 2024
Author(s): Khalil, Ahmad
Type of entry: Primary publication
Title: Vehicular Communication for Collective Perception: Adaptive and Communication-Efficient Vehicular Perception Mechanisms
Language: English
Referees: Steinmetz, Prof. Dr. Ralf ; Fernández Anta, Prof. Dr. Antonio ; Meuser, Dr. Tobias
Date of publication: 5 September 2024
Place: Darmstadt
Collation: vi, 131 pages
Date of oral examination: 21 June 2024
URL / URN: https://tuprints.ulb.tu-darmstadt.de/27891
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-278919
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science ; 500 Science and mathematics > 500 Science ; 600 Technology, medicine, applied sciences > 600 Technology ; 600 Technology, medicine, applied sciences > 620 Engineering and mechanical engineering
Division(s): 18 Department of Electrical Engineering and Information Technology ; 18 Department of Electrical Engineering and Information Technology > Institut für Datentechnik ; 18 Department of Electrical Engineering and Information Technology > Institut für Datentechnik > Multimedia Communications
Date deposited: 05 Sep 2024 12:12
Last modified: 06 Sep 2024 07:49