
How to Match Tracks of Visual Features for Automotive Long-Term SLAM

Luthardt, Stefan ; Ziegler, Christoph ; Willert, Volker ; Adamy, Jürgen (2019)
How to Match Tracks of Visual Features for Automotive Long-Term SLAM.
2019 IEEE Intelligent Transportation Systems Conference (ITSC). Auckland, New Zealand (October 27-30, 2019)
Conference publication, secondary publication

Abstract

Accurate localization is a vital prerequisite for future assistance or autonomous driving functions in intelligent vehicles. To achieve the required localization accuracy and availability, long-term visual SLAM algorithms like LLama-SLAM are a promising option. In such algorithms, visual feature tracks, i.e., landmark observations over several consecutive image frames, have to be matched to feature tracks recorded days, weeks, or months earlier. This leads to a more challenging matching problem than in short-term visual localization, and known descriptor matching methods cannot be applied directly. In this paper, we devise several approaches to compare and match feature tracks and evaluate their performance on a long-term data set. The proposed descriptor combination and masking ("CoMa") method achieves the best track matching performance at minor computational cost. This method creates a single combined descriptor for each feature track and furthermore increases robustness by capturing the appearance variations of the track in a descriptor mask.
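
The idea of a single combined descriptor per track plus a mask of its appearance variations can be illustrated with a small sketch. The Python snippet below is only a hedged interpretation, assuming binary feature descriptors (e.g., ORB/BRIEF-style bit strings); the function names, the majority-vote combination, the stability mask, and the masked Hamming distance are illustrative assumptions, not the paper's actual CoMa formulation.

    # Hedged sketch: one plausible reading of "combined descriptor + mask"
    # for a feature track, assuming binary descriptors (0/1 bit vectors).
    import numpy as np

    def combine_track_descriptors(track_descriptors: np.ndarray):
        """Combine the per-frame binary descriptors of one feature track.

        track_descriptors: (n_frames, n_bits) array of 0/1 values.
        Returns (combined, mask):
          combined -- bitwise majority vote over all observations
          mask     -- 1 where a bit was stable across the track, 0 where it varied
        """
        votes = track_descriptors.mean(axis=0)            # fraction of frames with the bit set
        combined = (votes >= 0.5).astype(np.uint8)        # majority-vote descriptor
        mask = ((votes == 0.0) | (votes == 1.0)).astype(np.uint8)  # keep only stable bits
        return combined, mask

    def masked_hamming(desc_a, mask_a, desc_b, mask_b):
        """Hamming distance restricted to bits that are stable in both tracks."""
        joint = mask_a & mask_b
        diff = (desc_a ^ desc_b) & joint
        n = joint.sum()
        return diff.sum() / n if n > 0 else 1.0           # normalize by compared bits

    # Toy usage with two random tracks of 256-bit descriptors.
    rng = np.random.default_rng(0)
    track1 = rng.integers(0, 2, size=(12, 256), dtype=np.uint8)
    track2 = rng.integers(0, 2, size=(8, 256), dtype=np.uint8)
    d1, m1 = combine_track_descriptors(track1)
    d2, m2 = combine_track_descriptors(track2)
    print(masked_hamming(d1, m1, d2, m2))

The mask lets the comparison ignore descriptor bits that flip with viewpoint or lighting over the track, which is one way a combined representation can become more robust for long-term matching.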

Type of entry: Conference publication
Published: 2019
Author(s): Luthardt, Stefan ; Ziegler, Christoph ; Willert, Volker ; Adamy, Jürgen
Type of record: Secondary publication
Title: How to Match Tracks of Visual Features for Automotive Long-Term SLAM
Language: English
Year of publication: October 2019
Publisher: IEEE
Collation: 8 pages
Event title: 2019 IEEE Intelligent Transportation Systems Conference (ITSC)
Event location: Auckland, New Zealand
Event dates: October 27-30, 2019
URL / URN: https://tuprints.ulb.tu-darmstadt.de/9108
Uncontrolled keywords: PRORETA4
URN: urn:nbn:de:tuda-tuprints-91082
Dewey Decimal Classification (DDC): 600 Technology, medicine, applied sciences > 600 Technology
600 Technology, medicine, applied sciences > 620 Engineering and mechanical engineering
Department(s)/Institute(s): 18 Department of Electrical Engineering and Information Technology
18 Department of Electrical Engineering and Information Technology > Institute of Automatic Control and Mechatronics
18 Department of Electrical Engineering and Information Technology > Institute of Automatic Control and Mechatronics > Control Methods and Robotics (renamed Control Methods and Intelligent Systems as of 01.08.2022)
Date deposited: 29 Sep 2019 19:55
Last modified: 13 Feb 2024 13:45