
A Flexible Framework for Virtual Omnidirectional Vision to Improve Operator Situation Awareness

Oehler, Martin ; Stryk, Oskar von (2021)
A Flexible Framework for Virtual Omnidirectional Vision to Improve Operator Situation Awareness.
5th European Conference on Mobile Robots. Virtual conference (31.08.2021-03.09.2021)
doi: 10.1109/ECMR50962.2021.9568840
Conference publication, Bibliography

Abstract

During teleoperation of a mobile robot, maintaining good operator situation awareness is a major concern, as a single mistake can lead to mission failure. Camera streams are widely used for teleoperation but offer only a limited field of view. In this paper, we present a flexible framework for virtual projections that increases situation awareness, based on a novel method to fuse multiple cameras mounted anywhere on the robot. Moreover, we propose a complementary approach to improve scene understanding by fusing camera images with geometric 3D lidar data to obtain a colorized point cloud. The implementation on a compact omnidirectional camera reduces system complexity considerably and covers multiple use cases with a much smaller footprint than traditional approaches such as actuated pan-tilt units. Finally, we demonstrate the generality of the approach by applying it to the multi-camera system of the Boston Dynamics Spot. The software implementation is available as open-source ROS packages on the project page https://tu-darmstadt-ros-pkg.github.io/omnidirectionalvision.
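The colorization step described in the abstract can be illustrated with a minimal sketch: each lidar point is transformed into the camera frame, projected through a pinhole model, and assigned the RGB value of the pixel it lands on. The intrinsics `K` and the extrinsics `R`, `t` below are hypothetical placeholders, not the calibration used in the paper.

```python
import numpy as np

# Hypothetical calibration values for illustration only; a real system
# would obtain these from camera/lidar calibration.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])      # pinhole camera matrix
R = np.eye(3)                        # lidar-to-camera rotation
t = np.zeros(3)                      # lidar-to-camera translation

def colorize(points, image):
    """Assign an RGB color to each 3D lidar point visible in the image.

    points: (N, 3) array of lidar points; image: (H, W, 3) RGB array.
    Returns the visible points (in the camera frame) and their colors.
    """
    h, w, _ = image.shape
    cam = points @ R.T + t           # transform into the camera frame
    cam = cam[cam[:, 2] > 0]         # keep points in front of the camera
    pix = cam @ K.T
    pix = pix[:, :2] / pix[:, 2:3]   # perspective divide -> pixel coords
    u = np.round(pix[:, 0]).astype(int)
    v = np.round(pix[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = image[v[valid], u[valid]]  # sample RGB at projected pixels
    return cam[valid], colors
```

In practice the same projection, applied per camera of a multi-camera rig, lets each point take its color from whichever camera sees it, yielding the colorized point cloud the paper describes.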

Entry type: Conference publication
Published: 2021
Author(s): Oehler, Martin ; Stryk, Oskar von
Type of entry: Bibliography
Title: A Flexible Framework for Virtual Omnidirectional Vision to Improve Operator Situation Awareness
Language: English
Publication date: 22 October 2021
Publisher: IEEE
Book title: 2021 European Conference on Mobile Robots (ECMR): Proceedings
Event title: 5th European Conference on Mobile Robots
Event venue: Virtual conference
Event date: 31.08.2021-03.09.2021
DOI: 10.1109/ECMR50962.2021.9568840

Free keywords: emergenCITY_CPS
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Simulation, Systems Optimization and Robotics
LOEWE
LOEWE > LOEWE Centres
LOEWE > LOEWE Centres > emergenCITY
Date deposited: 30 Nov 2021 14:03
Last modified: 17 Jun 2024 10:27