
Robust, fast and accurate vision-based localization of a cooperative target used for space robotic arm

Wen, Zhuoman ; Wang, Yanjie ; Luo, Jun ; Kuijper, Arjan ; Di, Nan ; Jin, Minghe (2017)
Robust, fast and accurate vision-based localization of a cooperative target used for space robotic arm.
In: Acta Astronautica, 136
doi: 10.1016/j.actaastro.2017.03.008
Article, Bibliography

Abstract

When a space robotic arm deploys a payload, the pose between the cooperative target fixed on the payload and the hand-eye camera installed on the arm is usually calculated in real time. A high-precision, robust visual localization method for cooperative targets is proposed. Combining a circle, a line, and dots as markers, a target that guarantees high detection rates is designed. Given an image, single-pixel-width smooth edges are extracted by a novel linking method. Circles are then quickly detected using isophote curvature. Around each circle, a square boundary whose size is a pre-calculated proportion of the circle radius is set. Within this boundary, the target is identified if a certain number of lines exist. Based on the circle, the lines, and the target foreground and background intensities, the markers are localized. Finally, the target pose is calculated by the Perspective-3-Point (P3P) algorithm. The algorithm processes 8 frames per second with the target distance ranging from 0.3 m to 1.5 m. It generated high-precision poses on more than 97.5% of over 100,000 images, regardless of camera background, target pose, illumination, and motion blur. At 0.3 m, the rotation and translation errors were below 0.015° and 0.2 mm, respectively. The proposed algorithm is well suited for real-time visual measurement requiring high precision in aerospace.
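The circle-detection step relies on isophote curvature: along an isophote (a curve of constant intensity), the curvature magnitude equals the reciprocal of the local circle radius, so circular markers can be estimated directly from image derivatives. A minimal sketch of the curvature formula, assuming a synthetic radially symmetric test image (the function name and grid are illustrative, not taken from the paper):

```python
import numpy as np

def isophote_curvature(img):
    # First- and second-order derivatives via central finite differences.
    # np.gradient returns derivatives along axis 0 (rows, y) then axis 1 (cols, x).
    Ly, Lx = np.gradient(img)
    Lyy, Lyx = np.gradient(Ly)
    Lxy, Lxx = np.gradient(Lx)
    # Isophote curvature: kappa = -(Ly^2 Lxx - 2 Lx Ly Lxy + Lx^2 Lyy) / |grad I|^3
    num = Ly**2 * Lxx - 2.0 * Lx * Ly * Lxy + Lx**2 * Lyy
    den = (Lx**2 + Ly**2) ** 1.5
    with np.errstate(divide="ignore", invalid="ignore"):
        return -num / den

# Synthetic image whose isophotes are concentric circles around (50, 50):
# intensity falls off linearly with distance, so |kappa| = 1/r everywhere.
y, x = np.mgrid[0:101, 0:101]
img = -np.sqrt((x - 50.0) ** 2 + (y - 50.0) ** 2)

kappa = isophote_curvature(img)
r_est = 1.0 / abs(kappa[50, 80])  # sample point lies at radius 30 from the centre
```

On a real camera frame the derivatives would be computed on smoothed intensities, and the per-pixel curvature (and gradient direction) would be accumulated in a voting space to recover circle centres and radii; this sketch only verifies the curvature-radius relation.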

Item type: Article
Published: 2017
Author(s): Wen, Zhuoman ; Wang, Yanjie ; Luo, Jun ; Kuijper, Arjan ; Di, Nan ; Jin, Minghe
Entry type: Bibliography
Title: Robust, fast and accurate vision-based localization of a cooperative target used for space robotic arm
Language: English
Year of publication: July 2017
Journal or series title: Acta Astronautica
Volume: 136
DOI: 10.1016/j.actaastro.2017.03.008
URL / URN: https://doi.org/10.1016/j.actaastro.2017.03.008
Keywords: Edge detection, Marker localization, Robotics applications, Measurements
Department(s)/Field(s): 20 Department of Computer Science
20 Department of Computer Science > Mathematical and Applied Visual Computing
Deposit date: 04 May 2020 12:51
Last modified: 04 May 2020 12:51