Richter, Stephan Randolf (2020)
Visual Perception with Synthetic Data.
Technische Universität Darmstadt
doi: 10.25534/tuprints-00013245
Dissertation, primary publication
Abstract
In recent years, learning-based methods have become the dominant approach to solving computer vision tasks. A major reason for this development is that they adapt automatically to the particularities of the task at hand by learning a model of the problem from (training) data. This approach assumes that the training data closely resemble the data encountered during testing. Successfully applying a learning-based algorithm in a wide range of real-world scenarios thus requires collecting a large set of training data that models the complex phenomena encountered in the real world and covers rare but critical edge cases. For many tasks in computer vision, however, the human effort required severely limits the scale and diversity of datasets.

A promising approach to reducing this human effort is data synthesis, which can automate considerable parts of the collection and annotation process. Employing synthetic data, however, poses unique challenges. First, synthesis is only as useful as methods are able to capitalize on virtually infinite amounts of data and arbitrary precision. Second, synthetic data must be sufficiently realistic to be useful in real-world scenarios, yet modeling real-world phenomena within the synthesis can be even more laborious than collecting and annotating real datasets in the first place.

This dissertation addresses these challenges in two ways. First, we adapt data-driven methods to take advantage of the unique features of synthetic data. Specifically, we develop a method that reconstructs the surface of objects from a single view under uncalibrated illumination. The method estimates the illumination conditions and synthesizes suitable training data at test time, enabling reconstructions of unprecedented detail. Furthermore, we develop a memory-efficient approach to reconstructing complete 3D shapes from a single view. In this way, we leverage the high precision available through 3D CAD models and obtain more accurate and detailed reconstructions than previous approaches.

Second, we tap into computer games to create ground truth for a variety of visual perception tasks. Open-world computer games mimic the real world and offer large diversity paired with high realism. Since the source code of commercial games is not available, we devise a technique to intercept the rendering pipeline during gameplay and use the rendering resources to identify objects in rendered images. As only limited semantic information is available at the level of interception, and manually associating resources with semantic classes remains necessary, we develop a method that speeds up annotation dramatically by recognizing shared resources and automatically propagating annotations across the dataset. Leveraging the geometric information available through the rendering process, we further collect ground truth for optical flow, visual odometry, and 3D scene layout.

Synthesizing data from computer games significantly reduces the human annotation effort and allows creating synthetic datasets that model the real world at unprecedented scale. The ground truth for multiple visual perception tasks enables deeper analysis of current methods and the development of novel approaches that reason about multiple tasks holistically. For both the adaptation of data-driven methods and the datasets derived from computer games, we demonstrate significant performance improvements through quantitative and qualitative evaluations.
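The resource-based annotation propagation described in the abstract is concrete enough to sketch. The Python snippet below is a minimal illustration under stated assumptions, not the dissertation's actual implementation: it assumes each rendered patch can be traced back to the mesh and texture bytes used to draw it, and fingerprints that combination so one manual label covers every patch, past or future, that shares the same resources. The names (`resource_key`, `AnnotationPropagator`), the MD5 fingerprint, and the (mesh, texture) granularity are all hypothetical.

```python
import hashlib
from collections import defaultdict


def resource_key(mesh_bytes: bytes, texture_bytes: bytes) -> str:
    """Fingerprint a (mesh, texture) combination so shared resources
    can be recognized across frames."""
    h = hashlib.md5()
    h.update(mesh_bytes)
    h.update(texture_bytes)
    return h.hexdigest()


class AnnotationPropagator:
    def __init__(self):
        self.labels = {}                  # resource key -> semantic class
        self.pending = defaultdict(list)  # resource key -> unlabeled (frame, patch) sites

    def observe(self, frame_id, patch_id, mesh_bytes, texture_bytes):
        """Record one rendered patch; label it immediately if its
        resource combination has been annotated before."""
        key = resource_key(mesh_bytes, texture_bytes)
        if key in self.labels:
            return self.labels[key]                # propagated automatically
        self.pending[key].append((frame_id, patch_id))
        return None                                # still needs a manual annotation

    def annotate(self, mesh_bytes, texture_bytes, semantic_class):
        """One manual annotation labels every past and future use
        of the same resource combination."""
        key = resource_key(mesh_bytes, texture_bytes)
        self.labels[key] = semantic_class
        return self.pending.pop(key, [])           # sites resolved retroactively


if __name__ == "__main__":
    prop = AnnotationPropagator()
    car_mesh, car_tex = b"mesh-bytes", b"texture-bytes"    # stand-ins for real resources
    prop.observe(0, 3, car_mesh, car_tex)                  # unlabeled at first sight
    sites = prop.annotate(car_mesh, car_tex, "car")        # one manual label ...
    label = prop.observe(1, 7, car_mesh, car_tex)          # ... covers every later use
    print(sites, label)                                    # [(0, 3)] car
```

The retroactive resolution in `annotate` mirrors the effect the abstract describes: annotation effort scales with the number of distinct resources rather than with the number of frames in the dataset.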
Item type: | Dissertation
---|---
Published: | 2020
Author(s): | Richter, Stephan Randolf
Type of entry: | Primary publication
Title: | Visual Perception with Synthetic Data
Language: | English
Referees: | Roth, Prof. Ph.D. Stefan ; Geiger, Prof. Dr. Andreas
Year of publication: | 2020
Place of publication: | Darmstadt
Date of oral examination: | 13 January 2020
DOI: | 10.25534/tuprints-00013245
URL / URN: | https://tuprints.ulb.tu-darmstadt.de/13245
Abstract: | See the abstract above.
URN: | urn:nbn:de:tuda-tuprints-132459
Dewey Decimal Classification (DDC): | 000 Generalities, computer science, information science > 004 Computer science
Department(s): | 20 Department of Computer Science ; 20 Department of Computer Science > Visual Inference
Date deposited: | 10 Aug 2020 07:14
Last modified: | 18 Aug 2020 07:37