
Generative Training for 3D-Retrieval

Grabner, Harald ; Ullrich, Torsten ; Fellner, Dieter W. (2015)
Generative Training for 3D-Retrieval.
GRAPP 2015. Berlin, Germany (March 11-15, 2015)
doi: 10.5220/0005248300970105
Conference publication, Bibliography

Abstract

A digital library for non-textual, multimedia documents can be defined by its functionality: markup, indexing, and retrieval. For textual documents, the techniques and algorithms to perform these tasks are well studied. For non-textual documents, these tasks are open research questions: How to mark up a position on a digitized statue? What is the index of a building? How to search for and query a CAD model? If no additional textual information is available, current approaches cluster, sort, and classify non-textual documents using machine learning techniques, which suffer from a cold-start problem: they either need a manually labeled, sufficiently large training set, or the (automatic) clustering / classification result may not respect semantic similarity. We solve this problem using procedural modeling techniques, which can generate arbitrary training sets without the need for any "real" data. The retrieval process itself can be performed with any method. In this article we describe the histogram of inverted distances in detail and compare it to the salient local visual features method. Both techniques are evaluated on the Princeton Shape Benchmark (Shilane et al., 2004). Furthermore, we improve the retrieval results by means of diffusion processes.
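
To give an intuition for descriptor-based 3D retrieval as summarized above, the following Python sketch computes a simple histogram over inverted, scale-normalized centroid distances of a point-sampled model and ranks a model database by L1 distance between descriptors. All names, the centroid-based formulation, and the L1 ranking are illustrative assumptions only; the actual histogram of inverted distances, the generative training pipeline, and the diffusion-based refinement are defined in the paper itself.

    import numpy as np

    def distance_histogram_descriptor(points, bins=32):
        """Toy global shape descriptor from inverted centroid distances.

        points: (N, 3) array of surface samples of a 3D model.
        NOTE: illustrative assumption, not the exact descriptor of the paper.
        """
        centroid = points.mean(axis=0)
        d = np.linalg.norm(points - centroid, axis=1)
        d = d / d.max()                               # scale normalization
        inv = 1.0 - d                                 # "inverted" distances (assumed)
        hist, _ = np.histogram(inv, bins=bins, range=(0.0, 1.0))
        return hist / hist.sum()                      # normalized descriptor vector

    def rank_by_descriptor(query, database):
        """Rank database descriptors by L1 distance to the query descriptor."""
        dists = [np.abs(query - d).sum() for d in database]
        return np.argsort(dists)                      # indices, most similar first

In such a pipeline, the database descriptors could be computed from procedurally generated training shapes rather than manually labeled "real" models, which is the core idea of the generative training approach described in the abstract.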

Type of entry: Conference publication
Published: 2015
Author(s): Grabner, Harald ; Ullrich, Torsten ; Fellner, Dieter W.
Type of record: Bibliography
Title: Generative Training for 3D-Retrieval
Language: English
Date of publication: March 2015
Publisher: SciTePress
Event title: GRAPP 2015
Event location: Berlin, Germany
Event dates: March 11-15, 2015
DOI: 10.5220/0005248300970105
Uncontrolled keywords: Business Field: Digital society, Research Area: Computer graphics (CG), Research Area: Modeling (MOD), Generative modeling, Procedural modeling, 3D Object retrieval, Machine learning, Content based retrieval
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Interactive Graphics Systems
Date deposited: 08 May 2019 07:44
Last modified: 04 Feb 2022 12:39