
Generative Training for 3D-Retrieval

Grabner, Harald ; Ullrich, Torsten ; Fellner, Dieter W. (2015)
Generative Training for 3D-Retrieval.
GRAPP 2015. Berlin, Germany (11.03.2015-15.03.2015)
doi: 10.5220/0005248300970105
Conference or Workshop Item, Bibliography

Abstract

A digital library for non-textual, multimedia documents can be defined by its functionality: markup, indexing, and retrieval. For textual documents, the techniques and algorithms to perform these tasks are well studied. For non-textual documents, these tasks are open research questions: How to mark up a position on a digitized statue? What is the index of a building? How to search and query for a CAD model? If no additional textual information is available, current approaches cluster, sort, and classify non-textual documents using machine learning techniques, which have a cold start problem: they either need a manually labeled, sufficiently large training set, or the (automatic) clustering / classification result may not respect semantic similarity. We solve this problem using procedural modeling techniques, which can generate arbitrary training sets without the need for any "real" data. The retrieval process itself can be performed with any method. In this article we describe the histogram of inverted distances in detail and compare it to the salient local visual features method. Both techniques are evaluated using the Princeton Shape Benchmark (Shilane et al., 2004). Furthermore, we improve the retrieval results by diffusion processes.
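The abstract states that retrieval results are improved by diffusion processes but does not detail them. The sketch below is not the authors' implementation; it illustrates one common form of such a diffusion, in which a pairwise similarity matrix is row-normalized into a transition matrix and ranking scores are iteratively propagated over the resulting graph. The function name, the damping factor alpha, and the iteration count are illustrative assumptions.

```python
import numpy as np

def diffuse_similarities(W, alpha=0.85, iterations=20):
    """Generic diffusion over a pairwise similarity matrix W (n x n).

    Each column of W is treated as an initial retrieval score vector and is
    smoothed over the row-normalized similarity graph, so objects supported
    by many similar neighbors move up the ranking.
    """
    W = np.asarray(W, dtype=float)
    np.fill_diagonal(W, 0.0)                                   # ignore self-similarity
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)    # row-stochastic transition matrix
    F0 = W.copy()                                              # initial scores = raw similarities
    F = F0.copy()
    for _ in range(iterations):
        F = alpha * P @ F + (1.0 - alpha) * F0                 # propagate, re-inject initial scores
    return F

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.random((5, 5))
    W = (W + W.T) / 2.0                                        # toy symmetric similarity matrix
    scores = diffuse_similarities(W)
    print(scores.argsort(axis=0)[::-1])                        # re-ranked results per query column
```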

Item Type: Conference or Workshop Item
Published: 2015
Creators: Grabner, Harald ; Ullrich, Torsten ; Fellner, Dieter W.
Type of entry: Bibliography
Title: Generative Training for 3D-Retrieval
Language: English
Date: March 2015
Publisher: SciTePress
Event Title: GRAPP 2015
Event Location: Berlin, Germany
Event Dates: 11.03.2015-15.03.2015
DOI: 10.5220/0005248300970105
Uncontrolled Keywords: Business Field: Digital society, Research Area: Computer graphics (CG), Research Area: Modeling (MOD), Generative modeling, Procedural modeling, 3D Object retrieval, Machine learning, Content based retrieval
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Interactive Graphics Systems
Date Deposited: 08 May 2019 07:44
Last Modified: 04 Feb 2022 12:39