Generative Training for 3D-Retrieval

Grabner, Harald and Ullrich, Torsten and Fellner, Dieter W. (2015):
Generative Training for 3D-Retrieval.
In: GRAPP 2015, Berlin, Germany, March 11-15, 2015, pp. 97-105, SciTePress, DOI: 10.5220/0005248300970105,
[Conference or Workshop Item]

Abstract

A digital library for non-textual, multimedia documents can be defined by its functionality: markup, indexing, and retrieval. For textual documents, the techniques and algorithms to perform these tasks are well studied. For non-textual documents, these tasks are open research questions: How does one mark up a position on a digitized statue? What is the index of a building? How does one search and query for a CAD model? If no additional textual information is available, current approaches cluster, sort, and classify non-textual documents using machine learning techniques, which suffer from a cold start problem: they either need a manually labeled, sufficiently large training set, or the (automatic) clustering / classification result may not respect semantic similarity. We solve this problem using procedural modeling techniques, which can generate arbitrary training sets without the need for any "real" data. The retrieval process itself can be performed with any method. In this article we describe the histogram of inverted distances in detail and compare it to the salient local visual features method. Both techniques are evaluated using the Princeton Shape Benchmark (Shilane et al., 2004). Furthermore, we improve the retrieval results by means of diffusion processes.

Item Type: Conference or Workshop Item
Published: 2015
Creators: Grabner, Harald and Ullrich, Torsten and Fellner, Dieter W.
Title: Generative Training for 3D-Retrieval
Language: English
Publisher: SciTePress
Uncontrolled Keywords: Business Field: Digital society, Research Area: Computer graphics (CG), Research Area: Modeling (MOD), Generative modeling, Procedural modeling, 3D Object retrieval, Machine learning, Content based retrieval
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Interactive Graphics Systems
Event Title: GRAPP 2015
Event Location: Berlin, Germany
Event Dates: March 11-15, 2015
Date Deposited: 08 May 2019 07:44
DOI: 10.5220/0005248300970105