A Post Processing Technique to Automatically Remove Floater Artifacts in Neural Radiance Fields

Wirth, T.; Rak, A.; Knauthe, V.; Fellner, D. W. (2024)
A Post Processing Technique to Automatically Remove Floater Artifacts in Neural Radiance Fields.
In: Computer Graphics Forum, 2023, 42 (7)
doi: 10.26083/tuprints-00027236
Article, secondary publication, publisher's version

Warning: A newer version of this entry is available.

Abstract

Neural Radiance Fields have revolutionized Novel View Synthesis by providing impressive levels of realism. However, in most in-the-wild scenes they suffer from floater artifacts that occur due to sparse input images or strong view-dependent effects. We propose an approach that uses neighborhood-based clustering and a consistency metric on NeRF models trained on different scene scales to identify regions that contain floater artifacts, based on Instant-NGP's multiscale occupancy grids. These occupancy grids store the positions of relevant optical densities in the scene. By pruning the regions identified as containing floater artifacts, they are omitted during the rendering process, leading to higher-quality rendered images. Our approach has no negative runtime implications for the rendering process and does not require retraining of the underlying Multi-Layer Perceptron. We show qualitatively that our approach is suited to removing floater artifacts while preserving most of the scene's relevant geometry. Furthermore, we conduct a comparison to state-of-the-art techniques on the Nerfbusters dataset, which was created with measuring the implications of floater artifacts in mind. This comparison shows that our method outperforms currently available techniques. Our approach does not require additional user input, but it can be used in an interactive manner. In general, the presented approach is applicable to every architecture that uses an explicit representation of a scene's occupancy distribution to accelerate the rendering process.
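
The abstract only outlines the pipeline at a high level. As a purely illustrative aid, the following Python sketch shows one way such occupancy-grid pruning could be organized; the function name prune_floater_clusters, the use of connected-component labelling as the neighborhood-based clustering, the thresholds, and the particular cross-scale consistency score are assumptions made for this sketch and are not taken from the paper, whose implementation operates on Instant-NGP's cascaded occupancy grids.

import numpy as np
from scipy import ndimage

def prune_floater_clusters(grids, min_consistency=0.5, min_cluster_size=8):
    # Illustrative sketch only; names and thresholds are assumptions, not the
    # paper's implementation. 'grids' is a list of boolean 3D occupancy grids
    # of equal resolution, one per scene scale, where True marks cells with
    # relevant optical density. Returns a pruned copy of the finest-scale grid.
    fine = grids[0].copy()
    # Neighborhood-based clustering: group occupied cells into connected components.
    labels, n_clusters = ndimage.label(fine)
    other_scales = grids[1:]
    for cluster_id in range(1, n_clusters + 1):
        mask = labels == cluster_id
        # Assumed consistency metric: fraction of this cluster's cells that are
        # also marked occupied in the grids trained on the other scene scales.
        if other_scales:
            consistency = float(np.mean([g[mask].mean() for g in other_scales]))
        else:
            consistency = 1.0
        # Small or cross-scale-inconsistent clusters are treated as floaters and
        # pruned, so the renderer simply skips them; the MLP is never retrained.
        if mask.sum() < min_cluster_size or consistency < min_consistency:
            fine[mask] = False
    return fine

A renderer that already consults such an occupancy grid to skip empty space can use the pruned grid directly, which matches the abstract's claim that the method adds no rendering overhead and needs no retraining.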

Item type: Article
Published: 2024
Author(s): Wirth, T.; Rak, A.; Knauthe, V.; Fellner, D. W.
Type of entry: Secondary publication
Title: A Post Processing Technique to Automatically Remove Floater Artifacts in Neural Radiance Fields
Language: English
Publication date: 27 May 2024
Place of publication: Darmstadt
Date of first publication: October 2023
Place of first publication: Oxford
Publisher: Wiley-Blackwell
Journal or series title: Computer Graphics Forum
Volume: 42
Issue number: 7
Collation: 12 pages
DOI: 10.26083/tuprints-00027236
URL / URN: https://tuprints.ulb.tu-darmstadt.de/27236
Origin: Secondary publication via DeepGreen
ID number: Article ID e14977
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-272362
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science
Department(s)/section(s): 20 Department of Computer Science
20 Department of Computer Science > Interactive Graphics Systems
20 Department of Computer Science > Fraunhofer IGD
Date deposited: 27 May 2024 12:55
Last modified: 03 Jun 2024 11:56