
Reconstruction of Micropattern Detector Signals using Convolutional Neural Networks

Flekova, L. ; Schott, M. (2017)
Reconstruction of Micropattern Detector Signals using Convolutional Neural Networks.
In: Journal of Physics: Conference Series, 898 (3)
doi: 10.1088/1742-6596/898/3/032054
Article, Bibliography

This is the latest version of this entry.

Abstract

Micropattern gaseous detector (MPGD) technologies, such as GEMs or MicroMegas, are particularly well suited for precision tracking and triggering in high-rate environments. Given their relatively low production costs, MPGDs are a prime candidate for the next generation of particle detectors. In light of these advantages, both the ATLAS and CMS collaborations at the LHC are adopting these technologies in their detector upgrade programs over the coming years. When MPGDs are used for triggering, the measured signals must be precisely reconstructed in less than 200 ns, which can be achieved by using FPGAs.

In this work, we present a novel approach to identifying reconstructed signals, their timing, and the corresponding spatial position on the detector. In particular, we study the effect of noise and dead readout strips on the reconstruction performance. Our approach leverages convolutional neural networks (CNNs), which have recently demonstrated outstanding performance on a range of modeling tasks. The proposed CNN architecture is kept simple enough that it can be implemented directly on an FPGA and thus provide precise information on reconstructed signals already at trigger level.
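To make the described setup concrete, the following is a minimal sketch (not the authors' published architecture) of a small 1D CNN that maps the charge profile across detector readout strips to a hit-position and timing estimate. The class name StripCNN, the layer sizes, and the strip count of 64 are illustrative assumptions; the sketch uses PyTorch, whereas an FPGA deployment would reimplement the same layers in fixed-point logic.

import torch
import torch.nn as nn

class StripCNN(nn.Module):
    """Illustrative small CNN: charge per strip -> (position, time)."""
    def __init__(self, n_strips: int = 64):
        super().__init__()
        # One input channel: the measured charge (ADC counts) per strip.
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Two regression outputs: hit position (strip units) and time.
        self.head = nn.Linear(8 * n_strips, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_strips); noisy or dead strips appear as
        # distorted or zeroed entries in the charge profile.
        return self.head(self.features(x).flatten(1))

model = StripCNN()
charges = torch.randn(4, 1, 64)   # dummy charge profiles
position_time = model(charges)    # shape (4, 2): [position, time]

A design note in the spirit of the abstract: restricting the network to two small convolutions and one linear layer bounds the multiply-accumulate count, which is what makes a direct FPGA implementation within the trigger latency budget plausible.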

Item type: Article
Published: 2017
Author(s): Flekova, L. ; Schott, M.
Type of entry: Bibliography
Title: Reconstruction of Micropattern Detector Signals using Convolutional Neural Networks
Language: English
Year of publication: 2017
Place of publication: Bristol
Publisher: IOP Publishing
Journal, newspaper, or series title: Journal of Physics: Conference Series
Volume of the journal: 898
(Issue) number: 3
Collation: 6 pages
DOI: 10.1088/1742-6596/898/3/032054

ID number: Article ID: 032054
Additional information:

Track 1: Online Computing

Dewey Decimal Classification (DDC) subject group: 000 Generalities, computer science, information science > 004 Computer science
500 Natural sciences and mathematics > 530 Physics
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 17 May 2024 09:38
Last modified: 17 May 2024 09:38
