Improving Wearable-Based Activity Recognition Using Image Representations

Sanchez Guinea, Alejandro ; Sarabchian, Mehran ; Mühlhäuser, Max (2022)
Improving Wearable-Based Activity Recognition Using Image Representations.
In: Sensors, 22 (5)
doi: 10.3390/s22051840
Article, Bibliography

This is the latest version of this item.

Abstract

Activity recognition based on inertial sensors is an essential task in mobile and ubiquitous computing. To date, the best performing approaches in this task are based on deep learning models. Although the performance of these approaches has been improving steadily, a number of issues still remain. Specifically, in this paper we focus on the dependence of today’s state-of-the-art approaches on complex ad hoc deep learning convolutional neural networks (CNNs), recurrent neural networks (RNNs), or combinations of both, which require specialized knowledge and considerable effort to construct and tune optimally. To address this issue, we propose an approach that automatically transforms inertial sensor time-series data into images that represent, in pixel form, patterns found over time, allowing even a simple CNN to outperform complex ad hoc deep learning models that combine RNNs and CNNs for activity recognition. We conducted an extensive evaluation on seven benchmark datasets that are among the most relevant in activity recognition. Our results demonstrate that our approach outperforms the state of the art in all cases, based on image representations generated through a process that is easy to implement, modify, and extend further, without the need to develop complex deep learning models.
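
This record contains only the abstract, so the paper's exact image encoding and network architecture are not reproduced here. The sketch below is only an illustrative assumption of the kind of pipeline the abstract describes: a hypothetical window_to_image helper that writes each min-max-scaled, time-resampled sensor channel into its own band of pixel rows, followed by a deliberately small two-layer CNN (SimpleCNN, also a hypothetical name). Window length, sampling rate, image size, and the number of activity classes are arbitrary placeholders, not values taken from the paper.

```python
# Illustrative sketch only: the encoding and CNN below are assumptions,
# not the method published in the paper.
import numpy as np
import torch
import torch.nn as nn

def window_to_image(window: np.ndarray, size: int = 64) -> np.ndarray:
    """Encode a (timesteps, channels) inertial window as a size x size
    grayscale image: each channel is min-max scaled, resampled to `size`
    time steps, and written into its own horizontal band of rows."""
    t, c = window.shape
    rows_per_channel = size // c
    img = np.zeros((size, size), dtype=np.float32)
    xs = np.linspace(0, t - 1, size)
    for ch in range(c):
        col = window[:, ch]
        span = col.max() - col.min()
        scaled = (col - col.min()) / (span + 1e-8)        # values in [0, 1]
        resampled = np.interp(xs, np.arange(t), scaled)   # resample along time
        img[ch * rows_per_channel:(ch + 1) * rows_per_channel, :] = resampled
    return img

class SimpleCNN(nn.Module):
    """A deliberately small CNN, in the spirit of 'even a simple CNN'."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: one 2 s window of 3-axis accelerometer data sampled at 50 Hz,
# encoded as an image and classified into 6 hypothetical activity classes.
window = np.random.randn(100, 3)
image = torch.from_numpy(window_to_image(window)).unsqueeze(0).unsqueeze(0)
logits = SimpleCNN(num_classes=6)(image)
print(logits.shape)  # torch.Size([1, 6])
```

Any of the established time-series-to-image encodings (for instance recurrence plots or Gramian angular fields) could be substituted for the helper above; the sketch is meant only to convey the overall shape of an encode-then-classify pipeline, not the specific representation evaluated in the paper.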

Item type: Article
Published: 2022
Author(s): Sanchez Guinea, Alejandro ; Sarabchian, Mehran ; Mühlhäuser, Max
Type of entry: Bibliography
Title: Improving Wearable-Based Activity Recognition Using Image Representations
Language: English
Year of publication: 2022
Journal or publication title: Sensors
Volume of a journal: 22
Issue number: 5
DOI: 10.3390/s22051840
URL / URN: https://www.mdpi.com/1424-8220/22/5/1840

Uncontrolled keywords: emergenCITY_INF
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Telecooperation
LOEWE
LOEWE > LOEWE centres
LOEWE > LOEWE centres > emergenCITY
Date deposited: 07 Sep 2022 08:05
Last modified: 03 Jul 2024 02:58
PPN: 498974014