Fu, Biying ; Kirchbuchner, Florian ; Kuijper, Arjan (2020)
Data augmentation for time series: traditional vs generative models on capacitive proximity time series.
13th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA'20). Corfu, Greece (30.06.2020-03.07.2020)
doi: 10.1145/3389189.3392606
Conference publication, Bibliography
Abstract
Supervised, data-driven modelling typically requires large quantities and a high diversity of labeled training data: the data distribution should be rich enough to support the generalizability of the trained end-to-end inference model. In practice this is hindered by limited labeled data and an expensive collection process, particularly for human activity recognition tasks, which require extensive manual labeling. Data augmentation is therefore a widely used regularization method in deep learning; it is commonly applied to image data to increase classification accuracy, but it is less researched for time series. In this paper, we investigate data augmentation for continuous capacitive time series, using exercise recognition as an example. We show that traditional data augmentation can enrich the source distribution and thus make the trained inference model generalize better, increasing the recognition performance on unseen target data by around 21.4 percentage points compared to an inference model trained without augmentation. Generative models such as the variational autoencoder and the conditional variational autoencoder can further reduce the variance on the target data.
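To illustrate the kind of traditional time-series augmentation the abstract refers to, the sketch below applies jittering, scaling, and time-warping to a single 1-D channel. The function names, parameter values, and synthetic signal are illustrative assumptions for this record, not the specific augmentations or settings reported in the paper.

```python
import numpy as np

def jitter(x, sigma=0.03, rng=None):
    """Add small Gaussian noise to every sample (illustrative parameters)."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=None):
    """Multiply the whole series by a random factor drawn around 1."""
    rng = rng or np.random.default_rng()
    return x * rng.normal(1.0, sigma)

def time_warp(x, knots=4, sigma=0.2, rng=None):
    """Warp the time axis with a smooth random curve, then resample by linear interpolation."""
    rng = rng or np.random.default_rng()
    n = len(x)
    # random, positive warp factors at a few anchor points, interpolated to every time step
    anchors = np.linspace(0, n - 1, knots + 2)
    factors = np.clip(rng.normal(1.0, sigma, size=knots + 2), 0.5, 1.5)
    warp = np.interp(np.arange(n), anchors, factors)
    cum = np.cumsum(warp)
    cum = cum / cum[-1] * (n - 1)  # rescale warped indices back to the original range
    return np.interp(np.arange(n), cum, x)

# usage: augment one synthetic, capacitive-like channel
t = np.linspace(0, 4 * np.pi, 256)
signal = np.sin(t) + 0.1 * np.sin(5 * t)
augmented = [jitter(signal), scale(signal), time_warp(signal)]
```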
Type of entry: | Conference publication |
---|---|
Published: | 2020 |
Author(s): | Fu, Biying ; Kirchbuchner, Florian ; Kuijper, Arjan |
Entry type: | Bibliography |
Title: | Data augmentation for time series: traditional vs generative models on capacitive proximity time series |
Language: | English |
Year of publication: | 2020 |
Place of publication: | New York, NY, United States |
Publisher: | ACM |
Book title: | PETRA '20: Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments |
Event title: | 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA'20) |
Event location: | Corfu, Greece |
Event date: | 30.06.2020-03.07.2020 |
DOI: | 10.1145/3389189.3392606 |
Free keywords: | Activity recognition, Ambient intelligence (AmI), Sensor data exploration |
Division(s)/Field(s): | 20 Department of Computer Science; 20 Department of Computer Science > Mathematical and Applied Visual Computing |
Date deposited: | 26 Oct 2020 12:16 |
Last modified: | 05 Jul 2024 07:29 |