
Multimodal Classification of Audiovisual Content

da Silva Santos, Pedro Bispo (2021)
Multimodal Classification of Audiovisual Content.
Technische Universität Darmstadt
doi: 10.26083/tuprints-00018590
Dissertation, primary publication, publisher's version

Abstract

This thesis is concerned with multimodal machine learning for the digital humanities. Multimodal machine learning integrates vision, speech, and language to solve tasks such as sentiment analysis, emotion recognition, personality recognition, and deceptive-behaviour detection. These tasks benefit from additional modalities because human communication is multimodal by nature. The intersection of the humanities with computational methods defines the so-called digital humanities, i.e., the subset of the humanities and social sciences that leverages digital methods to conduct research. This thesis supports the claim that using audiovisual modalities when training computational models in the digital humanities can improve performance on any labour-intensive task in which annotators use audiovisual sources of information to annotate the data. We hypothesise that audiovisual content studied in areas of the humanities and social sciences such as psychology, pedagogy, and communication sciences can be explained and categorised by audiovisual processing techniques. These techniques can increase the productivity of humanities and social sciences researchers by bootstrapping their analyses with machine learning and allowing their research to scale to much larger amounts of data. Beyond that, these methods could also underpin more socially aware virtual agents; such technology enables more sophisticated human-computer interaction, which can enrich the user experience of commercial applications. Problems tackled by natural language processing techniques sometimes hit an upper bound due to the limits of the knowledge present in textual information. Humans use prosody to convey meaning, so machine learning models that predict the sentiment of transcribed speech can lose much information when dealing solely with the text modality. Persuasiveness prediction is another good example, since factors beyond argumentation, such as prosody, visual appearance, and body language, can persuade people. Previous work in opinion mining and persuasiveness prediction has shown that multimodal approaches are quite successful at combining multiple modalities. However, textual transcripts and visual information might not be available due to technical constraints, so one may ask how accurately machine learning models can predict people's opinions using prosodic information alone. Moreover, most work in computational paralinguistics relies on cumbersome feature-engineering approaches, so another question is whether domain-agnostic methods work in this field. Our results show that a simple recurrent neural architecture trained on Mel-frequency cepstral coefficients (MFCCs) can predict speakers' opinions. Speech is not the only channel besides text that signals critical information; the visual channel is also significant. Humans display a wide range of facial expressions, which can be treated as cues under Brunswik's Lens Model. Researchers in the humanities and social sciences try to understand how relevant those signals are by manually annotating information present in the facial expressions of the subjects under analysis. However, such annotation is very time-consuming and prone to human error due to fatigue or lack of training. We show that low- and high-level features automatically extracted with recent computer vision methods can explain visual data studied by researchers in the humanities and social sciences, especially in areas like pedagogy and communication sciences.
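The opinion-from-prosody result mentioned above can be made concrete with a short sketch. The following is a minimal illustration, not the code from the thesis: it assumes librosa for MFCC extraction and PyTorch for a single-layer GRU classifier, and the audio file name, feature sizes, and hyperparameters are invented for the example.

    # Minimal sketch (illustrative, not the thesis code): a recurrent
    # classifier over MFCC frames for speaker-opinion prediction.
    import librosa
    import torch
    import torch.nn as nn

    def mfcc_frames(wav_path, n_mfcc=13, sr=16000):
        """Load audio and return a (time, n_mfcc) tensor of MFCC frames."""
        y, _ = librosa.load(wav_path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, time)
        return torch.tensor(mfcc.T, dtype=torch.float32)        # (time, n_mfcc)

    class OpinionGRU(nn.Module):
        """Single-layer GRU; the last hidden state feeds a linear classifier."""
        def __init__(self, n_mfcc=13, hidden=64, n_classes=2):
            super().__init__()
            self.gru = nn.GRU(n_mfcc, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):        # x: (batch, time, n_mfcc)
            _, h = self.gru(x)       # h: (1, batch, hidden)
            return self.head(h[-1])  # (batch, n_classes)

    model = OpinionGRU()
    feats = mfcc_frames("speech_sample.wav").unsqueeze(0)  # hypothetical file
    logits = model(feats)  # train with nn.CrossEntropyLoss on opinion labels

Here the final hidden state summarises the prosodic contour of the whole utterance, which is all a frame-level sequence classifier of this kind needs.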
We also demonstrate that an end-to-end approach can automatically predict the psychological construct of intrinsic motivation. Another problem widely studied in political science is understanding the persuasive factors in speeches and debates. For instance, Nagel et al. (2012) evaluated which features in all three modalities (text, speech, and vision) shaped the audience's impression during the national election debate between Angela Merkel and Gerhard Schröder. However, no previous work in the literature presents an automated approach to predicting the impression a politician makes during a debate. Our results reveal that high-level features extracted automatically in a multimodal approach can indicate which elements of political communication mould an audience's impression, and that they are also useful for training machine learning models to predict it. We ran the experiments in this thesis on data from psychology, pedagogy, and communication science research, providing empirical evidence for the hypothesis that audiovisual content from the humanities and social sciences can be explained and automatically classified by audiovisual processing methods. This thesis presents new applications of multimodal machine learning in the digital humanities, presents different ways of modelling the tasks, and reinforces the well-known issue of fairness in artificial intelligence. In conclusion, it strengthens the notion that audiovisual modalities are primary communication channels that should be carefully analysed and explored in multimodal machine learning for the digital humanities.
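The multimodal setting described above can likewise be sketched as feature-level fusion: one feature vector per modality, concatenated and classified jointly. This is a hedged illustration rather than the thesis pipeline; the feature dimensions, the two-class output, and the hidden-layer size are assumptions made for the example.

    # Feature-level fusion sketch (illustrative only): concatenate text,
    # speech, and vision feature vectors and classify the fused vector.
    import torch
    import torch.nn as nn

    class FusionClassifier(nn.Module):
        def __init__(self, text_dim=300, speech_dim=88, vision_dim=512,
                     n_classes=2):
            # All dimensions are hypothetical stand-ins for per-modality
            # features (e.g., text embeddings, prosodic and facial descriptors).
            super().__init__()
            fused_dim = text_dim + speech_dim + vision_dim
            self.head = nn.Sequential(
                nn.Linear(fused_dim, 128),
                nn.ReLU(),
                nn.Linear(128, n_classes),
            )

        def forward(self, text_feat, speech_feat, vision_feat):
            fused = torch.cat([text_feat, speech_feat, vision_feat], dim=-1)
            return self.head(fused)

    model = FusionClassifier()
    logits = model(torch.randn(1, 300), torch.randn(1, 88), torch.randn(1, 512))

Concatenation is only the simplest fusion strategy; whether early, late, or hybrid fusion works best remains an empirical question for each task.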

Item type: Dissertation
Published: 2021
Author(s): da Silva Santos, Pedro Bispo
Type of entry: Primary publication
Title: Multimodal Classification of Audiovisual Content
Language: English
Referees: Gurevych, Prof. Dr. Iryna ; Maurer, Prof. Dr. Marcus ; Mihalcea, Prof. Dr. Rada
Year of publication: 2021
Place of publication: Darmstadt
Collation: xii, 156 pages
Date of oral examination: 8 March 2021
DOI: 10.26083/tuprints-00018590
URL / URN: https://tuprints.ulb.tu-darmstadt.de/18590

Alternative or translated abstract:

This dissertation addresses the possibilities of multimodal machine learning in the field of the digital humanities. Multimodal machine learning is characterised by the integrated use of image, sound, and text to solve specific tasks; typical examples are sentiment analysis, emotion and personality recognition, and the detection of deceptive behaviour. Including several modalities benefits these tasks not least because human communication is multimodal per se. The digital humanities form the interface between the humanities and digital technology: a subfield of the humanities and social sciences in which digital procedures and computational methods occupy a central place in the research process. The assumption underlying this work is that better task performance can be achieved in the digital humanities when the various modalities are included, provided that the annotators draw on audiovisual sources of information for data analysis. Behind this stands the conviction that the audiovisual content examined in various humanities and social science disciplines (e.g., psychology, pedagogy, communication sciences) can be better explained and categorised with the help of audiovisual processing technologies. The corresponding technical procedures allow researchers in the humanities and social sciences to become more productive, because machine learning techniques make it easier to bootstrap the research process and to extend the analysis to larger amounts of data. In addition, such methods make it possible to implement more socially aware virtual agents; this use of technology yields a more elaborate interaction between human and computer, which in commercial applications often leads to a better user experience. Questions treated with computational-linguistic methods regularly reach their limits because the knowledge expressed in texts is limited: people also rely on prosody to convey meaning. Essential information is therefore lost when machine learning models attempt to predict the sentiment present in transcribed speech on the basis of the text modality alone. Another pertinent example is persuasiveness prediction, since for humans factors beyond pure argumentation are relevant here. Earlier studies on opinion mining and work on persuasiveness prediction show that multimodal approaches have a higher probability of success because they link several modalities. However, technical constraints may prevent visual information from being available in addition to textual transcripts. In that case the question arises how well machine learning models can predict a person's opinion when only prosodic information is available. The majority of existing studies in computational paralinguistics fall back on feature-engineered approaches, which are very complex.
The question is therefore to what extent domain-independent methods are suitable for such tasks. The results of the analyses carried out show that it is possible to predict speakers' opinions when a simple recurrent neural architecture is combined with training on Mel-frequency cepstral coefficients. Besides the text and speech channels, the visual channel also plays a decisive role in conveying critical information. Through it, people can transmit a variety of expressions, which can be incorporated into the analysis, for example via Brunswik's lens model. Researchers in the humanities and social sciences seek to capture the relevance of these signals by observing the facial expressions of the subjects under study and manually recording the information they contain. This procedure, however, is not only very time-consuming but also prone to human error caused by fatigue or a lack of training. This work therefore shows how low- and high-level features extracted automatically with current computer vision methods can be used in humanities and social science research, for instance in pedagogy or the communication sciences. There is also clear evidence that an end-to-end approach permits automatic prediction of the psychological construct of intrinsic motivation. Finally, a task important for many questions in political science is to determine which factors lend speeches and debates their persuasive power. For example, Nagel et al. (2012) examined, second by second, how the characteristics of the three modalities text, speech, and image shaped the audience's perception of the televised debate between Angela Merkel and Gerhard Schröder. Until now, however, no automated procedure had been used to predict the impression that arises in the audience in the course of a debate. On the basis of the results achieved in this work, it can be said that high-level multimodal features captured automatically reveal which factors of political communication determine the audience's impression. They also prove helpful for training machine learning models that can then deliver an automatic prediction of that impression. The experiments carried out in this work use data from the disciplines of psychology, pedagogy, and communication science to gather empirical evidence for the hypothesis defined above. Overall, the empirical findings indicate that audiovisual content from the humanities and social sciences can be better explained with audiovisual analysis procedures and that automatic classification is possible. The work also discusses innovative applications of multimodal machine learning within the digital humanities, including different forms of task modelling and approaches to the well-known fairness problem of artificial intelligence.
It has been confirmed that audiovisual modalities are central channels of communication, which is why, in the context of the digital humanities, they should be analysed in detail with the help of multimodal machine learning and integrated into interpretation.

German

This doctoral thesis deals with multimodal machine learning techniques for the field of the digital humanities. Multimodal machine learning focuses on integrating the three channels of communication: the visual channel, the vocal channel, and the verbal channel. These techniques have already been applied to problems such as sentiment analysis, emotion recognition, personality identification, and the detection of deceptive behaviour. The use of additional modalities has benefited these tasks because human communication is multimodal by nature. The intersection of the humanities with computational methods is what defines the discipline of digital humanities. Accordingly, one claim supported by this thesis is that any digital humanities task in which the annotators have audiovisual sources of information at their disposal to annotate the samples under analysis can benefit from using these additional modalities to train the corresponding computational models. The hypothesis raised in this thesis is that audiovisual content analysed and studied in certain areas of the humanities, such as psychology, pedagogy, and communication sciences, can be explained and categorised by means of audiovisual processing techniques. These techniques can increase the productivity of researchers in these areas by automatically bootstrapping, with machine learning, the manual analysis they usually perform, thus allowing the amount of data analysed in their research to scale. In addition, these techniques can be used to implement more sociable virtual agents, which enables better communication with humans and makes this type of interaction more natural. Certain problems in natural language processing face a limitation, given that most methods explore only information that can be extracted from textual sources. Humans use prosody to convey the meaning of the message they wish to transmit, so machine learning models that try to predict the sentiment present in texts transcribed from a speech or dialogue tend to lose much information when only the textual modality is analysed. Another example where this can happen is the automatic classification of persuasiveness, given that people are persuaded by factors that go beyond argumentation, such as prosody, body language, and visual appearance. Work on opinion mining and persuasion classification shows that multimodal approaches succeed by combining multiple modalities. However, textual transcripts and visual information may be unavailable due to technical problems, so the question that arises is how accurate these machine learning models are when using only prosodic information. Most of the literature on computational paralinguistics relies heavily on feature-engineering approaches, so another question is whether domain-agnostic approaches actually work in this application area. The results (Chapter 3) show that a simple recurrent neural network architecture trained on Mel-frequency cepstral coefficients can automatically classify speakers' opinions.
Speech is not the only significant channel of information besides the textual one; the visual channel is also highly relevant. Humans can produce a variety of facial expressions, and these expressions can be regarded as cues within Brunswik's Lens Model. Researchers in the humanities try to understand how important these signals are by manually annotating information present in the facial expressions of the individuals under analysis. However, these activities are very time-consuming and prone to human error due to fatigue and a lack of adequate training. In this thesis we show that low- and high-level features extracted with computer vision methods can explain visual data from researchers in certain areas of the humanities, such as pedagogy (Chapter 4) and communication sciences (Chapter 5). Furthermore, we also demonstrate that the psychological construct of intrinsic motivation can be detected automatically with an end-to-end approach. Another problem widely studied in political science is understanding the persuasive factors employed in speeches and debates. Nagel et al. (2012), for example, evaluated, for each second of the debate between Angela Merkel and Gerhard Schröder, which features of the three modalities (text, speech, and vision) were forming the impression of the audience watching the debate. However, an automatic approach to predicting the impression made on the audience during a debate had not been explored until now. The results show that automatically extracted high-level multimodal features can indicate which elements of political communication form an audience's impression, besides being useful for training machine learning models to predict that impression automatically. The experiments in this thesis were run on data from research projects in psychology, pedagogy, and communication sciences. In short, we provide empirical evidence that audiovisual content from the humanities can be explained and automatically classified by means of audiovisual processing techniques. This thesis presents new applications of multimodal machine learning in the context of the digital humanities, presenting different ways of modelling the tasks, besides reinforcing the already well-known problem of fairness in artificial intelligence. Audiovisual modalities are essential communication channels that should be carefully analysed and explored in multimodal machine learning for the digital humanities.

Portuguese
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-185904
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science
Department(s)/area(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 28 Jul 2021 08:16
Last modified: 03 Aug 2021 06:59