
Leveraging Lexical-Semantic Knowledge for Text Classification Tasks

Flekova, Lucie (2017)
Leveraging Lexical-Semantic Knowledge for Text Classification Tasks.
Technische Universität Darmstadt
Dissertation, primary publication

Abstract

This dissertation is concerned with the applicability of knowledge contained in lexical-semantic resources to text classification tasks. Lexical-semantic resources aim at systematically encoding various types of information about the meaning of words and their relations. Text classification is the task of sorting a set of documents into categories from a predefined set, for example, “spam” and “not spam”. With the increasing amount of digitized text, as well as the increased availability of computing power, techniques to automate text classification have attracted booming interest. Early techniques classified documents using a set of rules manually defined by experts, e.g., computational linguists. The rise of big data led to the increased popularity of the distributional hypothesis, i.e., “the meaning of a word comes from its context”, and to the criticism of lexical-semantic resources as too academic for real-world NLP applications. For a long time, it was assumed that lexical-semantic knowledge would not lead to better classification results, as the meaning of every word can be learned directly from the document itself. In this thesis, we show that this assumption is not valid as a general statement and present several approaches in which lexicon-based knowledge leads to better results. Moreover, we show why these improved results can be expected.
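
To make the task concrete, the following is a minimal, purely illustrative sketch of a bag-of-words classifier for the “spam” / “not spam” example above, using scikit-learn on toy stand-in data; it is not the pipeline developed in the thesis:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data; any labeled document collection would do.
docs = ["win a free prize now", "meeting rescheduled to Monday",
        "free money, click this link", "are we still on for lunch?"]
labels = ["spam", "not spam", "spam", "not spam"]

# Bag-of-words features followed by a linear classifier.
classifier = make_pipeline(CountVectorizer(), LogisticRegression())
classifier.fit(docs, labels)
print(classifier.predict(["claim your free prize"]))  # -> ['spam']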

One of the first problems in natural language processing is the lexical-semantic ambiguity. In text classification tasks, the ambiguity problem has often been neglected. For example, to classify a topic of a document containing the word 'bank', we don’t need to explicitly disambiguate it, if we find the word 'river' or 'finance'. However, such additional word may not be always present. Conveniently, lexical-semantic resources typically enumerate all senses of a word, letting us choose which word sense is the most plausible in our context. What if we use the knowledge-based sense disambiguation methods in addition to the information provided implicitly by the word context in the document? In this thesis, we evaluate the performance of selected resource-based word sense disambiguation algorithms on a range of document classification tasks (Chapter 3). We note that the lexicographic sense distinctions provided by the lexical-semantic resources are not always optimal for every text classification task, and propose an alternative technique for disambiguation of word meaning in its context for sentiment analysis applications.
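
As an illustration of such knowledge-based disambiguation (a hedged sketch, not the exact algorithms evaluated in Chapter 3), the simplified Lesk algorithm shipped with NLTK selects the WordNet sense whose dictionary gloss overlaps most with the surrounding context:

from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

# WordNet enumerates the candidate senses of the ambiguous word.
context = "I deposited my paycheck at the bank".split()
for synset in wn.synsets("bank", pos=wn.NOUN):
    print(synset.name(), "-", synset.definition())

# Simplified Lesk picks the sense whose gloss best overlaps the context;
# being overlap-based, it is noisy and may still err without clue words.
sense = lesk(context, "bank", pos=wn.NOUN)
print(sense.name(), "-", sense.definition())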

The second problem in text classification, and natural language processing in general, is synonymy. The words used in training documents represent only a tiny fraction of the total possible vocabulary. If we learn individual words, or senses, as features in the classification model, our system will not be able to interpret paraphrases, where the same meaning is conveyed using different expressions. How much would classification performance improve if the system could determine that two very different words represent the same meaning? In this thesis, we propose to address the synonymy problem by automatically enriching the training and testing data with conceptual annotations accessible through lexical-semantic resources (Chapter 4). We show that such conceptual information (“supersenses”), in combination with the previous word sense disambiguation step, helps to build more robust classifiers and improves classification performance on multiple tasks (Chapter 5). We further circumvent the sense disambiguation step by training a supersense tagging model directly. Previous evidence suggests that the sense distinctions of expert lexical-semantic resources are far more fine-grained than what downstream NLP applications need, and by disambiguating concepts directly at the supersense level (e.g., “is the 'duck' an animal or a food?” rather than choosing between its eight WordNet senses), we can reduce the number of errors.
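
WordNet's lexicographer files provide exactly such coarse conceptual labels, accessible per sense; a minimal sketch with NLTK (the supersense tagger itself is trained on top of such labels, so this snippet only shows where the labels come from):

from nltk.corpus import wordnet as wn

# Each WordNet synset carries a coarse "supersense" label, its
# lexicographer file name, e.g. noun.animal or noun.food.
for synset in wn.synsets("duck", pos=wn.NOUN):
    print(synset.name(), "->", synset.lexname())
# Among the listed senses, the waterbird maps to noun.animal and the
# meat sense maps to noun.food, which is the distinction needed above.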

The third problem in text classification is the curse of dimensionality. We want to know not only whether each single word predicts a certain document class, but which combinations of words predict it and which do not. Our need for training data thus grows exponentially with the number of words monitored. Several techniques for dimensionality reduction have been proposed, most recently representation learning, which produces continuous word representations in a dense vector space, also known as word embeddings. However, these vectors are again produced at the ambiguous word level, and the valuable information about possible distinct senses of the same word is lost in favor of the most frequent one(s). In this thesis, we explore whether, and how, we can use lexical-semantic resources to regain the sense-level notion of semantic relatedness while operating within the deep learning paradigm, thereby retaining access to high-level conceptual information. We propose and evaluate a method to integrate word and supersense embeddings from large sense-disambiguated resources such as Wikipedia. We examine the impact of different training data on the quality of these embeddings, and demonstrate how to employ them in deep learning text classification experiments. Using convolutional and recurrent neural networks, we achieve a significant performance improvement over word embeddings in a range of downstream classification tasks.
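
A minimal sketch of the embedding step under assumed preprocessing (not the thesis's exact setup): if each disambiguated token in a sense-annotated corpus is suffixed with its supersense, ordinary embedding training yields word-level and supersense-level vectors in one shared space, which can then initialize the embedding layer of a convolutional or recurrent network. Parameter names below follow gensim 4:

from gensim.models import Word2Vec

# Toy sense-annotated sentences; in practice a large sense-disambiguated
# resource such as Wikipedia plays this role.
sentences = [
    ["the", "duck|noun.animal", "swam", "across", "the", "river"],
    ["we", "ordered", "the", "duck|noun.food", "at", "the", "restaurant"],
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1)
vector = model.wv["duck|noun.food"]  # a supersense-specific vector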

The application of the methods proposed in this thesis is demonstrated in experiments estimating the demographics and personality of a text's author, and labeling text with the subjectivity and sentiment it conveys. We thereby also provide empirical insights into which types of features are informative for these document classification problems, and suggest explanations grounded in psychology and sociology. We further discuss issues that can arise because human experts are prone to diverse biases when classifying data.

To summarize, we have shown that lexical-semantic knowledge can improve text classification by supplying a hierarchy of abstract concepts that enables better generalization over words, and that these methods are also effective in combination with deep learning techniques.

Item Type: Dissertation
Published: 2017
Author(s): Flekova, Lucie
Type of entry: Primary publication
Title: Leveraging Lexical-Semantic Knowledge for Text Classification Tasks
Language: English
Referees: Gurevych, Prof. Dr. Iryna ; Stein, Prof. Dr. Benno ; Daelemans, Prof. Dr. Walter
Year of publication: 2017
Place of publication: Darmstadt
Date of oral examination: 24 April 2017
URL / URN: http://tuprints.ulb.tu-darmstadt.de/6765
Alternative or translated abstract (translated from German):

With the advent of large datasets and advanced classification algorithms, it was long assumed that using lexical-semantic knowledge does not lead to better classification results. This assumption was based on the premise that the meaning of words can be learned by the chosen classification algorithm directly from the available text documents. This thesis shows that this assumption cannot hold as a general statement. In particular, several approaches are presented in which lexicon-based knowledge leads to significantly better results. Furthermore, it is discussed under which conditions better results can be expected.

In text classification tasks, a set of documents must be sorted automatically into different categories. With the growing amount of digitized text and the increased availability of computing power, research and development on automated text classification techniques has made enormous progress in recent years. Despite this progress, lexical-semantic ambiguity remains an important problem: a single word can have several meanings, which can only be determined by taking the context into account.

Previous approaches to text classification have often neglected this ambiguity problem, assuming that a document typically contains enough words to allow the ambiguous cases to be ignored. For example, to determine the topic of a document containing the word 'bank', the sense of 'bank' need not be determined if words such as 'river' or 'finance' also appear in the text. However, such an additional word is not necessarily present. Conveniently, lexical-semantic resources typically enumerate all senses of a word and organize them into a network through conceptual-semantic and lexical relations. This opens up a variety of options for deciding which word sense is the most plausible in a given case. The first set of questions addressed in this thesis is: What are the consequences of using knowledge-based disambiguation methods in addition to the information implicitly contained in the word context of the document? How much would the classification performance improve? This thesis examines the impact of word ambiguity on a range of text classification tasks and evaluates the performance of selected resource-based word sense disambiguation algorithms. It is shown that the lexicographic sense distinctions provided by lexical-semantic resources are not suitable for every text classification task. Therefore, alternative techniques for determining word meaning from context were developed for sentiment analysis applications.

The second problem in text classification is synonymy. In any document, the words used represent only a small fraction of the total possible vocabulary. If individual words or senses are used as features in the classification model, the classification system will not be able to interpret paraphrases of the training words. This raises the question of how classification performance would improve if the automated system could determine that two very different words share a similar meaning. This thesis investigates possible solutions enabled by the hierarchical relations contained in lexical-semantic resources. An approach is developed that mitigates the synonymy problem by automatically enriching the training and test data with conceptual annotations accessible through lexical-semantic resources. It is shown that such conceptual information enables more robust classification performance on a variety of tasks.

The third problem examined in text classification, closely related to the above, concerns the dimensionality of the training data. If, for example, not only individual words but also combinations of words are to be used for classification, the input to the classifier grows exponentially. Several techniques for dimensionality reduction have been proposed; in recent years, the approach of so-called word embeddings has become increasingly popular. Since the associated training vectors are again produced at the level of ambiguous words rather than senses, valuable information about possible differences in meaning is lost. In this thesis, a novel approach was developed that uses lexical-semantic resources to recover the sense-level semantic relatedness of the input data. The advantages of this approach were demonstrated in various applications. In particular, it was shown that this approach achieves significantly improved results even when using convolutional and recurrent neural networks.

This thesis shows that the targeted use of lexical-semantic knowledge in automated text classification with advanced classification approaches (e.g., deep learning) can lead to significantly better results. This is achieved by providing hierarchies of abstract concepts that enable better generalization over words.

URN: urn:nbn:de:tuda-tuprints-67656
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science
Department(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 17 Sep 2017 19:55
Last modified: 17 Sep 2017 19:55