
Unsupervised Methods for Learning and Using Semantics of Natural Language

Riedl, Martin (2016)
Unsupervised Methods for Learning and Using Semantics of Natural Language.
Technische Universität Darmstadt
Dissertation, primary publication

Abstract

Teaching the computer to understand language is the major goal in the field of natural language processing. In this thesis we introduce computational methods that aim to extract language structure — e.g. grammar, semantics or syntax — from text, which provides the computer with the information needed to understand language. During the last decades, scientific efforts and the increase in computational resources have made it possible to come closer to the goal of understanding language. In order to extract language structure, many approaches train the computer on manually created resources. Most of these so-called supervised methods show high performance when applied to textual data that are similar to their training data. However, they perform worse when operating on textual data that differ from the data they were trained on. Whereas training the computer is essential to obtain reasonable structure from natural language, we want to avoid training the computer on manually created resources.

In this thesis, we present so-called unsupervised methods, which are suited to learn patterns in order to extract structure from textual data directly. These patterns are learned with methods that extract the semantics (meanings) of words and phrases. In comparison to manually built knowledge bases, unsupervised methods are more flexible: they can extract structure from text of different languages or text domains (e.g. finance or medical texts) without requiring manually annotated structure. However, learning structure from text often faces sparsity issues. The reason for this phenomenon is that many words in a language occur only a few times. If a word is seen only a few times, no precise information can be extracted from the texts in which it occurs. Whereas sparsity issues cannot be solved completely, information about most words can be gained by using large amounts of data.

In the first chapter, we briefly describe how computers can learn to understand language. Afterwards, we present the main contributions, list the publications this thesis is based on and give an overview of its structure.

Chapter 2 introduces the terminology used in this thesis and gives background on natural language processing. Then, we characterize the linguistic theory of how humans understand language. Afterwards, we show how the underlying linguistic intuition can be operationalized for computers. Based on this operationalization, we introduce a formalism for representing words and their context. This formalism is used in the following chapters to compute similarities between words.

In Chapter 3 we give a brief description of methods in the field of computational semantics that aim to compute similarities between words. All these methods have in common that they extract a contextual representation for a word from text. Then, this representation is used to compute similarities between words. In addition, we also present examples of the word similarities that are computed with these methods.
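
To make the intuition concrete, the following minimal sketch (not the exact formulation used in the thesis) counts neighboring words as context features and compares two words via cosine similarity; the toy corpus and the window size of one are assumptions made only for illustration.

    import math
    from collections import Counter, defaultdict

    def context_counts(sentences, window=1):
        """Count, for every word, how often each neighboring word occurs
        within the given window (a simple contextual representation)."""
        contexts = defaultdict(Counter)
        for sentence in sentences:
            for i, word in enumerate(sentence):
                lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        contexts[word][sentence[j]] += 1
        return contexts

    def cosine(c1, c2):
        """Cosine similarity between two sparse count vectors."""
        dot = sum(c1[f] * c2[f] for f in set(c1) & set(c2))
        norm = math.sqrt(sum(v * v for v in c1.values())) * \
               math.sqrt(sum(v * v for v in c2.values()))
        return dot / norm if norm else 0.0

    # Toy corpus for illustration only.
    corpus = [["the", "cat", "sleeps"], ["the", "dog", "sleeps"],
              ["a", "cat", "purrs"]]
    ctx = context_counts(corpus)
    print(cosine(ctx["cat"], ctx["dog"]))  # similar contexts -> high score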

Segmenting text into topically related units is intuitively performed by humans and helps to extract connections between words in text. We equip the computer with this ability by introducing a text segmentation algorithm in Chapter 4. This algorithm is based on a statistical topic model, which learns to cluster words into topics solely on the basis of the text. Using the segmentation algorithm, we demonstrate the influence of the parameters provided by the topic model. In addition, our method yields state-of-the-art performance on two datasets.
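
As a rough illustration of topic-model-based segmentation (a simplification, not the algorithm of Chapter 4), the sketch below assumes that each sentence has already been mapped to a topic distribution by some topic model and places a boundary wherever the topical similarity between neighboring sentences forms a sufficiently deep local minimum; the depth threshold is an assumed parameter.

    def cosine(u, v):
        """Cosine similarity between two dense topic distributions."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
        return dot / norm if norm else 0.0

    def segment(topic_vectors, depth_threshold=0.3):
        """Return sentence indices after which a segment boundary is placed.

        topic_vectors: one topic distribution (list of floats) per sentence,
        e.g. inferred with a topic model such as LDA."""
        sims = [cosine(topic_vectors[i], topic_vectors[i + 1])
                for i in range(len(topic_vectors) - 1)]
        boundaries = []
        for i, s in enumerate(sims):
            left = max(sims[:i + 1])            # highest similarity to the left
            right = max(sims[i:])               # highest similarity to the right
            depth = (left - s) + (right - s)    # TextTiling-style depth score
            if depth >= depth_threshold:
                boundaries.append(i)            # boundary after sentence i
        return boundaries

    # Toy example: three sentences on one topic, then three on another.
    vecs = [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],
            [0.1, 0.9], [0.2, 0.8], [0.15, 0.85]]
    print(segment(vecs))  # expected: a boundary after sentence index 2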

In order to represent the meaning of words, we use context information (e.g. neighboring words), which is utilized to compute similarities. Whereas we described methods for word similarity computations in Chapter 3, we introduce a generic symbolic framework in Chapter 5. As we follow a symbolic approach, we do not represent words using dense numeric vectors but use symbols (e.g. neighboring words or syntactic dependencies) directly. Such a representation is readable for humans and is preferred in sensitive applications like the medical domain, where the reasons for decisions need to be provided. This framework enables the processing of arbitrarily large data. Furthermore, it is able to compute the most similar words for all words within a text collection, resulting in a distributional thesaurus. We show the influence of various parameters deployed in our framework and examine the impact of different corpora used for computing similarities. Performing computations based on various contextual representations, we obtain the best results when using syntactic dependencies between words within sentences. However, these syntactic dependencies are predicted using a supervised dependency parser, which is trained on language-dependent and human-annotated resources.
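
To give a flavor of how a symbolic distributional thesaurus can be computed, here is a small in-memory sketch; the framework described in the thesis is built for distributed processing of arbitrarily large corpora, whereas this toy version simply keeps the most frequent context features per word and scores two words by the number of features they share. The pruning values and the plain frequency ranking are simplifying assumptions.

    from collections import Counter, defaultdict

    def build_thesaurus(word_feature_pairs, top_features=3, top_similar=5):
        """Build a tiny distributional thesaurus from (word, feature) pairs:
        keep the `top_features` most frequent features per word and rank
        candidate similar words by the number of shared features."""
        features = defaultdict(Counter)
        for word, feature in word_feature_pairs:
            features[word][feature] += 1

        # Prune: keep only the most salient features per word.
        pruned = {w: set(f for f, _ in c.most_common(top_features))
                  for w, c in features.items()}

        # Invert: which words carry a given feature?
        words_by_feature = defaultdict(set)
        for word, feats in pruned.items():
            for f in feats:
                words_by_feature[f].add(word)

        # Similarity = number of shared pruned features.
        thesaurus = {}
        for word, feats in pruned.items():
            overlap = Counter()
            for f in feats:
                for other in words_by_feature[f]:
                    if other != word:
                        overlap[other] += 1
            thesaurus[word] = overlap.most_common(top_similar)
        return thesaurus

    # Toy (word, context-feature) pairs; features could be neighboring words
    # or dependency relations extracted from a corpus.
    pairs = [("cat", "subj_of:sleep"), ("cat", "mod:furry"), ("cat", "obj_of:feed"),
             ("dog", "subj_of:sleep"), ("dog", "mod:furry"), ("dog", "obj_of:walk"),
             ("car", "obj_of:drive"), ("car", "mod:fast")]
    print(build_thesaurus(pairs)["cat"])  # 'dog' should rank highest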

To avoid such language-specific preprocessing for computing distributional thesauri, we investigate the replacement of language-dependent dependency parsers by language-independent unsupervised parsers in Chapter 6. Evaluating the syntactic dependencies from unsupervised and supervised parses against human-annotated resources reveals that the unsupervised methods cannot compete with the supervised ones. In this chapter we use the predicted structure of both types of parses as context representation in order to compute word similarities. Then, we evaluate the quality of the similarities, which provides an extrinsic evaluation setup for both unsupervised and supervised dependency parsers. In an evaluation on English text, similarities computed from contextual representations generated with unsupervised parsers do not outperform the similarities computed with the context representation extracted from supervised parsers. However, we observe the best results when applying the context retrieved by the unsupervised parser for computing distributional thesauri for German. Furthermore, we demonstrate that our framework is capable of combining different context representations, as we obtain the best performance with a combination of both flavors of syntactic dependencies for both languages.

Most languages are not composed of single-word terms only, but also contain many multi-word terms that form a unit, called multiword expressions. The identification of multiword expressions is particularly important for semantics, as e.g. the term New York has a different meaning than its individual terms New or York. Whereas most research on semantics avoids handling these expressions, we target the extraction of multiword expressions in Chapter 7. Most previously introduced methods rely on part-of-speech tags and apply a ranking function to rank term sequences according to their multiwordness. Here, we introduce a language-independent and knowledge-free ranking method that uses information from distributional thesauri. Performing evaluations on English and French textual data, our method achieves the best results in comparison to methods from the literature.
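
One plausible way to exploit a distributional thesaurus for such a ranking (a hypothetical sketch inspired by the idea above, not the scoring function of Chapter 7) is to check whether a candidate term sequence behaves like a single unit, i.e. whether its most similar thesaurus entries are predominantly single words.

    def multiword_score(candidate, thesaurus, top_n=10):
        """Score a candidate term sequence by the share of single-word
        entries among its most similar terms: the more it behaves like
        one word, the more likely it is a multiword expression.

        `thesaurus` maps a term to a list of (similar_term, score) pairs,
        assumed to be sorted by similarity and to contain n-gram entries."""
        neighbours = [term for term, _ in thesaurus.get(candidate, [])[:top_n]]
        if not neighbours:
            return 0.0
        single = sum(1 for term in neighbours if " " not in term)
        return single / len(neighbours)

    # Toy thesaurus entries for two candidates (illustrative values only).
    dt = {
        "new york": [("chicago", 0.8), ("boston", 0.7), ("london", 0.6)],
        "the city": [("the town", 0.5), ("a city", 0.4), ("city", 0.3)],
    }
    print(multiword_score("new york", dt))  # 1.0  -> strong candidate
    print(multiword_score("the city", dt))  # 0.33 -> weak candidate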

In Chapter 8 we apply information from distributional thesauri as features for various applications. First, we introduce a general setting for tackling the out-of-vocabulary problem. This problem describes the inferior performance of supervised methods on words that are not contained in the training data. We alleviate this issue by replacing these unseen words with the most similar known words, extracted from a distributional thesaurus. Using a supervised part-of-speech tagging method, we show substantial improvements in the classification performance for out-of-vocabulary words on German and English textual data. The second application introduces a system for replacing words within a sentence with a word of the same meaning. For this application, the information from a distributional thesaurus provides the highest-scoring features. In the last application, we introduce an algorithm that is capable of detecting the different meanings of a word and grouping them into coarse-grained categories, called supersenses. Generating features by means of supersenses and distributional thesauri yields a performance increase when plugged into a supervised system that recognizes named entities (e.g. names, organizations or locations).
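
The out-of-vocabulary strategy can be illustrated with a short sketch (a simplification of the setting described above, with made-up vocabulary and thesaurus entries): before tagging, every word unseen in the training data is replaced by its most similar in-vocabulary word from a distributional thesaurus.

    def replace_oov(tokens, train_vocab, thesaurus):
        """Replace each out-of-vocabulary token with its most similar
        in-vocabulary word from a distributional thesaurus, so that a
        supervised tagger only sees words it was trained on."""
        result = []
        for token in tokens:
            if token in train_vocab:
                result.append(token)
                continue
            # Thesaurus entries are (similar_word, score), sorted by score.
            replacement = next((w for w, _ in thesaurus.get(token, [])
                                if w in train_vocab), token)
            result.append(replacement)
        return result

    # Illustrative data: 'smartphone' was never seen during training.
    vocab = {"the", "phone", "rings", "loudly"}
    dt = {"smartphone": [("cellphone", 0.9), ("phone", 0.8), ("tablet", 0.7)]}
    print(replace_oov(["the", "smartphone", "rings"], vocab, dt))
    # -> ['the', 'phone', 'rings']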

Further directions for using distributional thesauri are presented in Chapter 9. First, we lay out a method that is capable of incorporating background information (e.g. the source of the text collection or sense information) into a distributional thesaurus. Furthermore, we describe an approach to building thesauri for different text domains (e.g. the medical or finance domain) and how they can be combined to achieve high coverage of domain-specific knowledge as well as a broad background for the open domain. In the last section we characterize yet another method, suited to enrich existing knowledge bases. All three directions are possible extensions that induce further structure from textual data.

The last chapter gives a summary of this work: we demonstrate that without language-dependent knowledge, a computer can learn to extract useful structure from text by using computational semantics. Due to the unsupervised nature of the introduced methods, we are able to extract new structure from raw textual data. This is especially important for languages for which fewer manually created resources are available, as well as for special domains, e.g. medical or finance texts. We have demonstrated that our methods achieve state-of-the-art performance. Furthermore, we have shown their impact by applying the extracted structure in three natural language processing tasks. We have also applied the methods to different languages and large amounts of data. Thus, we have not proposed methods suited for extracting structure for a single language, but methods that are capable of exploring structure for “language” in general.

Item type: Dissertation
Published: 2016
Author(s): Riedl, Martin
Type of entry: Primary publication
Title: Unsupervised Methods for Learning and Using Semantics of Natural Language
Language: English
Referees: Biemann, Prof. Dr. Chris ; Søgaard, Prof. Dr. Anders
Date of publication: 4 May 2016
Place of publication: Darmstadt
Date of oral examination: 24 February 2016
URL / URN: http://tuprints.ulb.tu-darmstadt.de/5435

Alternative or translated abstract (original language: German):

What steps are necessary for a computer to understand language? This is the fundamental question addressed by the field of computational linguistics. In recent decades, supervised systems have achieved remarkable success. These systems learn language structure (e.g. grammar, semantics or syntax) from structures that have been manually marked (annotated) in text and are subsequently able to predict them in unseen texts. Creating such data, however, is time-consuming. Furthermore, most supervised systems are only able to make good predictions on texts that are similar to the manually annotated ones. To teach a computer general knowledge about languages, unsupervised methods are therefore of interest, which extract language structures from text collections on their own. This allows such methods to be adapted both to different kinds of text (medical texts, newspaper texts) and to other languages. To learn language structures reliably, this knowledge has to be extracted from large text collections. In this thesis we present unsupervised methods that capture the meaning of words, that is, their semantics, and demonstrate their performance.

In the first chapter we give an introduction to automatic methods that enable computers to gain a certain "understanding" of language. We then describe the scientific contributions of this thesis and list the publications on which it is based.

The foundations and the terminology used in this thesis are the subject of the second chapter. We then describe, from a linguistic perspective, how humans understand language and relate this to the topics of this thesis. Furthermore, we define a formal graph-based representation that is used throughout this work. We then show how this representation can be used to model words and their context representations (e.g. neighboring words). Chapter 3 describes existing methods for computing semantic similarities between words, all of which are based on the distributional hypothesis. It states that words are more similar the more often they occur in the same context. Contexts considered include, for example, neighboring words or the sentence in which a word occurs. We explain how these methods work and show examples of word and document similarities.

Since a word can have more than one meaning, the context in which words are used is important. From the context in which a word occurs, humans can often already recognize its meaning. In Chapter 4 we describe a method that, by means of a statistical topic model, splits texts into thematically coherent segments. We demonstrate the performance of our system on two datasets and achieve results that are better than or comparable to current research results.

In Chapter 5 we present a method for computing similarities between words. Our method follows a "symbolic approach" that represents words with a context representation that is comprehensible to humans. By comparing the context representations of two words, word similarities can be computed. Our method can be applied to arbitrarily large text collections and allows similarities to be computed for all words in the text. This results in a so-called distributional thesaurus. Due to the symbolic approach, reasons for the similarity of two terms can be given, which matters in sensitive applications (e.g. medical applications). We then show the influence of various parameters and compare our approach with standard methods. The best results are achieved by our method when using syntactic context features. These, however, are generated with a supervised dependency parser that is trained on manually created training data.

For this reason, in Chapter 6 we replace the context produced by supervised dependency parsers with context produced by unsupervised dependency parsers for the computation of word similarities. This experiment is meant to show the performance of unsupervised dependency parsers when they are used as a context feature. For computing distributional thesauri on English texts, no improvements over supervised dependency parsers can be achieved. However, we obtain better results with an unsupervised dependency parser on German texts. Furthermore, our similarity computation method from Chapter 5 is able to combine different context features from both unsupervised and supervised dependency parsers and achieves the best results with this combination. A further step towards understanding language is to recognize which words form a unit. To this end, Chapter 7 presents a method that extracts so-called multiword expressions (e.g. New York) using information from a distributional thesaurus. In contrast to existing approaches, our method does not require language-dependent knowledge. The performance of our approach is evaluated on French and English texts. Compared to methods from the literature, our approach achieves the best results.

In Chapter 8 we demonstrate how language structures extracted by our methods can be integrated into applications. First, we present a general approach for replacing unknown words, which are not contained in the training data of supervised methods, with words from the training data. For this purpose we present a procedure that uses a distributional thesaurus computed on a large text collection. The procedure looks up the unknown words in the computed thesaurus and replaces them with a similar word that is contained in the training data. We test this procedure with a method that assigns part-of-speech tags. Using these "replacements" leads to improvements. Second, we show the influence of semantic features in a system that replaces words within a sentence with words of the same meaning. Here it turns out that the semantic features from a distributional thesaurus have the largest positive impact on the performance of the system. Due to the ambiguity of language, words can have not just one meaning but several. To recognize the different meanings of words, in the last application we present a method that groups words into coarse categories based on their meanings. Both these categories and information from a distributional thesaurus are then evaluated in a system for recognizing named entities (e.g. places, names or organizations). Again, improvements are observed when using these features.

In Chapter 9 we describe ideas for future research. First, we show how the semantic similarities from Chapter 5 can be used to store additional information in words. This makes it possible, for example, to capture the domain of the text collection or to distinguish word senses directly. We then present an automatic procedure for building thesauri for special kinds of text (e.g. medical texts). The next approach describes how methods from this thesis can be used to enrich existing knowledge bases such as taxonomies or ontologies.

The last chapter summarizes the research of this thesis: using large amounts of data, we show that our methods are able to extract language structures in a language-independent way. We have presented solutions to three research problems: topical text segmentation, the computation of word similarities and the recognition of multiword expressions. Furthermore, we have demonstrated the performance of our models in three applications. This is a further step towards teaching the computer to not only learn language from examples but to extract structures from text on its own.

Uncontrolled keywords: natural language processing, distributional semantics, statistical semantics, unsupervised learning, multiword expressions, lexical substitution, text segmentation, topic models
Keywords (German): natürliche Sprachverarbeitung, Distributionelle Semantik, unüberwachte Methoden, Mehrwortbegriffe, lexikalische Ersetzung, Textsegmentierung, Themenmodelle
URN: urn:nbn:de:tuda-tuprints-54355
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science
400 Language > 400 Language, linguistics
Division(s): 20 Department of Computer Science
Date deposited: 22 May 2016 19:55
Last modified: 22 May 2016 19:55