TU Darmstadt / ULB / TUbiblio

C4Corpus: Multilingual Web-size corpus with free license

Habernal, Ivan ; Zayed, Omnia ; Gurevych, Iryna
Eds.: Calzolari, Nicoletta ; Choukri, Khalid ; Declerck, Thierry ; Grobelnik, Marko ; Maegaard, Bente ; Mariani, Joseph ; Moreno, Asuncion ; Odijk, Jan ; Piperidis, Stelios (2016)
C4Corpus: Multilingual Web-size corpus with free license.
Portoroz, Slovenia
Conference publication, Bibliography

Abstract

Large Web corpora containing full documents with permissive licenses are crucial for many NLP tasks. In this article we present the construction of a 12-million-page Web corpus (over 10 billion tokens) in 50+ languages, licensed under the Creative Commons license family, extracted from CommonCrawl, the largest publicly available general Web crawl to date with about 2 billion crawled URLs. Our highly scalable Hadoop-based framework is able to process the full CommonCrawl corpus on a 2000+ CPU cluster on the Amazon Elastic MapReduce infrastructure. The processing pipeline includes license identification, state-of-the-art boilerplate removal, exact-duplicate and near-duplicate document removal, and language detection. The construction of the corpus is highly configurable and fully reproducible, and we provide both the framework (DKPro C4CorpusTools) and the resulting data (C4Corpus) to the research community.
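The pipeline step "exact duplicate and near-duplicate document removal" can be illustrated with a minimal sketch. The concrete algorithms used in DKPro C4CorpusTools are not given on this page, so the sketch below assumes SHA-256 content hashing for exact duplicates and word-shingle Jaccard similarity for near duplicates; both are hypothetical stand-ins, not the paper's implementation:

```python
import hashlib


def exact_dup_key(text: str) -> str:
    # Exact duplicates share an identical content hash.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def shingles(text: str, n: int = 3) -> set:
    # Word n-grams ("shingles") used to compare documents for near-duplication.
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def jaccard(a: set, b: set) -> float:
    # Jaccard similarity: |intersection| / |union|.
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def dedupe(docs, threshold=0.8):
    # Keep a document only if it is neither an exact nor a near duplicate
    # of any previously kept document.
    seen_hashes = set()
    kept = []
    for doc in docs:
        h = exact_dup_key(doc)
        if h in seen_hashes:
            continue  # exact duplicate
        if any(jaccard(shingles(doc), shingles(k)) >= threshold for k in kept):
            continue  # near duplicate
        seen_hashes.add(h)
        kept.append(doc)
    return kept
```

This naive pairwise comparison is quadratic in the number of kept documents and would not scale to 12 million pages; Web-scale pipelines typically replace it with locality-sensitive hashing (e.g. MinHash or SimHash) distributed over a MapReduce cluster, as the abstract's Hadoop framework suggests.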

Type of entry: Conference publication
Published: 2016
Editors: Calzolari, Nicoletta ; Choukri, Khalid ; Declerck, Thierry ; Grobelnik, Marko ; Maegaard, Bente ; Mariani, Joseph ; Moreno, Asuncion ; Odijk, Jan ; Piperidis, Stelios
Author(s): Habernal, Ivan ; Zayed, Omnia ; Gurevych, Iryna
Entry type: Bibliography
Title: C4Corpus: Multilingual Web-size corpus with free license
Language: English
Year of publication: May 2016
Publisher: European Language Resources Association (ELRA)
Book title: Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
Event location: Portoroz, Slovenia
URL / URN: http://www.lrec-conf.org/proceedings/lrec2016/pdf/388_Paper....

Free keywords: UKP_reviewed;AIPHES_corpus
ID number: TUD-CS-2016-0023
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
DFG Research Training Groups
DFG Research Training Groups > Research Training Group 1994 Adaptive Preparation of Information from Heterogeneous Sources
Date deposited: 31 Dec 2016 14:29
Last modified: 24 Jan 2020 12:03