
What Can Natural Language Processing Do for Peer Review?

Kuznetsov, Ilia ; Afzal, Osama Mohammed ; Dercksen, Koen ; Dycke, Nils ; Goldberg, Alexander ; Hope, Tom ; Hovy, Dirk ; Kummerfeld, Jonathan K. ; Lauscher, Anne ; Leyton-Brown, Kevin ; Lu, Sheng ; Mausam, Mausam ; Mieskes, Margot ; Névéol, Aurélie ; Pruthi, Danish ; Qu, Lizhen ; Schwartz, Roy ; Smith, Noah A. ; Solorio, Thamar ; Wang, Jingyan ; Zhu, Xiaodan ; Rogers, Anna ; Shah, Nihar B. ; Gurevych, Iryna (2024)
What Can Natural Language Processing Do for Peer Review?
doi: 10.48550/arXiv.2405.06563
Report, Bibliography

Abstract

The number of scientific articles produced every year is growing rapidly. Providing quality control over them is crucial for scientists and, ultimately, for the public good. In modern science, this process is largely delegated to peer review -- a distributed procedure in which each submission is evaluated by several independent experts in the field. Peer review is widely used, yet it is hard, time-consuming, and prone to error. Since the artifacts involved in peer review -- manuscripts, reviews, discussions -- are largely text-based, Natural Language Processing has great potential to improve reviewing. As the emergence of large language models (LLMs) has enabled NLP assistance for many new tasks, the discussion of machine-assisted peer review is picking up pace. Yet, where exactly is help needed, where can NLP help, and where should it stand aside? The goal of our paper is to provide a foundation for future efforts in NLP for peer-reviewing assistance. We discuss peer review as a general process, exemplified by reviewing at AI conferences. We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance, illustrated by existing work. We then turn to the big challenges in NLP for peer review as a whole, including data acquisition and licensing, operationalization and experimentation, and ethical issues. To help consolidate community efforts, we create a companion repository that aggregates key datasets pertaining to peer review. Finally, we issue a detailed call to action for the scientific community, NLP and AI researchers, policymakers, and funding bodies to help move research in NLP for peer review forward. We hope that our work will help set the agenda for research in machine-assisted scientific quality control in the age of AI, within the NLP community and beyond.

Item type: Report
Published: 2024
Author(s): Kuznetsov, Ilia ; Afzal, Osama Mohammed ; Dercksen, Koen ; Dycke, Nils ; Goldberg, Alexander ; Hope, Tom ; Hovy, Dirk ; Kummerfeld, Jonathan K. ; Lauscher, Anne ; Leyton-Brown, Kevin ; Lu, Sheng ; Mausam, Mausam ; Mieskes, Margot ; Névéol, Aurélie ; Pruthi, Danish ; Qu, Lizhen ; Schwartz, Roy ; Smith, Noah A. ; Solorio, Thamar ; Wang, Jingyan ; Zhu, Xiaodan ; Rogers, Anna ; Shah, Nihar B. ; Gurevych, Iryna
Type of entry: Bibliography
Title: What Can Natural Language Processing Do for Peer Review?
Language: English
Date of publication: 10 May 2024
Publisher: arXiv
Series: Computation and Language
Edition: Version 1
DOI: 10.48550/arXiv.2405.06563
URL / URN: https://arxiv.org/abs/2405.06563

Additional information:

Preprint

Division(s)/Department(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 17 Oct 2024 12:33
Last modified: 17 Oct 2024 12:33