Şahinuç, Furkan ; Tran, Thy Thy ; Grishina, Yulia ; Hou, Yufang ; Chen, Bei ; Gurevych, Iryna (2024)
Efficient Performance Tracking: Leveraging Large Language Models for Automated Construction of Scientific Leaderboards.
29th Conference on Empirical Methods in Natural Language Processing. Miami, USA (12.11.2024 - 16.11.2024)
doi: 10.18653/v1/2024.emnlp-main.453
Conference publication, Bibliography
Abstract
Scientific leaderboards are standardized ranking systems that facilitate evaluating and comparing competitive methods. Typically, a leaderboard is defined by a task, dataset, and evaluation metric (TDM) triple, allowing objective performance assessment and fostering innovation through benchmarking. However, the exponential increase in publications has made it infeasible to construct and maintain these leaderboards manually. Automatic leaderboard construction has emerged as a solution to reduce manual labor. Existing datasets for this task are based on the community-contributed leaderboards without additional curation. Our analysis shows that a large portion of these leaderboards are incomplete, and some of them contain incorrect information. In this work, we present SciLead, a manually-curated Scientific Leaderboard dataset that overcomes the aforementioned problems. Building on this dataset, we propose three experimental settings that simulate real-world scenarios where TDM triples are fully defined, partially defined, or undefined during leaderboard construction. While previous research has only explored the first setting, the latter two are more representative of real-world applications. To address these diverse settings, we develop a comprehensive LLM-based framework for constructing leaderboards. Our experiments and analysis reveal that various LLMs often correctly identify TDM triples while struggling to extract result values from publications. We make our code and data publicly available.
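The abstract describes each leaderboard as being defined by a task, dataset, and evaluation metric (TDM) triple, with result values extracted from publications. The following is a minimal sketch of that representation, not the authors' implementation; all paper names and scores are hypothetical.

```python
# Sketch of a leaderboard keyed by a task-dataset-metric (TDM) triple,
# as described in the abstract. Entries and scores are hypothetical.
from collections import defaultdict
from typing import NamedTuple


class TDM(NamedTuple):
    task: str
    dataset: str
    metric: str


# leaderboard[tdm] maps a paper identifier to its reported result value.
leaderboard: dict[TDM, dict[str, float]] = defaultdict(dict)

# In the paper's setting, such entries would be produced by an LLM-based
# extraction step over publications; here they are filled in by hand.
tdm = TDM("question answering", "SQuAD 2.0", "F1")
leaderboard[tdm]["example-paper-1"] = 90.1
leaderboard[tdm]["example-paper-2"] = 88.7

# Rank papers within one leaderboard by their reported score.
for paper, score in sorted(leaderboard[tdm].items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score}")
```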
| Item type: | Conference publication |
|---|---|
| Published: | 2024 |
| Author(s): | Şahinuç, Furkan ; Tran, Thy Thy ; Grishina, Yulia ; Hou, Yufang ; Chen, Bei ; Gurevych, Iryna |
| Type of entry: | Bibliography |
| Title: | Efficient Performance Tracking: Leveraging Large Language Models for Automated Construction of Scientific Leaderboards |
| Language: | English |
| Year of publication: | November 2024 |
| Publisher: | ACL |
| Book title: | EMNLP 2024: The 2024 Conference on Empirical Methods in Natural Language Processing: Proceedings of the Conference |
| Event title: | 29th Conference on Empirical Methods in Natural Language Processing |
| Event location: | Miami, USA |
| Event dates: | 12.11.2024 - 16.11.2024 |
| DOI: | 10.18653/v1/2024.emnlp-main.453 |
| URL / URN: | https://aclanthology.org/2024.emnlp-main.453/ |
| Uncontrolled keywords: | UKP_p_amazon |
| Department(s)/division(s): | 20 Department of Computer Science; 20 Department of Computer Science > Ubiquitous Knowledge Processing |
| Date deposited: | 09 Dec 2024 13:03 |
| Last modified: | 09 Dec 2024 13:03 |