
Towards Foundation Models for Relational Databases [Vision Paper]

Vogel, Liane ; Hilprecht, Benjamin ; Binnig, Carsten (2022)
Towards Foundation Models for Relational Databases [Vision Paper].
36th Conference on Neural Information Processing Systems (NeurIPS 2022). New Orleans, USA (28.11.2022-09.12.2022)
Conference publication, Bibliography

Abstract

Tabular representation learning has recently gained a lot of attention. However, existing approaches only learn a representation from a single table and thus ignore the potential to learn from the full structure of relational databases, including neighboring tables that can contain important information for a contextualized representation. Moreover, current models are significantly limited in scale, which prevents them from learning from large databases. In this paper, we therefore introduce our vision of relational representation learning, which can not only learn from the full relational structure but can also scale to the larger database sizes commonly found in the real world. We also discuss the opportunities and challenges we see along the way to enabling this vision and present very promising initial results. Overall, we argue that this direction can lead to foundation models for relational databases, which today are only available for text and images.
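The abstract describes learning contextualized representations from the full relational structure, i.e. including neighboring tables, rather than from a single table in isolation. As a loose illustration only (not the authors' method), the following Python sketch shows one way such a contextualized input could be assembled: a row is serialized together with the rows linked to it via a foreign key, so that a downstream representation model would see cross-table context. All table names, columns, and the serialization format below are hypothetical.

```python
# Illustrative sketch only; tables, columns, and serialization are assumptions.
# Idea: instead of encoding a single row, gather foreign-key-linked rows from a
# neighboring table and serialize everything into one input sequence for a
# (hypothetical) tabular/relational representation model.

customers = [
    {"customer_id": 1, "name": "Alice", "country": "DE"},
    {"customer_id": 2, "name": "Bob", "country": "US"},
]
orders = [
    {"order_id": 10, "customer_id": 1, "amount": 30.0},
    {"order_id": 11, "customer_id": 1, "amount": 12.5},
    {"order_id": 12, "customer_id": 2, "amount": 99.9},
]

def contextualized_row(row: dict, neighbor_rows: list, fk: str) -> str:
    """Serialize a row plus its foreign-key-linked neighbor rows into one sequence."""
    related = [r for r in neighbor_rows if r[fk] == row[fk]]
    parts = [" ".join(f"{k}={v}" for k, v in row.items())]
    for r in related:
        parts.append("[SEP] " + " ".join(f"{k}={v}" for k, v in r.items()))
    return " ".join(parts)

if __name__ == "__main__":
    # The input for the first customer now also carries that customer's orders.
    print(contextualized_row(customers[0], orders, fk="customer_id"))
```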

Entry type: Conference publication
Published: 2022
Author(s): Vogel, Liane ; Hilprecht, Benjamin ; Binnig, Carsten
Type of entry: Bibliography
Title: Towards Foundation Models for Relational Databases [Vision Paper]
Language: English
Publication date: 10 December 2022
Event title: 36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Event location: New Orleans, USA
Event date: 28.11.2022-09.12.2022
URL / URN: https://openreview.net/forum?id=s1KlNOQq71_

Free keywords: dm_nhr4ces
Additional information:

Table Representation Learning Workshop at NeurIPS 2022

Department(s)/Field(s): 20 Department of Computer Science
20 Department of Computer Science > Data and AI Systems
Date deposited: 08 Feb 2023 08:53
Last modified: 08 Feb 2023 08:53