
Trust in Artificial Intelligence: Producing Ontological Security through Governmental Visions

Schmid, Stefka ; Pham, Bao-Chau ; Ferl, Anna-Katharina (2024)
Trust in Artificial Intelligence: Producing Ontological Security through Governmental Visions.
In: Cooperation and Conflict, 2024
doi: 10.1177/00108367241288073
Article, Bibliography

Abstract

With developments in Artificial Intelligence widely framed as a security concern in both military and civilian realms, governments have turned their attention to regulating and governing AI. In a study of US, Chinese, and EU AI documents, we go beyond instrumental understandings of AI as a technological capability that serves states' self-interests and the maintenance of their (supra)national security. Our specific interest lies in how AI policies tap into both problem-solving approaches and affective registers to achieve both physical and ontological security. We find that in governmental visions, AI is perceived as a capability that enhances societal and geopolitical interests, while its risks are framed as manageable. This echoes strands within Human-Computer Interaction that draw on human-centered perceptions of technology and assumptions about human-AI relationships of trust. Despite different cultural and institutional settings, the visions of future AI development are shaped by this (shared) understanding of human-AI interaction, offering common ground in the navigation of innovation policies.

Type of entry: Article
Published: 2024
Author(s): Schmid, Stefka ; Pham, Bao-Chau ; Ferl, Anna-Katharina
Kind of entry: Bibliography
Title: Trust in Artificial Intelligence: Producing Ontological Security through Governmental Visions
Language: English
Date of publication: 19 October 2024
Publisher: Sage Publishing
Journal, newspaper, or series title: Cooperation and Conflict
Journal volume: 2024
DOI: 10.1177/00108367241288073
Free keywords: Peace, Projekt-TraCe, A-Paper, Ranking-ImpactFactor, AuswahlPeace
Department(s)/Field(s): 20 Department of Computer Science
20 Department of Computer Science > Science and Technology for Peace and Security (PEASEC)
Research fields
Research fields > Information and Intelligence
Research fields > Information and Intelligence > Cybersecurity & Privacy
Date deposited: 23 Jan 2025 09:29
Last modified: 23 Jan 2025 09:29