Reconciling High Accuracy, Cost-Efficiency, and Low Latency of Inference Serving Systems

Salmani, Mehran ; Ghafouri, Saeid ; Sanaee, Alireza ; Razavi, Kamran ; Mühlhäuser, Max ; Doyle, Joseph ; Jamshidi, Pooyan ; Sharifi, Mohsen (2023)
Reconciling High Accuracy, Cost-Efficiency, and Low Latency of Inference Serving Systems.
3rd Workshop on Machine Learning and Systems. Rome, Italy (08.05.2023-08.05.2023)
doi: 10.1145/3578356.3592578
Conference publication, Bibliography

Abstract

The use of machine learning (ML) inference for various applications is growing drastically. ML inference services engage directly with users, requiring fast and accurate responses. Moreover, these services face dynamic request workloads that demand corresponding changes in their computing resources. Failing to right-size computing resources results in either latency service level objective (SLO) violations or wasted computing resources. Adapting to dynamic workloads while considering all three pillars of accuracy, latency, and resource cost is challenging. In response to these challenges, we propose InfAdapter, which proactively selects a set of ML model variants and their resource allocations to meet the latency SLO while maximizing an objective function composed of accuracy and cost. Compared to a popular industry autoscaler (the Kubernetes Vertical Pod Autoscaler), InfAdapter decreases SLO violations and cost by up to 65% and 33%, respectively.
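The selection problem described in the abstract can be pictured as a small constrained search over profiled model variants. The sketch below is purely illustrative: the variant profiles, throughput model, normalization constant, and weighted accuracy-minus-cost objective are all invented assumptions, and it picks a single variant for simplicity, whereas InfAdapter selects a set of variants. It is not the paper's actual algorithm or data.

```python
# Toy illustration of SLO-constrained variant selection.
# All profiles and the objective are hypothetical assumptions.
from itertools import product

# Assumed profiles: (name, accuracy, p99 latency in ms, CPU cores per replica).
VARIANTS = [
    ("resnet18",  0.69, 40,  1),
    ("resnet50",  0.76, 90,  2),
    ("resnet152", 0.78, 180, 4),
]

LATENCY_SLO_MS = 120   # end-to-end latency SLO
PREDICTED_RPS = 100    # predicted request arrival rate
ALPHA = 0.5            # accuracy-vs-cost trade-off weight


def capacity(latency_ms, replicas):
    """Toy throughput model: each replica serves 1000/latency requests/s."""
    return replicas * 1000.0 / latency_ms


def best_config(max_replicas=10):
    """Exhaustively pick the (variant, replica count) that meets the SLO,
    serves the predicted load, and maximizes accuracy minus weighted cost."""
    best, best_score = None, float("-inf")
    for (name, acc, lat, cores), n in product(VARIANTS, range(1, max_replicas + 1)):
        if lat > LATENCY_SLO_MS:
            continue  # this variant violates the SLO on its own
        if capacity(lat, n) < PREDICTED_RPS:
            continue  # too few replicas for the predicted load
        score = ALPHA * acc - (1 - ALPHA) * (n * cores) / 32.0
        if score > best_score:
            best, best_score = (name, n), score
    return best


if __name__ == "__main__":
    print(best_config())  # ('resnet18', 4) under these toy profiles
```

Under these toy numbers, the cheapest variant that can still serve the predicted load within the SLO wins; shifting ALPHA toward 1 instead favors the more accurate, more expensive variants.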

Item type: Conference publication
Published: 2023
Author(s): Salmani, Mehran ; Ghafouri, Saeid ; Sanaee, Alireza ; Razavi, Kamran ; Mühlhäuser, Max ; Doyle, Joseph ; Jamshidi, Pooyan ; Sharifi, Mohsen
Type of entry: Bibliography
Title: Reconciling High Accuracy, Cost-Efficiency, and Low Latency of Inference Serving Systems
Language: English
Date of publication: 8 May 2023
Publisher: ACM
Book title: EuroMLSys '23: Proceedings of the 3rd Workshop on Machine Learning and Systems
Event title: 3rd Workshop on Machine Learning and Systems
Event location: Rome, Italy
Event dates: 08.05.2023-08.05.2023
DOI: 10.1145/3578356.3592578

Uncontrolled keywords: machine learning, inference serving systems, autoscaling
Department(s)/section(s): 20 Department of Computer Science
20 Department of Computer Science > Telecooperation
TU projects: DFG|SFB1053|SFB1053 TPA01 Mühlhä
DFG|SFB1053|SFB1053 TPB02 Mühlhä
Date deposited: 02 Aug 2023 14:09
Last modified: 04 Aug 2023 07:32
PPN: 510353878