Chen, Kefeng (2006)
Eine wieder verwendbare dynamische Ressourcenverteilung für verschiedene Tasks auf einem Workstationcluster.
Technische Universität Darmstadt
Master's thesis, Bibliography
Abstract
Parallel computing is an important way to improve the efficiency of certain programs: parts of a parallel program run simultaneously on several processors or computers, so the computing time is greatly reduced. A computer cluster is a set of computers linked via Ethernet; it is the cheapest way to harness the power of parallel computing and can, in principle, be extended indefinitely. To take advantage of a cluster, one must employ APIs such as MPI or PVM. These APIs use the process as the smallest unit of computation to be distributed across the cluster, and the communication between processes must be programmed explicitly. This thesis proposes a different way to program the cluster: using threads. A framework was designed, and a prototype with which a program can distribute its threads across a cluster was implemented on top of MPI. The framework takes over the communication between threads and provides tools such as mutexes, semaphores, and wait conditions to control the workflow between threads. A default stepwise load-balancing algorithm is also provided to distribute the workload across the cluster as evenly as possible. Design patterns were used to provide flexibility, so the algorithms can easily be exchanged. Finally, the prototype was tested on a workstation cluster with the marching cubes algorithm.
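The record contains no code, but the work-distribution idea the abstract alludes to (work handed out over MPI and rebalanced stepwise as nodes finish) can be illustrated with a small master-worker sketch. This is only an assumption-laden illustration, not the thesis's framework: the function process_chunk, the message tags, and the chunk counts are hypothetical, and the sketch distributes chunks between MPI processes rather than migrating threads or providing the mutex/semaphore/wait-condition tools mentioned above.

```cpp
// Hypothetical master-worker sketch over MPI (not the thesis's actual framework).
// Run with at least two processes, e.g.: mpirun -np 4 ./sketch
#include <mpi.h>
#include <cstdio>

static const int TAG_WORK   = 1;  // master -> worker: a chunk index, or -1 for "stop"
static const int TAG_RESULT = 2;  // worker -> master: the result of one chunk

// Placeholder for the per-chunk computation, e.g. one slab of a marching-cubes volume.
static double process_chunk(int chunk) { return chunk * 2.0; }

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int num_chunks = 64;  // total work units to distribute

    if (rank == 0) {
        // Master: prime every worker with one chunk, then refill whichever
        // worker reports back first -- a simple stepwise form of load balancing.
        int next_chunk = 0, received = 0;
        for (int w = 1; w < size; ++w) {
            int chunk = (next_chunk < num_chunks) ? next_chunk++ : -1;
            MPI_Send(&chunk, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
        }
        while (received < num_chunks) {
            double value = 0.0;
            MPI_Status status;
            MPI_Recv(&value, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_RESULT,
                     MPI_COMM_WORLD, &status);
            ++received;  // (mapping results back to chunk indices is omitted)
            int chunk = (next_chunk < num_chunks) ? next_chunk++ : -1;
            MPI_Send(&chunk, 1, MPI_INT, status.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
        }
        std::printf("master: collected %d results\n", received);
    } else {
        // Worker: process chunk indices until the stop signal (-1) arrives.
        while (true) {
            int chunk = -1;
            MPI_Recv(&chunk, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            if (chunk < 0) break;
            double value = process_chunk(chunk);
            MPI_Send(&value, 1, MPI_DOUBLE, 0, TAG_RESULT, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Rank 0 acts as the master and all remaining ranks as workers; because an idle worker is refilled as soon as it reports a result, faster nodes automatically end up processing more chunks, which is the general effect a stepwise load balancer aims for.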
Type of item: | Master's thesis |
---|---|
Published: | 2006 |
Author(s): | Chen, Kefeng |
Type of entry: | Bibliography |
Title: | Eine wieder verwendbare dynamische Ressourcenverteilung für verschiedene Tasks auf einem Workstationcluster |
Language: | German |
Year of publication: | 2006 |
Uncontrolled keywords: | Parallel computing, Clustering, Multi-thread handling |
Additional information: | 95 pages |
Divisions: | unknown; 20 Department of Computer Science; 20 Department of Computer Science > Graphisch-Interaktive Systeme |
Date deposited: | 16 Apr 2018 09:04 |
Last modified: | 16 Apr 2018 09:04 |