TY - JOUR

T1 - Implementing parallel shortest path for parallel transportation applications

AU - Hribar, Michelle R.

AU - Taylor, Valerie E.

AU - Boyce, David E.

N1 - Funding Information:
This research was part of Michelle Hribar's Ph.D. dissertation completed at Northwestern University. She was supported by a National Science Foundation Graduate Fellowship and the Department of Energy. Valerie Taylor is supported by NSF Young Investigator award under Grant CCR-9215482. David Boyce is supported by the National Science Foundation through Grant DMS-9313113 to the National Institute of Statistical Sciences.

PY - 2001/11

Y1 - 2001/11

N2 - Shortest path algorithms are required by several transportation applications; furthermore, the shortest path computation in these applications can account for a large percentage of the total execution time. Since these algorithms are very computationally intensive, parallel processing can provide the compute power and memory required to solve large problems quickly. Therefore, good parallel shortest path algorithms are critical for efficient parallel implementations of transportation applications. The experimental work related to parallel shortest path algorithms has focused on the development of parallel algorithms; however, very little work has been done on analyzing and understanding the performance impact of various implementation issues. In this paper, we conduct a thorough experimental analysis of parallel shortest path algorithms for sparse networks, concentrating on three implementation issues: (1) choice of shortest path algorithm, (2) termination detection, and (3) network decomposition. The paper focuses on the choice of shortest path algorithm and network decomposition, since the work on termination detection was published previously. We determine that all three issues affect the communication and convergence of the shortest path algorithm. Furthermore, we find that communicating the most information at a time results in the best convergence; this is contrary to most scientific applications, where it is optimal to minimize communication.

AB - Shortest path algorithms are required by several transportation applications; furthermore, the shortest path computation in these applications can account for a large percentage of the total execution time. Since these algorithms are very computationally intensive, parallel processing can provide the compute power and memory required to solve large problems quickly. Therefore, good parallel shortest path algorithms are critical for efficient parallel implementations of transportation applications. The experimental work related to parallel shortest path algorithms has focused on the development of parallel algorithms; however, very little work has been done on analyzing and understanding the performance impact of various implementation issues. In this paper, we conduct a thorough experimental analysis of parallel shortest path algorithms for sparse networks, concentrating on three implementation issues: (1) choice of shortest path algorithm, (2) termination detection, and (3) network decomposition. The paper focuses on the choice of shortest path algorithm and network decomposition, since the work on termination detection was published previously. We determine that all three issues affect the communication and convergence of the shortest path algorithm. Furthermore, we find that communicating the most information at a time results in the best convergence; this is contrary to most scientific applications, where it is optimal to minimize communication.

KW - Network decomposition

KW - Parallel performance

KW - Parallel shortest path

KW - Parallel transportation applications

KW - Shortest path algorithms

KW - Traffic equilibrium

UR - http://www.scopus.com/inward/record.url?scp=0035501909&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0035501909&partnerID=8YFLogxK

U2 - 10.1016/S0167-8191(01)00105-3

DO - 10.1016/S0167-8191(01)00105-3

M3 - Article

AN - SCOPUS:0035501909

SN - 0167-8191

VL - 27

SP - 1537

EP - 1568

JO - Parallel Computing

JF - Parallel Computing

IS - 12

ER -