Synchronization in graph analysis algorithms on the Partially Ordered Event-Triggered Systems many-core architecture
- Author
Rafiev, A [0000-0002-7387-5970], Yakovlev, A [0000-0003-0826-9330], Tarawneh, G, Naylor, MF [0000-0001-9827-8497], Moore, SW [0000-0002-2806-495X], Thomas, DB [0000-0002-9671-0917], Bragg, GM [0000-0002-5201-7977], Vousden, ML [0000-0002-6552-5831], and Brown, AD
- Subjects
4009 Electronics, Sensors and Digital Hardware, 4606 Distributed Computing and Systems Software, 46 Information and Computing Sciences, Hardware and Architecture, Bioengineering, Electrical and Electronic Engineering, Software, 40 Engineering
- Abstract
One of the key problems in designing and implementing graph analysis algorithms for distributed platforms is to find an optimal way of managing communication flows in the massively parallel processing network. Message-passing and global synchronization are powerful abstractions in this regard, especially when used in combination. This paper studies the use of a hardware-implemented refutable global barrier as a design optimization technique aimed at unifying these abstractions at the API level. The paper explores the trade-offs between the related overheads and performance factors on a message-passing prototype machine with 49,152 RISC-V threads distributed over 48 FPGAs (called the Partially Ordered Event-Triggered Systems platform). Our experiments show that some graph applications favour synchronized communication, but the effect is hard to predict in general because of the interplay between multiple hardware and software factors. A classifier model is therefore proposed and implemented to perform this prediction based on the application graph topology parameters: graph diameter, degree of connectivity, and reconvergence metric. The presented experimental results demonstrate that the correct choice of communication mode, enabled by the new model-driven approach, yields computation that is on average 3.22 times faster than the baseline platform operation.
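The abstract describes a classifier that chooses between synchronized (barrier-based) and asynchronous communication from three application-graph features: diameter, degree of connectivity, and a reconvergence metric. The Python sketch below only illustrates the general shape of such a model-driven mode selection; the `TopologyFeatures` type, the `select_mode` helper, and all threshold values are illustrative assumptions and do not reproduce the classifier published in the paper.

```python
from dataclasses import dataclass


@dataclass
class TopologyFeatures:
    """Application-graph features named in the abstract (values assumed precomputed)."""
    diameter: int          # longest shortest path in the application graph
    avg_degree: float      # degree of connectivity (mean vertex degree)
    reconvergence: float   # reconvergence metric as defined in the paper


def select_mode(f: TopologyFeatures) -> str:
    """Toy rule-based stand-in for the paper's classifier.

    The thresholds are placeholders, not values from the paper: deep, highly
    reconvergent graphs are assumed here to favour the hardware barrier
    (synchronized mode), while shallow, sparse graphs run asynchronously.
    """
    if f.diameter > 20 and f.reconvergence > 0.5:
        return "synchronized"   # communicate in barrier-separated steps
    if f.avg_degree < 4.0:
        return "asynchronous"   # plain message-passing, no global barrier
    return "synchronized"


if __name__ == "__main__":
    example = TopologyFeatures(diameter=32, avg_degree=6.5, reconvergence=0.7)
    print(select_mode(example))  # -> "synchronized"
```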
- Published
- 2022