
Towards a Standardized Representation for Deep Learning Collective Algorithms

Authors:
Yoo, Jinsun
Won, William
Cowan, Meghan
Jiang, Nan
Klenk, Benjamin
Sridharan, Srinivas
Krishna, Tushar
Publication Year:
2024

Abstract

The explosion of machine learning model sizes has led to execution on distributed clusters at very large scale. Many works have tried to optimize the production of collective algorithms and the execution of collective communications, which are a bottleneck in distributed machine learning. However, each work uses its own collective algorithm representation, which hinders co-optimizing collective communication with the rest of the workload. The lack of a standardized collective algorithm representation has also hindered interoperability between collective algorithm producers and consumers: tool-specific conversions and modifications must be made for each pair of producer and consumer tools, which adds engineering effort. In this position paper, we propose a standardized workflow built on a common collective algorithm representation. Upstream producers and downstream consumers converge on a common representation format based on the Chakra Execution Trace, a commonly used graph-based representation of distributed machine learning workloads. Such a common representation lets collective communications be viewed at the same level as workload operations, decouples producer and consumer tools, enhances interoperability, and relieves users of the burden of attending to downstream implementations. We provide a proof of concept of this standardized workflow by simulating collective algorithms generated by the MSCCLang domain-specific language in the ASTRA-sim distributed machine learning simulator under various network configurations.
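To make the idea of a graph-based collective algorithm representation concrete, the following minimal Python sketch encodes the reduce-scatter phase of a ring All-Reduce as a dependency graph of send/recv nodes, in the spirit of the Chakra Execution Trace. The CommNode class, its fields, and ring_allreduce_nodes are illustrative assumptions, not the actual Chakra schema or any producer/consumer tool's API.

    # Hypothetical sketch: a collective algorithm as a dependency graph of
    # communication nodes. Names and fields are assumptions for illustration,
    # not the real Chakra Execution Trace schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CommNode:
        """One communication operation in the collective-algorithm graph."""
        node_id: int
        op: str               # e.g. "SEND" or "RECV"
        peer: int             # rank on the other end of the transfer
        chunk: int            # which data chunk this step moves
        deps: List[int] = field(default_factory=list)  # prerequisite node_ids

    def ring_allreduce_nodes(rank: int, world_size: int) -> List[CommNode]:
        """Emit the reduce-scatter phase of a ring All-Reduce for one rank.

        Each of the world_size - 1 steps sends one chunk to the next rank
        and receives another from the previous rank; each step depends on
        the previous step's recv, forming a chain in the graph.
        """
        nodes, nid = [], 0
        nxt, prv = (rank + 1) % world_size, (rank - 1) % world_size
        prev_recv = None
        for step in range(world_size - 1):
            send_chunk = (rank - step) % world_size
            recv_chunk = (rank - step - 1) % world_size
            deps = [prev_recv] if prev_recv is not None else []
            send = CommNode(nid, "SEND", nxt, send_chunk, deps); nid += 1
            recv = CommNode(nid, "RECV", prv, recv_chunk, deps); nid += 1
            nodes += [send, recv]
            prev_recv = recv.node_id
        return nodes

    if __name__ == "__main__":
        for n in ring_allreduce_nodes(rank=0, world_size=4):
            print(n)

Because every producer (such as MSCCLang) would lower its algorithm to one such graph, a consumer (such as ASTRA-sim) only needs to interpret the graph's node and dependency semantics, which is what decouples the two sides of the workflow.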

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.11008
Document Type:
Working Paper