Improving FPGA-Based Logic Emulation Systems through Machine Learning
- Author
- Jose Escobedo Del Cid, Sung Kyu Lim, Anthony Agnesina, and Etienne Lepercq
- Subjects
- Machine learning, job scheduling, FPGA, place and route, netlist, system on chip, compile time, computer-aided design
- Abstract
We present a machine learning (ML) framework to improve the use of computing resources in the FPGA compilation step of a commercial FPGA-based logic emulation flow. Our ML models accurately predict final place-and-route design quality, runtime, and optimal mapping parameters. Using these models, we identify key compilation features that may require aggressive compilation effort. Experiments on our large-scale database from an industrial emulation system show that our ML models help reduce the total number of jobs required for a given netlist by 33%. Moreover, our ML-based job scheduling algorithm reduces the overall time to completion of concurrent compilation runs by 24%. In addition, we propose a new method to compute “recommendations” from our ML model for re-partitioning difficult partitions. Tested on a large-scale industrial system-on-chip design, our recommendation flow provides an additional 15% compile-time savings for the entire system on chip. To exploit our ML model inside the time-critical multi-FPGA partitioning step, we implement it in an optimized multi-threaded representation.
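The abstract's claimed 24% reduction in time-to-completion comes from scheduling concurrent compile jobs using predicted runtimes. The paper's actual model and scheduler are not given here; the following is a minimal sketch of one standard way predicted runtimes can drive such a scheduler, using the longest-processing-time-first (LPT) list-scheduling heuristic. The `predicted_runtime` dict stands in for the ML model's output, and all names are hypothetical.

```python
import heapq

def schedule(jobs, predicted_runtime, num_workers):
    """Assign compile jobs to workers to shrink the makespan.

    jobs: list of job identifiers.
    predicted_runtime: dict mapping job -> ML-predicted compile time
        (a stand-in for the paper's model; units are arbitrary).
    Returns (makespan, assignment) where assignment maps each
    worker index to its list of jobs.
    """
    # LPT heuristic: place the longest predicted jobs first.
    ordered = sorted(jobs, key=lambda j: predicted_runtime[j], reverse=True)
    # Min-heap of (current load, worker id): always pick the least-loaded worker.
    heap = [(0.0, w) for w in range(num_workers)]
    heapq.heapify(heap)
    assignment = {w: [] for w in range(num_workers)}
    for job in ordered:
        load, w = heapq.heappop(heap)
        assignment[w].append(job)
        heapq.heappush(heap, (load + predicted_runtime[job], w))
    # Makespan = load of the most heavily loaded worker.
    makespan = max(load for load, _ in heap)
    return makespan, assignment
```

Without runtime predictions, jobs are typically dispatched in arrival order, which can leave one worker stuck with a long compile at the end; accurate predictions let long jobs start early, which is the mechanism behind the reported savings.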
- Published
- 2020