1. Deep Reinforcement Learning-Based Optimal Decoupling Capacitor Design Method for Silicon Interposer-Based 2.5-D/3-D ICs
- Author
- Seongguk Kim, HyunWook Park, Boogyo Sim, Youngwoo Kim, Subin Kim, Junyong Park, Seungtaek Jeong, Kyungjun Cho, Joungho Kim, Gapyeol Park, Daehwan Lho, and Shinyoung Park
- Subjects
- Computer science, Networking & telecommunications, Power integrity, Integrated circuit, Topology, Decoupling capacitor, Silicon interposer, Reinforcement learning, Electrical impedance, Electrical and Electronic Engineering, Industrial and Manufacturing Engineering, Electronic, Optical and Magnetic Materials
- Abstract
In this article, we first propose a deep reinforcement learning (RL)-based optimal decoupling capacitor (decap) design method for silicon interposer-based 2.5-D/3-D integrated circuits (ICs). The proposed method provides an optimal decap design that satisfies the target impedance with a minimum area. Using deep RL algorithms based on reward feedback mechanisms, an optimal decap design guideline can be derived. For verification, the proposed method was applied to test power distribution networks (PDNs), and the self-PDN impedance was compared with full search simulation results. The full search simulation confirmed that the proposed method finds one of the optimal solution sets. Conventional approaches rely on complex analytical models built from power integrity (PI) domain expertise. In contrast, the proposed method requires only the specifications of the PDN structure and the decaps, along with a simple reward model, achieving fast and accurate data-driven results. The computing time of the proposed method was a few minutes, far shorter than that of the full search simulation, which took more than a month. Furthermore, the proposed deep RL method covered a design space of up to $10^{17}$ – $10^{18}$ cases, an increase of approximately 12–13 orders of magnitude over previous RL-based methods that did not utilize deep-learning techniques.
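The reward feedback mechanism described above can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual model: the lumped R-L bare-PDN elements, the decap RLC values, the frequency sweep, and the area-penalty weight are placeholders chosen only to show the shape of a reward that is positive when the target impedance is met at every frequency and decreases with the number of placed decaps (the minimum-area objective).

```python
import numpy as np

def pdn_impedance(freqs_hz, n_decaps,
                  c_each_f=1e-6, esl_h=50e-12, esr_ohm=5e-3,
                  r_bare_ohm=0.05, l_bare_h=0.1e-9):
    """Toy self-PDN impedance magnitude: a bare R-L PDN shunted by n identical decaps.

    All element values are illustrative placeholders, not taken from the paper.
    """
    w = 2 * np.pi * np.asarray(freqs_hz)
    z_bare = r_bare_ohm + 1j * w * l_bare_h  # bare PDN seen from the chip
    if n_decaps == 0:
        return np.abs(z_bare)
    # Each decap is an ESR + ESL + C series branch; n identical branches in parallel.
    z_decap = (esr_ohm + 1j * w * esl_h + 1.0 / (1j * w * c_each_f)) / n_decaps
    return np.abs(z_bare * z_decap / (z_bare + z_decap))

def reward(n_decaps, target_ohm=0.1, area_penalty=0.01):
    """+1 if the target impedance is met at every swept frequency, else -1,
    minus a per-decap area cost so fewer decaps score higher."""
    freqs = np.logspace(6, 9, 200)  # 1 MHz .. 1 GHz sweep (illustrative)
    met = np.all(pdn_impedance(freqs, n_decaps) <= target_ohm)
    return (1.0 if met else -1.0) - area_penalty * n_decaps
```

With these toy values, an empty placement fails the target (negative reward), while a placement that meets the target scores higher the fewer decaps it uses, which is the trade-off the RL agent is rewarded to learn.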
- Published
- 2020