1. Graph Reinforcement Learning for Network Control via Bi-Level Optimization
- Authors
- Gammelli, Daniele; Harrison, James; Yang, Kaidi; Pavone, Marco; Rodrigues, Filipe; and Pereira, Francisco C.
- Subjects
- FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; FOS: Mathematics; Computer Science - Machine Learning (cs.LG); Electrical Engineering and Systems Science - Systems and Control (eess.SY); Mathematics - Optimization and Control (math.OC)
- Abstract
- Optimization problems over dynamic networks have been extensively studied and widely used over the past decades to formulate numerous real-world problems. However, (1) traditional optimization-based approaches do not scale to large networks, and (2) designing good heuristics or approximation algorithms often requires significant manual trial and error. In this work, we argue that data-driven strategies can automate this process and learn efficient algorithms without compromising optimality. To do so, we present network control problems through the lens of reinforcement learning and propose a graph-network-based framework to handle a broad class of problems. Instead of naively computing actions over high-dimensional graph elements, e.g., edges, we propose a bi-level formulation in which we (1) specify a desired next state via RL, and (2) solve a convex program to best achieve it, leading to drastically improved scalability and performance. We further highlight a collection of features desirable to system designers, investigate design decisions, and present experiments on real-world control problems showing the utility, scalability, and flexibility of our framework.
- Comments
- 9 pages, 4 figures
- Published
- 2023
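
The bi-level step described in the abstract can be illustrated with a minimal sketch: an upper level proposes a desired next node state, and a lower level solves a convex program choosing edge flows that best realize it. This is not the authors' implementation; the toy network, the `desired_state_policy` heuristic (a stand-in for the learned graph-network policy), and the `cvxpy`-based flow program are all assumptions made purely for illustration.

```python
# Hedged sketch of a bi-level control step, assuming a toy flow network.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Toy directed network: node-edge incidence matrix B (n_nodes x n_edges).
n_nodes, n_edges = 4, 6
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
B = np.zeros((n_nodes, n_edges))
for e, (i, j) in enumerate(edges):
    B[i, e] = -1.0  # flow leaves node i
    B[j, e] = +1.0  # flow enters node j

def desired_state_policy(state):
    """Upper level (stand-in for the RL/graph-network policy):
    map the current node state to a desired next node state."""
    # Hypothetical heuristic: nudge mass toward the uniform distribution.
    return 0.5 * state + 0.5 * np.full_like(state, state.mean())

def solve_flow_program(state, target):
    """Lower level: convex program picking nonnegative edge flows whose
    net effect brings the node state as close as possible to the target."""
    f = cp.Variable(n_edges, nonneg=True)
    next_state = state + B @ f
    objective = cp.Minimize(cp.sum_squares(next_state - target) + 1e-3 * cp.sum(f))
    problem = cp.Problem(objective, [f <= state.max()])
    problem.solve()
    return f.value, next_state.value

state = rng.uniform(0.0, 10.0, size=n_nodes)
target = desired_state_policy(state)                  # (1) desired next state via the policy
flows, realized = solve_flow_program(state, target)   # (2) low-level action via convex program
print("current :", np.round(state, 2))
print("desired :", np.round(target, 2))
print("realized:", np.round(realized, 2))
```

The point of the split is visible in the shapes: the policy only outputs an n_nodes-dimensional target, while the convex program handles the (potentially much larger) n_edges-dimensional action space, which is what the abstract credits for the improved scalability.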