Graph-Attention-Based Causal Discovery With Trust Region-Navigated Clipping Policy Optimization
- Author
- Shixuan Liu, Keyu Wu, Yanghe Feng, Guangquan Cheng, Jincai Huang, and Zhong Liu
- Subjects
- Trust region, Optimization problem, Computer science, GRASP, Directed acyclic graph, Machine learning, Computer Science Applications, Human-Computer Interaction, Constraint (information theory), Control and Systems Engineering, Robustness (computer science), Reinforcement learning, Graph (abstract data type), Artificial intelligence, Electrical and Electronic Engineering, Software, Information Systems
- Abstract
- In many domains of the empirical sciences, discovering the causal structure among variables remains an indispensable task. Recently, to tackle the unoriented edges or latent-assumption violations suffered by conventional methods, researchers formulated a reinforcement learning (RL) procedure for causal discovery and equipped it with a REINFORCE algorithm to search for the best-rewarded directed acyclic graph. The two keys to the overall performance of the procedure are the robustness of the RL method and the efficient encoding of variables. However, on the one hand, REINFORCE is prone to local convergence and unstable performance during training. Neither trust region policy optimization, being computationally expensive, nor proximal policy optimization (PPO), suffering from aggregate constraint deviation, is a decent alternative for combinatorial optimization problems with considerable individual subactions. We propose a trust region-navigated clipping policy optimization method for causal discovery that guarantees both better search efficiency and steadiness in policy optimization, in comparison with REINFORCE, PPO, and our prioritized sampling-guided REINFORCE implementation. On the other hand, to boost the efficient encoding of variables, we propose a refined graph attention encoder called SDGAT that can grasp more feature information without a priori neighborhood information. With these improvements, the proposed method outperforms the former RL method on both synthetic and benchmark datasets in terms of output results and optimization robustness.
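For context on the clipping mechanism the paper refines: trust region-navigated clipping is positioned as an alternative to PPO's fixed clipped surrogate objective. The sketch below shows only that standard PPO objective, as background; it is not the authors' TRC method, and the function name, signature, and framework choice (PyTorch) are assumptions made for illustration.

```python
import torch

def ppo_clip_loss(new_log_probs: torch.Tensor,
                  old_log_probs: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    """Negative PPO clipped surrogate objective (minimize with a standard optimizer)."""
    # Importance ratio r(theta) = pi_theta(a|s) / pi_theta_old(a|s),
    # computed in log space for numerical stability.
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    # Clipping the ratio to [1 - eps, 1 + eps] removes the incentive to
    # move the new policy far from the behavior policy in a single update.
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the pessimistic (element-wise minimum) surrogate;
    # the sign is flipped so the loss can be minimized.
    return -torch.min(unclipped, clipped).mean()
```

One reading of the abstract's "aggregate constraint deviation", consistent with the setting it describes: when an action is composed of many individual subactions (here, the entries of a graph's adjacency structure), each per-subaction ratio can satisfy the clip range while their aggregate effect still drifts outside the intended trust region, which is the failure mode the trust region-navigated variant is said to address.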
- Published
- 2023
- Full Text
- View/download PDF