A Multi-Agent Reinforcement Learning Approach to Price and Comfort Optimization in HVAC-Systems
- Source :
- Energies, Vol 14, Iss 22, p 7491 (2021)
- Publication Year :
- 2021
- Publisher :
- MDPI AG, 2021.
Abstract
- This paper addresses the challenge of minimizing training time for the control of Heating, Ventilation, and Air-conditioning (HVAC) systems with online Reinforcement Learning (RL). This is done by developing a novel Multi-Agent Reinforcement Learning (MARL) approach for HVAC systems. The environment formed by the HVAC system is formulated as a Markov Game (MG) in a general-sum setting. The MARL algorithm is designed with a decentralized structure, in which only relevant states are shared between agents and actions are shared in a sequence that is sensible from a system's point of view. The simulation environment is a domestic house located in Denmark, designed to resemble an average house. The heat source in the house is an air-to-water heat pump, and the HVAC system is an Underfloor Heating (UFH) system. The house is subjected to weather changes from a data set collected in Copenhagen in 2006, spanning the entire year except for June, July, and August, when heating is not required. It is shown that: (1) when comparing Single-Agent Reinforcement Learning (SARL) and MARL, training time can be reduced by 70% for a four-temperature-zone UFH system; (2) the agent can learn and generalize over seasons; (3) the cost of heating can be reduced by 19%, equivalent to 750 kWh of electric energy per year for an average Danish domestic house, compared to a traditional control method; and (4) oscillations in the room temperature can be reduced by 40% when comparing the RL control methods with a traditional control method.
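- The abstract's description of a decentralized MARL structure, where each zone agent observes only relevant shared states and actions are passed along in a fixed sequence, can be illustrated with a minimal sketch. The code below is not the paper's algorithm: the toy environment, tabular Q-learning, reward shape, action discretization, and all names (ToyZoneEnv, SETPOINT, etc.) are assumptions for illustration only; it merely mimics the idea of per-zone agents that see their own temperature, a price signal, and the actions already chosen earlier in the sequence.

```python
# Illustrative sketch only: decentralized per-zone agents with sequential
# action sharing for a multi-zone heating problem. Not the paper's method.
import random
from collections import defaultdict

N_ZONES = 4            # four temperature zones, as in the paper's UFH case
ACTIONS = (0, 1)       # 0 = valve closed, 1 = valve open (assumed discretization)
SETPOINT = 21.0        # desired room temperature in deg C (assumed)

class ToyZoneEnv:
    """Toy stand-in for the house simulation: one temperature per zone."""
    def __init__(self):
        self.temps = [19.0] * N_ZONES

    def step(self, zone, action, price):
        # Crude thermal response: heating raises the temperature, ambient loss lowers it.
        self.temps[zone] += 0.5 * action - 0.2
        comfort_penalty = abs(self.temps[zone] - SETPOINT)
        cost_penalty = price * action
        return -(comfort_penalty + cost_penalty)   # local reward: comfort vs. price

def discretize(temp):
    return int(round(temp))

# One tabular Q-table per zone agent (decentralized learners).
q_tables = [defaultdict(float) for _ in range(N_ZONES)]
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

env = ToyZoneEnv()
for episode in range(200):
    price = random.uniform(0.1, 0.5)          # stand-in for the electricity price signal
    prev_actions = [0] * N_ZONES
    for step in range(96):                    # e.g. 15-minute slots over a day (assumed)
        for zone in range(N_ZONES):
            # Each agent sees only "relevant" shared state: its own zone temperature,
            # the price, and the actions chosen earlier in this sequence.
            state = (zone, discretize(env.temps[zone]),
                     round(price, 1), tuple(prev_actions[:zone]))
            if random.random() < EPS:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q_tables[zone][(state, a)])
            reward = env.step(zone, action, price)
            next_state = (zone, discretize(env.temps[zone]),
                          round(price, 1), tuple(prev_actions[:zone]))
            best_next = max(q_tables[zone][(next_state, a)] for a in ACTIONS)
            q_tables[zone][(state, action)] += ALPHA * (
                reward + GAMMA * best_next - q_tables[zone][(state, action)])
            prev_actions[zone] = action       # expose this action to later agents
```

- Passing earlier agents' actions into later agents' states is one simple way to realize the sequential action sharing the abstract describes; the paper's actual state and action design, learning algorithm, and reward formulation should be taken from the full text.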
Details
- Language :
- English
- ISSN :
- 19961073
- Volume :
- 14
- Issue :
- 22
- Database :
- Directory of Open Access Journals
- Journal :
- Energies
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.8030b101c8894434b618d3475c0b545a
- Document Type :
- article
- Full Text :
- https://doi.org/10.3390/en14227491