Multiobjective Reinforcement Learning Based Energy Consumption in C-RAN enabled Massive MIMO
Date
2022-04
Publisher
Společnost pro radioelektronické inženýrství
Abstract
Multiobjective optimization has become a suitable method for resolving conflicting objectives and enhancing the performance of wireless networks. In this study, we consider a multiobjective reinforcement learning (MORL) approach for resource allocation and energy consumption in cloud radio access networks (C-RANs). We propose an MORL method with two conflicting objectives and define the state space, action space, and reward for the MORL agent. Furthermore, we develop a Q-learning algorithm that controls the ON-OFF action of remote radio heads (RRHs) depending on their positions and nearby users, with the goal of selecting the single policy that best optimizes the trade-off between energy efficiency (EE) and quality of service (QoS). We analyze the performance of our Q-learning algorithm by comparing it with a simple ON-OFF scheme and a heuristic algorithm. The simulation results demonstrate that the normalized energy consumptions (ECs) of the simple ON-OFF scheme, the heuristic algorithm, and the Q-learning algorithm are 0.99, 0.85, and 0.8, respectively. Our proposed MORL-based Q-learning algorithm thus achieves superior EE performance compared with the simple ON-OFF scheme and the heuristic algorithm.
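As a rough illustration of the approach summarized above, the following Python sketch trains a tabular Q-learning agent that toggles RRHs ON or OFF under a scalarized reward combining energy consumption and QoS. The toy environment, the number of RRHs, the reward weights, and the QoS proxy are illustrative assumptions, not values or models taken from the paper.

```python
# Minimal sketch of multiobjective Q-learning for RRH ON-OFF control.
# All sizes, weights, and the environment model below are assumptions.
import numpy as np

N_RRH = 4                      # hypothetical number of remote radio heads
N_STATES = 2 ** N_RRH          # state = current ON/OFF pattern of the RRHs
N_ACTIONS = N_RRH + 1          # toggle one RRH, or keep the current pattern
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
W_EE, W_QOS = 0.5, 0.5         # scalarization weights for the two objectives

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    """Toy environment: apply the action and return (next_state, reward)."""
    pattern = state if action == N_RRH else state ^ (1 << action)
    n_on = bin(pattern).count("1")
    energy = n_on / N_RRH                   # normalized EC grows with active RRHs
    qos = min(1.0, n_on / 2)                # assumed: ~2 active RRHs satisfy demand
    reward = -W_EE * energy + W_QOS * qos   # scalarized multiobjective reward
    return pattern, reward

state = N_STATES - 1                        # start with all RRHs ON
for episode in range(5000):
    # epsilon-greedy action selection over the Q-table
    if rng.random() < EPSILON:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # standard Q-learning temporal-difference update
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

# Rough proxy for the preferred ON/OFF pattern after training
best_state = int(np.argmax(Q.max(axis=1)))
print("Learned ON/OFF pattern:", format(best_state, f"0{N_RRH}b"))
```

With these assumed weights, the scalarized reward peaks when two of the four RRHs are active, so the agent learns to switch off redundant RRHs while preserving the QoS proxy, mirroring the EE-QoS trade-off the paper targets.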
Citation
Radioengineering. 2022, vol. 31, no. 1, pp. 155-163. ISSN 1210-2512
https://www.radioeng.cz/fulltexts/2022/22_01_0155_0163.pdf
Document type
Peer-reviewed
Document version
Published version
Language of document
en