MAXIMIZING 5G NETWORK PERFORMANCE USING AN M-DRL TECHNIQUE FOR ADMISSION CONTROL
Abstract
In the evolving landscape of 5G networks, effective admission control plays a crucial role in maximizing network operator revenue and ensuring Quality of Service (QoS) and Quality of Experience (QoE) for diverse vertical applications. This paper presents a modified Deep Reinforcement Learning (DRL) approach for admission control in 5G networks that addresses the limitations of existing Reinforcement Learning (RL) and DRL-based methods. Our proposed methodology incorporates a custom state space, a custom action space, and a modified Deep Q-Network (DQN) algorithm to balance the acceptance of different network slice types while accounting for QoS/QoE requirements and available network resources. Using a custom-built, Python-based event-driven simulator, we demonstrate that our modified DRL-based admission control approach significantly outperforms existing algorithms in terms of profit and acceptance ratio. The results show a 9% increase in profit and improved acceptance ratios compared to state-of-the-art algorithms, attributable to the enhanced learning capability and better action selection provided by our modified DQN algorithm. This study paves the way for further research and development of advanced DRL-based admission control techniques for 5G/6G networks, ensuring optimal resource utilization and meeting the performance demands of emerging vertical applications.
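To make the DQN-based admission control idea concrete, the following is a minimal sketch in Python (the language of the paper's simulator). It assumes a binary accept/reject action space and a state vector summarizing available resources and the incoming slice request; the network architecture, reward handling, and hyperparameters shown here are illustrative assumptions, not the paper's exact modified DQN design.

```python
# Minimal, illustrative DQN-style admission controller: decides whether to
# accept (1) or reject (0) an incoming network-slice request given a state
# vector. This is a sketch under stated assumptions, not the authors' method.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps a network-state vector to Q-values for {reject, accept}."""

    def __init__(self, state_dim: int, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class AdmissionAgent:
    """Epsilon-greedy DQN agent with an experience-replay buffer."""

    def __init__(self, state_dim: int, gamma: float = 0.99, eps: float = 0.1):
        self.q = QNetwork(state_dim)
        self.target_q = QNetwork(state_dim)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=10_000)
        self.gamma, self.eps = gamma, eps

    def act(self, state: np.ndarray) -> int:
        # Explore with probability eps, otherwise pick the greedy action.
        if random.random() < self.eps:
            return random.randrange(2)
        with torch.no_grad():
            q = self.q(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax().item())

    def remember(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def learn(self, batch_size: int = 32):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = map(np.array, zip(*batch))
        s = torch.as_tensor(s, dtype=torch.float32)
        a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.as_tensor(r, dtype=torch.float32)
        s2 = torch.as_tensor(s2, dtype=torch.float32)
        d = torch.as_tensor(d, dtype=torch.float32)
        # Standard one-step TD target against a periodically synced target net.
        q_sa = self.q(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * (1 - d) * self.target_q(s2).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        self.target_q.load_state_dict(self.q.state_dict())
```

In use, a simulator would call `act` on each slice arrival, apply the decision, compute a reward (e.g., the revenue of an accepted request minus any QoS/QoE penalty), and feed the transition back through `remember` and `learn`; the reward definition here is a placeholder for whichever profit model the operator adopts.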