Enhancing energy efficiency and throughput in 5G networks using reinforcement learning

  • Author: Silvestre Lomba Malta
  • Thesis supervisors: Pedro Cruz Pinto, Manuel Fernández Veiga
  • Defence: At the Universidade de Vigo (Spain) in 2024
  • Language: Spanish
  • Thesis examination committee: Manuel Alberto Pereira Ricardo (chair), Ana Fernández Vilas (secretary), Jorge Enrique López de Vergara Méndez (member)
  • Doctoral programme: Doctoral Programme in Information and Communication Technologies, Universidad de Vigo
  • Abstract
    • This thesis presents two main original contributions to the field of 5th Generation (5G) mobile networks. The first contribution leverages Reinforcement Learning (RL) to reduce energy consumption in ultra-dense networks while adhering to the latency requirements of 5G use cases, following 3rd Generation Partnership Project (3GPP) New Radio (NR) Release 18 recommendations. The proposed method conveys information, including latency specifications, from the User Equipment (UE) to the Base Station (BS), which helps configure sleep modes and communication settings. Multiple 5G use cases with distinct latency and traffic-load requirements were evaluated to ensure optimal power savings without compromising service quality. A sleep mode strategy was developed that extends BS sleep duration by monitoring incoming traffic and packet latency; it balances energy reduction against latency management through a weighting system and buffer-load conditions (a minimal illustrative sketch of such a weighted reward follows this abstract). Simulations show significant energy savings, especially in low-traffic scenarios, with up to 80% energy reduction for certain latency thresholds, highlighting the potential for operators to balance energy efficiency and Quality of Service (QoS).

      The second contribution introduces a Deep Reinforcement Learning (DRL) agent for optimizing 5G network slicing by adaptively selecting the best decoding scheme among Orthogonal Multiple Access (OMA), Non-Orthogonal Multiple Access (NOMA), and Rate-Splitting Multiple Access (RSMA) based on traffic demands. The agent computes rewards from the various decoding schemes and learns policies for dynamic network conditions. It assumes a random count of active Ultra-Reliable Low-Latency Communications (URLLC) and Massive Machine-Type Communications (mMTC) devices and applies rate equations for finite packet lengths (a toy sketch of this scheme selection also follows the abstract). The agent's performance was evaluated in scenarios where Enhanced Mobile Broadband (eMBB) coexists with URLLC or mMTC use cases, showing improved resource management and spectral efficiency. The DRL agent demonstrated high efficacy in adapting to different scenarios, learning to dynamically allocate frequencies and choose optimal decoding schemes.

      This adaptability enables network operators to balance the requirements of eMBB, URLLC, and mMTC use cases, optimizing network capacity and performance. Overall, this thesis provides insights into leveraging RL and DRL for energy efficiency and resource optimization in 5G networks, addressing the diverse requirements of 5G use cases and advancing the state of the art in mobile network management.
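
As a rough illustration of the first contribution, the minimal Python sketch below trains a tabular Q-learning agent that picks a base-station sleep depth from a coarse buffer-load state and is rewarded by a weighted combination of energy saving and latency-budget violation. The sleep modes, energy-saving factors, wake-up delays, weight w, and 5 ms budget are all assumptions made for this example, not the model used in the thesis.

import random
from collections import defaultdict

# Hypothetical sleep depths with illustrative energy savings and wake-up delays;
# none of these numbers come from the thesis or from 3GPP specifications.
SLEEP_MODES = ["active", "light_sleep", "deep_sleep"]
ENERGY_SAVING = {"active": 0.0, "light_sleep": 0.5, "deep_sleep": 0.8}
WAKEUP_DELAY_MS = {"active": 0.0, "light_sleep": 1.0, "deep_sleep": 25.0}

def reward(mode, buffer_load, latency_budget_ms, w=0.5):
    """Weighted trade-off: w rewards energy saving, (1 - w) penalises budget violations."""
    extra_delay = WAKEUP_DELAY_MS[mode] * buffer_load            # crude queueing proxy
    violation = max(0.0, extra_delay - latency_budget_ms) / latency_budget_ms
    return w * ENERGY_SAVING[mode] - (1.0 - w) * violation

# Tabular Q-learning over coarse buffer-load buckets (0 = empty, 4 = full).
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, budget_ms):
    if random.random() < eps:                                    # epsilon-greedy exploration
        action = random.choice(SLEEP_MODES)
    else:
        action = max(SLEEP_MODES, key=lambda a: Q[(state, a)])
    r = reward(action, state / 4.0, budget_ms)
    next_state = random.randint(0, 4)                            # stand-in for traffic arrivals
    best_next = max(Q[(next_state, a)] for a in SLEEP_MODES)
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    return next_state

state = 0
for _ in range(20_000):
    state = step(state, budget_ms=5.0)                           # e.g. a 5 ms latency budget
for s in range(5):
    print(s, max(SLEEP_MODES, key=lambda a: Q[(s, a)]))          # preferred mode per load bucket

With these toy numbers the policy should settle on deep sleep when the buffer is nearly empty and fall back to lighter modes as the buffer fills, which mirrors the qualitative trade-off the abstract describes.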
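
For the second contribution, the sketch below uses a tabular epsilon-greedy learner as a stand-in for the thesis's deep RL agent: it computes a toy per-scheme reward that combines a Shannon rate for the eMBB slice with a finite-blocklength (normal-approximation) rate for the URLLC slice, and learns which of OMA, NOMA, or RSMA to prefer for each observed URLLC device count. The SNRs, blocklength, error target, URLLC rate target, and interference handling are illustrative assumptions, not the rate equations used in the thesis.

import math
import random
from collections import defaultdict

SCHEMES = ["OMA", "NOMA", "RSMA"]
URLLC_TARGET = 0.3   # required aggregate URLLC rate (bits/channel use), an assumed threshold

def qinv(eps):
    """Inverse Gaussian Q-function by bisection (accurate enough for a sketch)."""
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > eps:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def fb_rate(snr, block, err=1e-5):
    """Normal-approximation rate (bits/channel use) at blocklength `block` and error `err`."""
    cap = math.log2(1 + snr)
    disp = (1 - (1 + snr) ** -2) * math.log2(math.e) ** 2
    return max(0.0, cap - math.sqrt(disp / block) * qinv(err))

def scheme_reward(scheme, n_urllc, snr_e=10.0, snr_u=4.0):
    """Toy reward: eMBB rate, heavily penalised if the URLLC slice misses its rate target."""
    if n_urllc == 0:
        return math.log2(1 + snr_e)
    block = 200 // n_urllc                          # URLLC devices time-share -> shorter blocks
    if scheme == "OMA":                             # orthogonal split of the band
        embb = 0.5 * math.log2(1 + snr_e)
        urllc = 0.5 * fb_rate(snr_u, block)
    elif scheme == "NOMA":                          # URLLC decoded under full eMBB interference
        embb = math.log2(1 + snr_e)
        urllc = fb_rate(snr_u / (1 + snr_e), block)
    else:                                           # RSMA: common stream absorbs part of it (toy)
        embb = math.log2(1 + snr_e / (1 + 0.3 * snr_u))
        urllc = fb_rate(snr_u / (1 + 0.3 * snr_e), block)
    return embb if urllc >= URLLC_TARGET else embb - 5.0

# Epsilon-greedy value estimates per URLLC device count, standing in for the DRL policy.
values = defaultdict(float)
visits = defaultdict(int)
for _ in range(20_000):
    n_urllc = random.randint(0, 5)                  # random count of active URLLC devices
    if random.random() < 0.1:
        scheme = random.choice(SCHEMES)
    else:
        scheme = max(SCHEMES, key=lambda s: values[(n_urllc, s)])
    r = scheme_reward(scheme, n_urllc)
    visits[(n_urllc, scheme)] += 1
    values[(n_urllc, scheme)] += (r - values[(n_urllc, scheme)]) / visits[(n_urllc, scheme)]

for k in range(1, 6):
    print(k, max(SCHEMES, key=lambda s: values[(k, s)]))   # learned preference per device count

Even with this crude model the learned preference should shift with the number of active URLLC devices, which is the qualitative adaptability the abstract attributes to the DRL agent.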

