Mobile networks, particularly with the advance towards 5G and beyond, confront numerous challenges. One of the most critical is ensuring ultra-reliable low-latency communication (URLLC), vital for applications such as industrial automation and autonomous vehicles. Achieving URLLC requires managing cloud and edge resources effectively, dynamically adapting them to fluctuating demand. This challenge extends to balancing processing tasks between the cloud, which offers greater computational power but incurs higher latency, and the network edge, which reduces latency but may face higher costs and limited computational power. Striking the right balance is essential for maintaining URLLC guarantees and for allocating resources according to the specific needs of each application at each moment.
Balancing cloud and edge processing therefore involves a trade-off between the cloud's computational power, with its higher latency, and the edge's lower latency, with its cost and power constraints. This balance is crucial for sustaining URLLC guarantees. Equally important is dynamic resource scaling, which adjusts the number of active servers to demand. Dimensioning the system for peak traffic can guarantee URLLC service availability, but it wastes resources during off-peak hours. Scaling resources efficiently while preserving reliability and latency guarantees is thus a major challenge.
This thesis addresses these dual challenges of balancing and scaling resources in future mobile networks. We first identify different strategies to support highly reliable and energy-efficient services, ranging from deploying a few reliable blade servers to deploying many less reliable but more energy-efficient nano servers. The motivation is to explore the trade-offs between reliability and energy efficiency. Understanding these trade-offs allows network operators to make informed decisions that balance cost and performance, ensuring that critical applications receive the resources they need without incurring unnecessary expenses.
We then design and analyze a closed-loop system for the adaptive scaling of server farms in network function virtualization (NFV) contexts. The system uses control theory to automatically optimize the balance between reliability and energy consumption, proving faster and better suited to this task than traditional reinforcement learning algorithms. The motivation for the control-theoretic design is to provide a more responsive and efficient resource-scaling mechanism: methods that cannot react quickly enough to changing conditions lead to suboptimal performance, whereas the closed loop makes real-time adjustments so that resources remain well allocated.
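To illustrate the general idea of closed-loop scaling (this is a minimal sketch, not the controller designed in the thesis), the following Python fragment adjusts the number of active servers from a measured utilization error; the set-point, gains, and the measure_utilization/set_active_servers hooks are hypothetical placeholders.

```python
# Illustrative sketch of closed-loop server scaling; gains and hooks are assumed, not from the thesis.
import time

TARGET_UTIL = 0.6        # assumed utilization set-point balancing reliability and energy
KP, KI = 8.0, 2.0        # illustrative proportional and integral gains
MIN_SRV, MAX_SRV = 1, 64 # allowed range of active servers


def control_loop(measure_utilization, set_active_servers, period_s=1.0):
    """Periodically adjust the active-server count from the utilization error."""
    servers = MIN_SRV
    prev_error = 0.0
    while True:
        util = measure_utilization(servers)            # fraction of capacity currently in use
        error = util - TARGET_UTIL                     # positive => overloaded, add capacity
        # Velocity-form PI update: change the server count based on the error and its change.
        delta = KP * (error - prev_error) + KI * error * period_s
        servers = int(round(max(MIN_SRV, min(MAX_SRV, servers + delta))))
        set_active_servers(servers)                    # actuate: power servers on or off
        prev_error = error
        time.sleep(period_s)
```

The appeal of such a loop is that each decision only requires the latest measurement and two fixed gains, which is what makes control-theoretic scaling fast compared with learning-based approaches that must explore before acting well.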
We also provide an analytical model of the aforementioned server farm that characterizes its performance under a threshold-based activation and deactivation policy. The model captures both reliability and infrastructure costs, giving a comprehensive view of system performance.
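As an illustration of how a threshold-based policy operates (the thresholds and the backlog abstraction below are assumptions for the example, not the model analyzed in the thesis), a server is activated when the per-server backlog crosses an upper threshold and deactivated when it falls below a lower one:

```python
# Hysteresis-style threshold policy sketch; threshold values are illustrative assumptions.
UP_THRESHOLD = 40        # pending requests per active server above which a server is activated
DOWN_THRESHOLD = 10      # pending requests per active server below which a server is deactivated
MIN_SRV, MAX_SRV = 1, 64


def apply_threshold_policy(pending_requests: int, active_servers: int) -> int:
    """Return the new number of active servers given the current backlog."""
    load_per_server = pending_requests / active_servers
    if load_per_server > UP_THRESHOLD and active_servers < MAX_SRV:
        return active_servers + 1    # activate one more server for reliability
    if load_per_server < DOWN_THRESHOLD and active_servers > MIN_SRV:
        return active_servers - 1    # deactivate one server to save energy and cost
    return active_servers            # remain inside the hysteresis band
```

Keeping the activation and deactivation thresholds apart avoids oscillating server activations when the load hovers around a single value, which is precisely the kind of behavior an analytical model of such a policy must capture.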
Lastly, we formulate an optimization problem that minimizes the operational (monetary and energy) costs of cloud/edge resources while satisfying the latency and reliability requirements of vehicular URLLC services. An efficient algorithm with low computational complexity is developed, and its effectiveness is evaluated on real-world traffic data. The motivation behind this contribution is the significant cost of maintaining high service reliability and low latency. By optimizing these costs, the thesis provides a practical way for network operators to manage their budgets while still meeting the stringent requirements of URLLC applications.
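In generic form (the notation below is illustrative and not the exact formulation used in the thesis), such a problem minimizes monetary and energy costs over the cloud and edge allocations subject to a joint latency-reliability constraint:

\[
\begin{aligned}
\min_{x_c,\,x_e}\quad & c_c\, x_c + c_e\, x_e + \gamma\, E(x_c, x_e) \\
\text{s.t.}\quad & \Pr\!\big\{ L(x_c, x_e) \le L_{\max} \big\} \ge R_{\min}, \\
& x_c \ge 0,\; x_e \ge 0,
\end{aligned}
\]

where \(x_c\) and \(x_e\) denote the resources allocated in the cloud and at the edge, \(c_c\) and \(c_e\) their unit monetary costs, \(E(\cdot)\) the energy consumption weighted by \(\gamma\), \(L(\cdot)\) the resulting service latency, \(L_{\max}\) the latency budget, and \(R_{\min}\) the required reliability level.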