Machine learning-based optimal load balancing in software-defined networks
- Authors: Sharma, Aakanksha
- Date: 2022
- Type: Text , Thesis , PhD
- Full Text:
- Description: The global advancement of the Internet of Things (IoT) has set existing network traffic on a course of explosive growth. Predictions in the literature suggest that, in the future, trillions of smart devices will connect to exchange useful information. Accommodating such a proliferation of devices in the existing network infrastructure, referred to as the traditional network, is a significant challenge due to the absence of centralized control, which makes device management and network protocol updates tedious to implement. In addition, because of their inherently distributed nature, traditional networks make it difficult to apply machine learning mechanisms. Consequently, the load in the network becomes imbalanced, degrading the overall network Quality of Service (QoS). Expanding the existing infrastructure and relying on manual traffic control are inadequate to cope with the exponential growth of IoT devices. Therefore, future networks need an intelligent system that can efficiently organize, manage, maintain, and optimize growing networks. A software-defined network (SDN) has a holistic view of the network and is highly suitable for handling dynamic loads in the traditional network with minimal updates to the network infrastructure. However, the standard SDN control plane has been designed around a single controller or multiple distributed controllers, which face severe bottleneck issues. Our initial research created a reference model for the traditional network using standard SDN in a network simulator called NetSim. Based on network traffic, the reference models comprised light, modest, and heavy networks depending on the number of connected IoT devices. Furthermore, the research was enhanced with a priority scheduling and congestion control algorithm in the standard SDN, named extended SDN (eSDN), which minimized network congestion and performed better than the existing SDN. 
However, this enhancement was suitable only for small-scale networks because, in a large-scale network, the eSDN does not support dynamic controller mapping. Often, the same controller becomes overloaded, leading to a single point of failure. Our exhaustive literature review shows that the majority of proposed solutions are based on static controller deployment and do not consider flow fluctuations and traffic bursts, which leads to a lack of real-time load balancing among controllers and eventually increases network latency. Often, a switch experiences a traffic burst, and consequently its corresponding controller may become overloaded. Therefore, to maintain the Quality of Service (QoS) in the network, it is imperative for the statically mapped controller to neutralize on-the-fly traffic bursts. Addressing these issues demands research critical to improving QoS in load balancing, latency minimisation, and network reliability for next-generation networks. Our novel dynamic controller mapping algorithm with multiple-controller placement in the SDN is central to solving the identified issues. In the dynamic controller approach (dSDN), controllers are mapped dynamically as the load fluctuates. If any controller reaches its maximum threshold, the remaining traffic is diverted to another controller, significantly reducing delay and enhancing overall performance. Our technique considers latency and load fluctuation in the network and handles the situations where static mapping is ineffective against dynamic flow variation. In addition, our novel approach adds further intelligence to the network with a Temporal Deep Q Learning (tDQN) approach for dynamic controller mapping when the flow fluctuates. In this technique, a multi-objective optimization problem for flow fluctuation is formulated to dynamically divert traffic to the best-suited controller. 
The formulated technique is placed as an agent in the network controller to take care of all routing decisions, solving dynamic flow mapping and latency optimization without increasing the number of optimally placed controllers. Extensive simulation results show that the novel approach proposed in this thesis solves dynamic flow mapping while maintaining a balanced load among controllers, outperforming both traditional networks and SDN with priority scheduling and congestion control. Compared to traditional networks, tDQN provides a 47.48% increase in throughput, a 99.10% reduction in delay, and a 97.98% reduction in jitter for heavy network traffic. The thesis also presents several future research directions as possible extensions of the current work.
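The threshold-based diversion that the abstract describes for dSDN can be sketched in a few lines. This is a minimal illustration only: the controller names, capacities, and load model below are assumptions for the example, not the thesis's implementation.

```python
# Hypothetical sketch of dSDN-style dynamic controller mapping:
# when a controller's load would exceed its threshold, the new flow
# is diverted to the least-loaded controller with spare capacity.

def assign_flow(loads, thresholds, flow_load, preferred):
    """Map a new flow to a controller, diverting on overload."""
    # Stay on the preferred (statically mapped) controller if it has capacity.
    if loads[preferred] + flow_load <= thresholds[preferred]:
        loads[preferred] += flow_load
        return preferred
    # Otherwise divert to the least-loaded controller that can absorb the flow.
    candidates = [c for c in loads if loads[c] + flow_load <= thresholds[c]]
    if not candidates:
        raise RuntimeError("all controllers saturated")
    target = min(candidates, key=lambda c: loads[c])
    loads[target] += flow_load
    return target

loads = {"c1": 90, "c2": 40, "c3": 10}
thresholds = {"c1": 100, "c2": 100, "c3": 100}
print(assign_flow(loads, thresholds, flow_load=20, preferred="c1"))  # c3
```

The tDQN contribution replaces this greedy least-loaded choice with a learned policy that also weighs latency, but the overload-triggered diversion step is the same.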
- Description: Doctor of Philosophy
- Authors: Mukherjee, Subhasis , Huda, Shamsul , Yearwood, John
- Date: 2011
- Type: Text , Book chapter
- Relation: Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing p. 169-183
- Full Text: false
- Reviewed:
- Description: Robocup is a popular test bed for AI programs around the world. Robosoccer is one of the two major parts of Robocup, in which AIBO entertainment robots take part in the middle-sized soccer event. The three key challenges robots face in this event are manoeuvrability, image recognition, and decision-making skills. This paper focuses on a decision-making problem in Robosoccer: the goalkeeper problem. We investigate whether reinforcement learning (RL), as a form of semi-supervised learning, can effectively contribute to the goalkeeper's decision-making process when the penalty shot and two-attacker problems are considered. Currently, decision making in Robosoccer is carried out using a rule-based system. RL is also used for quadruped locomotion and navigation in Robosoccer with AIBO. Moreover, the ball distance is conventionally calculated using the IR sensors at the nose of the robot. In this paper, we propose a reinforcement learning approach that uses dynamic state-action mapping with back-propagation of reward and Q-learning along with a spline fit (QLSF) for the final choice of high-level functions in order to save the goal. The novelty of our approach is that the agent learns while playing and can take independent decisions, which overcomes the limitations of the rule-based system imposed by its fixed and limited predefined decision rules. The spline-fit method used with the nose camera was also able to determine the ball's location and distance more accurately than the IR sensors. The noise source and near-far sensor dilemma problems with the IR sensors were neutralized using the proposed spline-fit method. The performance of the proposed method has been verified against the benchmark data set built with the Upenn'03 code logic and a baseline experiment with IR sensors. We found that the efficiency of our QLSF approach in goalkeeping was better than the rule-based approach in conjunction with the IR sensors. 
The QLSF develops a semi-supervised learning process over the rule-based system's input-output mapping process, given in the Upenn'03 code. © 2011 Springer-Verlag Berlin Heidelberg.
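The Q-learning core behind the goalkeeper's high-level action choice can be sketched as a standard tabular update with epsilon-greedy selection. The state names, action set, and reward values below are illustrative assumptions, not the paper's QLSF implementation (which couples the update with a spline fit for ball-distance estimation).

```python
# Hypothetical tabular Q-learning sketch for a goalkeeper's
# high-level action choice; states, actions, and rewards are
# illustrative, not taken from the paper.
import random

ACTIONS = ["block_left", "block_right", "stay"]

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def choose_action(Q, state, epsilon=0.1):
    """Epsilon-greedy choice over the high-level actions."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

Q = {}
# Reward +1 for a save when the ball approaches from the left.
q_update(Q, "ball_left", "block_left", reward=1.0, next_state="saved")
print(round(Q[("ball_left", "block_left")], 2))  # 0.1
```

Because the agent updates Q while playing, the mapping from situations to actions improves with experience, which is the advantage the abstract claims over a fixed rule base.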
- Authors: Mukherjee, Subhasis , Yearwood, John , Vamplew, Peter , Huda, Shamsul
- Date: 2011
- Type: Text , Conference proceedings
- Full Text: false
- Description: Robocup is a popular test bed for AI programs around the world. Robosoccer is one of the two major parts of Robocup, in which AIBO entertainment robots take part in the middle-sized soccer event. The three key challenges robots face in this event are manoeuvrability, image recognition, and decision-making skills. This paper focuses on a decision-making problem in Robosoccer: the goalkeeper problem. We investigate whether reinforcement learning (RL), as a form of semi-supervised learning, can effectively contribute to the goalkeeper's decision-making process when the penalty shot and two-attacker problems are considered. Currently, decision making in Robosoccer is carried out using a rule-based system. RL is also used for quadruped locomotion and navigation in Robosoccer with AIBO. In this paper, we propose a reinforcement learning approach that uses dynamic state-action mapping with back-propagation of reward and space-quantized Q-learning (SQQL) for the choice of high-level functions in order to save the goal. The novelty of our approach is that the agent learns while playing and can take independent decisions, which overcomes the limitations of the rule-based system imposed by its fixed and limited predefined decision rules. The performance of the proposed method has been verified against the benchmark data set built with the Upenn'03 code logic. We found that the efficiency of our SQQL approach in goalkeeping was better than the rule-based approach. The SQQL develops a semi-supervised learning process over the rule-based system's input-output mapping process, given in the Upenn'03 code. © 2011 IEEE.
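The space-quantization step that gives SQQL its name amounts to binning a continuous state (e.g. the ball's position) into a coarse grid so the Q-table stays tractable. The field dimensions and grid resolution below are assumptions for illustration, not the paper's parameters.

```python
# Hypothetical sketch of the space-quantization step behind an
# SQQL-style approach: bin a continuous (x, y) field position into
# discrete grid cells that serve as Q-learning states.

def quantize(x, y, field_w=6.0, field_h=4.0, nx=6, ny=4):
    """Map a continuous field position to a discrete grid cell."""
    # Clamp to the field, then bin into nx-by-ny cells.
    cx = min(int(max(x, 0.0) / field_w * nx), nx - 1)
    cy = min(int(max(y, 0.0) / field_h * ny), ny - 1)
    return cx, cy

print(quantize(2.7, 3.9))  # (2, 3)
```

With nx * ny cells instead of raw coordinates, the state space shrinks from continuous to 24 discrete states in this sketch, which is what makes a tabular Q-function feasible on the robot.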