DQN approach for adaptive self-healing of VNFs in cloud-native network
- Authors: Arulappan, Arunkumar , Mahanti, Aniket , Passi, Kalpdrum , Srinivasan, Thiruvenkadam , Naha, Ranesh , Raja, Gunasekaran
- Date: 2024
- Type: Text , Journal article
- Relation: IEEE Access Vol. 12, no. (2024), p. 34489-34504
- Full Text:
- Reviewed:
- Description: The transformation from physical network function to Virtual Network Function (VNF) requires a fundamental design change in how applications and services are tested and assured in a hybrid virtual network. Once the VNFs are onboarded in a cloud network infrastructure, operators need to test VNFs automatically, in real time, at the time of instantiation. This paper explicitly analyses the problem of adaptive self-healing of a Virtual Machine (VM) allocated to the VNF with a Deep Reinforcement Learning (DRL) approach. The DRL-based big data collection and analytics engine performs aggregation to probe and analyze data for troubleshooting and performance management. This engine helps to determine corrective actions (self-healing), such as scaling or migrating VNFs. Hence, we propose a Deep Q-Learning (DQL)-based Deep Q-Network (DQN) mechanism for self-healing VNFs in the virtualized infrastructure manager. Virtual network probes of closed-loop orchestration automate the VNF and provide analytics for real-time, policy-driven orchestration in an open networking automation platform through the stochastic gradient descent method for VNF service assurance and network reliability. The proposed DQN/DDQN mechanism optimizes pricing and lowers the resource-usage cost by 18% without disrupting the Quality of Service (QoS) provided by the VNF. The adaptive self-healing of the VNFs enhances computational performance by 27% compared to other state-of-the-art algorithms. © 2013 IEEE.
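The record above combines Deep Q-Learning with stochastic-gradient updates for self-healing decisions such as scaling or migrating VNFs. A minimal sketch of that loop follows, using a linear Q-approximator in place of a deep network for brevity; the toy environment, state features (CPU/memory load), action set, and reward shape are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATE, N_ACTIONS = 2, 3           # state: [cpu_load, mem_load]; actions: no-op, scale-out, migrate
W = np.zeros((N_ACTIONS, N_STATE))  # linear Q-approximator: Q(s, a) = W[a] @ s
GAMMA, ALPHA, EPS = 0.9, 0.05, 0.1

def toy_vnf_step(state, action):
    """Illustrative VNF environment: scaling/migrating relieves load; reward penalises overload."""
    cpu, mem = state
    if action == 1: cpu *= 0.7          # scale-out relieves CPU
    if action == 2: mem *= 0.7          # migration relieves memory
    cpu = min(1.0, cpu + rng.uniform(0, 0.2))   # new traffic arrives
    mem = min(1.0, mem + rng.uniform(0, 0.2))
    reward = -max(0.0, cpu - 0.8) - max(0.0, mem - 0.8)  # penalty only when overloaded
    return np.array([cpu, mem]), reward

def act(state):
    if rng.random() < EPS:              # epsilon-greedy exploration
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(W @ state))

replay = []                             # experience replay buffer
state = np.array([0.5, 0.5])
for t in range(2000):
    a = act(state)
    nxt, r = toy_vnf_step(state, a)
    replay.append((state, a, r, nxt))
    s, a_, r_, n = replay[rng.integers(len(replay))]  # sample one stored transition
    td_target = r_ + GAMMA * np.max(W @ n)            # bootstrap from next state
    td_error = td_target - W[a_] @ s
    W[a_] += ALPHA * td_error * s                     # SGD step on the squared TD error
    state = nxt

print(np.round(W, 3))
```

A full DQN/DDQN replaces `W` with a neural network and adds a separate target network; the replay sampling and TD update are structurally the same.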
A blockchain-based deep-learning-driven architecture for quality routing in wireless sensor networks
- Authors: Khan, Zahoor , Amjad, Sana , Ahmed, Farwa , Almasoud, Abdullah , Imran, Muhammad , Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 31036-31051
- Full Text:
- Reviewed:
- Description: Over the past few years, great importance has been given to wireless sensor networks (WSNs) as they play a significant role in facilitating daily life services like healthcare, military and social applications. However, the heterogeneous nature of WSNs makes them prone to various attacks, which result in low throughput, high network delay and high energy consumption. In WSNs, routing is performed using different routing protocols like low-energy adaptive clustering hierarchy (LEACH), heterogeneous gateway-based energy-aware multi-hop routing (HMGEAR), etc. In such protocols, some nodes in the network may perform malicious activities. Therefore, four deep learning (DL) techniques and a real-time message content validation (RMCV) scheme based on blockchain are used in the proposed network for the detection of malicious nodes (MNs). Moreover, to analyse the routing data in the WSN, DL models are trained on a state-of-the-art dataset generated from LEACH, known as WSN-DS 2016. The WSN contains three types of nodes: sensor nodes, cluster heads (CHs) and the base station (BS). The CHs, after aggregating the data received from the sensor nodes, send it towards the BS. Furthermore, to overcome the single-point-of-failure issue, a decentralized blockchain is deployed on the CHs and BS. Additionally, MNs are removed from the network using RMCV and DL techniques. Moreover, legitimate nodes (LNs) are registered in the blockchain network using the proof-of-authority consensus protocol, which outperforms proof-of-work in terms of computational cost. Later, routing is performed between the LNs using different routing protocols and the results are compared with the original LEACH and HMGEAR protocols. The results show that the accuracy of GRU is 97%, LSTM is 96%, CNN is 92% and ANN is 90%. Throughput, delay and the death of the first node are computed for LEACH, LEACH with DL, LEACH with RMCV, HMGEAR, HMGEAR with DL and HMGEAR with RMCV. 
Moreover, Oyente is used to perform a formal security analysis of the designed smart contract. The analysis shows that the blockchain network is resilient against vulnerabilities. © 2013 IEEE.
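The record above registers legitimate nodes via proof-of-authority (PoA) because it is computationally cheaper than proof-of-work (PoW). The toy chain below illustrates that cost gap only; the authority set, payloads, and difficulty are hypothetical and none of this reflects the paper's actual smart contract:

```python
import hashlib, json

AUTHORITIES = {"CH-1", "CH-2", "BS"}   # hypothetical cluster heads and base station

def block_hash(prev_hash, payload, nonce=0):
    body = json.dumps({"prev": prev_hash, "data": payload, "nonce": nonce}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append_poa(chain, payload, validator):
    """Proof-of-authority: any registered authority may seal a block directly."""
    if validator not in AUTHORITIES:
        raise ValueError("unauthorised validator")
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"hash": block_hash(prev, payload), "by": validator})
    return 1                                      # one hash computed

def append_pow(chain, payload, difficulty=3):
    """Proof-of-work baseline: search for a nonce whose hash has leading zeros."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    nonce = 0
    while not block_hash(prev, payload, nonce).startswith("0" * difficulty):
        nonce += 1
    chain.append({"hash": block_hash(prev, payload, nonce), "by": "miner"})
    return nonce + 1                              # hashes computed during mining

poa_cost = append_poa(poa_chain := [], {"temp": 21}, "CH-1")
pow_cost = append_pow(pow_chain := [], {"temp": 21})
print(poa_cost, pow_cost)
```

PoA always costs one hash per block, while PoW at difficulty 3 needs on the order of 16^3 hash attempts, which is the computational-cost advantage the abstract cites.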
- Authors: Lu, Kui , Sultan, Ibrahim , Phung, Truong
- Date: 2023
- Type: Text , Journal article
- Relation: International Journal of Refrigeration Vol. 145, no. (2023), p. 467-480
- Full Text: false
- Reviewed:
- Description: As an emerging technology, the limaçon rotary compressor possesses great potential for fluid-processing applications. However, the technology and associated cost required to fabricate the limaçon machine could sometimes be beyond the capability of some manufacturers. To reduce the production cost, the circolimaçon embodiment, whose rotor and housing are constructed of circular arcs, has been proposed. This paper investigates the viability of the circolimaçon embodiment of limaçon technology based on sealing performance. A nonlinear three-degree-of-freedom model is presented to describe the dynamic behaviour of the apex seal during machine operation. Additionally, the leakage through the seal-housing gap is formulated by considering the inertia and viscous effects on the flow. A numerical illustration is offered to compare the performance of the circolimaçon embodiment with that of the limaçon-to-limaçon (L2L) type machine at different pressure ratios and operating speeds. The effect of limaçon aspect ratio on the apex seal dynamics is also investigated. Based on the results, the circolimaçon embodiment exhibits comparable performance to the L2L-type machine, despite having more significant seal vibrations. The differences in the volumetric and isentropic efficiencies between the two machines are within 8% and 3%, respectively. Additionally, the circolimaçon compressor with a small capacity undergoes a lower level of seal dynamics, suggesting better machine reliability. © 2022
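The leakage formulation above accounts for both inertia and viscous effects; as a rough orientation only, the viscous-dominated limit is the textbook Poiseuille flow through a thin rectangular gap. The sketch below uses that simplified closed form with illustrative gas properties and seal dimensions, not the paper's full model:

```python
# Textbook viscous (Poiseuille) leakage through a thin seal-housing gap.
# This neglects the inertia effects the paper's model includes; all parameter
# values below are illustrative, not taken from the paper.
def gap_leakage(rho, mu, h, L, dp, width):
    """Mass flow rate [kg/s] of laminar flow through a rectangular gap:
    m_dot = rho * width * h**3 * dp / (12 * mu * L)."""
    return rho * width * h**3 * dp / (12.0 * mu * L)

# Illustrative numbers: air-like gas, 20-micron gap, 5 mm seal length,
# 30 mm seal width, 1 bar pressure drop
m_dot = gap_leakage(rho=1.2, mu=1.8e-5, h=20e-6, L=5e-3, dp=1e5, width=30e-3)
print(f"{m_dot:.3e} kg/s")
```

The cubic dependence on gap height `h` is why seal vibration (which modulates the gap) matters so much for volumetric efficiency.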
- Authors: Wang, Yanping , Wang, Xiaofen , Dai, Hong-Ning , Zhang, Xiaosong , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 19, no. 6 (2023), p. 7835-7847
- Full Text: false
- Reviewed:
- Description: Intelligent Transport Systems (ITS) have received growing attention recently driven by technical advances in the Industrial Internet of Vehicles (IIoV). In IIoV, vehicles report traffic data to management infrastructures to achieve better ITS services. To ensure security and privacy, many anonymous authentication-enabled data reporting protocols are proposed. However, these protocols usually require a large number of preloaded pseudonyms or involve a costly and irrevocable group signature. Thus, they are not ready for realistic deployment due to large storage overhead, expensive computation costs, or the absence of malicious users' revocation. To address these issues, we present a novel data reporting protocol for edge-assisted ITS in this paper, where the traffic data is sent to distributed edge nodes for local processing. Specifically, we propose a new anonymous authentication scheme fine-tuned to fulfill the needs of vehicular data reporting, which allows authenticated vehicles to report unlimited unlinkable messages to edge nodes without huge pseudonym download and storage costs. Moreover, we design an efficient certificate update scheme based on a bivariate polynomial function. In this way, malicious vehicles can be revoked with time complexity O(1). The security analysis demonstrates that our protocol satisfies source authentication, anonymity, unlinkability, traceability, revocability, nonframeability, and nonrepudiation. Further, extensive simulation results show that the performance of our protocol is greatly improved since the signature size is reduced by at least 8%, the computation costs in message signing and verification are reduced by at least 56% and 67%, respectively, and the packet loss rate is reduced by at least 14%. © 2005-2012 IEEE.
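The O(1) revocation above rests on a bivariate polynomial function. The standard building block is a symmetric bivariate polynomial over a prime field, where each party's share is a univariate slice and any two parties derive the same value by evaluating each other's id. The sketch below shows only that symmetry; the coefficients, field size, and node ids are toy values, and the paper's certificate-update machinery adds authentication logic beyond this:

```python
# Symmetric bivariate polynomial f(x, y) over a prime field: the building block
# behind O(1) pairwise key/certificate derivation. All values are illustrative.
P = 2_147_483_647                      # prime field modulus (2^31 - 1)

# symmetric coefficient matrix: A[i][j] == A[j][i], so f(x, y) == f(y, x)
A = [[5, 7, 11],
     [7, 3, 13],
     [11, 13, 2]]

def f(x, y):
    """Evaluate f(x, y) = sum_ij A[i][j] * x^i * y^j mod P."""
    return sum(A[i][j] * pow(x, i, P) * pow(y, j, P)
               for i in range(3) for j in range(3)) % P

def share(node_id):
    """A node stores the univariate slice g(y) = f(node_id, y) as coefficients."""
    return [sum(A[i][j] * pow(node_id, i, P) for i in range(3)) % P
            for j in range(3)]

def pairwise_key(my_share, other_id):
    """Evaluate the stored slice at the peer's id: g(other_id) = f(me, other)."""
    return sum(c * pow(other_id, j, P) for j, c in enumerate(my_share)) % P

k_ab = pairwise_key(share(42), 99)     # node 42's view of the shared value
k_ba = pairwise_key(share(99), 42)     # node 99's view of the same value
assert k_ab == k_ba                    # symmetry gives both ends the same key
print(k_ab)
```

Because deriving the value for any peer is a single polynomial evaluation, updating or revoking one participant does not require touching every other participant's material, which is where the constant-time revocation comes from.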
A literature review of the positive displacement compressor : current challenges and future opportunities
- Authors: Lu, Kui , Sultan, Ibrahim , Phung, Truong
- Date: 2023
- Type: Text , Journal article , Review
- Relation: Energies Vol. 16, no. 20 (2023), p.
- Full Text:
- Reviewed:
- Description: Positive displacement compressors are essential in many engineering systems, from domestic to industrial applications. Many studies have been devoted to providing more insights into the workings and proposing solutions for performance improvements of these machines. This study aims to present a systematic review of published research on positive displacement compressors of various geometrical structures. This paper discusses the literature on compressor topics, including leakage, heat transfer, friction and lubrication, valve dynamics, port characteristics, and capacity control strategies. Moreover, the current status of the application of machine learning methods in positive displacement compressors is also discussed. The challenges and opportunities for future work are presented at the end of the paper. © 2023 by the authors.
- Authors: Yu, Kelai , Yang, Zhenjun , Li, Hui , Ooi, Ean Tat , Li, Shangming , Liu, GuoHua
- Date: 2023
- Type: Text , Journal article
- Relation: Engineering Fracture Mechanics Vol. 278, no. (2023), p.
- Full Text: false
- Reviewed:
- Description: This study develops an innovative numerical approach for simulating complex mesoscale fracture in concrete. In this approach, the concrete meso-structures are generated using a random aggregate generation and packing algorithm. Each aggregate is modelled by a single scaled boundary finite element method (SBFEM) based polygon with the boundary discretized only. The damage and fracture in the mortar is simulated by the continuous damage phase-field regularized cohesive zone model (PF-CZM), and the aggregate-mortar interfaces are modelled by zero-thickness cohesive interface elements (CIEs) with nonlinear softening separation-traction laws. This new approach thus takes full advantage of different methods, including the semi-analytical accuracy and high flexibility in mesh generation and transition of SBFEM, the mesh and length-scale independence of PF-CZM, and the ease-of-use of CIEs in modelling discrete interfacial fracture. These advantages are demonstrated by successful simulations of a few 2D and 3D benchmark examples in mode-I and mixed-mode fracture. © 2022 Elsevier Ltd
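The meso-structure generation above relies on a random aggregate generation and packing algorithm. A common scheme for this is "take-and-place": draw a random aggregate, accept it only if it fits and keeps a minimum mortar gap to all placed aggregates. The sketch below uses circular aggregates for simplicity (the paper uses SBFEM polygons), and all sizes and gaps are illustrative:

```python
import random

random.seed(1)

def pack_aggregates(n, radius_range, box=1.0, gap=0.005, max_tries=20000):
    """Take-and-place packing: draw a circular aggregate at a random position,
    accept it only if it lies inside the box and keeps a minimum mortar gap
    to every previously placed aggregate."""
    placed = []   # list of (x, y, r)
    tries = 0
    while len(placed) < n and tries < max_tries:
        tries += 1
        r = random.uniform(*radius_range)
        x = random.uniform(r, box - r)
        y = random.uniform(r, box - r)
        if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr + gap) ** 2
               for px, py, pr in placed):
            placed.append((x, y, r))
    return placed

aggs = pack_aggregates(30, (0.02, 0.06))
print(len(aggs), "aggregates placed")
```

Take-and-place degrades at high volume fractions (late placements fail often), which is one reason dedicated packing algorithms exist; for moderate fractions the simple rejection loop above suffices.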
A novel dynamic software-defined networking approach to neutralize traffic burst
- Authors: Sharma, Aakanksha , Balasubramanian, Venki , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: Computers Vol. 12, no. 7 (2023), p.
- Full Text:
- Reviewed:
- Description: Software-defined networking (SDN) has a holistic view of the network. It is highly suitable for handling dynamic loads in the traditional network with a minimal update in the network infrastructure. However, the standard SDN architecture control plane has been designed for single or multiple distributed SDN controllers, which face severe bottleneck issues. Our initial research created a reference model for the traditional network, using the standard SDN (referred to as SDN hereafter) in a network simulator called NetSim. Based on the network traffic, the reference models consisted of light, modest and heavy networks depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm is proposed in the standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than the standard SDN. However, the enhancement was suitable only for the small-scale network because, in a large-scale network, the eSDN does not support dynamic SDN controller mapping. Often, the same SDN controller gets overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment without considering flow fluctuations and traffic bursts that lead to a lack of load balancing among the SDN controllers in real-time, eventually increasing the network latency. Therefore, to maintain the Quality of Service (QoS) in the network, it becomes imperative for the static SDN controller to neutralise the on-the-fly traffic burst. Thus, our novel dynamic SDN (dSDN) controller mapping algorithm with multiple-controller placement is critical to solving the identified issues. In dSDN, the SDN controllers are mapped dynamically with the load fluctuation. If any SDN controller reaches its maximum threshold, the rest of the traffic will be diverted to another controller, significantly reducing delay and enhancing the overall performance. 
Our technique considers the latency and load fluctuation in the network and manages the situations where static mapping is ineffective in dealing with the dynamic flow variation. © 2023 by the authors.
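The diversion rule described above — when a controller reaches its threshold, route the remaining traffic to another controller — can be sketched as a least-loaded fallback. The controller names, capacity threshold, and per-flow costs below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of threshold-triggered controller re-mapping (the dSDN idea):
# when a controller's load would cross its threshold, new flows are diverted
# to the currently least-loaded controller.
THRESHOLD = 100            # illustrative max load a controller absorbs

class ControllerPool:
    def __init__(self, names):
        self.load = {name: 0 for name in names}

    def assign(self, flow_cost, preferred):
        """Map a flow to its preferred controller unless it would saturate;
        otherwise divert to the least-loaded controller."""
        target = preferred
        if self.load[preferred] + flow_cost > THRESHOLD:
            target = min(self.load, key=self.load.get)
        self.load[target] += flow_cost
        return target

pool = ControllerPool(["c1", "c2", "c3"])
placements = [pool.assign(10, preferred="c1") for _ in range(15)]
print(pool.load)
```

With all 15 flows preferring `c1`, the first ten fill it to its threshold and the rest spill over to `c2` and `c3`, so no controller exceeds its capacity — the single-point-of-overload failure the static mapping suffers from.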
A robust local texture descriptor in the parametric space of the weibull distribution
- Authors: Tania, Sheikh , Karmakar, Gour , Teng, Shyh , Murshed, Manzur
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Transactions on Multimedia Vol. 25, no. (2023), p. 6053-6066
- Full Text: false
- Reviewed:
- Description: Research in texture feature approximation is still in the embryonic stage because of difficulties in developing a sound theoretical model to express the unique pattern in the intensity-variation of pixels in the neighbourhood of the pixel-of-interest so that it can sufficiently discriminate different textures. Local texture descriptors are widely used in image segmentation as they comprise pixel-wise features. The Weber local descriptor (WLD) with differential excitation and gradient orientation components, inspired by Weber's Law, has been leveraged in the state-of-the-art iterative contraction and merging (ICM) image segmentation technique. However, WLD has inherent drawbacks in the formulation of the components that limit its discriminatory capability. This paper introduces a novel texture descriptor by directly modelling the distribution of intensity-variation in the parametric space of the Weibull distribution using its shape and scale parameters. A unified 'joint scale' texture property is introduced, which can discriminate textures better than the individual parameters while keeping the length of the descriptor shorter. Additionally, the accuracy of WLD's gradient orientation component is improved by using an extended Sobel operator and expressing gradients in -
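The descriptor above models local intensity variation in the parametric space of the Weibull distribution via its shape and scale parameters. As an orientation, those two parameters can be estimated from positive-valued samples (such as absolute local intensity differences) with a common closed-form moment approximation; the paper's exact estimator is not specified here, and the sample data below is synthetic:

```python
import math, random

random.seed(0)

def weibull_fit(samples):
    """Closed-form moment approximation of Weibull parameters:
    shape k ~ (std/mean)**-1.086, scale lam = mean / Gamma(1 + 1/k).
    A common textbook approximation for positive-valued data; the paper's
    own estimator may differ."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    k = (math.sqrt(var) / mean) ** -1.086
    lam = mean / math.gamma(1.0 + 1.0 / k)
    return k, lam

# draw from a known Weibull(shape=1.5, scale=2.0) and recover the parameters
data = [random.weibullvariate(2.0, 1.5) for _ in range(20000)]
k, lam = weibull_fit(data)
print(round(k, 2), round(lam, 2))
```

Packing each neighbourhood into just two numbers (k, lam) is what keeps the descriptor short while still separating textures whose intensity-variation distributions differ in spread or tail weight.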
Adaptive phase-field modelling of fracture propagation in poroelastic media using the scaled boundary finite element method
- Authors: Wijesinghe, Dakshith , Natarajan, Sundararajan , You, Greg , Khandelwal, Manoj , Dyson, Ashley , Song, Chongmin , Ooi, Ean Tat
- Date: 2023
- Type: Text , Journal article
- Relation: Computer Methods in Applied Mechanics and Engineering Vol. 411, no. (2023), p.
- Full Text:
- Reviewed:
- Description: A scaled boundary finite element-based phase field formulation is proposed to model two-dimensional fracture in saturated poroelastic media. The mechanical response of the poroelastic media is simulated following Biot's theory, and the fracture surface evolution is modelled according to the phase field formulation. To avoid the application of fine uniform meshes that are constrained by the element size requirement when adopting phase field models, an adaptive refinement strategy based on quadtree meshes is adopted. The unique advantage of the scaled boundary finite element method is conducive to the application of quadtree adaptivity, as it can be directly formulated on quadtree meshes without the need for any special treatment of hanging nodes. Efficient computation is achieved by exploiting the unique patterns of the quadtree cells. An appropriate scaling is applied to the relevant matrices and vectors according to the physical size of the cells in the mesh during the simulations. This avoids repetitive calculations of cells with the same configurations. The proposed model is validated using a benchmark with a known analytical solution. Numerical examples of hydraulic fractures driven by the injected fluid in cracks are modelled to illustrate the capabilities of the proposed model in handling crack propagation problems involving complex geometries. © 2023 The Author(s)
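The quadtree refinement strategy above concentrates small cells where the phase field varies, so the fine resolution the phase-field length scale demands is paid only near the crack. A geometric sketch of that idea follows; the damage field, sampling stencil, refinement tolerance, and maximum depth are all illustrative stand-ins for the solver's own quantities:

```python
# Sketch of quadtree refinement driven by a phase-field indicator: recursively
# split any cell across which the (hypothetical) damage field varies too much,
# so small cells cluster around the crack band.
def phase_field(x, y):
    """Hypothetical damage field: a narrow horizontal crack band near y = 0.5."""
    return max(0.0, 1.0 - 40.0 * abs(y - 0.5))

def refine(x, y, size, depth, max_depth, cells):
    # sample the field on a 3x3 stencil over the cell (corners, edges, centre)
    vals = [phase_field(x + fx * size, y + fy * size)
            for fx in (0.0, 0.5, 1.0) for fy in (0.0, 0.5, 1.0)]
    if depth < max_depth and max(vals) - min(vals) > 0.05:
        half = size / 2        # split into four children
        for dx in (0.0, half):
            for dy in (0.0, half):
                refine(x + dx, y + dy, half, depth + 1, max_depth, cells)
    else:
        cells.append((x, y, size))   # keep as a leaf cell

cells = []
refine(0.0, 0.0, 1.0, 0, 6, cells)
small = [c for c in cells if c[2] < 0.1]
print(len(cells), "cells,", len(small), "refined near the crack")
```

In the SBFEM setting, the payoff noted in the abstract is that the handful of distinct cell patterns this recursion produces can be stiffness-assembled once and reused with a size scaling, instead of being recomputed cell by cell.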
An agriprecision decision support system for weed management in pastures
- Authors: Chegini, Hossein , Naha, Ranesh , Mahanti, Aniket , Gong, Mingwei , Passi, Kalpdrum
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 92660-92675
- Full Text:
- Reviewed:
- Description: Pastures are a vital source of dairy products and cattle nutrition, and as such, play a significant role in New Zealand's agricultural economy. However, weeds can be a major problem for pastures, making it a challenge for dairy farmers to monitor and control them. Currently, most weed-management tasks are done manually, and farmers lack persistent technology for weed control. This motivated us to design, implement, and evaluate a Decision Support System (DSS) to detect weeds in pastures and provide decisions for the cleanup of weeds. Our proposed system uses two primary inputs: weeds and bare patches. We created a synthetic dataset to train a weed detection model and designed a fuzzy inference system to assess a pasture. We also used a neuro-fuzzy system in our DSS to evaluate our fuzzy model and tune its parameters for better functioning and accuracy. Our work aims to assist dairy farmers in better weed monitoring, as well as to provide 2D maps of weed density and yield score, which can be of significant value when no digital and meaningful images of pastures exist. The system can also support farmers in scheduling, recommending prohibitive tasks, and storing historical data for pasture analysis, in collaboration with stakeholders. © 2013 IEEE.
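The fuzzy inference step above maps the two primary inputs (weed density and bare patches) to a pasture assessment. A toy Sugeno-style version is sketched below; the triangular membership breakpoints and the three-rule base are invented for illustration and are not the paper's tuned (neuro-fuzzy) parameters:

```python
# Toy fuzzy assessment of a pasture from weed density and bare-patch ratio,
# both normalised to [0, 1]. Memberships and rules are illustrative only.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pasture_score(weed_density, bare_ratio):
    low_w = tri(weed_density, -0.4, 0.0, 0.4)   # "low weed density"
    high_w = tri(weed_density, 0.2, 1.0, 1.8)   # "high weed density"
    low_b = tri(bare_ratio, -0.4, 0.0, 0.4)
    high_b = tri(bare_ratio, 0.2, 1.0, 1.8)
    # Sugeno-style rules: (firing strength, crisp consequent yield score)
    rules = [(min(low_w, low_b), 1.0),    # clean pasture -> high yield score
             (max(high_w, high_b), 0.2),  # weedy or bare -> low score
             (min(low_w, high_b), 0.5)]   # clean but patchy -> medium
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0      # weighted-average defuzzification

print(round(pasture_score(0.1, 0.05), 2))   # near-clean pasture
print(round(pasture_score(0.9, 0.6), 2))    # weed-infested, patchy pasture
```

A neuro-fuzzy layer, as used in the paper's DSS, would treat the breakpoints and consequents above as trainable parameters and fit them to labelled pasture assessments.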
Anti-aliasing deep image classifiers using novel depth adaptive blurring and activation function
- Authors: Hossain, Md Tahmid , Teng, Shyh , Lu, Guojun , Rahman, Mohammad Arifur , Sohel, Ferdous
- Date: 2023
- Type: Text , Journal article
- Relation: Neurocomputing Vol. 536, no. (2023), p. 164-174
- Full Text: false
- Reviewed:
- Description: Deep convolutional networks are vulnerable to image translation or shift, partly due to common down-sampling layers, e.g., max-pooling and strided convolution. These operations violate the Nyquist sampling rate and cause aliasing. The textbook solution is low-pass filtering (blurring) before down-sampling, which can benefit deep networks as well. Even so, non-linearity units, such as ReLU, often re-introduce the problem, suggesting that blurring alone may not suffice. In this work, first, we analyse deep features with Fourier transform and show that Depth Adaptive Blurring is more effective, as opposed to monotonic blurring. To this end, we propose a novel Depth Adaptive Blur-pool (DAB-pool) module to replace existing down-sampling methods. Second, we introduce a novel activation function – with a built-in low pass filter, as an additional measure, to keep the problem from reappearing. From experiments, we observe generalisation on other forms of transformations and corruptions as well, e.g., rotation, scale, and noise. We evaluate our method under three challenging settings: (1) a variety of image translations; (2) adversarial attacks – both
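The core anti-aliasing idea above — low-pass filter before down-sampling — can be shown in plain NumPy with the classic [1, 2, 1]/4 binomial kernel. This is the generic blur-pool baseline, not the paper's Depth Adaptive Blur-pool (DAB-pool) module, whose blur strength varies with depth:

```python
import numpy as np

def blur_pool(x, stride=2):
    """Anti-aliased down-sampling: blur with a separable [1,2,1]/4 binomial
    kernel (edge-padded), then subsample, instead of subsampling directly."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    pad = np.pad(x, 1, mode="edge")
    rows = sum(k[i] * pad[:, i:i + x.shape[1]] for i in range(3))   # blur rows
    blur = sum(k[i] * rows[i:i + x.shape[0], :] for i in range(3))  # blur columns
    return blur[::stride, ::stride]

# A checkerboard is pure high frequency: naive stride-2 subsampling aliases it
# to a constant (here all zeros), silently discarding the pattern, while
# blur-pool collapses it toward its true mean intensity of 0.5.
board = np.indices((8, 8)).sum(0) % 2
naive = board[::2, ::2]
smooth = blur_pool(board.astype(float))
print(naive.std(), round(float(smooth.mean()), 2))
```

The same Nyquist argument applies inside a network: stride-2 convolution or max-pooling without a preceding low-pass step aliases high-frequency feature content, which is what makes the output sensitive to one-pixel shifts.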
Application of various robust techniques to study and evaluate the role of effective parameters on rock fragmentation
- Authors: Mehrdanesh, Amirhossein , Monjezi, Masoud , Khandelwal, Manoj , Bayat, Parichehr
- Date: 2023
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 39, no. 2 (2023), p. 1317-1327
- Full Text:
- Reviewed:
- Description: In this paper, an attempt has been made to implement various robust techniques to predict rock fragmentation due to blasting in open pit mines using effective parameters. As rock fragmentation prediction is highly complex, various artificial-intelligence-based techniques, such as artificial neural networks (ANN), classification and regression trees and support vector machines, were selected for the modeling. To validate and compare the prediction results, conventional multivariate regression analysis was also applied to the same data sets. Since the accuracy and generality of the modeling depend on the number of inputs, sufficient data were collected from four different open pit mines in Iran. According to the obtained results, ANN, with a determination coefficient of 0.986, is the most precise modeling method among the applied techniques. Also, based on the performed sensitivity analysis, the most prevailing parameters on rock fragmentation are rock quality designation, Schmidt hardness value and mean in-situ block size, while the least effective ones are hole diameter, burden and spacing. The advantage of the back propagation neural network technique used in this study, compared to other soft computing methods, is its ability to describe complex and nonlinear multivariable problems in a transparent way. Furthermore, ANN can be used as a first approach where much knowledge about the influencing parameters is missing. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
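The back-propagation regression described in this abstract can be illustrated with a minimal sketch: a one-hidden-layer network trained by gradient descent to map blast-design parameters to a fragmentation measure. The synthetic data, network size, and learning rate below are our assumptions for illustration only and bear no relation to the paper's field data.

```python
import numpy as np

# Synthetic stand-ins for 6 effective parameters (e.g. RQD, Schmidt hardness,
# in-situ block size, hole diameter, burden, spacing), scaled to [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 6))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]  # toy fragmentation target

W1 = rng.normal(0.0, 0.1, (6, 8)); b1 = np.zeros(8)  # one hidden layer of 8 units
W2 = rng.normal(0.0, 0.1, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

loss_before = mse(forward(X)[1], y)
lr = 0.05
for _ in range(2000):                       # plain batch gradient descent
    h, pred = forward(X)
    err = 2.0 * (pred - y)[:, None] / len(X)  # dL/d(output)
    dh = (err @ W2.T) * (1.0 - h ** 2)        # back-propagate through tanh
    W2 -= lr * (h.T @ err); b2 -= lr * err.sum(0)
    W1 -= lr * (X.T @ dh);  b1 -= lr * dh.sum(0)
loss_after = mse(forward(X)[1], y)
```

The training loss drops as the network fits the (here, deliberately simple) parameter-to-fragmentation relationship; the paper's point is that such a network can do the same for genuinely nonlinear multivariable mine data.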
Applications of machine learning and deep learning in antenna design, optimization, and selection : a review
- Sarker, Nayan, Podder, Prajoy, Mondal, M., Shafin, Sakib, Kamruzzaman, Joarder
- Authors: Sarker, Nayan , Podder, Prajoy , Mondal, M. , Shafin, Sakib , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 11, no. (2023), p. 103890-103915
- Full Text:
- Reviewed:
- Description: This review paper provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and high-frequency structure simulator (HFSS) for ML and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods, including parallel optimization, single- and multi-objective optimization, variable-fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization, are discussed. The review also covers AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, the data generation technique with computational electromagnetics software is described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
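Surrogate-based optimization, one of the loop styles this review surveys, can be sketched in a few lines: a cheap model stands in for the EM solver, and only the model's minimizer is sent back to the "expensive" simulation. The resonance-like function, polynomial surrogate, and iteration budget below are invented for illustration, not a real CST/HFSS response.

```python
import numpy as np

def em_sim(x):
    # Stand-in for an expensive EM simulation of one geometry parameter,
    # e.g. |S11| versus a normalized patch length. Purely synthetic.
    return (x - 0.7) ** 2 + 0.05 * np.cos(20 * x)

xs = list(np.linspace(0.0, 1.0, 4))   # initial coarse "simulation" samples
ys = [em_sim(x) for x in xs]
grid = np.linspace(0.0, 1.0, 401)

for _ in range(10):
    # Fit a cheap polynomial surrogate to all evaluated designs...
    coeffs = np.polyfit(xs, ys, min(3, len(xs) - 1))
    # ...minimize the surrogate (not em_sim) over the design range...
    x_new = grid[np.argmin(np.polyval(coeffs, grid))]
    # ...and spend one expensive evaluation at the surrogate's minimizer.
    xs.append(float(x_new)); ys.append(em_sim(x_new))

best = min(ys)
```

The design intent is that `em_sim` is called only 14 times in total, while the surrogate absorbs the many evaluations an exhaustive sweep would need.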
Bundle enrichment method for nonsmooth difference of convex programming problems
- Gaudioso, Manlio, Taheri, Sona, Bagirov, Adil, Karmitsa, Napsu
- Authors: Gaudioso, Manlio , Taheri, Sona , Bagirov, Adil , Karmitsa, Napsu
- Date: 2023
- Type: Text , Journal article
- Relation: Algorithms Vol. 16, no. 8 (2023), p.
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text:
- Reviewed:
- Description: The Bundle Enrichment Method (BEM-DC) is introduced for solving nonsmooth difference of convex (DC) programming problems. The novelty of the method consists of the dynamic management of the bundle. More specifically, a DC model, being the difference of two convex piecewise affine functions, is formulated. The (global) minimization of the model is tackled by solving a set of convex problems whose cardinality depends on the number of linearizations adopted to approximate the second DC component function. The new bundle management policy distributes the information coming from previous iterations to separately model the DC components of the objective function. Such a distribution is driven by the sign of linearization errors. If the displacement suggested by the model minimization provides no sufficient decrease of the objective function, then the temporary enrichment of the cutting plane approximation of just the first DC component function takes place until either the termination of the algorithm is certified or a sufficient decrease is achieved. The convergence of the BEM-DC method is studied, and computational results on a set of academic test problems with nonsmooth DC objective functions are provided. © 2023 by the authors.
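The model-minimization step the abstract refers to can be made concrete with a toy one-dimensional DC problem. This sketch implements only the plain convexification idea that BEM-DC refines (linearize the second component f2 at the current iterate and minimize the resulting convex model); it does not reproduce the paper's dynamic bundle management, and the piecewise affine components A1, A2 are invented for illustration.

```python
import numpy as np

# f(x) = f1(x) - f2(x), with f1, f2 convex piecewise affine: max of (a*x + b).
A1 = [(1.0, 0.0), (-1.0, 2.0), (0.2, 0.5)]
A2 = [(0.5, 0.0), (-0.5, 1.0)]

def pa(pieces, x):                        # evaluate max-of-affine function
    return max(a * x + b for a, b in pieces)

def subgrad(pieces, x):                   # slope of an active affine piece
    return max(pieces, key=lambda p: p[0] * x + p[1])[0]

def f(x):
    return pa(A1, x) - pa(A2, x)

grid = np.linspace(-3.0, 3.0, 6001)
x = -2.5
for _ in range(20):
    g = subgrad(A2, x)                    # linearization of f2 at x
    # Convex model: f1(t) minus the linearization of f2, minimized exactly
    # (here by brute force on a grid, standing in for the convex subproblems).
    model = np.array([pa(A1, t) - (pa(A2, x) + g * (t - x)) for t in grid])
    x_new = float(grid[np.argmin(model)])
    if f(x_new) >= f(x) - 1e-12:          # no sufficient decrease: stop
        break
    x = x_new
```

Starting from x = -2.5 with f(x) = 2.25, the iteration settles at a critical point near x = 1 with f(x) = 0.5; BEM-DC's contribution is how the bundle feeding such models is enriched when the decrease test fails.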
Construction of generalized shape functions over arbitrary polytopes based on scaled boundary finite element method's solution of Poisson's equation
- Xiao, B., Natarajan, Sundararajan, Birk, Carolin, Ooi, Ean Hin, Song, Chongmin, Ooi, Ean Tat
- Authors: Xiao, B. , Natarajan, Sundararajan , Birk, Carolin , Ooi, Ean Hin , Song, Chongmin , Ooi, Ean Tat
- Date: 2023
- Type: Text , Journal article
- Relation: International Journal for Numerical Methods in Engineering Vol. 124, no. 17 (2023), p. 3603-3636
- Full Text:
- Reviewed:
- Description: A general technique to develop arbitrary-sided polygonal elements based on the scaled boundary finite element method is presented. Shape functions are derived from the solution of Poisson's equation, in contrast to the well-known Laplace shape functions, which are only linearly complete; the Poisson shape functions can be complete up to any specified order. The shape functions retain the advantage of the scaled boundary finite element method, allowing direct formulation on polygons with an arbitrary number of sides and on quadtree meshes. The resulting formulation is similar to the finite element method, where each field variable is interpolated by the same set of shape functions in parametric space, and differs only in the integration of the stiffness and mass matrices. Well-established finite element procedures can be applied with the developed shape functions to solve a variety of engineering problems including, for example, coupled field problems, phase field fracture, and volumetric locking in the near-incompressibility limit, the latter addressed by adopting a mixed formulation. Application of the formulation is demonstrated in several engineering problems. Optimal convergence rates are observed. © 2023 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons Ltd.
Critical data detection for dynamically adjustable product quality in IIoT-enabled manufacturing
- Sen, Sachin, Karmakar, Gour, Pang, Shaoning
- Authors: Sen, Sachin , Karmakar, Gour , Pang, Shaoning
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 49464-49480
- Full Text:
- Reviewed:
- Description: IIoT technologies, owing to the widespread use of sensors, generate massive data that are key to providing innovative and efficient industrial management, operation, and product quality control processes. The significance of these data has prompted relevant research communities and application developers to investigate how to harness their value in secure manufacturing. Critical data analysis, identification of critical factors to improve the manufacturing process, and critical data associated with product quality have been investigated in the current literature. However, current work on product quality control is mainly based on static data analysis, where data may change but cannot be adjusted dynamically; such approaches are therefore not applicable when instantaneous adjustment is required. Many manufacturing systems exist, such as beverages and food, where ingredients must be adjusted instantaneously to maintain product quality. To address this research gap, we introduce a method that identifies critical data based on their ranking, by exploiting three criticality assessment criteria that capture instantaneous product quality change during manufacturing: (1) correlation, (2) percentage quality change and (3) sensitivity. The product quality is estimated using polynomial regression (POLY), SVM, and DNN. The proposed method is validated using wine manufacturing data. It accurately identifies critical data, with SVM producing the lowest average production quality prediction error (10.40%) compared with POLY (11%) and DNN (14.40%). © 2013 IEEE.
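The three-criteria ranking idea in this abstract can be sketched as follows: score each process variable against the product-quality signal using (1) correlation, (2) percentage quality change under a perturbation of a fitted model, and (3) local sensitivity, then rank by the combined score. The synthetic data, the linear quality model, and the plain average used to combine normalized scores are our assumptions for illustration, not the paper's exact formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (300, 4))                # 4 ingredient/process variables
quality = 2.0 * X[:, 0] + 0.5 * X[:, 2] + 0.05 * rng.normal(size=300)

# Cheap quality model (linear least squares, standing in for POLY/SVM/DNN).
A = np.c_[X, np.ones(300)]
w, *_ = np.linalg.lstsq(A, quality, rcond=None)

# Criterion 1: |correlation| of each variable with the quality signal.
corr = np.abs([np.corrcoef(X[:, j], quality)[0, 1] for j in range(4)])

# Criterion 3: local sensitivity = model slope magnitude.
sens = np.abs(w[:4])

# Criterion 2: percentage quality change under a +10% shift in one variable.
base = float(np.mean(A @ w))
pct = np.empty(4)
for j in range(4):
    Xp = X.copy(); Xp[:, j] *= 1.10
    pct[j] = abs(float(np.mean(np.c_[Xp, np.ones(300)] @ w)) - base) / abs(base)

# Combine normalized scores and rank, most critical variable first.
score = (corr / corr.max() + sens / sens.max() + pct / pct.max()) / 3
ranking = np.argsort(score)[::-1]
```

With this construction, variable 0 dominates the quality signal and is ranked most critical by all three criteria, mirroring how the paper singles out the data whose instant adjustment matters most.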
Data evolution governance for ontology-based digital twin product lifecycle management
- Ren, Zijie, Shi, Jianhua, Imran, Muhammad
- Authors: Ren, Zijie , Shi, Jianhua , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 19, no. 2 (2023), p. 1791-1802
- Full Text: false
- Reviewed:
- Description: Product lifecycle management (PLM) is an effective method for enhancing the market competitiveness of modern manufacturing industries. The digital twin is characterized by a profound integration of physics and information systems, which provides a technical means for integrating multisource information and breaking the time and space barriers of communication at each stage of the lifecycle. Currently, however, applications of this technology focus primarily on the product itself and on 'service-oriented' results, with little separate attention to twin data and its internal evolutionary mechanisms; consequently, the benefits of digital twin technology cannot be fully realized in the management of global data resources. This article applies ontology technology in an innovative manner to the digital twin field to increase the reusability of twin data. Initially, a four-layered ontology-based twin data management architecture is presented. Then, a three-dimensional, three-granularity unified evolution model of full-lifecycle twin data is proposed, together with its ontology model. Next, the service mode of data components at each stage of the lifecycle is defined, a knowledge-sharing plane is established in the digital twin, and a data governance method based on ontology reasoning using data components on the shared plane is proposed. The ICandyBox simulation platform is then used to demonstrate the concept of the proposed method, and future research directions are proposed. © 2005-2012 IEEE.
Defending SDN against packet injection attacks using deep learning
- Phu, Anh, Li, Bo, Ullah, Faheem, Ul Huque, Tanvir, Naha, Ranesh, Babar, Muhammad, Nguyen, Hung
- Authors: Phu, Anh , Li, Bo , Ullah, Faheem , Ul Huque, Tanvir , Naha, Ranesh , Babar, Muhammad , Nguyen, Hung
- Date: 2023
- Type: Text , Journal article
- Relation: Computer Networks Vol. 234, no. (2023), p.
- Full Text:
- Reviewed:
- Description: The (logically) centralized architecture of software-defined networks makes them an easy target for packet injection attacks. In these attacks, the attacker injects malicious packets into the SDN network to degrade the services and performance of the SDN controller and to overflow the capacity of the SDN switches. Such attacks have been shown to ultimately stop the network functioning in real-time, leading to network breakdowns. There has been significant work on detecting and defending against similar DoS attacks in non-SDN networks, but detection and protection techniques for SDN against packet injection attacks are still in their infancy. Furthermore, many of the proposed solutions have been shown to be easily bypassed by simple modifications to the attacking packets or by altering the attacking profile. In this paper, we develop novel Graph Convolutional Neural Network models and algorithms for grouping network nodes/users into security classes by learning from network data. We start with two simple classes: nodes that engage in suspicious packet injection attacks and nodes that do not. From these classes, we then partition the network into separate segments with different security policies using distributed Ryu controllers in an SDN network. We show in experiments on an emulated SDN that our detection solution outperforms alternative approaches, with above 99% detection accuracy for various types (both old and new) of injection attacks. More importantly, our mitigation solution maintains continuous functioning of non-compromised nodes while isolating compromised/suspicious nodes in real-time. All code and data are publicly available for the reproducibility of our results. © 2023 The Author(s)
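The building block of the graph-convolutional detector this abstract describes is a propagation step that mixes each node's features with its neighbours' through a normalized adjacency matrix. The sketch below shows one such step only, with no trained weights: the 5-node topology, the "injection-rate" feature, and the fixed threshold are invented for illustration; the paper's models are trained on emulated SDN traffic.

```python
import numpy as np

# Toy topology: node 0 is a switch linked to two hosts; nodes 2 and 3
# carry a high (suspicious) per-node packet-injection-rate feature.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
X = np.array([[0.1], [0.0], [0.9], [0.8], [0.0]])

A_hat = A + np.eye(5)                     # add self-loops, as in a GCN layer
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
P = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization

H = P @ X                                 # one graph-convolution propagation
suspicious = H.ravel() > 0.5              # toy threshold in place of a trained head
```

After one propagation the two injecting nodes still stand out while their benign neighbours stay below the threshold; in the full model, learned weight matrices and several such layers replace the hand-set feature and cutoff.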
Device agent assisted blockchain leveraged framework for Internet of Things
- Nasrullah, Tarique, Islam, Md Manowarul, Uddin, Md Ashraf, Khan, Md Anisauzzaman, Layek, Md Abu, Stranieri, Andrew, Huh, Eui-Nam
- Authors: Nasrullah, Tarique , Islam, Md Manowarul , Uddin, Md Ashraf , Khan, Md Anisauzzaman , Layek, Md Abu , Stranieri, Andrew , Huh, Eui-Nam
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 1254-1268
- Full Text:
- Reviewed:
- Description: Blockchain (BC) is a burgeoning technology that has emerged as a promising solution to peer-to-peer communication security and privacy challenges. As a revolutionary technology, blockchain has drawn the attention of academics and researchers. Cryptocurrencies have already effectively utilized BC technology, and many researchers have sought to apply it in other sectors, including the Internet of Things. To store and manage IoT data, we present in this paper a lightweight BC-based architecture with a consensus protocol based on a modified Raft algorithm. We designed a Device Agent that executes a novel registration procedure to connect IoT devices to the blockchain. We implemented the framework on Docker using the Go programming language and simulated it in a cloud-hosted Linux environment. We have conducted a detailed performance analysis using a variety of measures. The results demonstrate that our suggested solution is suitable for facilitating the management of IoT data with increased security and privacy. In terms of throughput and block generation time, the results indicate that our solution can be 40% to 45% faster than existing blockchains. © 2013 IEEE.
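Underneath frameworks like the one this abstract proposes sits a hash-linked ledger: each block commits to the previous block's hash, so tampering with stored IoT readings is detectable. The block layout and field names below are our illustration; the paper's implementation is in Go with a modified Raft consensus, which this single-process sketch does not reproduce.

```python
import hashlib
import json

def block_hash(block):
    # Canonical JSON so the hash is independent of dict ordering.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, readings):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": readings})
    return chain

def verify(chain):
    # Each block must reference the hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"device": "sensor-1", "temp": 21.5})
append_block(chain, {"device": "sensor-2", "temp": 22.1})
ok_before = verify(chain)

chain[0]["data"]["temp"] = 99.9      # tamper with a stored reading
ok_after = verify(chain)
```

Replicating this log across nodes via Raft-style consensus is what the paper's architecture adds; the sketch shows only the tamper-evidence property that makes the replication worthwhile.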
Domestic load management with coordinated photovoltaics, battery storage and electric vehicle operation
- Das, Narottam, Haque, Akramul, Zaman, Hasneen, Morsalin, Sayidul, Islam, Syed
- Authors: Das, Narottam , Haque, Akramul , Zaman, Hasneen , Morsalin, Sayidul , Islam, Syed
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 12075-12087
- Full Text:
- Reviewed:
- Description: Coordinated power demand management at the residential or domestic level allows energy participants to efficiently manage load profiles, increase energy efficiency and reduce operational cost. In this paper, a hierarchical coordination framework to optimally manage domestic load using photovoltaic (PV) units, battery energy storage systems (BESs) and electric vehicles (EVs) is presented. The bidirectional power flow of EVs with vehicle-to-grid (V2G) operation manages the real-time domestic load profile and takes appropriate coordinated action through its controller when necessary. The proposed system has been applied to a real power distribution network and tested with real load patterns and load dynamics, covering various test scenarios and prosumers' preferences, e.g., with or without EVs, the number of EV owners, the number of households, and prosumers' daily activities. The combined hybrid system for hierarchical coordination consists of PV units, BES systems and EVs. System performance was analyzed with different commercial EV types under charging/discharging constraints, and the results show that the domestic load demand on the distribution grid during the peak period is reduced significantly. Finally, the proposed system's performance was compared with prediction-based test techniques and the financial benefits were estimated. © 2013 IEEE.
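The coordination objective in this abstract, reducing peak grid demand with a household battery or V2G-capable EV, can be sketched with a simple greedy dispatch: discharge when demand exceeds a threshold, recharge when it falls below. The hourly load profile, battery size, power limit, and threshold rule are illustrative assumptions; the paper coordinates PV, BES and V2G hierarchically across a real distribution network.

```python
import numpy as np

# Hypothetical hourly household demand over one day, in kW.
load = np.array([0.6, 0.5, 0.5, 0.6, 0.9, 1.8, 2.4, 2.0,
                 1.2, 1.0, 1.0, 1.1, 1.2, 1.1, 1.0, 1.2,
                 1.8, 2.6, 3.0, 2.8, 2.2, 1.5, 1.0, 0.7])
capacity, soc, rate = 6.0, 3.0, 1.0   # kWh battery, initial charge, kW power limit
threshold = 1.5                        # target grid draw, kW

grid = np.empty_like(load)
for h, demand in enumerate(load):
    if demand > threshold and soc > 0:
        # Discharge to shave the peak, limited by power rate and stored energy.
        d = min(demand - threshold, rate, soc)
        soc -= d
        grid[h] = demand - d
    elif demand < threshold and soc < capacity:
        # Recharge off-peak, filling the valley up to the threshold.
        c = min(threshold - demand, rate, capacity - soc)
        soc += c
        grid[h] = demand + c
    else:
        grid[h] = demand

peak_before, peak_after = float(load.max()), float(grid.max())
```

The evening peak drops from 3.0 kW to 2.0 kW in this toy run because the 1 kW discharge limit binds; the paper's controller makes the analogous trade-offs per EV type and per prosumer preference.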