DQN approach for adaptive self-healing of VNFs in cloud-native network
- Arulappan, Arunkumar, Mahanti, Aniket, Passi, Kalpdrum, Srinivasan, Thiruvenkadam, Naha, Ranesh, Raja, Gunasekaran
- Authors: Arulappan, Arunkumar , Mahanti, Aniket , Passi, Kalpdrum , Srinivasan, Thiruvenkadam , Naha, Ranesh , Raja, Gunasekaran
- Date: 2024
- Type: Text , Journal article
- Relation: IEEE Access Vol. 12, no. (2024), p. 34489-34504
- Full Text:
- Reviewed:
- Description: The transformation from physical network functions to Virtual Network Functions (VNFs) requires a fundamental design change in how applications and services are tested and assured in a hybrid virtual network. Once VNFs are onboarded in a cloud network infrastructure, operators need to test them automatically in real time at instantiation. This paper explicitly analyses the problem of adaptive self-healing of a Virtual Machine (VM) allocated by the VNF with a Deep Reinforcement Learning (DRL) approach. The DRL-based big data collection and analytics engine performs aggregation to probe and analyze data for troubleshooting and performance management. This engine helps determine corrective actions (self-healing), such as scaling or migrating VNFs. Hence, we propose a Deep Q-Learning (DQL) based Deep Q-Network (DQN) mechanism for self-healing VNFs in the virtualized infrastructure manager. Virtual network probes of closed-loop orchestration automate the VNF and provide analytics for real-time, policy-driven orchestration in an open networking automation platform through the stochastic gradient descent method for VNF service assurance and network reliability. The proposed DQN/DDQN mechanism optimizes the price and lowers the cost of resource usage by 18% without disrupting the Quality of Service (QoS) provided by the VNF. The resulting adaptive self-healing of VNFs enhances computational performance by 27% compared to other state-of-the-art algorithms. © 2013 IEEE.
A novel dynamic software-defined networking approach to neutralize traffic burst
- Sharma, Aakanksha, Balasubramanian, Venki, Kamruzzaman, Joarder
- Authors: Sharma, Aakanksha , Balasubramanian, Venki , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: Computers Vol. 12, no. 7 (2023), p.
- Full Text:
- Reviewed:
- Description: Software-defined networking (SDN) has a holistic view of the network. It is highly suitable for handling dynamic loads in the traditional network with minimal updates to the network infrastructure. However, the standard SDN architecture's control plane was designed for single or multiple distributed SDN controllers and faces severe bottleneck issues. Our initial research created a reference model for the traditional network using the standard SDN (referred to as SDN hereafter) in a network simulator called NetSim. Based on the network traffic, the reference models consisted of light, modest and heavy networks depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm was proposed in the standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than the standard SDN. However, the enhancement was suitable only for small-scale networks because, in a large-scale network, eSDN does not support dynamic SDN controller mapping. Often, the same SDN controller gets overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment without considering flow fluctuations and traffic bursts, leading to a lack of real-time load balancing among the SDN controllers and eventually increasing network latency. Therefore, to maintain the Quality of Service (QoS) in the network, it becomes imperative to neutralise the on-the-fly traffic bursts that static SDN controllers cannot handle. Thus, our novel dynamic controller mapping algorithm with multiple-controller placement, termed dynamic SDN (dSDN), is critical to solving the identified issues. In dSDN, the SDN controllers are mapped dynamically with the load fluctuation. If any SDN controller reaches its maximum threshold, the rest of the traffic is diverted to another controller, significantly reducing delay and enhancing overall performance. Our technique considers the latency and load fluctuation in the network and manages situations where static mapping is ineffective in dealing with dynamic flow variation. © 2023 by the authors.
Application of various robust techniques to study and evaluate the role of effective parameters on rock fragmentation
- Mehrdanesh, Amirhossein, Monjezi, Masoud, Khandelwal, Manoj, Bayat, Parichehr
- Authors: Mehrdanesh, Amirhossein , Monjezi, Masoud , Khandelwal, Manoj , Bayat, Parichehr
- Date: 2023
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 39, no. 2 (2023), p. 1317-1327
- Full Text:
- Reviewed:
- Description: In this paper, an attempt has been made to implement various robust techniques to predict rock fragmentation due to blasting in open pit mines using effective parameters. As rock fragmentation prediction is highly complex, various artificial intelligence-based techniques, such as artificial neural networks (ANN), classification and regression trees, and support vector machines, were selected for the modeling. To validate and compare the prediction results, conventional multivariate regression analysis was also applied to the same data sets. Since the accuracy and generality of the modeling depend on the number of inputs, sufficient data were collected from four different open pit mines in Iran. According to the obtained results, ANN, with a determination coefficient of 0.986, is the most precise modeling method among the applied techniques. Also, based on the sensitivity analysis performed, the most influential parameters on rock fragmentation are rock quality designation, Schmidt hardness value and mean in-situ block size, while the least influential are hole diameter, burden and spacing. The advantage of the back-propagation neural network technique used in this study over other soft computing methods is its ability to describe complex, nonlinear multivariable problems in a transparent way. Furthermore, ANN can be used as a first approach where much knowledge about the influencing parameters is missing. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
Applications of machine learning and deep learning in antenna design, optimization, and selection: a review
- Sarker, Nayan, Podder, Prajoy, Mondal, M., Shafin, Sakib, Kamruzzaman, Joarder
- Authors: Sarker, Nayan , Podder, Prajoy , Mondal, M. , Shafin, Sakib , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 11, no. (2023), p. 103890-103915
- Full Text:
- Reviewed:
- Description: This review paper provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and the high-frequency structure simulator (HFSS) for ML- and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods, including parallel optimization, single- and multi-objective optimization, variable-fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization, are discussed. The review also covers AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, the data generation technique with computational electromagnetics software is described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
Bundle enrichment method for nonsmooth difference of convex programming problems
- Gaudioso, Manlio, Taheri, Sona, Bagirov, Adil, Karmitsa, Napsu
- Authors: Gaudioso, Manlio , Taheri, Sona , Bagirov, Adil , Karmitsa, Napsu
- Date: 2023
- Type: Text , Journal article
- Relation: Algorithms Vol. 16, no. 8 (2023), p.
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text:
- Reviewed:
- Description: The Bundle Enrichment Method (BEM-DC) is introduced for solving nonsmooth difference of convex (DC) programming problems. The novelty of the method consists of the dynamic management of the bundle. More specifically, a DC model, being the difference of two convex piecewise affine functions, is formulated. The (global) minimization of the model is tackled by solving a set of convex problems whose cardinality depends on the number of linearizations adopted to approximate the second DC component function. The new bundle management policy distributes the information coming from previous iterations to separately model the DC components of the objective function. Such a distribution is driven by the sign of linearization errors. If the displacement suggested by the model minimization provides no sufficient decrease of the objective function, then the temporary enrichment of the cutting plane approximation of just the first DC component function takes place until either the termination of the algorithm is certified or a sufficient decrease is achieved. The convergence of the BEM-DC method is studied, and computational results on a set of academic test problems with nonsmooth DC objective functions are provided. © 2023 by the authors.
Deep learning-based digital image forgery detection using transfer learning
- Qazi, Emad, Zia, Tanveer, Imran, Muhammad, Faheem, Muhammad
- Authors: Qazi, Emad , Zia, Tanveer , Imran, Muhammad , Faheem, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Intelligent Automation and Soft Computing Vol. 38, no. 3 (2023), p. 225-240
- Full Text:
- Reviewed:
- Description: Deep learning is considered one of the most efficient and reliable methods through which the legitimacy of a digital image can be verified. In the current cyber world, where deepfakes have shaken the global community, confirming the legitimacy of a digital image is of great importance. With the advancements made in deep learning techniques, we can now efficiently train and develop state-of-the-art digital image forensic models. The most traditional and widely used method for verifying image authenticity is convolutional neural networks (CNN), but it consumes considerable resources and requires a large dataset for training. Therefore, in this study, a transfer learning based deep learning technique for image forgery detection is proposed. The proposed methodology consists of three modules, namely the preprocessing module, the convolutional module, and the classification module. By utilizing pre-trained weights, our technique drastically reduces training time. The performance of the proposed technique is evaluated on benchmark datasets, i.e., BOW and BOSSBase, covering five forensic types: JPEG compression, contrast enhancement (CE), median filtering (MF), additive Gaussian noise, and resampling. We evaluated the performance of our proposed technique through various experiments and case scenarios and achieved an accuracy of 99.92%. The results show the superiority of the proposed system. © 2023, Tech Science Press. All rights reserved.
Defending SDN against packet injection attacks using deep learning
- Phu, Anh, Li, Bo, Ullah, Faheem, Ul Huque, Tanvir, Naha, Ranesh, Babar, Muhammad, Nguyen, Hung
- Authors: Phu, Anh , Li, Bo , Ullah, Faheem , Ul Huque, Tanvir , Naha, Ranesh , Babar, Muhammad , Nguyen, Hung
- Date: 2023
- Type: Text , Journal article
- Relation: Computer Networks Vol. 234, no. (2023), p.
- Full Text:
- Reviewed:
- Description: The (logically) centralized architecture of software-defined networks makes them an easy target for packet injection attacks. In these attacks, the attacker injects malicious packets into the SDN network to degrade the services and performance of the SDN controller and overflow the capacity of the SDN switches. Such attacks have been shown to ultimately stop the network functioning in real time, leading to network breakdowns. There has been significant work on detecting and defending against similar DoS attacks in non-SDN networks, but detection and protection techniques for SDN against packet injection attacks are still in their infancy. Furthermore, many of the proposed solutions have been shown to be easily bypassed by simple modifications to the attacking packets or by altering the attacking profile. In this paper, we develop novel Graph Convolutional Neural Network models and algorithms for grouping network nodes/users into security classes by learning from network data. We start with two simple classes — nodes that engage in suspicious packet injection attacks and nodes that do not. From these classes, we then partition the network into separate segments with different security policies using distributed Ryu controllers in an SDN network. We show in experiments on an emulated SDN that our detection solution outperforms alternative approaches with above 99% detection accuracy for various types (both old and new) of injection attacks. More importantly, our mitigation solution maintains continuous functions of non-compromised nodes while isolating compromised/suspicious nodes in real time. All code and data are publicly available for the reproducibility of our results. © 2023 The Author(s)
Enhancing ultimate bearing capacity prediction of cohesionless soils beneath shallow foundations with grey box and hybrid AI models
- Kiany, Katayoon, Baghbani, Abolfazl, Abuel-Naga, Hossam, Baghbani, Hasan, Arabani, Mahyar, Shalchian, Mohammad
- Authors: Kiany, Katayoon , Baghbani, Abolfazl , Abuel-Naga, Hossam , Baghbani, Hasan , Arabani, Mahyar , Shalchian, Mohammad
- Date: 2023
- Type: Text , Journal article
- Relation: Algorithms Vol. 16, no. 10 (2023), p.
- Full Text:
- Reviewed:
- Description: This study examines the potential of soft computing techniques, namely multiple linear regression (MLR), genetic programming (GP), classification and regression trees (CART) and GA-ENN (genetic algorithm-emotional neuron network), to predict the ultimate bearing capacity (UBC) of cohesionless soils beneath shallow foundations. For the first time in the literature, two grey-box AI models, GP and CART, and one hybrid AI model, GA-ENN, were used to predict UBC. The inputs of the model are the width of footing (B), depth of footing (D), footing geometry (ratio of length to width, L/B), unit weight of sand (
Multi-aspect annotation and analysis of Nepali tweets on anti-establishment election discourse
- Rauniyar, Kritesh, Poudel, Sweta, Shiwakoti, Shuvam, Thapa, Surendrabikram, Rashid, Junaid, Kim, Jungeun, Imran, Muhammad, Naseem, Usman
- Authors: Rauniyar, Kritesh , Poudel, Sweta , Shiwakoti, Shuvam , Thapa, Surendrabikram , Rashid, Junaid , Kim, Jungeun , Imran, Muhammad , Naseem, Usman
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 143092-143115
- Full Text:
- Reviewed:
- Description: In today's social media-dominated landscape, digital platforms wield substantial influence over public opinion, particularly during crucial political events such as electoral processes. These platforms become hubs for diverse discussions, encompassing topics, reforms, and desired changes. Notably, in times of government dissatisfaction, they serve as arenas for anti-establishment discourse, highlighting the need to analyze public sentiment in these conversations. However, the analysis of such discourse is notably scarce, even in high-resource languages, and entirely non-existent in the context of the Nepali language. To address this critical gap, we present Nepal Anti Establishment discourse Tweets (NAET), a novel dataset comprising 4,445 multi-aspect annotated Nepali tweets, facilitating a comprehensive understanding of political conversations. Our contributions encompass evaluating tweet relevance, sentiment, and satire, while also exploring the presence of hate speech, identifying its targets, and distinguishing directed and non-directed expressions. Additionally, we investigate hope speech, an underexplored aspect crucial in the context of anti-establishment discourse, as it reflects the aspirations and expectations from new political figures and parties. Furthermore, we set NLP-based baselines for all these tasks. To ensure a holistic analysis, we also employ topic modeling, a powerful technique that helps us identify and understand the prevalent themes and patterns emerging from the discourse. Our research thus presents a comprehensive and multi-faceted perspective on anti-establishment election discourse in a low-resource language setting. The dataset is publicly available, facilitating in-depth analysis of political tweets in Nepali discourse and further advancing NLP research for the Nepali language through labeled data and baselines for various NLP tasks. The dataset for this work is made available at https://github.com/rkritesh210/NAET. © 2013 IEEE.
Nonsmooth optimization-based hyperparameter-free neural networks for large-scale regression
- Karmitsa, Napsu, Taheri, Sona, Joki, Kaisa, Paasivirta, Pauliina, Defterdarovic, J., Bagirov, Adil, Mäkelä, Marko
- Authors: Karmitsa, Napsu , Taheri, Sona , Joki, Kaisa , Paasivirta, Pauliina , Defterdarovic, J. , Bagirov, Adil , Mäkelä, Marko
- Date: 2023
- Type: Text , Journal article
- Relation: Algorithms Vol. 16, no. 9 (2023), p.
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text:
- Reviewed:
- Description: In this paper, a new nonsmooth optimization-based algorithm for solving large-scale regression problems is introduced. The regression problem is modeled as fully-connected feedforward neural networks with one hidden layer, piecewise linear activation, and the (Formula presented.) -loss functions. A modified version of the limited memory bundle method is applied to minimize this nonsmooth objective. In addition, a novel constructive approach for automated determination of the proper number of hidden nodes is developed. Finally, large real-world data sets are used to evaluate the proposed algorithm and to compare it with some state-of-the-art neural network algorithms for regression. The results demonstrate the superiority of the proposed algorithm as a predictive tool in most data sets used in numerical experiments. © 2023 by the authors.
UDTN-RS : a new underwater delay tolerant network routing protocol for coastal patrol and surveillance
- Azad, Saiful, Neffati, Ahmed, Mahmud, Mufti, Kaiser, M., Ahmed, Muhammad, Kamruzzaman, Joarder
- Authors: Azad, Saiful , Neffati, Ahmed , Mahmud, Mufti , Kaiser, M. , Ahmed, Muhammad , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 142780-142793
- Full Text:
- Reviewed:
- Description: The Coastal Patrol and Surveillance Application (CPSA) is developed and deployed to detect, track and monitor water vessel traffic using automated devices. The latest advancements of marine technologies, including Automatic Underwater Vehicles, have encouraged the development of this type of applications. To facilitate their operations, installation of a Coastal Patrol and Surveillance Network (CPSN) is mandatory. One of the primary design objectives of this network is to deliver an adequate amount of data within an effective time frame. This is particularly essential for the detection of an intruder's vessel and its notification through the adverse underwater communication channels. Additionally, intermittent connectivity of the nodes remains another important obstacle to overcome to allow the smooth functioning of CPSA. Taking these objectives and obstacles into account, this work proposes a new protocol by ensembling forward error correction technique (namely Reed-Solomon codes or RS) in Underwater Delay Tolerant Network with probabilistic spraying technique (UDTN-Prob) routing protocol, named Underwater Delay Tolerant Protocol with RS (UDTN-RS). In addition, the existing binary packet spraying technique in UDTN-Prob is enhanced for supporting encoded packet exchange between the contacting nodes. A comprehensive simulation has been performed employing DEsign, Simulate, Emulate and Realize Test-beds (DESERT) underwater simulator along with World Ocean Simulation System (WOSS) package to receive a more realistic account of acoustic propagation for identifying the effectiveness of the proposed protocol. Three scenarios are considered during the simulation campaign, namely varying data transmission rate, varying area size, and a scenario focusing on estimating the overhead ratio. Conversely, for the first two scenarios, three metrics are taken into account: normalised packet delivery ratio, delay, and normalised throughput. The acquired results for these scenarios and metrics are compared to its ancestor, i.e., UDTN-Prob. The results suggest that the proposed UDTN-RS protocol can be considered as a suitable alternative to the existing protocols like UDTN-Prob, Epidemic, and others for sparse networks like CPSN. © 2013 IEEE.
Wearable obstacle avoidance electronic travel aids for blind and visually impaired individuals : a systematic review
- Xu, Peijie, Kennedy, Gerard, Zhao, Fei-Yi, Zhang, Wen-Jing, Van Schyndel, Ron
- Authors: Xu, Peijie , Kennedy, Gerard , Zhao, Fei-Yi , Zhang, Wen-Jing , Van Schyndel, Ron
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 66587-66613
- Full Text:
- Reviewed:
- Description: Background Wearable obstacle avoidance electronic travel aids (ETAs) have been developed to assist the safe displacement of blind and visually impaired individuals (BVIs) in indoor/outdoor spaces. This systematic review aimed to understand the strengths and weaknesses of existing ETAs in terms of hardware functionality, cost, and user experience. These elements may influence the usability of the ETAs and are valuable in guiding the development of superior ETAs in the future. Methods Formally published studies designing and developing the wearable obstacle avoidance ETAs were searched for from six databases from their inception to April 2023. The PRISMA 2020 and APISSER guidelines were followed. Results Eighty-nine studies were included for analysis, 41 of which were judged to be of moderate to high quality. Most wearable obstacle avoidance ETAs mainly depend on camera- and ultrasonic-based techniques to achieve perception of the environment. Acoustic feedback was the most common human-computer feedback form used by the ETAs. According to user experience, the efficacy and safety of the device were usually the primary concerns. Conclusions Although many conceptualised ETAs have been designed to facilitate BVIs' independent navigation, most of these devices suffer from shortcomings. This is due to the nature and limitations of the various processors, environment detection techniques and human-computer feedback those ETAs are equipped with. Integrating multiple techniques and hardware into one ETA is a way to improve performance, but there is still a need to address the discomfort of wearing the device and the high cost. Developing an applicable systematic review guideline along with a credible quality assessment tool for these types of studies is also required. © 2013 IEEE.
A fault-tolerant cascaded switched-capacitor multilevel inverter for domestic applications in smart grids
- Akbari, Ehsan, Teimouri, Ali, Saki, Mojtaba, Rezaei, Mohammad, Hu, Jiefeng, Band, Shahab, Pai, Hao-Ting, Mosavi, Amir
- Authors: Akbari, Ehsan , Teimouri, Ali , Saki, Mojtaba , Rezaei, Mohammad , Hu, Jiefeng , Band, Shahab , Pai, Hao-Ting , Mosavi, Amir
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 110590-110602
- Full Text:
- Reviewed:
- Description: Cascaded multilevel inverters (MLIs) generate an output voltage using series-connected power modules that employ standard configurations of low-voltage components. Each module may employ one or more switched capacitors to double or quadruple its input voltage. The higher number of switched capacitors and semiconductor switches in MLIs compared to conventional two-level inverters has led to concerns about overall system reliability. A fault-tolerant design can mitigate this reliability issue. If one part of the system fails, the MLI can continue its planned operation at a reduced level rather than the entire system failing, which makes the fault tolerance of the MLI particularly important. In this paper, a novel fault location technique is presented that leads to a significant reduction in fault location detection time based on the reliability priority of the components of the proposed fault-tolerant switched capacitor cascaded MLI (CSCMLI). The main contribution of this paper is to reduce the number of MLI switches under fault conditions while operating at lower levels. The fault-tolerant inverter requires fewer switches at higher reliability, and the comparison with similar MLIs shows a faster dynamic response of fault detection and reduced fault location detection time. The experimental results confirm the effectiveness of the presented methods applied in the CSCMLI. Also, all experimental data including processor code, schematic, PCB, and video of CSCMLI operation are attached. © 2013 IEEE.
A new hybrid cascaded switched-capacitor reduced switch multilevel inverter for renewable sources and domestic loads
- Rezaei, Mohammad, Nayeripour, Majid, Hu, Jiefeng, Band, Shahab, Mosavi, Amir, Khooban, Mohammad-Hassan
- Authors: Rezaei, Mohammad , Nayeripour, Majid , Hu, Jiefeng , Band, Shahab , Mosavi, Amir , Khooban, Mohammad-Hassan
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 14157-14183
- Full Text:
- Reviewed:
- Description: This multilevel inverter type summarizes an output voltage of medium voltage based on a series connection of power cells employing standard configurations of low-voltage components. The main problems of cascaded switched-capacitor multilevel inverters (CSCMLIs) are the harmful reverse flowing current of inductive loads, the large number of switches, and the surge current of the capacitors. As the number of switches increases, the reliability of the inverter decreases. To address these issues, a new CSCMLI is proposed using two modules containing asymmetric DC sources to generate 13 levels. The main novelty of the proposed configuration is the reduction of the number of switches while increasing the maximum output voltage. Despite the many similarities, the presented topology differs from similar topologies. Compared to similar structures, the direction of some switches is reversed, leading to a change in the direction of current flow. By incorporating the lowest number of semiconductors, it was demonstrated that the proposed inverter has the lowest cost function among similar inverters. The role of switched-capacitor inrush current in the selection of switch, diode, and DC source for inverter operation in medium and high voltage applications is presented. The inverter performance to supply the inductive loads is clarified. Comparison of the simulation and experimental results validates the effectiveness of the proposed inverter topology, showing promising potentials in photovoltaic, buildings, and domestic applications. A video demonstrating the experimental test, and all manufacturing data are attached. © 2013 IEEE.
Adaptation of a real-time deep learning approach with an analog fault detection technique for reliability forecasting of capacitor banks used in mobile vehicles
- Rezaei, Mohammad, Fathollahi, Arman, Rezaei, Sajad, Hu, Jiefeng, Gheisarnejad, Meysam, Teimouri, Ali, Rituraj, Rituraj, Mosavi, Amir, Khooban, Mohammad-Hassan
- Authors: Rezaei, Mohammad , Fathollahi, Arman , Rezaei, Sajad , Hu, Jiefeng , Gheisarnejad, Meysam , Teimouri, Ali , Rituraj, Rituraj , Mosavi, Amir , Khooban, Mohammad-Hassan
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 132271-132287
- Full Text:
- Reviewed:
- Description: The DC-Link capacitor is defined as the essential electronics element which sources or sinks the respective currents. The reliability of DC-link capacitor-banks (CBs) encounters many challenges due to their usage in electric vehicles. Heavy shocks may damage the internal capacitors without shutting down the CB. The fundamental development obstacles of CBs are: lack of considering capacitor degradation in reliability assessment, the impact of unforeseen sudden internal capacitor faults in forecasting CB lifetime, and the faults consequence on CB degradation. The sudden faults change the CB capacitance, which leads to reliability change. To more accurately estimate the reliability, the type of the fault needs to be detected for predicting the correct post-fault capacitance. To address these practical problems, a new CB model and reliability assessment formula covering all fault types are first presented, then, a new analog fault-detection method is presented, and a combination of online-learning long short-term memory (LSTM) and fault-detection method is subsequently performed, which adapt the sudden internal CB faults with the LSTM to correctly predict the CB degradation. To confirm the correct LSTM operation, four capacitors degradation is practically recorded for 2000-hours, and the off-line faultless degradation values predicted by the LSTM are compared with the actual data. The experimental findings validate the applicability of the proposed method. The codes and data are provided. © 2013 IEEE.
An adaptive fault ride-through scheme for grid-forming inverters under asymmetrical grid faults
- Li, Zilin, Chan, Ka, Hu, Jiefeng, Or, Siu
- Authors: Li, Zilin , Chan, Ka , Hu, Jiefeng , Or, Siu
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Electronics Vol. 69, no. 12 (2022), p. 12912-12923
- Full Text:
- Reviewed:
- Description: Three-phase four-wire grid-forming (GFM) inverters are promising to interface distributed energy resources into low-voltage networks. However, these inverters are prone to overcurrent under grid faults. Physically increasing the inverter current capacity is not cost-effective to cope with complicated fault conditions. In this article, an adaptive fault ride-through (FRT) scheme based on instantaneous saturators and virtual negative- and zero-sequence resistances is proposed. It features not only overcurrent limitation by modifying voltage references, but also seamless transition between normal and grid fault conditions. The proposed FRT scheme is first analyzed from different aspects, including the virtual sequence resistances, grid short-circuit ratio, fault types, and fault levels. The virtual sequence resistances are then designed to be adaptive to ensure high voltage quality at the healthy phase. The proposed FRT scheme is verified by MATLAB/Simulink simulations under asymmetrical faults. A laboratory platform with a grid-connected 3kW GFM inverter is further constructed to demonstrate its effectiveness (a video of the experimental results under three asymmetrical faults is attached). © 1982-2012 IEEE.
An automatic detection of breast cancer diagnosis and prognosis based on machine learning using ensemble of classifiers
- Naseem, Usman, Rashid, Junaid, Ali, Liaqat, Kim, Jungeun, Haq, Qazi, Awan, Mazhar, Imran, Muhammad
- Authors: Naseem, Usman , Rashid, Junaid , Ali, Liaqat , Kim, Jungeun , Haq, Qazi , Awan, Mazhar , Imran, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 78242-78252
- Full Text:
- Reviewed:
- Description: Breast cancer (BC) is the second most prevalent type of cancer among women leading to death, and its rate of mortality is very high. Its effects will be reduced if diagnosed early. BC's early detection will greatly boost the prognosis and likelihood of recovery, as it may encourage prompt surgical care for patients. It is therefore vital to have a system enabling the healthcare industry to detect breast cancer quickly and accurately. Machine learning (ML) is widely used in breast cancer (BC) pattern classification due to its advantages in modelling a critical feature detection from complex BC datasets. In this paper, we propose a system for automatic detection of BC diagnosis and prognosis using ensemble of classifiers. First, we review various machine learning (ML) algorithms and ensemble of different ML algorithms. We present an overview of ML algorithms including ANN, and ensemble of different classifiers for automatic BC diagnosis and prognosis detection. We also present and compare various ensemble models and other variants of tested ML based models with and without up-sampling technique on two benchmark datasets. We also studied the effects of using balanced class weight on prognosis dataset and compared its performance with others. The results showed that the ensemble method outperformed other state-of-the-art methods and achieved 98.83% accuracy. Because of high performance, the proposed system is of great importance to the medical industry and relevant research community. The comparison shows that the proposed method outperformed other state-of-the-art methods. © 2013 IEEE.
Blasting pattern optimization using gene expression programming and grasshopper optimization algorithm to minimise blast-induced ground vibrations
- Bayat, Parichehr, Monjezi, Masoud, Mehrdanesh, Amirhossein, Khandelwal, Manoj
- Authors: Bayat, Parichehr , Monjezi, Masoud , Mehrdanesh, Amirhossein , Khandelwal, Manoj
- Date: 2022
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 38, no. 4 (2022), p. 3341-3350
- Full Text:
- Reviewed:
- Description: Blast-induced ground vibration is considered as one of the most hazardous phenomena of mine blasting, which can even cause casualties and severe damages to the adjacent properties. Measuring peak particle velocity (PPV) is helpful to know the actual vibration level but prediction of blast vibration prior to the blast is a tedious job due to involvement of blast design, explosive and rock parameters. Nowadays, efficient application of intelligent systems has been approved in different branches of science and technology. In this paper, a gene expression programming (GEP) model was developed to predict PPV using various blasting patterns as model inputs, which showed a high level of accuracy for the implemented model. Also, to optimize blast pattern attaining minimum ground vibration during blasting operation, the developed functional GEP model was taken as objective function for grasshopper optimization algorithm (GOA). Construction of GOA model was performed using a trial and error mechanism to find out the best possible pertinent GOA parameters. Finally, it was observed that utilizing GOA technique, PPV can be reduced by 67% with optimized blast parameters including burden of 3.21 m, spacing of 3.75 m, and charge per delay of 225 kg. A sensitivity analysis was also performed to understand the influence of each input parameters on the blast vibrations. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd. part of Springer Nature.
Edge computing for Internet of Everything : a survey
- Kong, Xiangjie, Wu, Yuhan, Wang, Hui, Xia, Feng
- Authors: Kong, Xiangjie , Wu, Yuhan , Wang, Hui , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Internet of Things Journal Vol. 9, no. 23 (2022), p. 23472-23485
- Full Text:
- Reviewed:
- Description: In this era of the Internet of Everything (IoE), edge computing has emerged as the critical enabling technology to solve a series of issues caused by an increasing number of interconnected devices and large-scale data transmission. However, the deficiencies of the edge computing paradigm are gradually being magnified in the context of IoE, especially in terms of service migration, security and privacy preservation, and edge node deployment. These issues cannot be well addressed by conventional approaches. Thanks to the rapid development of emerging technologies, such as artificial intelligence (AI), blockchain, and microservices, novel and more effective solutions have emerged and been applied to existing challenges. In addition, edge computing can be deeply integrated with technologies in other domains (e.g., AI, blockchain, 6G, and digital twin) through interdisciplinary intersection and practice, releasing the potential for mutual benefit. These promising integrations need to be further explored and researched. Edge computing also provides strong support in application scenarios such as remote working, new physical retail industries, and digital advertising, which has greatly changed the way we live, work, and study. In this article, we present an up-to-date survey of edge computing research. In addition to introducing the definition, model, and characteristics of edge computing, we discuss a set of key issues in edge computing and novel solutions supported by emerging technologies in the IoE era. Furthermore, we explore the potential and promising trends from the perspective of technology integration. Finally, new application scenarios and the final form of edge computing are discussed. © 2014 IEEE.
Emerging point of care devices and artificial intelligence : prospects and challenges for public health
- Stranieri, Andrew, Venkatraman, Sitalakshmi, Minicz, John, Zarnegar, Armita, Firmin, Sally, Balasubramanian, Venki, Jelinek, Herbert
- Authors: Stranieri, Andrew , Venkatraman, Sitalakshmi , Minicz, John , Zarnegar, Armita , Firmin, Sally , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2022
- Type: Text , Journal article
- Relation: Smart Health Vol. 24, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Risk assessments for numerous conditions can now be performed cost-effectively and accurately using emerging point of care devices coupled with machine learning algorithms. In this article, the case is advanced that point of care testing, in combination with risk assessments generated by artificial intelligence algorithms and applied to the universal screening of the general public for multiple conditions in one session, represents a new kind of inexpensive screening that can lead to the early detection of disease and other public health benefits. A case study of a diabetes screening clinic in a rural area of Australia is presented to illustrate its benefits. Universal, poly-aetiological screening is shown to meet the ten World Health Organisation criteria for screening programmes. © Elsevier Inc.