An exploration of online technoliteracy capability teaching and learning in early years classrooms
- Authors: Falloon, Garry
- Date: 2024
- Type: Text , Journal article
- Relation: Education and Information Technologies Vol. 29, no. 1 (2024), p. 625-654
- Description: The increasing use of digital devices by young children has led to calls for earlier teaching of information literacy. However, some research indicates reluctance to do this, due to perceived limitations of young children and notions about what is and is not ‘appropriate’ for them to learn. This study examines this proposition through analysis of 6- and 7-year-olds’ application of ‘Technoliteracy’ capabilities during a unit of learning about Matariki (the Maori new year). It used an updated and expanded revision of Durrant and Green’s (2000) l(IT)eracy capability model to understand how the students applied ‘Technoliteracy’ capabilities to online research and the production of an information artefact for an identified audience. Although results were mixed, evidence was found of students’ productive engagement of ‘Technoliteracy’ capabilities aligned with Durrant and Green’s dimensions, suggesting that with developmentally appropriate curriculum and pedagogy they were capable of integrating these for meaning making, judging meaning quality, and meaning sharing and communication. Given increasingly ubiquitous access to devices from a young age, the results indicate that serious consideration should be given to teaching basic ‘Technoliteracy’ capabilities in early years classrooms. © 2023, Crown.
Coupled attention networks for multivariate time series anomaly detection
- Authors: Xia, Feng , Chen, Xin , Yu, Shuo , Hou, Mingliang , Liu, Mujie , You, Linlin
- Date: 2024
- Type: Text , Journal article
- Relation: IEEE Transactions on Emerging Topics in Computing Vol. 12, no. 1 (2024), p. 240-253
- Description: Multivariate time series anomaly detection (MTAD) plays a vital role in a wide variety of real-world application domains. Over the past few years, MTAD has attracted rapidly increasing attention from both academia and industry. Many deep learning and graph learning models have been developed for effective anomaly detection in multivariate time series data, which enable advanced applications such as smart surveillance and risk management with unprecedented capabilities. Nevertheless, MTAD is facing critical challenges deriving from the dependencies among sensors and variables, which often change over time. To address this issue, we propose a coupled attention-based neural network framework (CAN) for anomaly detection in multivariate time series data featuring dynamic variable relationships. We combine adaptive graph learning methods with graph attention to generate a global-local graph that can represent both global correlations and dynamic local correlations among sensors. To capture inter-sensor relationships and temporal dependencies, a convolutional neural network based on the global-local graph is integrated with a temporal self-attention module to construct a coupled attention module. In addition, we develop a multilevel encoder-decoder architecture that accommodates reconstruction and prediction tasks to better characterize multivariate time series data. Extensive experiments on real-world datasets have been conducted to evaluate the performance of the proposed CAN approach, and the results show that CAN significantly outperforms state-of-the-art baselines. © 2013 IEEE.
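To make the coupled-attention idea concrete, here is a minimal PyTorch sketch of a temporal self-attention block scoring anomalies by reconstruction error; it is an illustration under assumed dimensions (window, n_sensors), not the authors' CAN implementation.

```python
# Illustrative sketch (not the authors' code) of a temporal self-attention
# block of the kind CAN couples with graph attention for MTAD.
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    def __init__(self, n_sensors: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(n_sensors, d_model)   # per-timestep embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, n_sensors)     # back to sensor space

    def forward(self, x):                            # x: (batch, window, n_sensors)
        h = self.embed(x)
        h, _ = self.attn(h, h, h)                    # attend over time steps
        return self.out(h)                           # reconstruction of the input

window, n_sensors = 30, 8                            # assumed sizes
model = TemporalSelfAttention(n_sensors)
x = torch.randn(2, window, n_sensors)
recon = model(x)
# Anomaly score: per-timestep reconstruction error, as in reconstruction-based MTAD.
score = (recon - x).abs().mean(dim=-1)
print(score.shape)                                   # torch.Size([2, 30])
```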
DQN approach for adaptive self-healing of VNFs in cloud-native network
- Authors: Arulappan, Arunkumar , Mahanti, Aniket , Passi, Kalpdrum , Srinivasan, Thiruvenkadam , Naha, Ranesh , Raja, Gunasekaran
- Date: 2024
- Type: Text , Journal article
- Relation: IEEE Access Vol. 12, no. (2024), p. 34489-34504
- Description: The transformation from physical network functions to Virtual Network Functions (VNFs) requires a fundamental design change in how applications and services are tested and assured in a hybrid virtual network. Once the VNFs are onboarded in a cloud network infrastructure, operators need to test them automatically, in real-time, at the time of instantiation. This paper explicitly analyses the problem of adaptive self-healing of a Virtual Machine (VM) allocated by the VNF with a Deep Reinforcement Learning (DRL) approach. The DRL-based big data collection and analytics engine performs aggregation to probe and analyze data for troubleshooting and performance management. This engine helps to determine corrective actions (self-healing), such as scaling or migrating VNFs. Hence, we propose a Deep Q-Learning (DQL) based Deep Q-Network (DQN) mechanism for self-healing VNFs in the virtualized infrastructure manager. Virtual network probes of closed-loop orchestration automate the VNF and provide analytics for real-time, policy-driven orchestration in an open networking automation platform, using the stochastic gradient descent method for VNF service assurance and network reliability. The proposed DQN/DDQN mechanism optimizes the price and lowers the cost of resource usage by 18% without disrupting the Quality of Service (QoS) provided by the VNF. The resulting adaptive self-healing of the VNFs enhances computational performance by 27% compared to other state-of-the-art algorithms. © 2013 IEEE.
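The DQN mechanism can be pictured with a short sketch: a Q-network maps assumed VNF health metrics to self-healing actions and is trained with stochastic gradient descent, as the abstract mentions. The state features, actions and hyperparameters below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch (assumptions throughout) of a DQN that maps VM/VNF health
# metrics to self-healing actions; not the paper's implementation.
import random
import torch
import torch.nn as nn

STATE_DIM = 4                    # e.g., CPU, memory, latency, packet loss (assumed)
ACTIONS = ["no_op", "scale_out", "migrate"]

q_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
target_net.load_state_dict(q_net.state_dict())       # periodically re-synced in full training
opt = torch.optim.SGD(q_net.parameters(), lr=1e-2)   # stochastic gradient descent, per the abstract
gamma, eps = 0.9, 0.1

def select_action(state: torch.Tensor) -> int:
    if random.random() < eps:                         # epsilon-greedy exploration
        return random.randrange(len(ACTIONS))
    return int(q_net(state).argmax())

def td_update(s, a, r, s_next):
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max()
    loss = (q_net(s)[a] - target) ** 2                # squared temporal-difference error
    opt.zero_grad(); loss.backward(); opt.step()

s = torch.rand(STATE_DIM)
a = select_action(s)
td_update(s, a, r=1.0, s_next=torch.rand(STATE_DIM))
```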
A NetHack Learning Environment language wrapper for autonomous agents
- Authors: Goodger, Nikolaj , Vamplew, Peter , Foale, Cameron , Dazeley, Richard
- Date: 2023
- Type: Text , Journal article
- Relation: Journal of Open Research Software Vol. 11, no. (2023), p.
- Description: This paper describes a language wrapper for the NetHack Learning Environment (NLE) [1]. The wrapper replaces the non-language observations and actions with comparable language versions. The NLE offers a grand challenge for AI research while MiniHack [2] extends this potential to more specific and configurable tasks. By providing a language interface, we can enable further research on language agents and directly connect language models to a versatile environment. © 2023 The Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.
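The wrapper pattern the paper describes, replacing structured observations and discrete actions with text, can be sketched as follows; DummyEnv and the observation fields are stand-ins, since the real wrapper targets the NLE's actual observation space.

```python
# Pattern sketch of a language wrapper: non-language observations/actions are
# replaced with text equivalents. The NLE specifics here are assumptions.
class LanguageWrapper:
    def __init__(self, env, action_names):
        self.env = env
        self.action_names = action_names             # e.g., ["north", "south", ...]

    def reset(self):
        return self._describe(self.env.reset())

    def step(self, action_text: str):
        action = self.action_names.index(action_text)        # text -> discrete action
        obs, reward, done, info = self.env.step(action)
        return self._describe(obs), reward, done, info

    def _describe(self, obs) -> str:
        # Render a structured observation as text a language model can read.
        return f"You see {obs['object']} to the {obs['direction']}."

class DummyEnv:                                      # stand-in for the real NLE env
    def reset(self): return {"object": "a staircase", "direction": "north"}
    def step(self, a): return self.reset(), 0.0, False, {}

env = LanguageWrapper(DummyEnv(), ["north", "south", "east", "west"])
print(env.reset())                                   # "You see a staircase to the north."
```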
A novel dynamic software-defined networking approach to neutralize traffic burst
- Authors: Sharma, Aakanksha , Balasubramanian, Venki , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: Computers Vol. 12, no. 7 (2023), p.
- Description: A software-defined network (SDN) has a holistic view of the network and is highly suitable for handling dynamic loads in a traditional network with minimal updates to the network infrastructure. However, the standard SDN architecture's control plane has been designed around single or multiple distributed SDN controllers that face severe bottleneck issues. Our initial research created a reference model for the traditional network using the standard SDN (referred to as SDN hereafter) in a network simulator called NetSim. Based on the network traffic, the reference models consisted of light, modest and heavy networks depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm was proposed in the standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than the standard SDN. However, the enhancement was suitable only for small-scale networks because, in a large-scale network, the eSDN does not support dynamic SDN controller mapping; often, the same SDN controller gets overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment without considering flow fluctuations and traffic bursts, which leads to a lack of load balancing among the SDN controllers in real-time, eventually increasing network latency. Therefore, to maintain Quality of Service (QoS) in the network, it becomes imperative for the static SDN controller to neutralise on-the-fly traffic bursts. Thus, our novel dynamic controller mapping algorithm with multiple-controller placement, termed dynamic SDN (dSDN), is critical to solving the identified issues. In dSDN, the SDN controllers are mapped dynamically with the load fluctuation. If any SDN controller reaches its maximum threshold, the remaining traffic is diverted to another controller, significantly reducing delay and enhancing overall performance. Our technique considers the latency and load fluctuation in the network and manages situations where static mapping is ineffective in dealing with dynamic flow variation. © 2023 by the authors.
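A toy sketch of the threshold-driven mapping idea: when a controller hits its load threshold, further flows divert to the least-loaded peer. Capacities and controller names are invented for illustration, not taken from the paper.

```python
# Toy sketch of dSDN-style dynamic controller mapping (assumed numbers):
# flows above a controller's threshold divert to the least-loaded controller.
MAX_LOAD = 100                                  # flows per controller (assumed capacity)

controllers = {"c1": 98, "c2": 40, "c3": 73}    # current flow counts

def assign_flow(preferred: str) -> str:
    if controllers[preferred] < MAX_LOAD:
        controllers[preferred] += 1
        return preferred
    fallback = min(controllers, key=controllers.get)   # divert to least-loaded peer
    controllers[fallback] += 1
    return fallback

for _ in range(5):
    print(assign_flow("c1"))   # first two land on c1, the burst then diverts to c2
```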
Application of various robust techniques to study and evaluate the role of effective parameters on rock fragmentation
- Authors: Mehrdanesh, Amirhossein , Monjezi, Masoud , Khandelwal, Manoj , Bayat, Parichehr
- Date: 2023
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 39, no. 2 (2023), p. 1317-1327
- Description: In this paper, an attempt has been made to implement various robust techniques to predict rock fragmentation due to blasting in open pit mines using effective parameters. As rock fragmentation prediction is highly complex, various artificial intelligence-based techniques, such as artificial neural networks (ANN), classification and regression trees and support vector machines, were selected for the modeling. To validate and compare the prediction results, conventional multivariate regression analysis was also applied to the same data sets. Since the accuracy and generality of the modeling depend on the number of inputs, the required information was collected from four different open pit mines in Iran. According to the obtained results, ANN, with a determination coefficient of 0.986, is the most precise modeling method among the applied techniques. Also, based on the sensitivity analysis performed, the most influential parameters on rock fragmentation are rock quality designation, Schmidt hardness value and mean in-situ block size, and the least effective ones are hole diameter, burden and spacing. The advantage of the back-propagation neural network technique used in this study over other soft computing methods is its ability to describe complex and nonlinear multivariable problems in a transparent way. Furthermore, ANN can be used as a first approach where much knowledge about the influencing parameters is missing. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
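As a hedged illustration of the modeling setup, the following scikit-learn sketch fits a small ANN regressor to synthetic stand-ins for the blast-design and rock-mass inputs named above; it uses neither the study's data nor its network.

```python
# Hedged sketch: an ANN regressor mapping blast-design and rock-mass inputs to
# a fragmentation measure. Data are synthetic placeholders, not the mine data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Columns (assumed): RQD, Schmidt hardness, in-situ block size, hole diameter, burden, spacing
X = rng.uniform(0, 1, size=(200, 6))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + 0.02 * rng.standard_normal(200)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])
print(f"R^2 on held-out data: {r2_score(y[150:], model.predict(X[150:])):.3f}")
```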
Applications of machine learning and deep learning in antenna design, optimization, and selection : a review
- Authors: Sarker, Nayan , Podder, Prajoy , Mondal, M. , Shafin, Sakib , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 11, no. (2023), p. 103890-103915
- Description: This review paper provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and the high-frequency structure simulator (HFSS) for ML- and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods, including parallel optimization, single- and multi-objective optimization, variable-fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization, are discussed. The review also covers AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, the data generation technique with computational electromagnetics software is described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
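Surrogate-based optimization, one of the surveyed methods, fits in a few lines: train a cheap model on a handful of expensive EM simulations, then optimize the surrogate instead. The simulate function below is a placeholder for a CST/HFSS run, not a real solver API.

```python
# Minimal sketch of surrogate-assisted antenna optimization as surveyed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(length_mm: float) -> float:
    # Placeholder for an expensive EM solve returning |S11| in dB at the target band.
    return (length_mm - 28.0) ** 2 - 20.0

samples = np.linspace(20, 36, 8).reshape(-1, 1)       # few expensive evaluations
responses = np.array([simulate(x[0]) for x in samples])

surrogate = GaussianProcessRegressor().fit(samples, responses)
grid = np.linspace(20, 36, 400).reshape(-1, 1)        # cheap sweep over the surrogate
best = grid[surrogate.predict(grid).argmin()][0]
print(f"Surrogate-optimal patch length: {best:.2f} mm")   # should land near 28 mm
```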
Blockchain technology and application : an overview
- Authors: Dong, Shi , Abbas, Khushnood , Li, Meixi , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: PeerJ Computer Science Vol. 9, no. (2023), p.
- Description: In recent years, with the rise of digital currency, its underlying technology, blockchain, has become increasingly well-known. This technology has several key characteristics, including decentralization, time-stamped data, a consensus mechanism, traceability, programmability, security, and credibility, and block data is essentially tamper-proof. Due to these characteristics, blockchain can address the shortcomings of traditional financial institutions. As a result, this emerging technology has garnered significant attention from financial intermediaries, technology-based companies, and government agencies. This article offers an overview of the fundamentals of blockchain technology and its various applications. The introduction defines blockchain and explains its fundamental working principles, emphasizing features such as decentralization, immutability, and transparency. The article then traces the evolution of blockchain, from its inception in cryptocurrency to its development as a versatile tool with diverse potential applications. The main body of the article explores the fundamentals of blockchain systems, their limitations, various applications, and applicability. Finally, the study concludes by discussing the present state of blockchain technology and its future potential, as well as the challenges that must be surmounted to unlock its full potential. © 2023 Dong et al.
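The tamper-evidence property described above follows from hash chaining, which a minimal sketch makes concrete: each block stores its predecessor's hash, so editing any block invalidates the chain.

```python
# Minimal sketch of the tamper-evidence property: block data is chained by hashes.
import hashlib
import json
import time

def make_block(data: str, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain: list) -> bool:
    # Each block must reference the actual hash of its predecessor.
    return all(cur["prev_hash"] == prev["hash"] for prev, cur in zip(chain, chain[1:]))

chain = [make_block("genesis", "0")]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

print(chain_is_valid(chain))          # True
chain[1]["data"] = "Alice pays Bob 500"
chain[1]["hash"] = hashlib.sha256(b"forged").hexdigest()
print(chain_is_valid(chain))          # False: downstream prev_hash no longer matches
```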
Bundle enrichment method for nonsmooth difference of convex programming problems
- Authors: Gaudioso, Manilo , Taheri, Sona , Bagirov, Adil , Karmitsa, Napsu
- Date: 2023
- Type: Text , Journal article
- Relation: Algorithms Vol. 16, no. 8 (2023), p.
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Description: The Bundle Enrichment Method (BEM-DC) is introduced for solving nonsmooth difference of convex (DC) programming problems. The novelty of the method consists of the dynamic management of the bundle. More specifically, a DC model, being the difference of two convex piecewise affine functions, is formulated. The (global) minimization of the model is tackled by solving a set of convex problems whose cardinality depends on the number of linearizations adopted to approximate the second DC component function. The new bundle management policy distributes the information coming from previous iterations to separately model the DC components of the objective function. Such a distribution is driven by the sign of linearization errors. If the displacement suggested by the model minimization provides no sufficient decrease of the objective function, then the temporary enrichment of the cutting plane approximation of just the first DC component function takes place until either the termination of the algorithm is certified or a sufficient decrease is achieved. The convergence of the BEM-DC method is studied, and computational results on a set of academic test problems with nonsmooth DC objective functions are provided. © 2023 by the authors.
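The construction in the abstract can be restated compactly; the following LaTeX gives the DC objective and its piecewise affine bundle model, with notation assumed rather than taken from the paper.

```latex
% DC objective and its piecewise affine bundle model (notation assumed).
\[
  f(x) = f_1(x) - f_2(x), \qquad f_1, f_2 \ \text{convex},
\]
\[
  \hat{f}(x) = \max_{i \in B_1} \bigl\{ f_1(x_i) + g_i^{\top}(x - x_i) \bigr\}
             \;-\; \max_{j \in B_2} \bigl\{ f_2(x_j) + h_j^{\top}(x - x_j) \bigr\},
\]
% where $g_i \in \partial f_1(x_i)$ and $h_j \in \partial f_2(x_j)$. The bundles
% $B_1, B_2$ are managed dynamically: linearizations are routed to $B_1$ or $B_2$
% by the sign of their linearization errors, and $B_1$ is temporarily enriched
% when the model step fails to give sufficient decrease.
```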
Deep learning : survey of environmental and camera impacts on internet of things images
- Authors: Kaur, Roopdeep , Karmakar, Gour , Xia, Feng , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Artificial Intelligence Review Vol. 56, no. 9 (2023), p. 9605-9638
- Description: Internet of Things (IoT) images are attracting growing attention because of their wide range of applications, which require visual analysis to drive automation. However, IoT images are predominantly captured in outdoor environments and thus are inherently impacted by camera and environmental parameters, which can adversely affect the corresponding applications. Deep Learning (DL) has been widely adopted in the field of image processing and computer vision and can reduce the impact of these parameters on IoT images. Although many DL-based techniques for analyzing and reducing the environmental and camera impacts on IoT images are available in the current literature, to the best of our knowledge no survey paper presents state-of-the-art DL-based approaches for this purpose. Motivated by this, for the first time, we present a Systematic Literature Review (SLR) of existing DL techniques for analyzing and reducing environmental and camera lens impacts on IoT images. As part of this SLR, we first reiterate and highlight the significance of IoT images in their respective applications. Second, we describe the DL techniques employed for assessing the environmental and camera lens distortion impacts on IoT images. Third, we illustrate how DL can be effective in reducing the impact of environmental and camera lens distortion in IoT images. Finally, along with a critical reflection on the advantages and limitations of the techniques, we present ways to address the research challenges of existing techniques and identify further research directions to advance the relevant research areas. © 2023, The Author(s).
Deep learning-based digital image forgery detection using transfer learning
- Authors: Qazi, Emad , Zia, Tanveer , Imran, Muhammad , Faheem, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Intelligent Automation and Soft Computing Vol. 38, no. 3 (2023), p. 225-240
- Description: Deep learning is considered one of the most efficient and reliable methods for verifying the legitimacy of a digital image. In the current cyber world, where deepfakes have shaken the global community, confirming the legitimacy of a digital image is of great importance. With the advancements made in deep learning techniques, we can now efficiently train and develop state-of-the-art digital image forensic models. The most traditional and widely used method is the convolutional neural network (CNN) for verification of image authenticity, but it consumes a considerable amount of resources and requires a large dataset for training. Therefore, in this study, a transfer-learning-based deep learning technique for image forgery detection is proposed. The proposed methodology consists of three modules: a preprocessing module, a convolutional module, and a classification module. By reusing pre-trained weights, the proposed technique drastically reduces training time. The performance of the proposed technique is evaluated using the benchmark BOW and BOSSBase datasets, covering five forensic types: JPEG compression, contrast enhancement (CE), median filtering (MF), additive Gaussian noise, and resampling. We evaluated the performance of our proposed technique through various experiments and case scenarios and achieved an accuracy of 99.92%. The results show the superiority of the proposed system. © 2023, Tech Science Press. All rights reserved.
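A hedged sketch of the transfer-learning pattern described, freezing a pre-trained backbone and training only a small classification head, is shown below; the ResNet-18 backbone and training details are assumptions, not the paper's exact configuration.

```python
# Sketch of the transfer-learning pattern: freeze a pre-trained backbone and
# train only a small classification head. Backbone choice is an assumption.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")    # pre-trained weights reused
for p in backbone.parameters():
    p.requires_grad = False                            # frozen: no retraining cost
backbone.fc = nn.Linear(backbone.fc.in_features, 5)    # 5 forensic types, per the abstract

opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                        # placeholder image batch
y = torch.randint(0, 5, (4,))
loss = loss_fn(backbone(x), y)                         # only the head receives gradients
opt.zero_grad(); loss.backward(); opt.step()
print(f"head-only training step, loss={loss.item():.3f}")
```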
Defending SDN against packet injection attacks using deep learning
- Authors: Phu, Anh , Li, Bo , Ullah, Faheem , Ul Huque, Tanvir , Naha, Ranesh , Babar, Muhammad , Nguyen, Hung
- Date: 2023
- Type: Text , Journal article
- Relation: Computer Networks Vol. 234, no. (2023), p.
- Description: The (logically) centralized architecture of software-defined networks makes them an easy target for packet injection attacks. In these attacks, the attacker injects malicious packets into the SDN network to degrade the services and performance of the SDN controller and to overflow the capacity of the SDN switches. Such attacks have been shown to ultimately stop the network functioning in real-time, leading to network breakdowns. There has been significant work on detecting and defending against similar DoS attacks in non-SDN networks, but detection and protection techniques for SDN against packet injection attacks are still in their infancy. Furthermore, many of the proposed solutions have been shown to be easily bypassed by simple modifications to the attacking packets or by altering the attacking profile. In this paper, we develop novel Graph Convolutional Neural Network models and algorithms for grouping network nodes/users into security classes by learning from network data. We start with two simple classes — nodes that engage in suspicious packet injection attacks and nodes that do not. From these classes, we then partition the network into separate segments with different security policies using distributed Ryu controllers in an SDN network. We show in experiments on an emulated SDN that our detection solution outperforms alternative approaches with above 99% detection accuracy for various types (both old and new) of injection attacks. More importantly, our mitigation solution maintains the continuous functioning of non-compromised nodes while isolating compromised/suspicious nodes in real-time. All code and data are publicly available for the reproducibility of our results. © 2023 The Author(s)
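In the spirit of the described detection model, here is an illustrative two-class GCN forward pass that labels nodes as suspicious or benign; the graph, features and layer sizes are invented, and this is a sketch rather than the authors' architecture.

```python
# Illustrative two-class GCN forward pass for labelling nodes as suspicious or
# benign; a sketch of the general technique, not the authors' model.
import torch
import torch.nn as nn

def normalize_adj(A: torch.Tensor) -> torch.Tensor:
    A_hat = A + torch.eye(A.size(0))                 # add self-loops
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt           # symmetric normalization

class GCN(nn.Module):
    def __init__(self, in_dim, hidden, n_classes=2):
        super().__init__()
        self.w1, self.w2 = nn.Linear(in_dim, hidden), nn.Linear(hidden, n_classes)

    def forward(self, A_norm, X):
        h = torch.relu(self.w1(A_norm @ X))          # neighbourhood aggregation
        return self.w2(A_norm @ h)                   # per-node class logits

A = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
X = torch.rand(4, 8)                                 # per-node traffic features (assumed)
logits = GCN(8, 16)(normalize_adj(A), X)
print(logits.argmax(dim=1))                          # predicted security class per node
```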
Enhancing ultimate bearing capacity prediction of cohesionless soils beneath shallow foundations with grey box and hybrid AI models
- Authors: Kiany, Katayoon , Baghbani, Abolfazl , Abuel-Naga, Hossam , Baghbani, Hasan , Arabani, Mahyar , Shalchian, Mohammad
- Date: 2023
- Type: Text , Journal article
- Relation: Algorithms Vol. 16, no. 10 (2023), p.
- Description: This study examines the potential of soft computing techniques, namely multiple linear regression (MLR), genetic programming (GP), classification and regression trees (CART) and a genetic algorithm-emotional neural network (GA-ENN), to predict the ultimate bearing capacity (UBC) of cohesionless soils beneath shallow foundations. For the first time in the literature, two grey-box AI models, GP and CART, and one hybrid AI model, GA-ENN, were used to predict UBC. The inputs of the model are the width of footing (B), depth of footing (D), footing geometry (ratio of length to width, L/B), unit weight of sand (
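To illustrate the grey-box model families named in the abstract, the following scikit-learn sketch compares MLR with a CART on synthetic stand-in data; the input columns are inferred from the abstract and the data are not the study's.

```python
# Sketch comparing MLR with a grey-box CART for UBC-style regression;
# synthetic stand-in data only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Assumed inputs: footing width B, depth D, L/B ratio, unit weight, friction angle
X = rng.uniform(0, 1, size=(300, 5))
y = 2.0 * X[:, 0] + X[:, 1] * X[:, 4] + 0.05 * rng.standard_normal(300)

for name, model in [("MLR", LinearRegression()),
                    ("CART", DecisionTreeRegressor(max_depth=5, random_state=1))]:
    model.fit(X[:240], y[:240])
    print(name, f"R^2 = {r2_score(y[240:], model.predict(X[240:])):.3f}")
```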
Knowledge graphs : opportunities and challenges
- Authors: Peng, Ciyuan , Xia, Feng , Naseriparsa, Mehdi , Osborne, Francesco
- Date: 2023
- Type: Text , Journal article
- Relation: Artificial Intelligence Review Vol. 56, no. 11 (2023), p. 13071-13102
- Description: With the explosive growth of artificial intelligence (AI) and big data, it has become vitally important to organize and represent the enormous volume of knowledge appropriately. As graph data, knowledge graphs accumulate and convey knowledge of the real world. It is well recognized that knowledge graphs effectively represent complex information; hence, they have rapidly gained the attention of academia and industry in recent years. Thus, to develop a deeper understanding of knowledge graphs, this paper presents a systematic overview of this field. Specifically, we focus on the opportunities and challenges of knowledge graphs. We first review the opportunities of knowledge graphs in terms of two aspects: (1) AI systems built upon knowledge graphs; (2) potential application fields of knowledge graphs. Then, we thoroughly discuss severe technical challenges in this field, such as knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning. We expect that this survey will shed new light on future research and the development of knowledge graphs. © 2023, The Author(s).
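Two of the survey's central objects, triples and embeddings, fit in a tiny sketch: a knowledge graph as (head, relation, tail) triples and a TransE-style plausibility score (h + r ≈ t). Entities and vectors here are illustrative only.

```python
# Tiny sketch: a knowledge graph as triples plus a TransE-style embedding score.
import numpy as np

triples = [("Turing", "bornIn", "London"), ("London", "capitalOf", "UK")]

rng = np.random.default_rng(0)
dim = 8
entities = {e: rng.normal(size=dim) for t in triples for e in (t[0], t[2])}
relations = {t[1]: rng.normal(size=dim) for t in triples}

def transe_score(h: str, r: str, t: str) -> float:
    # Lower is more plausible: distance between translated head and tail.
    return float(np.linalg.norm(entities[h] + relations[r] - entities[t]))

for h, r, t in triples:
    print(f"({h}, {r}, {t}) -> score {transe_score(h, r, t):.2f}")
```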
MSCET : a multi-scenario offloading schedule for biomedical data processing and analysis in cloud-edge-terminal collaborative vehicular networks
- Authors: Ni, Zhichen , Chen, Honglong , Li, Zhe , Wang, Xiaomeng , Yan, Na , Liu, Weifeng , Xia, Feng
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE/ACM Transactions on Computational Biology and Bioinformatics Vol. 20, no. 4 (2023), p. 2376-2386
- Description: With the rapid development of Artificial Intelligence (AI) and the Internet of Things (IoT), an increasing number of computation-intensive or delay-sensitive biomedical data processing and analysis tasks are produced in vehicles, bringing more and more challenges to the biometric monitoring of drivers. Edge computing is a new paradigm that addresses these challenges by offloading tasks from the resource-limited vehicles to Edge Servers (ESs) in Road Side Units (RSUs). However, most traditional offloading schedules for vehicular networks concentrate on the edge, while some tasks may be too complex for ESs to process. To this end, we consider a collaborative vehicular network in which the cloud, edge and terminal cooperate with each other to accomplish the tasks. The vehicles can offload computation-intensive tasks to the cloud to conserve edge resources. We further construct a virtual resource pool that integrates the resources of multiple ESs, since some regions may be covered by multiple RSUs. In this paper, we propose a Multi-Scenario offloading schedule for biomedical data processing and analysis in Cloud-Edge-Terminal collaborative vehicular networks, called MSCET. The parameters of the proposed MSCET are optimized to maximize the system utility. We also conduct extensive simulations to evaluate the proposed MSCET, and the results illustrate that MSCET outperforms other existing schedules. © 2004-2012 IEEE.
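A toy sketch of a cloud-edge-terminal scheduling rule in the spirit of MSCET: choose the tier minimizing transfer-plus-execution time. Latencies and speed-ups are invented for illustration and do not reflect the paper's utility model.

```python
# Toy cloud-edge-terminal offloading rule (invented numbers, not MSCET itself):
# heavier tasks justify the higher transfer latency of faster remote tiers.
TIERS = {
    # tier: (network latency ms, relative compute speed): illustrative values
    "terminal": (0.0, 1.0),
    "edge": (20.0, 5.0),
    "cloud": (80.0, 20.0),
}

def finish_time(tier: str, task_ms_on_terminal: float) -> float:
    latency, speed = TIERS[tier]
    return latency + task_ms_on_terminal / speed      # transfer + execution

def schedule(task_ms_on_terminal: float) -> str:
    return min(TIERS, key=lambda t: finish_time(t, task_ms_on_terminal))

print(schedule(10))     # light task: run locally on the terminal
print(schedule(200))    # medium task: the edge wins
print(schedule(2000))   # heavy task: offload to the cloud
```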
Multi-aspect annotation and analysis of Nepali tweets on anti-establishment election discourse
- Authors: Rauniyar, Kritesh , Poudel, Sweta , Shiwakoti, Shuvam , Thapa, Surendrabikram , Rashid, Junaid , Kim, Jungeun , Imran, Muhammad , Naseem, Usman
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 143092-143115
- Description: In today's social media-dominated landscape, digital platforms wield substantial influence over public opinion, particularly during crucial political events such as electoral processes. These platforms become hubs for diverse discussions, encompassing topics, reforms, and desired changes. Notably, in times of government dissatisfaction, they serve as arenas for anti-establishment discourse, highlighting the need to analyze public sentiment in these conversations. However, the analysis of such discourse is notably scarce, even in high-resource languages, and entirely non-existent in the context of the Nepali language. To address this critical gap, we present Nepal Anti Establishment discourse Tweets (NAET), a novel dataset comprising 4,445 multi-aspect annotated Nepali tweets, facilitating a comprehensive understanding of political conversations. Our contributions encompass evaluating tweet relevance, sentiment, and satire, while also exploring the presence of hate speech, identifying its targets, and distinguishing directed and non-directed expressions. Additionally, we investigate hope speech, an underexplored aspect crucial in the context of anti-establishment discourse, as it reflects the aspirations and expectations from new political figures and parties. Furthermore, we set NLP-based baselines for all these tasks. To ensure a holistic analysis, we also employ topic modeling, a powerful technique that helps us identify and understand the prevalent themes and patterns emerging from the discourse. Our research thus presents a comprehensive and multi-faceted perspective on anti-establishment election discourse in a low-resource language setting. The dataset is publicly available, facilitating in-depth analysis of political tweets in Nepali discourse and further advancing NLP research for the Nepali language through labeled data and baselines for various NLP tasks. The dataset for this work is made available at https://github.com/rkritesh210/NAET. © 2013 IEEE.
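A minimal baseline of the kind the paper reports for its NLP tasks (e.g., relevance classification) can be sketched with TF-IDF features and logistic regression; the example tweets are invented English stand-ins, not items from NAET.

```python
# Minimal text-classification baseline sketch (invented examples, not NAET data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["the old parties have failed us, vote for change",
         "beautiful weather in kathmandu today",
         "we need new leadership this election",
         "just had a great lunch with friends"]
labels = [1, 0, 1, 0]     # 1 = anti-establishment election discourse, 0 = irrelevant

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
# Likely predicts [1] given the overlapping vocabulary with the positive class.
print(clf.predict(["throw out the establishment and vote for change"]))
```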
Nonsmooth optimization-based hyperparameter-free neural networks for large-scale regression
- Karmitsa, Napsu, Taheri, Sona, Joki, Kaisa, Paasivirta, Pauliina, Defterdarovic, J., Bagirov, Adil, Mäkelä, Marko
- Authors: Karmitsa, Napsu , Taheri, Sona , Joki, Kaisa , Paasivirta, Pauliina , Defterdarovic, J. , Bagirov, Adil , Mäkelä, Marko
- Date: 2023
- Type: Text , Journal article
- Relation: Algorithms Vol. 16, no. 9 (2023), p.
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text:
- Reviewed:
- Description: In this paper, a new nonsmooth optimization-based algorithm for solving large-scale regression problems is introduced. The regression problem is modeled as a fully connected feedforward neural network with one hidden layer, piecewise linear activation, and the L1-loss function. A modified version of the limited memory bundle method is applied to minimize this nonsmooth objective. In addition, a novel constructive approach for the automated determination of the proper number of hidden nodes is developed. Finally, large real-world data sets are used to evaluate the proposed algorithm and to compare it with some state-of-the-art neural network algorithms for regression. The results demonstrate the superiority of the proposed algorithm as a predictive tool on most of the data sets used in the numerical experiments. © 2023 by the authors.
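- Editor's sketch: the model class from the abstract — one ReLU hidden layer under the L1 loss — on synthetic data. The authors minimize this nonsmooth objective with a limited memory bundle method; a plain subgradient step stands in here purely for illustration, and all sizes and step lengths are invented.
```python
# Sketch of a one-hidden-layer piecewise-linear network trained on the
# L1 loss. Subgradient descent replaces the paper's bundle method.
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 200, 5, 16                         # samples, features, hidden nodes
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

W1, b1 = 0.1 * rng.normal(size=(d, h)), np.zeros(h)
w2, b2 = 0.1 * rng.normal(size=h), 0.0

def predict(X):
    return np.maximum(X @ W1 + b1, 0.0) @ w2 + b2   # ReLU hidden layer

for _ in range(500):                          # subgradient steps on the L1 loss
    pre = X @ W1 + b1
    H, mask = np.maximum(pre, 0.0), pre > 0
    s = np.sign(H @ w2 + b2 - y)              # subgradient of |prediction - y|
    G = (s[:, None] * w2[None, :]) * mask     # backprop through the ReLU
    w2 -= 1e-3 * (H.T @ s) / n
    b2 -= 1e-3 * s.mean()
    W1 -= 1e-3 * (X.T @ G) / n
    b1 -= 1e-3 * G.mean(axis=0)

print("mean absolute error:", np.abs(predict(X) - y).mean())
```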
Performance analysis of machine learning classifiers for non-technical loss detection
- Ghori, Khawaja, Imran, Muhammad, Nawaz, Asad, Abbasi, Rabeeh, Ullah, Ata, Szathmary, Laszlo
- Authors: Ghori, Khawaja , Imran, Muhammad , Nawaz, Asad , Abbasi, Rabeeh , Ullah, Ata , Szathmary, Laszlo
- Date: 2023
- Type: Text , Journal article
- Relation: Journal of Ambient Intelligence and Humanized Computing Vol. 14, no. 11 (2023), p. 15327-15342
- Full Text:
- Reviewed:
- Description: Power companies are responsible for producing and transferring the required amount of electricity from grid stations to individual households. Many countries suffer losses amounting to billions of dollars due to non-technical loss (NTL) in power supply companies. To deal with NTL, many machine learning classifiers have been employed in recent years. However, little has been studied about the performance evaluation metrics used in NTL detection to judge how well a classifier predicts the non-technical loss. This paper first uses three classifiers: random forest, K-nearest neighbors, and linear support vector machine, to predict the occurrence of NTL in a real dataset of an electric supply company containing approximately 80,000 monthly consumption records. It then computes 14 performance evaluation metrics across the three classifiers and identifies the key relationships between them. These relationships provide insights for deciding which classifier is more useful under a given scenario for NTL detection. This work can serve as a baseline not only for NTL detection in the power industry but also for the selection of appropriate performance evaluation metrics for NTL detection. © 2020, The Author(s).
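- Editor's sketch: the comparison pipeline the abstract describes — the same three classifiers scored with a shared metric set (a subset of the paper's 14 is shown). Synthetic, imbalanced data stands in for the non-public consumption records; every parameter here is an assumption.
```python
# Three classifiers, one shared metric table, on stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, matthews_corrcoef,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

# Imbalanced classes mimic the rarity of fraudulent consumption records.
X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

classifiers = {
    "random forest": RandomForestClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(),
    "linear SVM": LinearSVC(max_iter=5000),
}
for name, clf in classifiers.items():
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, y_hat):.3f}",
          f"prec={precision_score(y_te, y_hat):.3f}",
          f"rec={recall_score(y_te, y_hat):.3f}",
          f"f1={f1_score(y_te, y_hat):.3f}",
          f"mcc={matthews_corrcoef(y_te, y_hat):.3f}")
```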
Performance and cryptographic evaluation of security protocols in distributed networks using applied pi calculus and Markov Chain
- Edris, Ed, Aiash, Mahdi, Khoshkholghi, Mohammad, Naha, Ranesh, Chowdhury, Abdullahi, Loo, Jonathan
- Authors: Edris, Ed , Aiash, Mahdi , Khoshkholghi, Mohammad , Naha, Ranesh , Chowdhury, Abdullahi , Loo, Jonathan
- Date: 2023
- Type: Text , Journal article
- Relation: Internet of Things (Netherlands) Vol. 24, no. (2023), p.
- Full Text:
- Reviewed:
- Description: The development of cryptographic protocols goes through two stages, namely security verification and performance analysis. The verification of a protocol's security properties can be achieved analytically using threat modelling, or formally using formal methods and model checkers. The performance analysis can be mathematical or simulation-based. However, mathematical modelling is complicated and does not reflect the protocol's actual deployment environment in the current state of the art. Simulation software provides scalability and can simulate complicated scenarios; however, simulations are sometimes infeasible due to a lack of support for new technologies or simulation scenarios. Therefore, this paper proposes a formal method and analytical model for evaluating the performance of security protocols using applied pi-calculus and Markov Chain processes. It interprets algebraic processes and associates cryptographic operations with quantitative measures to estimate and evaluate cryptographic costs. With this approach, the protocols are presented as processes using applied pi-calculus, and their security properties are an approximate abstraction of protocol equivalence based on the verification from ProVerif, evaluated using analytical and simulation models for quantitative measures. The interpretation of the quantities is associated with process transitions, rates, and measures as the cost of using cryptographic primitives. This method supports user input in analysing a protocol's activities and performance. As a proof of concept, we deploy this approach to assess the performance of security protocols designed to protect large-scale, 5G-based Device-to-Device communications. We also conducted a performance evaluation of the protocols based on analytical and network-simulator results to compare the effectiveness of the proposed approach. © 2023 The Author(s)
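- Editor's sketch: a toy rendering of the abstract's core idea — model a protocol run as a continuous-time Markov chain whose transition rates are the inverse costs of the cryptographic primitives fired at each step, then read off an expected completion time. The per-operation costs and the four-step protocol are invented for illustration, not figures from the paper.
```python
# Expected protocol completion time from a linear CTMC of crypto steps.
import numpy as np

# Per-operation costs in milliseconds (hypothetical figures).
cost_ms = {"ecdh": 1.2, "ecdsa_sign": 0.9, "aes_encrypt": 0.05, "hash": 0.01}

# A linear protocol: each state fires one primitive, then moves on.
steps = ["ecdh", "ecdsa_sign", "aes_encrypt", "hash"]
rates = np.array([1.0 / cost_ms[s] for s in steps])   # exponential rates

# For a linear chain, expected time to absorption is the sum of the mean
# sojourn times; a general chain would need the fundamental matrix.
expected_ms = np.sum(1.0 / rates)
print(f"expected completion time: {expected_ms:.3f} ms")

# Monte Carlo check: sample each exponential sojourn and sum per run.
rng = np.random.default_rng(1)
samples = rng.exponential(1.0 / rates, size=(100_000, len(rates))).sum(axis=1)
print(f"simulated mean: {samples.mean():.3f} ms")
```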
UDTN-RS : a new underwater delay tolerant network routing protocol for coastal patrol and surveillance
- Azad, Saiful, Neffati, Ahmed, Mahmud, Mufti, Kaiser, M., Ahmed, Muhammad, Kamruzzaman, Joarder
- Authors: Azad, Saiful , Neffati, Ahmed , Mahmud, Mufti , Kaiser, M. , Ahmed, Muhammad , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 142780-142793
- Full Text:
- Reviewed:
- Description: The Coastal Patrol and Surveillance Application (CPSA) is developed and deployed to detect, track, and monitor water vessel traffic using automated devices. The latest advances in marine technologies, including Autonomous Underwater Vehicles, have encouraged the development of this type of application. To facilitate their operations, the installation of a Coastal Patrol and Surveillance Network (CPSN) is mandatory. One of the primary design objectives of this network is to deliver an adequate amount of data within an effective time frame. This is particularly essential for detecting an intruder's vessel and reporting it over the adverse underwater communication channels. Additionally, the intermittent connectivity of the nodes remains another important obstacle to overcome for the smooth functioning of the CPSA. Taking these objectives and obstacles into account, this work proposes a new protocol that integrates a forward error correction technique (namely Reed-Solomon, or RS, codes) into the Underwater Delay Tolerant Network with probabilistic spraying technique (UDTN-Prob) routing protocol, named Underwater Delay Tolerant Protocol with RS (UDTN-RS). In addition, the existing binary packet spraying technique in UDTN-Prob is enhanced to support encoded packet exchange between contacting nodes. A comprehensive simulation has been performed employing the DEsign, Simulate, Emulate and Realize Test-beds (DESERT) underwater simulator along with the World Ocean Simulation System (WOSS) package to obtain a more realistic account of acoustic propagation and establish the effectiveness of the proposed protocol. Three scenarios are considered in the simulation campaign: varying data transmission rate, varying area size, and a scenario focusing on estimating the overhead ratio. For the first two scenarios, three metrics are taken into account: normalised packet delivery ratio, delay, and normalised throughput. The acquired results for these scenarios and metrics are compared with those of the protocol's ancestor, UDTN-Prob. The results suggest that the proposed UDTN-RS protocol is a suitable alternative to existing protocols such as UDTN-Prob and Epidemic for sparse networks like the CPSN. © 2013 IEEE.
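- Editor's sketch: the forward-error-correction idea behind UDTN-RS — protect a DTN bundle with Reed-Solomon parity so a receiver can repair symbols corrupted by the acoustic channel. The library choice (the `reedsolo` package), the parity budget, and the payload are all the editor's assumptions, not the paper's implementation.
```python
# Reed-Solomon protection of a DTN bundle against symbol corruption.
from reedsolo import RSCodec

rsc = RSCodec(nsym=32)          # 32 parity bytes -> corrects up to 16 errors
bundle = b"vessel sighting: lat=..., lon=..., t=..."  # placeholder payload
encoded = rsc.encode(bundle)

# Simulate channel corruption of a few symbols.
damaged = bytearray(encoded)
for i in (3, 17, 40):
    damaged[i] ^= 0xFF

# reedsolo >= 1.5 returns (message, message+ecc, errata positions).
decoded, _, _ = rsc.decode(bytes(damaged))
assert bytes(decoded) == bundle  # recovered despite the corrupted bytes
print("bundle recovered intact")
```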