A tree-based stacking ensemble technique with feature selection for network intrusion detection
- Authors: Rashid, Mamunur, Kamruzzaman, Joarder, Imam, Tasadduq, Wibowo, Santoso, Gordon, Steven
- Date: 2022
- Type: Text , Journal article
- Relation: Applied Intelligence Vol. 52, no. 9 (2022), p. 9768-9781
- Full Text: false
- Reviewed:
- Description: Several studies have used machine learning algorithms to develop intrusion detection systems (IDS), which differentiate anomalous behaviours from the normal activities of network systems. Because automated data collection is easy and the volume of data collected on network traffic and activities keeps growing, the complexity of intrusion analysis is increasing exponentially. A particular issue is that, due to statistical and computational limitations, a single classifier may not perform well on the large-scale data found in modern IDS contexts. Ensemble methods have been explored in the literature for such big data contexts. Although more complicated and computationally demanding, ensemble methods have been reported to achieve better accuracy than single classifiers in various large-scale data classification contexts, so it is worth exploring how ensemble approaches perform in IDS. In this research, we introduce a tree-based stacking ensemble technique (SET) and test the effectiveness of the proposed model on two intrusion datasets (NSL-KDD and UNSW-NB15). We further incorporate feature selection techniques to select the most relevant features for the proposed SET. A comprehensive performance analysis shows that our proposed model identifies normal and anomalous network traffic better than other existing IDS models. This implies the potential of our proposed system for cybersecurity in the Internet of Things (IoT) and large-scale networks. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
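The abstract describes the approach only at a high level. A minimal sketch of the general pattern (a stacking ensemble of tree learners behind a feature-selection step, built with scikit-learn) is shown below; the base learners, meta-learner, selector, and synthetic data are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a tree-based stacking ensemble with feature
# selection; the concrete base learners, meta-learner and selector are
# assumptions, not the configuration reported in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Stand-in for NSL-KDD / UNSW-NB15 features (normal vs. anomalous traffic).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=10, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=100, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=5,
)
# Feature selection feeds only the k most informative features to the stack.
model = make_pipeline(SelectKBest(mutual_info_classif, k=20), stack)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```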
A survey on big multimedia data processing and management in smart cities
- Authors: Usman, Muhammad, Jan, Mian, He, Xiangjian, Chen, Jinjun
- Date: 2019
- Type: Text , Journal article
- Relation: ACM computing surveys Vol. 52, no. 3 (2019), p. 1-29
- Full Text: false
- Reviewed:
- Description: The integration of embedded multimedia devices with powerful computing platforms, e.g., machine learning platforms, helps to build smart cities and transforms the concept of the Internet of Things into the Internet of Multimedia Things (IoMT). To provide different services to the residents of smart cities, IoMT technology generates big multimedia data, and managing these data is a challenging task: without proper management, it is hard to maintain the consistency, reusability, and reconcilability of the big multimedia data generated in smart cities. Various machine learning techniques can be used for automatic classification of raw multimedia data, allowing machines to learn features and perform specific tasks. In this survey, we focus on machine learning platforms that can be used to process and manage the big multimedia data generated by different applications in smart cities. We also highlight various limitations and research challenges that need to be considered when processing big multimedia data in real time.
Security and privacy in IoT using machine learning and blockchain : threats and countermeasures
- Authors: Waheed, Nazar, He, Xiangjian, Ikram, Muhammad, Usman, Muhammad, Hashmi, Saad
- Date: 2021
- Type: Text , Journal article , Review
- Relation: ACM Computing Surveys Vol. 53, no. 6 (2021), p.
- Full Text:
- Reviewed:
- Description: The security and privacy of users have become significant concerns due to the involvement of Internet of Things (IoT) devices in numerous applications. Cyber threats are growing at an explosive pace, making existing security and privacy measures inadequate; hence, everyone on the Internet is a target for hackers. Consequently, Machine Learning (ML) algorithms are used to produce accurate outputs from large complex databases, and the generated outputs can be used to predict and detect vulnerabilities in IoT-based systems. Furthermore, Blockchain (BC) techniques are becoming popular in modern IoT applications to solve security and privacy issues. Several studies have been conducted on either ML algorithms or BC techniques, but they target either security or privacy issues in isolation, posing the need for a combined survey of recent efforts addressing both security and privacy issues using ML algorithms and BC techniques. In this article, we summarize research efforts made from 2008 to 2019 addressing security and privacy issues using ML algorithms and BC techniques in the IoT domain. First, we discuss and categorize the security and privacy threats reported over those 12 years in the IoT domain. We then classify the literature on security and privacy efforts based on ML algorithms and BC techniques in the IoT domain. Finally, we identify and illuminate several challenges and future research directions for using ML algorithms and BC techniques to address security and privacy issues in the IoT domain. © 2020 ACM.
Performance analysis of different types of machine learning classifiers for non-technical loss detection
- Authors: Ghori, Khawaja, Abbasi, Rabeeh, Awais, Muhammad, Imran, Muhammad, Ullah, Ata, Szathmary, Laszlo
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 16033-16048
- Full Text:
- Reviewed:
- Description: With the ever-growing demand for electric power, it is quite challenging to detect and prevent Non-Technical Loss (NTL) in power industries. NTL is committed by bypassing meters, hooking onto the main lines, and reversing or tampering with meters. Manual on-site checking and reporting of NTL remains an unattractive strategy due to the required manpower and associated cost. The use of machine learning classifiers has been an attractive option for NTL detection: it enables data-driven analysis and a high hit ratio at lower cost and manpower requirements. However, there is still a need to explore the results across multiple types of classifiers on a real-world dataset. This paper considers a real dataset from a power supply company in Pakistan to identify NTL. We evaluate 15 existing machine learning classifiers across 9 types, including the recently developed CatBoost, LGBoost, and XGBoost classifiers. Our work is validated using extensive simulations. Results elucidate that ensemble methods and Artificial Neural Networks (ANN) outperform the other types of classifiers for NTL detection on our real dataset. Moreover, we derive a procedure to identify the top 14 of a total of 71 features, which contribute 77% to predicting NTL. We conclude that including more features beyond this threshold does not improve performance, and thus limiting the classifiers to the selected feature set reduces their computation time. Last but not least, the paper also analyzes the results of the classifiers with respect to their types, which opens a new area of research in NTL detection. © 2013 IEEE.
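The abstract mentions a procedure for identifying the top 14 of 71 features that contribute 77% to NTL prediction. A hypothetical sketch of one such cumulative-importance cut-off follows; the classifier, synthetic data, and the exact ranking rule are assumptions for illustration, not the paper's procedure.

```python
# Hypothetical sketch of selecting the top-k features by cumulative
# importance; the threshold and classifier are illustrative, not the
# paper's exact procedure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=71, n_informative=14, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

order = np.argsort(clf.feature_importances_)[::-1]   # rank features by importance
cum = np.cumsum(clf.feature_importances_[order])     # cumulative importance share
k = int(np.searchsorted(cum, 0.77) + 1)              # smallest k covering 77%
top_features = order[:k]
print(f"top-{k} features cover {cum[k - 1]:.0%} of importance:", top_features)
```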
A survey on representation learning efforts in cybersecurity domain
- Authors: Usman, Muhammad, Jan, Mian, He, Xiangjian, Chen, Jinjun
- Date: 2020
- Type: Text , Journal article
- Relation: ACM computing surveys Vol. 52, no. 6 (2020), p. 1-28
- Full Text: false
- Reviewed:
- Description: In this technology-based era, network-based systems face new cyber-attacks on a daily basis. Traditional cybersecurity approaches are based on old threat-knowledge databases and need to be updated daily to stand against the new generation of cyber-threats and protect the underlying network-based systems. Along with updating threat-knowledge databases, there is a need for proper management and processing of the data generated by sensitive real-time applications. In recent years, various computing platforms based on representation learning algorithms have emerged as a useful resource to manage and exploit the generated data and extract meaningful information. If these platforms are properly utilized, strong cybersecurity systems can be developed to protect the underlying network-based systems and support sensitive real-time applications. In this survey, we highlight various cyber-threats, real-life examples, and initiatives taken by international organizations. We discuss various computing platforms based on representation learning algorithms for processing and analyzing the generated data. We highlight popular datasets introduced by well-known global organizations that can be used to train representation learning algorithms to predict and detect threats. We also provide an in-depth analysis of recent research efforts based on representation learning algorithms to protect network-based systems against current cyber-threats. Finally, we highlight various limitations and challenges in these efforts and in the available datasets that need to be considered when using them to build cybersecurity systems.
An automatic detection of breast cancer diagnosis and prognosis based on machine learning using ensemble of classifiers
- Authors: Naseem, Usman, Rashid, Junaid, Ali, Liaqat, Kim, Jungeun, Haq, Qazi, Awan, Mazhar, Imran, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 78242-78252
- Full Text:
- Reviewed:
- Description: Breast cancer (BC) is the second most prevalent type of cancer among women leading to death, and its mortality rate is very high. Its effects can be reduced if it is diagnosed early. Early detection of BC greatly improves the prognosis and likelihood of recovery, as it may encourage prompt surgical care for patients. It is therefore vital to have a system enabling the healthcare industry to detect breast cancer quickly and accurately. Machine learning (ML) is widely used in BC pattern classification due to its advantages in modelling critical feature detection from complex BC datasets. In this paper, we propose a system for automatic detection of BC diagnosis and prognosis using an ensemble of classifiers. First, we review various ML algorithms and ensembles of different ML algorithms. We present an overview of ML algorithms, including ANN, and ensembles of different classifiers for automatic BC diagnosis and prognosis detection. We also present and compare various ensemble models and other variants of the tested ML-based models, with and without an up-sampling technique, on two benchmark datasets. We also study the effect of using balanced class weights on the prognosis dataset and compare its performance with the others. The results show that the ensemble method outperformed other state-of-the-art methods, achieving 98.83% accuracy. Because of this high performance, the proposed system is of great importance to the medical industry and the relevant research community. © 2013 IEEE.
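As an illustration of the ingredients named in the abstract (an ensemble of classifiers, up-sampling, and balanced class weights), here is a minimal scikit-learn sketch; the ensemble members, the naive up-sampling step, and the dataset are stand-ins, not the paper's setup.

```python
# Hypothetical ensemble of classifiers with up-sampling and balanced
# class weights; members and settings are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.utils import resample

X, y = load_breast_cancer(return_X_y=True)  # stand-in benchmark dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Naive up-sampling of the minority class, applied to the training split only.
counts = np.bincount(y_tr)
idx_min = np.flatnonzero(y_tr == counts.argmin())
extra = resample(idx_min, n_samples=counts.max() - counts.min(), random_state=0)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000, class_weight="balanced"))),
        ("rf", RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)),
        ("svc", make_pipeline(StandardScaler(), SVC(probability=True, class_weight="balanced", random_state=0))),
    ],
    voting="soft",  # average the predicted probabilities
)
ensemble.fit(X_bal, y_bal)
print("test accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```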
Performance evaluation of hybrid WOA-XGBoost, GWO-XGBoost and BO-XGBoost models to predict blast-induced ground vibration
- Authors: Qiu, Yingui, Zhou, Jian, Khandelwal, Manoj, Yang, Haitao, Yang, Peixi, Li, Chuanqi
- Date: 2022
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 38, no. (2022), p. 4145-4162
- Full Text: false
- Reviewed:
- Description: Accurate prediction of ground vibration caused by blasting has always been a significant issue in the mining industry. Blast-induced ground vibration is harmful to nearby buildings and should be prevented. In this regard, a new intelligent method for predicting peak particle velocity (PPV) induced by blasting has been developed. Accordingly, 150 data sets composed of thirteen uncontrollable and controllable indicators are selected as input variables, and the measured PPV is used as the output target characterizing blast-induced ground vibration. To enhance predictive accuracy, gray wolf optimization (GWO), the whale optimization algorithm (WOA), and the Bayesian optimization algorithm (BO) are applied to fine-tune the hyper-parameters of the extreme gradient boosting (XGBoost) model. The hybrid models GWO-XGBoost, WOA-XGBoost, and BO-XGBoost were verified against the root mean squared error (RMSE), determination coefficient (R2), variance accounted for (VAF), and mean absolute error (MAE). Additionally, XGBoost, CatBoost (CatB), Random Forest, and gradient boosting regression (GBR) models were used as baselines for the developed hybrid XGBoost models. The values of (RMSE, R2, VAF, MAE) obtained from the WOA-XGBoost, GWO-XGBoost, and BO-XGBoost models were (3.0538, 0.9757, 97.68, 2.5032), (3.0954, 0.9751, 97.62, 2.5189), and (3.2409, 0.9727, 97.65, 2.5867), respectively. Findings reveal that, compared with the other machine learning models, the proposed WOA-XGBoost is the most reliable model. The three optimized hybrid models are superior to the GBR, CatB, Random Forest, and plain XGBoost models, confirming the ability of meta-heuristic algorithms to enhance the performance of the PPV model. This can help mine planners and engineers use advanced supervised machine learning with metaheuristic algorithms to predict blast-induced ground vibration. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
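The four scores used to compare the hybrid models are standard regression metrics. A small sketch of how they are typically computed follows; the VAF formula is the commonly used variance-accounted-for definition, assumed here because the abstract does not spell it out, and the sample values are invented.

```python
# Sketch of the four evaluation metrics named in the abstract. The VAF
# formula is the commonly used definition, assumed here since the
# abstract does not define it explicitly.
import numpy as np

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def vaf(y, yhat):
    # Variance accounted for, in percent.
    return (1.0 - np.var(y - yhat) / np.var(y)) * 100.0

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

y = np.array([12.1, 8.4, 15.0, 9.7])      # measured PPV (illustrative values)
yhat = np.array([11.5, 9.0, 14.2, 10.1])  # model predictions (illustrative)
print(rmse(y, yhat), r2(y, yhat), vaf(y, yhat), mae(y, yhat))
```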
Water quality management using hybrid machine learning and data mining algorithms : an indexing approach
- Authors: Aslam, Bilal, Maqsoom, Ahsen, Cheema, Ali, Ullah, Fahim, Alharbi, Abdullah, Imran, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 119692-119705
- Full Text:
- Reviewed:
- Description: One of the key functions of global water resource management authorities is river water quality (WQ) assessment. A water quality index (WQI) is developed for water assessments considering numerous quality-related variables. WQI assessments typically take a long time and are prone to errors during sub-index generation. This can be tackled with the latest machine learning (ML) techniques, renowned for superior accuracy. In this study, water samples were taken from wells in the study area (North Pakistan) to develop WQI prediction models. Four standalone algorithms, i.e., random trees (RT), random forest (RF), M5P, and reduced error pruning tree (REPT), were used, along with 12 hybrid data-mining algorithms (combinations of the standalone algorithms with bagging (BA), cross-validation parameter selection (CVPS), and randomizable filtered classification (RFC)). Using the 10-fold cross-validation technique, the data were separated into two groups (70:30) for algorithm creation. Ten random input permutations were created using Pearson correlation coefficients to identify the input combinations that best improve algorithm prediction. The variables with very low correlations performed poorly, whereas the hybrid algorithms increased the prediction capability of numerous standalone algorithms. The hybrid RT-Artificial Neural Network (RT-ANN), with RMSE = 2.319, MAE = 2.248, NSE = 0.945, and PBIAS = -0.64, outperformed all other algorithms. Most algorithms overestimated WQI values, except for BA-RF, RF, BA-REPT, REPT, RFC-M5P, RFC-REPT, and the ANN-Adaptive Network-Based Fuzzy Inference System (ANN-ANFIS). © 2013 IEEE.
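To make the "hybrid" pattern in the abstract concrete (a meta-learner such as bagging wrapped around a standalone tree learner, as in BA-RT), here is an illustrative sketch; scikit-learn models and synthetic data stand in for the WEKA-style RT/M5P/REPT learners and the well-sample data used in the study.

```python
# Illustrative hybrid: bagging (BA) wrapped around a standalone tree
# learner, mirroring the BA-RT pattern; scikit-learn models stand in
# for the WEKA learners used in the study.
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)  # stand-in for WQI data

standalone = DecisionTreeRegressor(random_state=0)                       # "RT"-like base learner
hybrid = BaggingRegressor(standalone, n_estimators=50, random_state=0)   # "BA-RT"

for name, model in [("RT", standalone), ("BA-RT", hybrid)]:
    scores = cross_val_score(model, X, y, cv=10, scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-scores.mean():.3f}")
```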
Adaptation of a real-time deep learning approach with an analog fault detection technique for reliability forecasting of capacitor banks used in mobile vehicles
- Authors: Rezaei, Mohammad, Fathollahi, Arman, Rezaei, Sajad, Hu, Jiefeng, Gheisarnejad, Meysam, Teimouri, Ali, Rituraj, Rituraj, Mosavi, Amir, Khooban, Mohammad-Hassan
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 132271-132287
- Full Text:
- Reviewed:
- Description: The DC-link capacitor is an essential electronics element that sources or sinks the respective currents. The reliability of DC-link capacitor banks (CBs) encounters many challenges due to their usage in electric vehicles. Heavy shocks may damage the internal capacitors without shutting down the CB. The fundamental development obstacles for CBs are: the lack of consideration of capacitor degradation in reliability assessment, the impact of unforeseen sudden internal capacitor faults on forecasting CB lifetime, and the consequences of faults on CB degradation. Sudden faults change the CB capacitance, which in turn changes the reliability. To estimate the reliability more accurately, the type of fault needs to be detected so that the correct post-fault capacitance can be predicted. To address these practical problems, a new CB model and a reliability assessment formula covering all fault types are first presented; then a new analog fault-detection method is introduced; and a combination of an online-learning long short-term memory (LSTM) network and the fault-detection method is subsequently applied, which adapts the LSTM to sudden internal CB faults so that it correctly predicts CB degradation. To confirm correct LSTM operation, the degradation of four capacitors is recorded over 2000 hours, and the offline faultless degradation values predicted by the LSTM are compared with the actual data. The experimental findings validate the applicability of the proposed method. The codes and data are provided. © 2013 IEEE.
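A minimal sketch of the forecasting ingredient described above, an LSTM predicting the next point of a degradation curve, is given below in PyTorch; the architecture, window length, and synthetic data are illustrative assumptions and do not reproduce the authors' online-learning or fault-adaptation scheme.

```python
# Minimal sketch of an LSTM one-step-ahead degradation forecaster;
# architecture, window length and synthetic data are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic capacitance-degradation curve as a stand-in for the 2000-hour logs.
t = torch.linspace(0, 1, 500)
series = 1.0 - 0.2 * t + 0.01 * torch.randn_like(t)

window = 20  # sliding window of past samples used to predict the next one
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

class DegradationLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, window, hidden)
        return self.head(out[:, -1])   # predict next value from the last step

model = DegradationLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("final training MSE:", loss.item())
```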
Nonsmooth optimization-based hyperparameter-free neural networks for large-scale regression
- Authors: Karmitsa, Napsu, Taheri, Sona, Joki, Kaisa, Paasivirta, Pauliina, Defterdarovic, J., Bagirov, Adil, Mäkelä, Marko
- Date: 2023
- Type: Text , Journal article
- Relation: Algorithms Vol. 16, no. 9 (2023), p.
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text:
- Reviewed:
- Description: In this paper, a new nonsmooth optimization-based algorithm for solving large-scale regression problems is introduced. The regression problem is modeled as a fully connected feedforward neural network with one hidden layer, piecewise linear activation, and the L1-loss function. A modified version of the limited memory bundle method is applied to minimize this nonsmooth objective. In addition, a novel constructive approach for automated determination of the proper number of hidden nodes is developed. Finally, large real-world data sets are used to evaluate the proposed algorithm and to compare it with some state-of-the-art neural network algorithms for regression. The results demonstrate the superiority of the proposed algorithm as a predictive tool on most of the data sets used in the numerical experiments. © 2023 by the authors.
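A sketch of the model class in this abstract, a one-hidden-layer network with piecewise linear (ReLU) activation trained under the L1 loss, follows. The paper minimizes this nonsmooth objective with a limited memory bundle method; plain (sub)gradient descent via autograd is used here only as a simple stand-in, and the data are synthetic.

```python
# One-hidden-layer network with piecewise linear (ReLU) activation and
# the nonsmooth L1 loss. The paper's optimizer is a limited memory
# bundle method; SGD is a stand-in here, not the authors' algorithm.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1000, 10)
y = (X @ torch.randn(10, 1)).abs() + 0.1 * torch.randn(1000, 1)  # toy regression target

model = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Linear(50, 1))
loss_fn = nn.L1Loss()  # nonsmooth L1 objective
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()    # autograd returns a subgradient at the kinks
    opt.step()

print("final L1 training loss:", loss.item())
```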
Applications of machine learning and deep learning in antenna design, optimization, and selection : a review
- Authors: Sarker, Nayan, Podder, Prajoy, Mondal, M., Shafin, Sakib, Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 11, no. (2023), p. 103890-103915
- Full Text:
- Reviewed:
- Description: This review paper provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and the high-frequency structure simulator (HFSS) for ML- and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods are discussed, including parallel optimization, single- and multi-objective optimization, variable-fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization. The review also covers AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, the data generation technique with computational electromagnetics software is described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
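To make the surrogate-based optimization idea mentioned in the review concrete, here is a minimal sketch in which a Gaussian-process surrogate stands in for an expensive EM solver (a CST/HFSS run) so that only promising geometries are simulated; the objective function, parameter range, and acquisition rule are invented for illustration.

```python
# Minimal surrogate-based optimization loop: a Gaussian-process model
# substitutes for an expensive EM simulation, mimicked here by a cheap
# analytic function. All numeric choices are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def em_simulation(length_mm):
    # Stand-in for a solver call: pretend |S11| dips near 28 mm.
    return -20.0 * np.exp(-((length_mm - 28.0) ** 2) / 4.0) + rng.normal(0, 0.2)

# Seed the surrogate with a few "simulated" antenna lengths.
X = rng.uniform(20, 40, size=(6, 1))
y = np.array([em_simulation(x[0]) for x in X])

for it in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = np.linspace(20, 40, 401).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    nxt = cand[np.argmin(mu - 1.96 * sigma)]   # lower-confidence-bound pick
    X = np.vstack([X, [nxt]])
    y = np.append(y, em_simulation(nxt[0]))    # run the "simulation" only here

best = X[np.argmin(y)][0]
print(f"best length after {len(y)} simulations: {best:.2f} mm, S11 = {y.min():.2f} dB")
```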