Churn prediction in telecom industry using machine learning ensembles with class balancing
- Authors: Chowdhury, Abdullahi; Kaisar, Shahriar; Rashid, Md Mamunur; Shafin, Sakib; Kamruzzaman, Joarder
- Date: 2021
- Type: Text; Conference paper
- Relation: 2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering, CSDE 2021, Brisbane, 8-10 December 2021
- Full Text: false
- Reviewed:
- Description: Telecommunication service providers are going through a very competitive and challenging time to retain existing customers by offering new and attractive services (e.g., unlimited local and international calls, high-speed internet, new phones). It is therefore imperative to analyse and predict customer churn behaviour more accurately. One of the major challenges in analysing churn data and building a better prediction model is the imbalanced nature of the data. Customer behaviour for churn and non-churn scenarios may contain resembling features. Using a single classifier or a simple oversampling method to handle data imbalance often struggles to identify the minority (churn) class data. To overcome this issue, we introduce a model that uses a sophisticated oversampling technique in conjunction with ensemble methods, namely Random Forest, Gradient Boost, Extreme Gradient Boost, and AdaBoost. The hyperparameters of the baseline ensemble methods and the oversampling methods were tuned in several ways to investigate their impact on prediction performance. Using a widely used, publicly available customer churn dataset, the prediction performance of the proposed model was evaluated in terms of various metrics, namely accuracy, precision, recall, F1-score, and area under the ROC curve (AUC). Our model outperformed existing models and significantly reduced both false positive and false negative predictions. © IEEE 2022.
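A minimal sketch of the oversample-then-ensemble idea this abstract describes, assuming scikit-learn and synthetic data. The paper uses a more sophisticated oversampling technique and tuned hyperparameters; this stand-in balances classes by simple random duplication of minority rows before fitting two of the named ensembles:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic imbalanced "churn" data: ~10% minority (churn) class.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows until classes are balanced
    (a simple stand-in for SMOTE-style interpolation)."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = np.concatenate([majority, minority, extra])
    return X[idx], y[idx]

X_bal, y_bal = random_oversample(X_tr, y_tr)
# Fit two of the ensembles named in the abstract on the balanced data.
for clf in (RandomForestClassifier(random_state=0),
            GradientBoostingClassifier(random_state=0)):
    clf.fit(X_bal, y_bal)
    print(type(clf).__name__, "F1 on churn class:",
          round(f1_score(y_te, clf.predict(X_te)), 3))
```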
Applications of machine learning and deep learning in antenna design, optimization, and selection: a review
- Authors: Sarker, Nayan; Podder, Prajoy; Mondal, M.; Shafin, Sakib; Kamruzzaman, Joarder
- Date: 2023
- Type: Text; Journal article; Review
- Relation: IEEE Access Vol. 11, no. (2023), p. 103890-103915
- Full Text:
- Reviewed:
- Description: This review paper provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and the high-frequency structure simulator (HFSS) for ML- and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods, including parallel optimization, single- and multi-objective optimization, variable-fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization, are discussed. The review also covers AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, the data generation technique with computational electromagnetics software is described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
Distributed denial of service attack detection using machine learning and class oversampling
- Authors: Shafin, Sakib; Prottoy, Shafin; Abbas, Saif; Hakim, Safayat; Chowdhury, Abdullahi; Rashid, Md Mamunur
- Date: 2021
- Type: Text; Conference paper
- Relation: First International Conference on Applied Intelligence and Informatics, AII 2021, Nottingham, UK, July 30-31, 2021 Vol. 1435, p. 247-259
- Full Text: false
- Reviewed:
- Description: The Distributed Denial of Service (DDoS) attack, one of the most dangerous types of cyber attack, has been reported to increase during the COVID-19 pandemic. Machine learning techniques have been proposed in the literature to build models to detect DDoS attacks. Existing works tested their models with old datasets in which DDoS attacks are not specified, and they mainly focus on detecting the presence of an attack rather than the type of DDoS attack. However, detection of the attack type is vital for the review and analysis of enterprise-level security policy. Cyber attacks are inherently an imbalanced data problem, but none of the existing models treated DDoS attack detection from this perspective. In this work, we present a machine learning model that takes the imbalanced nature of DDoS attack data into consideration when detecting both the presence/absence and the type of a DDoS attack. Extensive experimental analysis with a recent, DDoS-specific dataset shows that the proposed technique can effectively identify DDoS attacks. © 2021, Springer Nature Switzerland AG.
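To illustrate the imbalance-aware, multiclass angle this abstract emphasises, here is a hedged sketch with scikit-learn on synthetic flow-like features (the class labels, feature counts, and the use of `class_weight="balanced"` are illustrative assumptions, not the paper's pipeline):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in: class 0 = benign traffic, classes 1-2 = two DDoS
# attack types, with the attack types deliberately rare.
X, y = make_classification(n_samples=3000, n_features=15, n_informative=8,
                           n_classes=3, weights=[0.8, 0.15, 0.05],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# class_weight="balanced" reweights samples inversely to class frequency --
# one simple way to account for the imbalance the paper highlights, so the
# model reports not just attack presence but the attack type (the class).
clf = RandomForestClassifier(class_weight="balanced", random_state=1)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(classification_report(y_te, pred))  # per-class precision/recall
```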
Detection of android malware using tree-based ensemble stacking model
- Authors: Shafin, Sakib; Ahmed, Md Maroof; Pranto, Mahmud; Chowdhury, Abdullahi
- Date: 2021
- Type: Text; Conference paper
- Relation: 2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering, CSDE 2021, Brisbane, 8-10 December 2021
- Full Text: false
- Reviewed:
- Description: The increasing use of smartphones for everyday activities, from banking and education to social networking, is putting our personal information at risk, as smartphone operating systems and applications are vulnerable to various types of attacks, including malware attacks. The Android operating system is particularly targeted as it is the most widely used mobile operating system. Building a robust detection system that provides protection against recent attacks and delivers not only accurate detection but also the type of the attack is vital for protecting the system. In this study, we propose a two-layer machine learning detection model based on ensemble learning and the stacked generalization method to accurately predict and classify the growing attacks on Android smartphones. We evaluated the proposed model on a very recent dataset, named CIC-Maldroid-2020, which contains 11,598 samples with various malicious attack types. The performance of our proposed model was evaluated on widely used metrics such as accuracy, precision, recall, and F1-score. It outperforms previous studies on the same dataset and achieves an accuracy of 99.49% in classifying each attack type. © IEEE 2022.
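The two-layer stacked-generalization structure described above can be sketched with scikit-learn's `StackingClassifier` (the base learners, meta-learner, and data here are assumptions for illustration, not the paper's exact configuration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic multiclass stand-in for app features with four "attack types".
X, y = make_classification(n_samples=1500, n_features=30, n_informative=12,
                           n_classes=4, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=2)

# Layer 1: tree-based base learners. Layer 2: a meta-learner trained on
# their out-of-fold predictions (stacked generalization via cv=5).
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=2)),
                ("et", ExtraTreesClassifier(random_state=2))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
acc = accuracy_score(y_te, stack.predict(X_te))
print("stacked accuracy:", round(acc, 3))
```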
Obfuscated memory malware detection in resource-constrained iot devices for smart city applications
- Authors: Shafin, Sakib; Karmakar, Gour; Mareels, Iven
- Date: 2023
- Type: Text; Journal article
- Relation: Sensors Vol. 23, no. 11 (2023), p. 5348
- Full Text:
- Reviewed:
- Description: Obfuscated Memory Malware (OMM) presents significant threats to interconnected systems, including smart city applications, for its ability to evade detection through concealment tactics. Existing OMM detection methods primarily focus on binary detection. Their multiclass versions consider only a few families and thereby fail to detect much of the existing and emerging malware. Moreover, their large memory size makes them unsuitable for execution in resource-constrained embedded/IoT devices. To address this problem, in this paper we propose a multiclass but lightweight malware detection method capable of identifying recent malware and suitable for execution on embedded devices. To this end, the method adopts a hybrid model, combining the feature-learning capabilities of convolutional neural networks with the temporal modeling advantage of bidirectional long short-term memory. The proposed architecture exhibits compact size and fast processing speed, making it suitable for deployment in IoT devices that constitute the major components of smart city systems. Extensive experiments with the recent CIC-Malmem-2022 OMM dataset demonstrate that our method outperforms other machine learning-based models proposed in the literature in both detecting OMM and identifying specific attack types. Our proposed method thus offers a robust yet compact model executable in IoT devices for defending against obfuscated malware.
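The CNN-plus-BiLSTM hybrid mentioned above can be sketched in PyTorch; all layer sizes, the feature count, and the class count below are hypothetical placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Hypothetical CNN + BiLSTM hybrid: the 1D convolution learns local
    patterns over the memory-feature vector, and the bidirectional LSTM
    models ordering across the pooled feature positions."""
    def __init__(self, n_features=55, n_classes=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # (B,1,F) -> (B,16,F)
            nn.ReLU(),
            nn.MaxPool1d(2),                             # halve the length
        )
        self.lstm = nn.LSTM(16, 32, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 32, n_classes)         # 2x: both directions

    def forward(self, x):                    # x: (batch, n_features)
        h = self.conv(x.unsqueeze(1))        # (B, 16, F//2)
        h, _ = self.lstm(h.transpose(1, 2))  # (B, F//2, 64)
        return self.head(h[:, -1])           # last step -> class logits

model = CNNBiLSTM()
logits = model(torch.randn(4, 55))           # a batch of 4 feature vectors
print(logits.shape)                          # torch.Size([4, 16])
```

The compactness claim in the abstract corresponds to keeping channel and hidden sizes small, as in this sketch.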
Whose data are reliable: sensor declared data reliability
- Authors: Shafin, Sakib; Karmakar, Gour; Mareels, Iven; Balasubramanian, Venki; Kolluri, Ramachandra
- Date: 2023
- Type: Text; Conference paper
- Relation: 19th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, WiMob 2023, Montreal, Canada, 21-23 June 2023, International Conference on Wireless and Mobile Computing, Networking and Communications Vol. 2023-June, p. 249-254
- Full Text: false
- Reviewed:
- Description: Sensor data is susceptible to faults, noise, and malicious attacks, posing a significant operational and security threat. Therefore, ensuring the reliability of sensor data is critical for real-time monitoring systems. Prior research on sensor data reliability relies on edge or upper-layer devices for data fusion from multiple sensors, employing architectures with major overheads and latency due to transmission and storage demands. An alternative approach is to have the sensor estimate and declare its own reliability. While some methods involve sensors computing data confidence and including it in payloads, limitations arise in the absence of neighboring sensor data, and communication overheads are incurred. To address this problem, this paper proposes an innovative approach to enhance the reliability of sensor data using an intelligent self-declaration process. The proposed reliability estimation is evaluated with three lightweight estimation algorithms, namely the Kalman Filter, the Holt-Winters method, and the Mahalanobis distance, using the sensor's historical data. The reliability level is then added to the three reserved bits of a TCP packet header, which results in zero additional overhead. Experiments conducted using real-world sensor data (from water quality monitoring systems) obtained from our IoT lab demonstrate the effectiveness of our proposal and its potential for application in real-world sensor-based systems. © 2023 IEEE.
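A toy sketch of the self-declaration idea: predict the next reading from the sensor's own history, then map the prediction error to a 3-bit level that would fit the reserved TCP header bits. The smoothing predictor and the error-to-level mapping here are assumptions for illustration, not the paper's Kalman/Holt-Winters/Mahalanobis estimators:

```python
def declare_reliability(history, reading, alpha=0.3, tol=2.0):
    """Hypothetical sensor-side reliability self-declaration: exponential
    smoothing over the sensor's own history predicts the next value, and
    the absolute error is quantised into a 3-bit level (7 = most reliable,
    0 = least), small enough for the three reserved TCP header bits."""
    # Exponential smoothing over the sensor's historical readings.
    pred = history[0]
    for v in history[1:]:
        pred = alpha * v + (1 - alpha) * pred
    # Quantise the absolute prediction error into 8 levels.
    err = abs(reading - pred)
    return max(0, 7 - int(err / tol))

history = [20.1, 20.3, 20.2, 20.4, 20.3]    # e.g. water-temperature readings
print(declare_reliability(history, 20.5))    # small error -> high level
print(declare_reliability(history, 35.0))    # outlier -> low level
```

Because the level occupies only three header bits, declaring it adds no bytes to the packet, matching the zero-overhead claim above.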