Assessing reliability of smart grid against cyberattacks using stability index
- Authors: Rashed, Muhammad , Gondal, Iqbal , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Conference paper
- Relation: 31st Australasian Universities Power Engineering Conference, AUPEC 2021, Virtual, Online 26 to 30 September 2021, Proceedings of 2021 31st Australasian Universities Power Engineering Conference, AUPEC 2021
- Full Text: false
- Reviewed:
- Description: The degradation of the stability index within a smart grid leads to incorrect power generation and poor load balancing. The remote data dependency of the central energy management system (CEMS) causes communication delay, which further leads to poor synchronization within the system. This becomes worse in the presence of cyber-attacks such as stealth or false data injection attacks (FDIA). We used dynamic estimation to obtain state data after the inception of a false data attack and analyzed its impact on the stability index of the smart grid. A lookup table was constructed based on the fluctuations within the voltage estimates of an IEEE bus system. An index number was assigned to the output estimates at each bus, highlighting the level of severity within the grid. We simulated the IEEE 57-bus system in MATLAB to capture and plot the results related to voltage estimates, latency, and inception time delay. The results demonstrate a clear relationship between the stability index and state estimates, especially when the system is under the influence of a cyber-attack. © 2021 IEEE.
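The lookup-table idea in this abstract — mapping per-bus voltage-estimate fluctuation to a severity index — can be sketched in a few lines. The deviation bands below are illustrative assumptions, not thresholds taken from the paper:

```python
# Hedged sketch: assign a severity index to a bus voltage estimate via a
# lookup table of deviation bands. Band thresholds are assumed, not the
# paper's actual values.
SEVERITY_BANDS = [
    (0.02, 0),  # deviation <= 2% of nominal: normal
    (0.05, 1),  # <= 5%: minor
    (0.10, 2),  # <= 10%: major
]

def severity_index(v_estimate: float, v_nominal: float = 1.0) -> int:
    """Return a severity index for one bus voltage estimate (per unit)."""
    deviation = abs(v_estimate - v_nominal) / v_nominal
    for threshold, index in SEVERITY_BANDS:
        if deviation <= threshold:
            return index
    return 3  # beyond all bands: critical, possible FDIA

print(severity_index(0.88))  # a bus pushed far off nominal by injected data
print(severity_index(1.01))  # normal operation
```

In use, the index at each bus would be recomputed as new state estimates arrive, giving the operator a coarse grid-wide severity map.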
Assessing trust level of a driverless car using deep learning
- Authors: Karmakar, Gour , Chowdhury, Abdullahi , Das, Rajkumar , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 7 (2021), p. 4457-4466
- Full Text: false
- Reviewed:
- Description: The increasing adoption of driverless cars is already driving a shift away from traditional transportation systems towards automated ones in many industrial and commercial applications. Recent research has shown that driverless vehicles will considerably reduce traffic congestion, accidents and carbon emissions, and enhance the accessibility of driving to a wider cross-section of people and lifestyle choices. However, at present, people's main concerns are about their privacy and security. Since traditional protocol-layer security mechanisms are not so effective for a distributed system, trust value-based security mechanisms, a type of pervasive security, are emerging as popular and promising techniques. A few statistical, non-learning-based models for measuring the trust level of a driverless car are available in the current literature. These are not very effective because they cannot capture the extremely distributed, dynamic, and complex nature of traffic systems. To bridge this research gap, in this paper, for the first time, we propose two deep learning-based models that measure the trustworthiness of a driverless car and its major On-Board Unit (OBU) components. The second model also determines which OBU components were breached during the driving operation. Results produced using real and simulated traffic data demonstrate that our proposed DNN-based deep learning models outperform other machine learning models in assessing the trustworthiness of an individual car as well as its OBU components. The average detection precision for the car, LiDAR, camera, and radar is 0.99, 0.96, 0.81, and 0.83, respectively, which indicates the potential real-life application of our models in assessing the trust level of a driverless car. © 2000-2011 IEEE.
Churn prediction in telecom industry using machine learning ensembles with class balancing
- Authors: Chowdhury, Abdullahi , Kaisar, Shahriar , Rashid, Md Mamunur , Shafin, Sakib , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Conference paper
- Relation: 2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering, CSDE 2021, Brisbane, 8-10 December 2021
- Full Text: false
- Reviewed:
- Description: Telecommunication service providers are going through a very competitive and challenging time to retain existing customers by offering new and attractive services (e.g., unlimited local and international calls, high-speed internet, new phones). It is therefore imperative to analyse and predict customer churn behaviour more accurately. One of the major challenges in analysing churn data and building a better prediction model is the imbalanced nature of the data. Customer behaviour for churn and non-churn scenarios may contain resembling features. Using a single classifier or a simple oversampling method to handle data imbalance often struggles to identify the minority (churn) class. To overcome this issue, we introduce a model that uses a sophisticated oversampling technique in conjunction with ensemble methods, namely Random Forest, Gradient Boost, Extreme Gradient Boost, and AdaBoost. The hyperparameters of the baseline ensemble methods and the oversampling methods were tuned in several ways to investigate their impact on prediction performance. Using a widely used, publicly available customer churn dataset, the prediction performance of the proposed model was evaluated in terms of various metrics, namely accuracy, precision, recall, F1-score, and AUC under the ROC curve. Our model outperformed existing models and significantly reduced both false positive and false negative predictions. © IEEE 2022.
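The core of the class-balancing step described above is generating synthetic minority (churn) samples. A minimal SMOTE-style sketch in plain Python, interpolating between a minority sample and one of its nearest neighbours (the paper's exact oversampling technique and parameters are not given here, so treat this as a generic stand-in):

```python
import random

def smote_like_oversample(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between a
    random minority sample and one of its k nearest neighbours."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((s for s in minority if s is not base),
                            key=lambda s: dist(base, s))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # position along the line segment base -> nb
        synthetic.append([x + gap * (y - x) for x, y in zip(base, nb)])
    return synthetic

# Toy minority class: [monthly_bill, num_complaints] for churned customers
churners = [[70.0, 1.0], [65.0, 2.0], [80.0, 1.0]]
new_samples = smote_like_oversample(churners, n_new=5)
print(len(new_samples))  # 5 synthetic churn samples
```

The balanced set would then be fed to the ensemble classifiers (Random Forest, XGBoost, etc.) exactly as with the real samples.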
Green underwater wireless communications using hybrid optical-acoustic technologies
- Authors: Islam, Kazi , Ahmad, Iftekhar , Habibi, Daryoush , Zahed, M. , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 85109-85123
- Full Text:
- Reviewed:
- Description: Underwater wireless communication is a rapidly growing field, especially with the recent emergence of technologies such as autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs). To support the high-bandwidth applications using these technologies, underwater optics has attracted significant attention, alongside its complementary technology - underwater acoustics. In this paper, we propose a hybrid opto-acoustic underwater wireless communication model that reduces network power consumption and supports high-data rate underwater applications by selecting appropriate communication links in response to varying traffic loads and dynamic weather conditions. Underwater optics offers high data rates and consumes less power. However, due to the severe absorption of light in the medium, the communication range is short in underwater optics. Conversely, acoustics suffers from low data rate and high power consumption, but provides longer communication ranges. Since most underwater equipment relies on battery power, energy-efficient communication is critical for reliable underwater communications. In this work, we derive analytical models for both underwater acoustics and optics, and calculate the required transmit power for reliable communications in various underwater communication environments. We then formulate an optimization problem that minimizes the network power consumption for carrying data from underwater nodes to surface sinks under varying traffic loads and weather conditions. The proposed optimization model can be solved offline periodically, hence the additional computational complexity to find the optimum solution for larger networks is not a limiting factor for practical applications. Our results indicate that the proposed technique yields up to 35% power savings compared to existing opto-acoustic solutions. © 2013 IEEE.
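The link-selection trade-off described above (optics: cheap but absorption-limited; acoustics: long range but power-hungry) can be illustrated with a toy power model. All coefficients below — the optical attenuation, acoustic spreading/absorption figures, receiver sensitivity, and the fixed acoustic modem overhead — are illustrative assumptions, not values from the paper:

```python
import math

def required_optical_power(d, attenuation=0.15, p_rx_min=1e-6):
    """Transmit power (W) for an optical link: Beer-Lambert absorption
    (attenuation in 1/m, assumed) plus geometric spreading."""
    return p_rx_min * math.exp(attenuation * d) * d ** 2

def required_acoustic_power(d, p_rx_min=1e-6, spreading=1.5,
                            absorption_db_per_m=0.003, overhead=0.5):
    """Transmit power (W) for an acoustic link: spreading loss plus
    absorption, and a fixed transducer/electronics overhead (acoustic
    modems draw far more baseline power than optical ones)."""
    loss_db = 10 * spreading * math.log10(max(d, 1.0)) \
        + absorption_db_per_m * d
    return overhead + p_rx_min * 10 ** (loss_db / 10)

def pick_link(d):
    """Choose the link that needs less power at range d (metres)."""
    po, pa = required_optical_power(d), required_acoustic_power(d)
    return ("optical", po) if po <= pa else ("acoustic", pa)

print(pick_link(20)[0])   # short range: optics needs far less power
print(pick_link(500)[0])  # long range: absorption rules optics out
```

A network-level optimiser, as in the paper, would apply this kind of per-hop decision jointly across all nodes, traffic loads, and water conditions.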
How much I can rely on you : measuring trustworthiness of a twitter user
- Authors: Das, Rajkumar , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Dependable and Secure Computing Vol. 18, no. 2 (2021), p. 949-966
- Full Text:
- Reviewed:
- Description: Trustworthiness in an online environment is essential because individuals and organizations can easily be misled by false and malicious information receiving from untrustworthy users. Though existing methods assess users' trustworthiness by exploiting Twitter account properties, their efficacy is inadequate because of Twitter's restriction on profile and tweet size, the existence of missing or insufficient profiles, and ease to create fake accounts or relationships to pretend as trustworthy. In this paper, we present a holistic approach by exploiting ideas perceived from real-world organizations for trust estimation along with available Twitter information. Users' trustworthiness is determined by considering their credentials, recommendation from referees and the quality of the information in their Twitter accounts and tweets. We establish the feasibility of our approach analytically and further devise a multi-objective cost function for the A
Malware detection in edge devices with fuzzy oversampling and dynamic class weighting
- Authors: Khoda, Mahbub , Kamruzzaman, Joarder , Gondal, Iqbal , Imam, Tasadduq , Rahman, Ashfaqur
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 112, no. (2021), p.
- Full Text: false
- Reviewed:
- Description: In the Internet-of-Things (IoT) domain, edge devices are increasingly used for data accumulation, preprocessing, and analytics. Intelligent integration of edge devices with Artificial Intelligence (AI) facilitates real-time analysis and decision making. However, these devices simultaneously provide additional attack opportunities for malware developers, potentially leading to information and financial loss. Machine learning approaches can detect such attacks, but their performance degrades when benign samples substantially outnumber malware samples in training data. Existing approaches for such imbalanced data assume samples are represented by continuous features and thus can generate invalid samples when malware applications are represented by binary features. We propose a novel malware oversampling technique that addresses this issue. Further, we propose two approaches for malware detection. Our first approach uses fuzzy set theory, while the second dynamically assigns higher priority to malware samples using a novel loss function. Combined with our oversampling technique, the proposed approaches attain over 9% improvement over competing methods in terms of F1-score. Our approaches can, therefore, result in enhanced privacy and security in edge computing services. © 2021 Elsevier B.V.
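The key constraint above — synthetic samples must remain valid when features are binary (an API permission is either present or not) — can be met without interpolation. A simplified stand-in for the paper's fuzzy oversampling, using a per-feature majority vote over a base sample and its nearest (Hamming) neighbours so every synthetic feature stays strictly 0/1:

```python
import random

def binary_oversample(minority, n_new, k=3, seed=1):
    """Create synthetic malware samples with strictly binary features by
    majority-voting each feature over a base sample and its k nearest
    minority neighbours (Hamming distance). Simplified stand-in for the
    paper's fuzzy technique."""
    rng = random.Random(seed)
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    out = []
    for _ in range(n_new):
        base = rng.choice(minority)
        nbrs = sorted((s for s in minority if s is not base),
                      key=lambda s: hamming(base, s))[:k]
        votes = [base] + nbrs
        out.append([1 if sum(col) * 2 > len(votes) else 0
                    for col in zip(*votes)])
    return out

# Toy minority class: binary permission/API-presence vectors
malware = [[1, 0, 1, 1], [1, 1, 1, 0], [1, 0, 0, 1]]
synth = binary_oversample(malware, n_new=4)
print(synth)
```

Unlike linear interpolation, this can never emit a fractional value like 0.4 for a feature that only has meaning as present/absent.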
State estimation within IED-based smart grid using Kalman estimates
- Authors: Rashed, Muhammad , Gondal, Iqbal , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 10, no. 15 (2021), p.
- Full Text:
- Reviewed:
- Description: State estimation is a traditional and reliable technique within power distribution and control systems. It is used for building a topology of the power grid network based on state measurements and the current operational state of different nodes and buses. The protection of sensors and measurement units such as Intelligent Electronic Devices (IEDs) in a Central Energy Management System (CEMS) against False Data Injection Attacks (FDIAs) is a big concern to grid operators. These are a special kind of cyber-attack directed at the state and measurement data in a way that misleads the CEMS into making incorrect decisions and creates generation-load imbalance. They are known to bypass the traditional bad data detection systems within central estimators. This paper presents the use of an additional, novel state estimator based on the Kalman filter alongside traditional Distributed State Estimation (DSE) based on Weighted Least Squares (WLS). The Kalman filter is a feedback control mechanism that constantly updates itself through state prediction and state correction, improving the estimates. The additional estimator's output is compared with the results of DSE in order to identify anomalies and the injection of false data. We evaluated our methodology by simulating the proposed technique using MATPOWER over the IEEE 14-, 30-, 118-, and 300-bus systems. The results clearly demonstrate the superiority of the proposed method over traditional state estimation. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
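The predict/correct loop of the Kalman filter and the cross-check against a second estimator can be sketched for a single scalar state. The noise variances, tolerance, and sample data below are illustrative assumptions (the paper works with full MATPOWER bus systems, not a scalar):

```python
def kalman_1d(measurements, q=1e-4, r=0.04, x0=1.0, p0=1.0):
    """Scalar Kalman filter over a voltage measurement stream.
    q, r: assumed process / measurement noise variances."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q            # predict (static state model)
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # correct with the measurement residual
        p = (1 - k) * p
        out.append(x)
    return out

def flag_fdia(kalman_est, wls_est, tol=0.05):
    """Flag samples where the two estimators disagree beyond tolerance,
    mirroring the paper's idea of comparing the Kalman output with WLS-
    based DSE to spot injected false data."""
    return [abs(a - b) > tol for a, b in zip(kalman_est, wls_est)]

noisy_v = [1.01, 0.99, 1.00, 1.02, 0.98, 1.00]  # p.u. measurements
wls_v   = [1.00, 1.00, 1.00, 1.20, 1.00, 1.00]  # sample 3: injected value
est = kalman_1d(noisy_v)
print(flag_fdia(est, wls_v))  # only the injected sample is flagged
```

Because the Kalman estimate tracks the measurement history, a single injected WLS value stands out as a large residual even though it would pass a naive range check.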
Trustworthiness of self-driving vehicles for intelligent transportation systems in industry applications
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 17, no. 2 (2021), p. 961-970
- Full Text: false
- Reviewed:
- Description: To enhance industrial production and automation, rapid transportation of raw materials and finished products to and from distributed factories, warehouses and outlets is essential. To reduce cost with increased efficiency, this will increasingly see the use of connected and self-driving commercial vehicles fitted with industrial-grade sensors on roads shared with normal and self-driving passenger vehicles. For its wide adoption, the trustworthiness of self-driving vehicles in the intelligent transportation system (ITS) is pivotal. In this article, we introduce a novel model to measure the overall trustworthiness of a self-driving vehicle considering On-Board Unit (OBU) components, GPS data and safety messages. In calculating the trustworthiness of individual OBU components, CertainLogic and the beta distribution function (BDF) are used. Those trust values are fused using both the Dempster-Shafer Theory (DST) and a logical operator of CertainLogic. Results of our simulation show that our proposed method can effectively determine the trust of self-driving vehicles. © 2005-2012 IEEE.
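The beta distribution function (BDF) step above has a standard closed form: with `positive` supporting and `negative` contradicting observations, expected trust is the mean of a Beta(positive+1, negative+1) posterior. The fusion operator below is a simple conjunctive product, a hedged stand-in for the paper's CertainLogic/DST fusion, and the observation counts are made up:

```python
def beta_trust(positive, negative):
    """Expected trust under a Beta(positive+1, negative+1) posterior,
    the usual beta-distribution trust estimate."""
    return (positive + 1) / (positive + negative + 2)

def fuse_and(t1, t2):
    """Conjunctive fusion of two component trust values (illustrative
    stand-in for a CertainLogic AND operator / DST combination)."""
    return t1 * t2

# Hypothetical observation counts for two OBU components
lidar_trust = beta_trust(positive=90, negative=10)  # ~0.892
gps_trust   = beta_trust(positive=45, negative=5)   # ~0.885
print(round(fuse_and(lidar_trust, gps_trust), 3))
```

The +1 terms encode a uniform prior, so a component with no observations starts at trust 0.5 rather than an undefined value.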
A machine learning approach for prediction of pregnancy outcome following IVF treatment
- Authors: Hassan, Md Rafiul , Al-Insaif, Sadiq , Hossain, Muhammad , Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 32, no. 7 (2020), p. 2283-2297
- Full Text: false
- Reviewed:
- Description: Infertility affects one out of seven couples around the world. Therefore, the best possible management of the in vitro fertilization (IVF) treatment and patient advice is crucial for both patients and medical practitioners. The ultimate concern of the patients is the success of an IVF procedure, which depends on a number of influencing attributes. Without any automated tool, it is hard for practitioners to assess any influencing trend of the attributes and factors that might lead to a successful IVF pregnancy. This paper proposes a hill climbing feature (attribute) selection algorithm coupled with automated classification using machine learning techniques, with the aim of analyzing and predicting IVF pregnancy with greater accuracy. Using 25 attributes, we assessed the prediction ability of IVF pregnancy success for five different machine learning models, namely multilayer perceptron (MLP), support vector machines (SVM), C4.5, classification and regression trees (CART) and random forest (RF). The prediction ability was measured in terms of widely used performance metrics, namely accuracy rate, F-measure and AUC. The feature selection algorithm reduced the number of most influential attributes to nineteen for MLP, sixteen for RF, seventeen for SVM, twelve for C4.5 and eight for CART. Overall, the most influential attributes identified are: ‘age’, ‘indication’ of fertility factor, ‘Antral Follicle Counts (AFC)’, ‘NbreM2’, ‘method of sperm collection’, ‘Chamotte’, ‘Fertilization rate in vitro’, ‘Follicles on day 14’ and ‘Embryo transfer day.’ The machine learning models trained with the selected set of features significantly improved the prediction accuracy of IVF pregnancy success to a level considerably higher than those reported in the current literature. © 2018, The Natural Computing Applications Forum.
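Hill-climbing feature selection, as used above, is a greedy loop: repeatedly add the single attribute that most improves a validation score and stop when no addition helps. A minimal sketch with a toy evaluator (the scoring function, attribute gains, and stopping rule here are assumptions; the paper's exact variant is not specified in the abstract):

```python
def hill_climb_select(features, evaluate):
    """Greedy forward hill climbing: in each pass, add the feature that
    most improves the score; stop when no single addition helps.
    `evaluate` maps a feature subset to a validation score."""
    selected, best = [], evaluate(frozenset())
    improved = True
    while improved:
        improved = False
        for f in features:
            if f in selected:
                continue
            score = evaluate(frozenset(selected + [f]))
            if score > best:
                best, choice, improved = score, f, True
        if improved:
            selected.append(choice)
    return selected, best

# Toy evaluator: pretend 'age' and 'AFC' carry signal, others add noise
def toy_eval(subset):
    gains = {"age": 0.10, "AFC": 0.06, "Chamotte": -0.02}
    return 0.60 + sum(gains.get(f, -0.01) for f in subset)

print(hill_climb_select(["age", "AFC", "Chamotte", "NbreM2"], toy_eval))
```

In the real pipeline, `evaluate` would be cross-validated accuracy of the chosen classifier (MLP, SVM, etc.) on the candidate subset, which is why each model ends up with a different selected-attribute count.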
A robust forgery detection method for copy-move and splicing attacks in images
- Authors: Islam, Mohammad , Karmakar, Gour , Kamruzzaman, Joarder , Murshed, Manzur
- Date: 2020
- Type: Text , Journal article
- Relation: Electronics Vol. 9, no. 9 (2020), p. 1-22
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) image sensors, social media, and smartphones generate huge volumes of digital images every day. Easy availability and usability of photo editing tools have made forgery attacks, primarily splicing and copy-move attacks, effortless, causing cybercrimes to be on the rise. While several models have been proposed in the literature for detecting these attacks, the robustness of those models has not been investigated when (i) a low number of tampered images are available for model building or (ii) images from IoT sensors are distorted due to image rotation or scaling caused by unwanted or unexpected changes in the sensors' physical set-up. Moreover, further improvement in detection accuracy is needed for real-world security management systems. To address these limitations, in this paper, an innovative image forgery detection method has been proposed based on Discrete Cosine Transformation (DCT) and Local Binary Pattern (LBP) and a new feature extraction method using the mean operator. First, images are divided into non-overlapping fixed-size blocks and 2D block DCT is applied to capture changes due to image forgery. Then LBP is applied to the magnitude of the DCT array to enhance forgery artifacts. Finally, the mean value of a particular cell across all LBP blocks is computed, which yields a fixed number of features and presents a more computationally efficient method. Using Support Vector Machine (SVM), the proposed method has been extensively tested on four well-known publicly available gray scale and color image forgery datasets, and additionally on an IoT based image forgery dataset that we built. Experimental results reveal the superiority of our proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time and demonstrate robustness against low availability of forged training samples.
- Description: This research was funded by Research Priority Area (RPA) scholarship of Federation University Australia.
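The DCT → LBP → per-cell mean pipeline described in the abstract above can be sketched end-to-end for one tiny block. This is a naive, unoptimised illustration (block size, LBP neighbourhood, and the synthetic test block are assumptions; the paper's actual parameters are not given here):

```python
import math

def dct2(block):
    """Orthonormal 2D DCT-II of a square block (naive O(n^4))."""
    n = len(block)
    def c(k): return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    return [[c(u) * c(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for x in range(n) for y in range(n))
             for v in range(n)] for u in range(n)]

def lbp(mag):
    """3x3 local binary pattern over a magnitude array (borders left 0)."""
    h, w = len(mag), len(mag[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(1 << b for b, (di, dj) in enumerate(offs)
                            if mag[i + di][j + dj] >= mag[i][j])
    return out

def mean_features(lbp_blocks):
    """Per-cell mean across all LBP blocks: a fixed-length feature
    vector regardless of how many blocks the image produced."""
    n = len(lbp_blocks)
    flat = [[v for row in b for v in row] for b in lbp_blocks]
    return [sum(col) / n for col in zip(*flat)]

# One synthetic 4x4 block standing in for a whole image's blocks
block = [[(i * 7 + j * 13) % 31 for j in range(4)] for i in range(4)]
mag = [[abs(v) for v in row] for row in dct2(block)]
features = mean_features([lbp(mag)])
print(len(features))  # 16: one feature per cell, independent of image size
```

The mean operator is what keeps the feature count fixed: whether an image yields ten blocks or ten thousand, the SVM always sees one averaged vector per cell.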
A survey on behavioral pattern mining from sensor data in Internet of Things
- Authors: Rashid, Md Mamunur , Kamruzzaman, Joarder , Hassan, Mohammad , Shahriar Shafin, Sakib , Bhuiyan, Md Zakirul
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 33318-33341
- Full Text:
- Reviewed:
- Description: The deployment of large-scale wireless sensor networks (WSNs) for the Internet of Things (IoT) applications is increasing day-by-day, especially with the emergence of smart city services. The sensor data streams generated from these applications are largely dynamic, heterogeneous, and often geographically distributed over large areas. For high-value use in business, industry and services, these data streams must be mined to extract insightful knowledge, such as about monitoring (e.g., discovering certain behaviors over a deployed area) or network diagnostics (e.g., predicting faulty sensor nodes). However, due to the inherent constraints of sensor networks and application requirements, traditional data mining techniques cannot be directly used to mine IoT data streams efficiently and accurately in real-time. In the last decade, a number of works have been reported in the literature proposing behavioral pattern mining algorithms for sensor networks. This paper presents the technical challenges that need to be considered for mining sensor data. It then provides a thorough review of the mining techniques proposed in the recent literature to mine behavioral patterns from sensor data in IoT, and their characteristics and differences are highlighted and compared. We also propose a behavioral pattern mining framework for IoT and discuss possible future research directions in this area. © 2013 IEEE.
A survey on context awareness in big data analytics for business applications
- Authors: Dinh, Loan , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: Knowledge and Information Systems Vol. 62, no. 9 (2020), p. 3387-3415
- Full Text:
- Reviewed:
- Description: The concept of context awareness has been in existence since the 1990s. Though initially applied exclusively in computer science, over time it has increasingly been adopted by many different application domains such as business, health and the military. Contexts change continuously for objective reasons, such as economic conditions, political matters and social issues. The adoption of big data analytics by businesses is facilitating such change at an even faster rate in much more complicated ways. The potential benefits of embedding contextual information into an application are already evidenced by the improved outcomes of the existing context-aware methods in those applications. Since big data is growing very rapidly, context awareness in big data analytics has become more important and timely because of its proven efficiency in big data understanding and preparation, contributing to extracting more accurate value from big data. Many surveys have been published on context-based methods such as context modelling and reasoning, workflow adaptations, computational intelligence techniques and mobile ubiquitous systems. However, to our knowledge, no survey of context-aware methods on big data analytics for business applications supported by enterprise-level software has been published to date. To bridge this research gap, in this paper first, we present a definition of context, its modelling and evaluation techniques, and highlight the importance of contextual information for big data analytics. Second, the works in three key business application areas that are context-aware and/or exploit big data analytics are thoroughly reviewed. Finally, the paper concludes by highlighting a number of contemporary research challenges, including issues concerning modelling, managing and applying business contexts to big data analytics. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
API based discrimination of ransomware and benign cryptographic programs
- Authors: Black, Paul , Sohail, Ammar , Gondal, Iqbal , Kamruzzaman, Joarder , Vamplew, Peter , Watters, Paul
- Date: 2020
- Type: Text , Conference paper
- Relation: 27th International Conference on Neural Information Processing, ICONIP 2020, Bangkok, 18 to 22 November 2020, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 12533 LNCS, p. 177-188
- Full Text: false
- Reviewed:
- Description: Ransomware is a widespread class of malware that encrypts files in a victim’s computer and extorts victims into paying a fee to regain access to their data. Previous research has proposed methods for ransomware detection using machine learning techniques. However, this research has not examined the precision of ransomware detection. While existing techniques show an overall high accuracy in detecting novel ransomware samples, previous research does not investigate the discrimination of novel ransomware from benign cryptographic programs. This is a critical, practical limitation of current research; machine learning based techniques would be limited in their practical benefit if they generated too many false positives (at best) or deleted/quarantined critical data (at worst). We examine the ability of machine learning techniques based on Application Programming Interface (API) profile features to discriminate novel ransomware from benign-cryptographic programs. This research provides a ransomware detection technique that provides improved detection accuracy and precision compared to other API profile based ransomware detection techniques while using significantly simpler features than previous dynamic ransomware detection research. © 2020, Springer Nature Switzerland AG.
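The "API profile features" mentioned above are typically normalised frequency vectors over a fixed vocabulary of API calls observed during execution. A minimal sketch (the vocabulary, trace, and call names below are hypothetical examples, not the paper's feature set):

```python
from collections import Counter

def api_profile(call_trace, vocab):
    """Normalised frequency of each vocabulary API call in a trace,
    giving a fixed-length feature vector for a classifier."""
    counts = Counter(call_trace)
    total = max(len(call_trace), 1)
    return [counts[a] / total for a in vocab]

# Hypothetical vocabulary and trace; real profiles span many more APIs
VOCAB = ["CryptEncrypt", "FindFirstFile", "DeleteFile", "WriteFile"]
ransomware_trace = ["FindFirstFile", "CryptEncrypt", "WriteFile",
                    "DeleteFile", "CryptEncrypt", "WriteFile"]
print(api_profile(ransomware_trace, VOCAB))
```

The discrimination problem the paper highlights is visible even in this toy: a benign encryption utility would produce a similar `CryptEncrypt`-heavy profile, so the classifier must learn subtler distributional differences rather than the mere presence of crypto APIs.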
Attacks on self-driving cars and their countermeasures : a survey
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Jolfaei, Alireza , Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITS or connected vehicles. Both forms use data communications between Vehicle-To-Vehicle (V2V), Vehicle-To-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the potential compromise of self-driving cars is increasingly perceived as a serious threat. Therefore, analyzing the threats and attacks on self-driving cars and ITSs, and the corresponding countermeasures to reduce them, is needed. For this reason, some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to our knowledge, they have not covered real attacks that have already happened to self-driving cars. To bridge this research gap, in this paper, we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks and their impacts on those cars along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by the manufacturers and governments. This survey includes recent works on how a self-driving car can ensure resilient operation even under ongoing cyber-attack. We also provide further research directions to improve the security issues associated with self-driving cars. © 2013 IEEE.
Cyberattacks detection in iot-based smart city applications using machine learning techniques
- Authors: Rashid, Md Mamunur , Kamruzzaman, Joarder , Hassan, Mohammad , Imam, Tasadduq , Gordon, Steven
- Date: 2020
- Type: Text , Journal article
- Relation: International Journal of Environmental Research and Public Health Vol. 17, no. 24 (2020), p. 1-21
- Full Text:
- Reviewed:
- Description: In recent years, the widespread deployment of the Internet of Things (IoT) applications has contributed to the development of smart cities. A smart city utilizes IoT-enabled technologies, communications and applications to maximize operational efficiency and enhance both the service providers’ quality of services and people’s wellbeing and quality of life. With the growth of smart city networks, however, comes the increased risk of cybersecurity threats and attacks. IoT devices within a smart city network are connected to sensors linked to large cloud servers and are exposed to malicious attacks and threats. Thus, it is important to devise approaches to prevent such attacks and protect IoT devices from failure. In this paper, we explore an attack and anomaly detection technique based on machine learning algorithms (LR, SVM, DT, RF, ANN and KNN) to defend against and mitigate IoT cybersecurity threats in a smart city. Contrary to existing works that have focused on single classifiers, we also explore ensemble methods such as bagging, boosting and stacking to enhance the performance of the detection system. Additionally, we consider an integration of feature selection, cross-validation and multi-class classification for the discussed domain, which has not been well considered in the existing literature. Experimental results with the recent attack dataset demonstrate that the proposed technique can effectively identify cyberattacks and the stacking ensemble model outperforms comparable models in terms of accuracy, precision, recall and F1-Score, implying the promise of stacking in this domain. © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
Friendly jammer against an adaptive eavesdropper in a relay-aided network
- Authors: Giti, J. E. , Sakzad, A. , Srinivasan, B. , Kamruzzaman, Joarder , Gaire, R.
- Date: 2020
- Type: Text , Conference paper
- Relation: 16th IEEE International Wireless Communications and Mobile Computing Conference, IWCMC 2020; Limassol, Cyprus; 15-19 June 2020, International Wireless Communications and Mobile Computing, IWCMC 2020 p. 1707-1712
- Full Text: false
- Reviewed:
- Description: In this paper, we consider the problem of information theoretic security for a single-input single-output (SISO) relay-aided network in the presence of an adaptive eavesdropper. We assess the impact of deceptive friendly jammers on the secrecy of communication in this network when countering adaptive eavesdroppers. Specifically, we derive the secrecy capacity and secrecy outage probability of the network and compare the results in the absence and presence of a deceptive friendly jammer. Our results show that the secrecy capacity of the network increases while the achievable secrecy outage probability decreases significantly in the presence of friendly jammer to nullify the effect of the adversary. Numerical results, obtained through computer simulations, under different scenarios of varying jamming power and average main channel gain to average eavesdropper channel gain ratio demonstrate the effectiveness of friendly jammer in providing physical layer security. © 2020 IEEE.
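The mechanism described above — a friendly jammer raising the eavesdropper's noise floor while (ideally) being nulled at the legitimate receiver — follows directly from the standard secrecy-capacity expression. A toy numerical sketch (all SNR and jamming figures are illustrative, and the perfect nulling at the receiver is an assumption):

```python
import math

def secrecy_capacity(snr_main, snr_eve):
    """Cs = [log2(1 + SNR_main) - log2(1 + SNR_eve)]^+ in bits/s/Hz."""
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))

def snr_with_jamming(signal, noise, jam_at_rx=0.0):
    """Jamming power received at a node adds to its noise floor."""
    return signal / (noise + jam_at_rx)

# Friendly jammer degrades only the eavesdropper's channel (assumed
# nulled at the legitimate receiver, e.g. via beamforming).
no_jam = secrecy_capacity(snr_with_jamming(100, 1),
                          snr_with_jamming(80, 1))
jammed = secrecy_capacity(snr_with_jamming(100, 1),
                          snr_with_jamming(80, 1, jam_at_rx=9))
print(round(no_jam, 3), round(jammed, 3))  # jamming widens the gap
```

Even though the eavesdropper's unjammed channel is nearly as good as the main channel (secrecy capacity close to zero), modest jamming power at the eavesdropper opens a substantial secrecy margin, which is the effect the paper quantifies against an adaptive adversary.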
Hybrid intrusion detection system based on the stacking ensemble of C5 decision tree classifier and one class support vector machine
- Authors: Khraisat, Ansam , Gondal, Iqbal , Vamplew, Peter , Kamruzzaman, Joarder , Alazab, Ammar
- Date: 2020
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 9, no. 1 (2020), p.
- Full Text:
- Reviewed:
- Description: Cyberattacks are becoming increasingly sophisticated, necessitating efficient intrusion detection mechanisms to monitor computer resources and generate reports on anomalous or suspicious activities. Many Intrusion Detection Systems (IDSs) use a single classifier for identifying intrusions. Single-classifier IDSs are unable to achieve high accuracy and low false alarm rates due to the polymorphic, metamorphic, and zero-day behaviors of malware. In this paper, a Hybrid IDS (HIDS) is proposed by combining the C5 decision tree classifier and a One-Class Support Vector Machine (OC-SVM). HIDS combines the strengths of a Signature-based Intrusion Detection System (SIDS) and an Anomaly-based Intrusion Detection System (AIDS). The SIDS was developed based on the C5.0 decision tree classifier and the AIDS was developed based on the one-class Support Vector Machine (SVM). This framework aims to identify both well-known intrusions and zero-day attacks with high detection accuracy and low false-alarm rates. The proposed HIDS is evaluated using the benchmark datasets, namely the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) and Australian Defence Force Academy (ADFA) datasets. Studies show that the performance of HIDS is enhanced compared to SIDS and AIDS in terms of detection rate and low false-alarm rates. © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
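The SIDS-then-AIDS pipeline described above can be sketched as a two-stage decision. Here the signature set is a plain lookup and `anomaly_score` is a toy stand-in for the one-class SVM decision function; event names and the threshold are hypothetical:

```python
def hybrid_ids(event, signatures, anomaly_score, threshold=0.8):
    """Two-stage hybrid IDS sketch: known signatures are matched first;
    unmatched events fall through to the anomaly stage, which can catch
    zero-day behaviour. `anomaly_score` stands in for the OC-SVM."""
    if event in signatures:
        return "intrusion (known signature)"
    if anomaly_score(event) > threshold:
        return "intrusion (anomaly / possible zero-day)"
    return "benign"

known_bad = {"exploit_cve_x"}
score = lambda e: 0.95 if e == "weird_syscall_burst" else 0.1  # toy scorer
print(hybrid_ids("exploit_cve_x", known_bad, score))
print(hybrid_ids("weird_syscall_burst", known_bad, score))
print(hybrid_ids("normal_login", known_bad, score))
```

The ordering matters: signature matching is cheap and precise, so it runs first, and the costlier, noisier anomaly stage only sees traffic the signatures could not explain.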
IoT Sensor Numerical Data Trust Model Using Temporal Correlation
- Authors: Karmakar, Gour , Das, Rajkumar , Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Internet of Things Journal Vol. 7, no. 4 (2020), p. 2573-2581
- Full Text: false
- Reviewed:
- Description: Internet of Things (IoT) applications are increasingly being adopted for innovative and cost-effective services. However, IoT devices and data are susceptible to various attacks, including cyberattacks, which emphasizes the need for pervasive security measures such as trust evaluation on the fly. There exist several IoT numerical data trustworthiness measures based on quality of information (QoI) and correlations. The QoI measurement techniques rely excessively on heuristics, while the correlation-based approaches predict temporal correlation using an average or moving average, which limits their efficacy. To improve accuracy and reliability, we propose a model for assessing the trust of IoT sensor numerical data by representing the temporal correlation using a temporal relationship. We represent the temporal relationship between data within a time window in two ways: first, using the discrete cosine transform (DCT) coefficients of daily data; and second, to capture the impact of subtle variation, we further divide the daily data into time windows and calculate the average of each DCT coefficient over all windows. These two feature sets are then used to develop two independent deep neural network models. The model outcomes are fused by the Dempster-Shafer theory to calculate trust scores. The strength of our model is evaluated using both trustworthy and untrustworthy data - the former collected from sensors under controlled supervision in a smart city project in Melbourne, Australia, and the latter generated either by simulating breached sensors or by perturbing real data. Our proposed approach outperforms a contemporary correlation-based approach in terms of trust score accuracy and consistency. © 2014 IEEE.
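The two feature sets described above — DCT coefficients of the full day, and per-coefficient averages over sub-windows — can be sketched directly. The unnormalised DCT-II, the 24-sample day, the window size, and the number of retained coefficients are all illustrative assumptions:

```python
import math

def dct_coeffs(series, n_keep=4):
    """First n_keep unnormalised DCT-II coefficients of a day's sensor
    readings; they summarise the temporal shape used as features."""
    n = len(series)
    return [sum(x * math.cos(math.pi * k * (2 * t + 1) / (2 * n))
                for t, x in enumerate(series))
            for k in range(n_keep)]

def windowed_mean_coeffs(series, window, n_keep=4):
    """Split the day into fixed-size windows, take each window's DCT
    coefficients, and average each coefficient across windows, mirroring
    the paper's second feature set."""
    windows = [series[i:i + window] for i in range(0, len(series), window)]
    per_win = [dct_coeffs(w, n_keep) for w in windows]
    return [sum(c) / len(per_win) for c in zip(*per_win)]

# Hypothetical daily readings: hourly temperature with a diurnal cycle
day = [20 + 5 * math.sin(2 * math.pi * t / 24) for t in range(24)]
print([round(c, 2) for c in dct_coeffs(day)])
print([round(c, 2) for c in windowed_mean_coeffs(day, window=6)])
```

In the full model, each feature set feeds its own deep neural network and the two outputs are then fused (Dempster-Shafer in the paper) into a single trust score.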
Low-power wide-area networks : design goals, architecture, suitability to use cases and research challenges
- Authors: Buurman, Ben , Kamruzzaman, Joarder , Karmakar, Gour , Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 17179-17220
- Full Text:
- Reviewed:
- Description: Previous survey articles on Low-Powered Wide-Area Networks (LPWANs) lack a systematic analysis of the design goals of LPWAN and the design decisions adopted by various commercially available and emerging LPWAN technologies, and no study has analysed how those design decisions affect a technology's ability to meet its design goals. Assessing a technology's ability to meet design goals is essential in determining suitable technologies for a given application. To address these gaps, we have analysed six prominent design goals and identified the design decisions, ranging from technical considerations to business models, used to meet each goal in eight LPWAN technologies, and determined which specific technique in a design decision helps meet each goal to the greatest extent. System architecture and specifications are presented for those LPWAN solutions, and their ability to meet each design goal is evaluated. We outline seventeen use cases across twelve domains that require large low-power network infrastructure and prioritise each design goal's importance to those applications as Low, Moderate, or High. Using these priorities and each technology's suitability for meeting design goals, we suggest appropriate LPWAN technologies for each use case. Finally, a number of research challenges are presented for current and future technologies. © 2013 IEEE.
Mobile malware detection with imbalanced data using a novel synthetic oversampling strategy and deep learning
- Authors: Khoda, Mahbub , Kamruzzaman, Joarder , Gondal, Iqbal , Imam, Tasadduq , Rahman, Ashfaqur
- Date: 2020
- Type: Text , Conference paper
- Relation: 16th International Conference on Wireless and Mobile Computing, Networking and Communications (IEEE WiMob), Virtual, Thessaloniki, 12 to 14 October 2020, International Conference on Wireless and Mobile Computing, Networking and Communications
- Full Text: false
- Reviewed:
- Description: Mobile malware detection is inherently an imbalanced data problem since the number of benign applications in the market is far greater than the number of malicious applications. Existing methods to handle imbalanced data, such as synthetic minority over-sampling, do not translate well into this domain since mobile malware detection generally deals with binary features while these methods are designed for continuous features. Likewise, methods adapted for categorical features cannot be applied here since random modification of features can result in invalid sample generation. In this work, we propose a novel technique for generating synthetic samples for mobile malware detection with imbalanced data. Our proposed method adds new data points in the sample space by generating synthetic malware samples while preserving the original functionality of the malicious apps. Experiments show that the proposed approach outperforms existing techniques in terms of precision, recall, F1-score, and AUC. This study will be useful in building deep neural network-based systems to handle imbalanced data for mobile malware detection. © 2020 IEEE.
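The abstract states that synthetic malware samples must remain valid and functionality-preserving, but gives no algorithmic detail. The sketch below illustrates one plausible reading of that constraint for binary feature vectors: bits may only be turned on (e.g., declaring an extra permission), never turned off, since removing a feature an app actually uses could break it. The function name and the notion of an "addable" feature index set are hypothetical, not the authors' method.

```python
import random

def oversample_binary(minority, n_new, addable_idx, seed=0):
    """Illustrative synthetic oversampling for binary feature vectors.

    minority    : list of real minority-class (malware) samples, 0/1 tuples
    n_new       : number of synthetic samples to generate
    addable_idx : indices of features assumed safe to ADD without breaking
                  app functionality (a modelling assumption, not from the paper)
    """
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = list(rng.choice(minority))       # start from a real sample
        # Turn on a couple of extra addable bits; never clear an existing bit,
        # so the original behaviour encoded by the sample is preserved.
        for i in rng.sample(addable_idx, k=min(2, len(addable_idx))):
            base[i] = 1
        synthetic.append(base)
    return synthetic
```

Because every synthetic vector dominates its base sample bitwise, each one still encodes a feature set a working malicious app could exhibit, in contrast to SMOTE-style interpolation, which yields fractional (invalid) values for binary features.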