Sensitivity analysis for vulnerability mitigation in hybrid networks
- Authors: Ur‐rehman, Attiq , Gondal, Iqbal , Kamruzzaman, Joarder , Jolfaei, Alireza
- Date: 2022
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 11, no. 2 (2022), p.
- Full Text:
- Reviewed:
- Description: The development of cyber-assured systems is a challenging task, particularly due to the cost and complexity of modern hybrid network architectures and recent advancements in cloud computing. For this reason, the early detection of vulnerabilities and threat strategies is vital for minimising the risks to enterprise networks configured with a variety of node types, which are called hybrid networks. Existing vulnerability assessment techniques are unable to exhaustively analyse all vulnerabilities in modern dynamic IT networks, which utilise a wide range of IoT devices and industrial control systems (ICS). This can lead to suboptimal risk evaluation. In this paper, we present a novel framework to analyse the mitigation strategies for a variety of nodes, including traditional IT systems and their dependence on IoT devices, as well as industrial control systems. The framework adopts avoid, reduce, and manage as its core principles in characterising mitigation strategies. Our results confirmed the effectiveness of our mitigation strategy framework, which took node types, their criticality, and the network topology into account, and showed that the proposed framework was highly effective at reducing risks in dynamic and resource-constrained environments, in contrast to existing techniques in the literature. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
Spam email categorization with NLP and using federated deep learning
- Authors: Ul Haq, Ikram , Black, Paul , Gondal, Iqbal , Kamruzzaman, Joarder , Watters, Paul , Kayes, A.
- Date: 2022
- Type: Text , Conference paper
- Relation: 18th International Conference on Advanced Data Mining and Applications, ADMA 2022, Brisbane, Australia, 28-30 November 2022, Advanced Data Mining and Applications, 18th International Conference, ADMA 2022 Vol. 13726 LNAI, p. 15-27
- Full Text: false
- Reviewed:
- Description: Email is the most popular and efficient communication method, which also makes it vulnerable to misuse. Federated learning (FL) provides a decentralized machine learning (ML) model, where a central server coordinates clients that collaboratively train a shared ML model. This paper proposes a Federated Phishing Filtering (FPF) technique based on federated learning, natural language processing, and deep learning. FL fuses trained ML models from multiple sites for collective learning. This approach improves ML performance by utilizing large collective training data sets across the corporate client base, resulting in higher phishing email detection accuracy. The FPF technique preserves email privacy using local feature extraction on client email servers; thus, the contents of emails need not be transmitted across the network or stored on third-party servers. We have applied FL and Natural Language Processing (NLP) for email phishing detection. The technique provides four training modes that perform FL without sharing email content. Our research categorizes emails as benign, spam, or phishing. Empirical evaluations with publicly available datasets show that accuracy is improved by the use of our federated deep learning model. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
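The federated aggregation step that underpins techniques like FPF can be sketched as follows. This is a minimal FedAvg-style illustration, not the paper's implementation; the client weight vectors and local dataset sizes are hypothetical toy values.

```python
def federated_average(client_weights, client_sizes):
    """Aggregate client model parameters, weighting by local dataset size."""
    total = sum(client_sizes)
    avg = [0.0] * len(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += w * (size / total)
    return avg

# Three hypothetical clients; only parameters, never email content, are shared.
clients = [[0.2, -0.5], [0.4, -0.1], [0.0, 0.3]]
sizes = [100, 300, 100]                 # local email counts per client
global_weights = federated_average(clients, sizes)
print(global_weights)  # size-weighted mean of each parameter
```

Because aggregation happens on parameters alone, the privacy-preserving property described in the abstract follows directly: the server never sees raw emails.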
Vulnerability assessment framework for a smart grid
- Authors: Rashed, Muhammad , Kamruzzaman, Joarder , Gondal, Iqbal , Islam, Syed
- Date: 2022
- Type: Text , Conference paper
- Relation: 4th IEEE Global Power, Energy and Communication Conference, GPECOM 2022, Cappadocia, Turkey, 14-17 June 2022, Proceedings - 2022 IEEE 4th Global Power, Energy and Communication Conference, GPECOM 2022 p. 449-454
- Full Text: false
- Reviewed:
- Description: The interconnected, IoT-based smart grid faces increasing threats from cyber-attacks due to inherent vulnerabilities in the smart grid network. There is a pressing need to evaluate and model these vulnerabilities to avoid cascading failures in power systems. In this paper, we propose and evaluate a vulnerability assessment framework based on attack probability for the protection and security of a smart grid. Several factors were taken into consideration, such as the probability of attack, the propagation of an attack from a parent node to child nodes, the effectiveness of the basic metering system, Kalman estimation, and Advanced Metering Infrastructure (AMI). The IEEE 300-bus smart grid was simulated using MATPOWER to study the effectiveness of the proposed framework by injecting false data injection attacks (FDIAs) and studying their propagation. Our results show that severity assessment standards such as the Common Vulnerability Scoring System (CVSS), together with AMI measurements and Kalman estimates, were very effective for the vulnerability assessment of a smart grid under FDIA scenarios. © 2022 IEEE.
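The parent-to-child attack propagation factor mentioned above can be illustrated with a toy model. The multiplicative rule, topology, and edge probabilities below are assumptions for illustration only, not the authors' exact formulation.

```python
from collections import deque

def propagate(root_prob, children, edge_prob, root):
    """Breadth-first spread of compromise probability from the root node."""
    probs = {root: root_prob}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in children.get(node, []):
            # Assumed rule: child risk = parent risk x edge exploit probability
            probs[child] = probs[node] * edge_prob[(node, child)]
            queue.append(child)
    return probs

# Hypothetical topology: a compromised substation gateway feeding two devices.
children = {"substation": ["meter", "relay"], "meter": ["sensor"]}
edge_prob = {("substation", "meter"): 0.6,
             ("substation", "relay"): 0.3,
             ("meter", "sensor"): 0.5}
probs = propagate(0.8, children, edge_prob, "substation")
print(probs)
```

A real framework would combine such propagated probabilities with severity scores (e.g., CVSS) per node, as the abstract describes.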
A novel OFDM format and a machine learning based dimming control for LiFi
- Authors: Nowrin, Itisha , Mondal, M. , Islam, Rashed , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 10, no. 17 (2021), p.
- Full Text:
- Reviewed:
- Description: This paper proposes a new hybrid orthogonal frequency division multiplexing (OFDM) format, termed DC-biased pulse amplitude modulated optical OFDM (DPO-OFDM), combining the ideas of the existing DC-biased optical OFDM (DCO-OFDM) and pulse amplitude modulated discrete multitone (PAM-DMT). The analysis indicates that the required DC bias for DPO-OFDM-based light fidelity (LiFi) depends on the dimming level and the components of the DPO-OFDM. The bit error rate (BER) performance and dimming flexibility of DPO-OFDM and existing OFDM schemes are evaluated using MATLAB tools. The results show that the proposed DPO-OFDM is power efficient and has a wide dimming range. Furthermore, a switching algorithm is introduced for LiFi, where the individual components of the hybrid OFDM are switched according to a target dimming level. Next, machine learning algorithms are used for the first time to find the appropriate proportions of the hybrid OFDM components. It is shown that polynomial regression of degree 4 can reliably predict the constellation size of the DCO-OFDM component of DPO-OFDM for a given constellation size of PAM-DMT. With the component switching and the machine learning algorithms, DPO-OFDM-based LiFi is power efficient over a wide dimming range. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
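The degree-4 polynomial regression step can be sketched in pure Python. The data points below are synthetic; only the method (a least-squares quartic fit via the normal equations) reflects the abstract, not the paper's actual training data.

```python
def polyfit4(xs, ys):
    """Least-squares fit of a degree-4 polynomial via the normal equations."""
    n = 5  # degree 4 -> 5 coefficients
    # Normal equations A c = b with A[i][j] = sum(x^(i+j)), b[i] = sum(y*x^i)
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (b[r] - s) / A[r][r]
    return coeffs  # c0 + c1*x + ... + c4*x^4

xs = [1, 2, 3, 4, 5, 6, 7]
ys = [x ** 4 + 2 * x for x in xs]   # synthetic exact quartic relationship
c = polyfit4(xs, ys)
pred = sum(c[i] * 8 ** i for i in range(5))
print(round(pred))  # recovers 8**4 + 16 = 4112 on this synthetic data
```

In practice one would fit measured (PAM-DMT size, DCO-OFDM size) pairs rather than a synthetic curve; the fitting machinery is the same.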
A smart priority-based traffic control system for emergency vehicles
- Authors: Karmakar, Gour , Chowdhury, Abdullahi , Kamruzzaman, Joarder , Gondal, Iqbal
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Sensors Journal Vol. 21, no. 14 (2021), p. 15849-15858
- Full Text: false
- Reviewed:
- Description: Unwanted events on roads, such as incidents and increased traffic jams, can cause loss of human life and economic damage. For efficient incident management, it is essential to send Emergency Vehicles (EVs) to the incident site as quickly as possible. To reduce incident clearance time, several approaches exist to provide a clear pathway to EVs, mainly fitted with RFID sensors, in urban areas. However, they neither assign priority to the EVs based on the type and severity of an incident nor consider the effect on other on-road traffic. To address this issue, in this paper we introduce an Emergency Vehicle Priority System (EVPS) that determines the priority level of an EV based on the type and severity of an incident, and estimates the number of necessary signal interventions while considering the impact of those interventions on traffic in the roads surrounding the EV's travel path. We present how EVPS determines the priority code, along with a new algorithm to estimate the number of green signal interventions needed to attain the quickest incident response while concomitantly reducing the impact on others. A simulation model is developed in Simulation of Urban Mobility (SUMO) using real traffic data of Melbourne, Australia, captured by various sensors. Results show that our system recommends an appropriate number of interventions that can reduce emergency response time significantly. © 2001-2012 IEEE.
Assessing reliability of smart grid against cyberattacks using stability index
- Authors: Rashed, Muhammad , Gondal, Iqbal , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Conference paper
- Relation: 31st Australasian Universities Power Engineering Conference, AUPEC 2021, Virtual, Online 26 to 30 September 2021, Proceedings of 2021 31st Australasian Universities Power Engineering Conference, AUPEC 2021
- Full Text: false
- Reviewed:
- Description: The degradation of the stability index within a smart grid leads to incorrect power generation and poor load balancing. The remote data dependency of the central energy management system (CEMS) causes communication delay that further leads to poor synchronization within the system. This becomes worse in the presence of cyber-attacks such as stealth or false data injection attacks (FDIA). We used dynamic estimation to obtain state data after the inception of a false data attack and analyzed its impact on the stability index of the smart grid. A lookup table was constructed based on the fluctuations within the voltage estimates of the IEEE bus system. An index number was assigned to output estimates at the bus to highlight the level of severity within the grid. We used the IEEE 57-bus system in MATLAB to capture and plot the results related to voltage estimates, latency, and inception time delay. The results demonstrate a clear relationship between the stability index and state estimates, especially when the system is under the influence of a cyber-attack. © 2021 IEEE.
Assessing trust level of a driverless car using deep learning
- Authors: Karmakar, Gour , Chowdhury, Abdullahi , Das, Rajkumar , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 7 (2021), p. 4457-4466
- Full Text: false
- Reviewed:
- Description: The increasing adoption of driverless cars is already driving a shift away from traditional transportation systems to automated ones in many industrial and commercial applications. Recent research has shown that driverless vehicles will considerably reduce traffic congestion, accidents and carbon emissions, and enhance the accessibility of driving to a wider cross-section of people and lifestyle choices. However, at present, people's main concerns are about privacy and security. Since traditional protocol-layer security mechanisms are not very effective for a distributed system, trust value-based security mechanisms, a type of pervasive security, are emerging as popular and promising techniques. A few statistical, non-learning-based models for measuring the trust level of a driverless car are available in the current literature. These are not very effective because they cannot capture the extremely distributed, dynamic, and complex nature of traffic systems. To bridge this research gap, in this paper, for the first time, we propose two deep learning-based models that measure the trustworthiness of a driverless car and its major On-Board Unit (OBU) components. The second model also determines which of its OBU components were breached during the driving operation. Results produced using real and simulated traffic data demonstrate that our proposed DNN-based deep learning models outperform other machine learning models in assessing the trustworthiness of an individual car as well as its OBU components. The average precision of detection for the car, LiDAR, camera, and radar is 0.99, 0.96, 0.81, and 0.83, respectively, which indicates the potential real-life application of our models in assessing the trust level of a driverless car. © 2000-2011 IEEE.
Churn prediction in telecom industry using machine learning ensembles with class balancing
- Authors: Chowdhury, Abdullahi , Kaisar, Shahriar , Rashid, Md Mamunur , Shafin, Sakib , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Conference paper
- Relation: 2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering, CSDE 2021, Brisbane, 8-10 December 2021
- Full Text: false
- Reviewed:
- Description: Telecommunication service providers are going through a very competitive and challenging time, striving to retain existing customers by offering new and attractive services (e.g., unlimited local and international calls, high-speed internet, new phones). It is therefore imperative to analyse and predict customer churn behaviour more accurately. One of the major challenges in analysing churn data and building a better prediction model is the imbalanced nature of the data. Customer behaviour for churn and non-churn scenarios may contain resembling features. Using a single classifier or a simple oversampling method to handle data imbalance often struggles to identify the minority (churn) class. To overcome this issue, we introduce a model that uses a sophisticated oversampling technique in conjunction with ensemble methods, namely Random Forest, Gradient Boost, Extreme Gradient Boost, and AdaBoost. The hyperparameters of the baseline ensemble methods and the oversampling methods were tuned in several ways to investigate their impact on prediction performance. Using a widely used, publicly available customer churn dataset, the prediction performance of the proposed model was evaluated in terms of various metrics, namely accuracy, precision, recall, F1 score, and area under the ROC curve (AUC). Our model outperformed existing models and significantly reduced both false positive and false negative predictions. © IEEE 2022.
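The oversampling idea can be sketched with a minimal SMOTE-style interpolation between minority (churn) samples. The dataset and nearest-neighbour choice below are simplified toy assumptions, not the paper's tuned pipeline.

```python
import random

def smote_like(minority, n_new, seed=0):
    """Synthesize new minority samples by interpolating towards a neighbour."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # Nearest minority neighbour by squared Euclidean distance (not itself)
        b = min((m for m in minority if m is not a),
                key=lambda m: sum((u - v) ** 2 for u, v in zip(a, m)))
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(u + lam * (v - u) for u, v in zip(a, b)))
    return synthetic

churners = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15)]   # minority class (toy)
majority_count = 9                                   # non-churn sample count
new_samples = smote_like(churners, majority_count - len(churners))
print(len(churners) + len(new_samples))  # balanced: 9 minority vs 9 majority
```

The balanced set would then be fed to the ensemble classifiers (Random Forest, boosting variants) named in the abstract.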
Green underwater wireless communications using hybrid optical-acoustic technologies
- Authors: Islam, Kazi , Ahmad, Iftekhar , Habibi, Daryoush , Zahed, M. , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 85109-85123
- Full Text:
- Reviewed:
- Description: Underwater wireless communication is a rapidly growing field, especially with the recent emergence of technologies such as autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs). To support the high-bandwidth applications using these technologies, underwater optics has attracted significant attention, alongside its complementary technology, underwater acoustics. In this paper, we propose a hybrid opto-acoustic underwater wireless communication model that reduces network power consumption and supports high-data-rate underwater applications by selecting appropriate communication links in response to varying traffic loads and dynamic weather conditions. Underwater optics offers high data rates and consumes less power. However, due to the severe absorption of light in the medium, the communication range is short in underwater optics. Conversely, acoustics suffers from low data rates and high power consumption, but provides longer communication ranges. Since most underwater equipment relies on battery power, energy-efficient communication is critical for reliable underwater communications. In this work, we derive analytical models for both underwater acoustics and optics, and calculate the required transmit power for reliable communications in various underwater communication environments. We then formulate an optimization problem that minimizes the network power consumption for carrying data from underwater nodes to surface sinks under varying traffic loads and weather conditions. The proposed optimization model can be solved offline periodically, hence the additional computational complexity of finding the optimum solution for larger networks is not a limiting factor for practical applications. Our results indicate that the proposed technique yields up to 35% power savings compared to existing opto-acoustic solutions. © 2013 IEEE.
How much I can rely on you : measuring trustworthiness of a twitter user
- Authors: Das, Rajkumar , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Dependable and Secure Computing Vol. 18, no. 2 (2021), p. 949-966
- Full Text:
- Reviewed:
- Description: Trustworthiness in an online environment is essential because individuals and organizations can easily be misled by false and malicious information received from untrustworthy users. Though existing methods assess users' trustworthiness by exploiting Twitter account properties, their efficacy is inadequate because of Twitter's restrictions on profile and tweet size, the existence of missing or insufficient profiles, and the ease of creating fake accounts or relationships to appear trustworthy. In this paper, we present a holistic approach that exploits ideas perceived from real-world organizations for trust estimation, along with available Twitter information. Users' trustworthiness is determined by considering their credentials, recommendations from referees, and the quality of the information in their Twitter accounts and tweets. We establish the feasibility of our approach analytically and further devise a multi-objective cost function for the A
Malware detection in edge devices with fuzzy oversampling and dynamic class weighting
- Authors: Khoda, Mahbub , Kamruzzaman, Joarder , Gondal, Iqbal , Imam, Tasadduq , Rahman, Ashfaqur
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 112, no. (2021), p.
- Full Text: false
- Reviewed:
- Description: In the Internet-of-Things (IoT) domain, edge devices are used increasingly for data accumulation, preprocessing, and analytics. Intelligent integration of edge devices with Artificial Intelligence (AI) facilitates real-time analysis and decision making. However, these devices simultaneously provide additional attack opportunities for malware developers, potentially leading to information and financial loss. Machine learning approaches can detect such attacks, but their performance degrades when benign samples substantially outnumber malware samples in training data. Existing approaches for such imbalanced data assume samples represented as continuous features and thus can generate invalid samples when malware applications are represented by binary features. We propose a novel malware oversampling technique that addresses this issue. Further, we propose two approaches for malware detection. Our first approach uses fuzzy set theory, while the second dynamically assigns higher priority to malware samples using a novel loss function. Combining our oversampling technique with these approaches attains over 9% improvement over competing methods in terms of F1 score. Our approaches can, therefore, result in enhanced privacy and security in edge computing services. © 2021 Elsevier B.V.
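The class-weighting idea can be illustrated with inverse-frequency weights in a cross-entropy loss. The weighting rule below is a common convention assumed for illustration; the paper proposes its own novel loss function, which this sketch does not reproduce.

```python
import math

def class_weights(labels):
    """weight(c) = n_total / (n_classes * n_c): rarer classes weigh more."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n, k = len(labels), len(counts)
    return {c: n / (k * nc) for c, nc in counts.items()}

def weighted_log_loss(probs, labels, weights):
    """Binary cross-entropy where each sample is scaled by its class weight."""
    total = sum(weights[y] for y in labels)
    loss = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, 1e-12), 1 - 1e-12)  # clip to avoid log(0)
        loss += weights[y] * -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return loss / total

labels = [0] * 90 + [1] * 10            # 90 benign, 10 malware (toy imbalance)
w = class_weights(labels)
probs = [0.1] * 90 + [0.6] * 10         # hypothetical model outputs
loss = weighted_log_loss(probs, labels, w)
print(w[0], w[1])  # benign ~0.556, malware = 5.0
```

With these weights, misclassifying a malware sample costs roughly nine times as much as misclassifying a benign one, countering the imbalance.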
State estimation within IED based smart grid using Kalman estimates
- Authors: Rashed, Muhammad , Gondal, Iqbal , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 10, no. 15 (2021), p.
- Full Text:
- Reviewed:
- Description: State estimation is a traditional and reliable technique within power distribution and control systems. It is used for building a topology of the power grid network based on state measurements and the current operational state of different nodes and buses. The protection of sensors and measurement units, such as Intelligent Electronic Devices (IEDs) in a Central Energy Management System (CEMS), against False Data Injection Attacks (FDIAs) is a big concern for grid operators. These are a special kind of cyber-attack directed at state and measurement data in such a way as to mislead the CEMS into making incorrect decisions and create generation-load imbalance, and they are known to bypass the traditional bad data detection systems within central estimators. This paper presents the use of an additional novel state estimator based on the Kalman filter alongside traditional Distributed State Estimation (DSE), which is based on Weighted Least Squares (WLS). The Kalman filter is a feedback mechanism that constantly updates itself through state prediction and state correction, improving the estimates. The additional estimator output is compared with the results of DSE in order to identify anomalies and the injection of false data. We evaluated our methodology by simulating the proposed technique using MATPOWER over the IEEE-14, IEEE-30, IEEE-118, and IEEE-300 bus systems. The results clearly demonstrate the superiority of the proposed method over traditional state estimation. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
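The Kalman predict/correct loop and its use for flagging injected measurements can be sketched in one dimension. The constant-state model, noise parameters, measurement values, and 3-sigma threshold are toy assumptions, not the paper's grid-scale estimator.

```python
import math

def kalman_step(x, P, z, Q=0.001, R=0.01):
    """One predict/correct step; returns the normalized innovation too."""
    x_pred, P_pred = x, P + Q          # predict (state assumed constant)
    S = P_pred + R                     # innovation variance
    innov = z - x_pred                 # measurement residual
    K = P_pred / S                     # Kalman gain
    return x_pred + K * innov, (1 - K) * P_pred, innov / math.sqrt(S)

x, P = 1.0, 1.0                        # initial voltage estimate (p.u.)
for z in [1.02, 0.99, 1.01, 1.00, 1.60]:   # last value is injected false data
    x, P, nis = kalman_step(x, P, z)
print(abs(nis) > 3.0)  # True: the residual spikes, flagging a possible FDIA
```

Comparing such residuals against a WLS estimate, as the abstract describes, is what lets the additional estimator expose false data that bypasses conventional bad-data detection.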
Trustworthiness of self-driving vehicles for intelligent transportation systems in industry applications
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 17, no. 2 (2021), p. 961-970
- Full Text: false
- Reviewed:
- Description: To enhance industrial production and automation, rapid transportation of raw materials and finished products to and from distributed factories, warehouses and outlets is essential. To reduce cost with increased efficiency, this will increasingly see the use of connected and self-driving commercial vehicles fitted with industrial-grade sensors on roads shared with normal and self-driving passenger vehicles. For wide adoption, the trustworthiness of self-driving vehicles in the intelligent transportation system (ITS) is pivotal. In this article, we introduce a novel model to measure the overall trustworthiness of a self-driving vehicle considering On-Board Unit (OBU) components, GPS data and safety messages. In calculating the trustworthiness of individual OBU components, CertainLogic and the beta distribution function (BDF) are used. These trust values are fused using both Dempster-Shafer Theory (DST) and a logical operator of CertainLogic. Results of our simulation show that our proposed method can effectively determine the trust of self-driving vehicles. © 2005-2012 IEEE.
A machine learning approach for prediction of pregnancy outcome following IVF treatment
- Authors: Hassan, Md Rafiul , Al-Insaif, Sadiq , Hossain, Muhammad , Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 32, no. 7 (2020), p. 2283-2297
- Full Text: false
- Reviewed:
- Description: Infertility affects one out of seven couples around the world. Therefore, the best possible management of in vitro fertilization (IVF) treatment and patient advice is crucial for both patients and medical practitioners. The ultimate concern of the patients is the success of an IVF procedure, which depends on a number of influencing attributes. Without any automated tool, it is hard for practitioners to assess any influencing trend of the attributes and factors that might lead to a successful IVF pregnancy. This paper proposes a hill climbing feature (attribute) selection algorithm coupled with automated classification using machine learning techniques, with the aim of analyzing and predicting IVF pregnancy outcomes with greater accuracy. Using 25 attributes, we assessed the prediction ability of IVF pregnancy success for five different machine learning models, namely multilayer perceptron (MLP), support vector machines (SVM), C4.5, classification and regression trees (CART) and random forest (RF). The prediction ability was measured in terms of widely used performance metrics, namely accuracy rate, F-measure and AUC. The feature selection algorithm reduced the number of most influential attributes to nineteen for MLP, sixteen for RF, seventeen for SVM, twelve for C4.5 and eight for CART. Overall, the most influential attributes identified are: ‘age’, ‘indication’ of fertility factor, ‘Antral Follicle Counts (AFC)’, ‘NbreM2’, ‘method of sperm collection’, ‘Chamotte’, ‘Fertilization rate in vitro’, ‘Follicles on day 14’ and ‘Embryo transfer day.’ The machine learning models trained with the selected set of features significantly improved the prediction accuracy of IVF pregnancy success to a level considerably higher than those reported in the current literature. © 2018, The Natural Computing Applications Forum.
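The hill-climbing (greedy forward) feature selection described above can be sketched as follows. The scoring function is a toy stand-in for cross-validated classifier accuracy, and the feature names and gains are hypothetical.

```python
def hill_climb_select(features, score):
    """Greedily add the single feature that most improves the score."""
    selected = set()
    best = score(selected)
    improved = True
    while improved:
        improved = False
        for f in features - selected:
            s = score(selected | {f})
            if s > best:
                best, best_f, improved = s, f, True
        if improved:
            selected.add(best_f)
    return selected, best

# Toy additive gains standing in for cross-validated accuracy improvements.
gains = {"age": 0.10, "AFC": 0.08, "embryo_day": 0.05, "noise": -0.02}
score = lambda S: 0.6 + sum(gains[f] for f in S)
subset, acc = hill_climb_select(set(gains), score)
print(sorted(subset), round(acc, 2))
```

With a real estimator, `score` would retrain and cross-validate the model on the candidate subset, which is why the selected attribute counts in the abstract differ per classifier.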
A robust forgery detection method for copy-move and splicing attacks in images
- Authors: Islam, Mohammad , Karmakar, Gour , Kamruzzaman, Joarder , Murshed, Manzur
- Date: 2020
- Type: Text , Journal article
- Relation: Electronics Vol. 9, no. 9 (2020), p. 1-22
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) image sensors, social media, and smartphones generate huge volumes of digital images every day. The easy availability and usability of photo editing tools have made forgery attacks, primarily splicing and copy-move attacks, effortless, causing cybercrimes to rise. While several models have been proposed in the literature for detecting these attacks, the robustness of those models has not been investigated when (i) a low number of tampered images are available for model building or (ii) images from IoT sensors are distorted due to image rotation or scaling caused by unwanted or unexpected changes in sensors' physical set-up. Moreover, further improvement in detection accuracy is needed for real-world security management systems. To address these limitations, in this paper, an innovative image forgery detection method has been proposed based on Discrete Cosine Transformation (DCT) and Local Binary Pattern (LBP) and a new feature extraction method using the mean operator. First, images are divided into non-overlapping fixed-size blocks and 2D block DCT is applied to capture changes due to image forgery. Then LBP is applied to the magnitude of the DCT array to enhance forgery artifacts. Finally, the mean value of a particular cell across all LBP blocks is computed, which yields a fixed number of features and presents a more computationally efficient method. Using a Support Vector Machine (SVM), the proposed method has been extensively tested on four well-known, publicly available grayscale and color image forgery datasets, and additionally on an IoT-based image forgery dataset that we built. Experimental results reveal the superiority of our proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time, and demonstrate robustness against low availability of forged training samples.
- Description: This research was funded by Research Priority Area (RPA) scholarship of Federation University Australia.
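The LBP-plus-mean feature extraction can be sketched on a tiny grayscale array. The DCT stage is omitted for brevity and the pixel values are toy; only the idea of LBP codes averaged over a block to give a fixed feature count reflects the abstract.

```python
def lbp_code(img, r, c):
    """8-neighbour LBP: set bit i where neighbour i >= centre pixel."""
    centre = img[r][c]
    nbrs = [img[r - 1][c - 1], img[r - 1][c], img[r - 1][c + 1],
            img[r][c + 1], img[r + 1][c + 1], img[r + 1][c],
            img[r + 1][c - 1], img[r][c - 1]]
    return sum((1 << i) for i, v in enumerate(nbrs) if v >= centre)

def lbp_map(img):
    """LBP codes for all interior pixels (borders lack full neighbourhoods)."""
    h, w = len(img), len(img[0])
    return [[lbp_code(img, r, c) for c in range(1, w - 1)]
            for r in range(1, h - 1)]

# Toy 4x4 grayscale block; in the paper this would be DCT magnitudes.
img = [[10, 10, 10, 10],
       [10, 50, 20, 10],
       [10, 20, 50, 10],
       [10, 10, 10, 10]]
codes = lbp_map(img)                                 # 2x2 map of LBP codes
mean_feature = sum(sum(row) for row in codes) / 4    # mean over the block
print(codes, mean_feature)
```

Averaging each cell position across all blocks of an image, as the abstract describes, keeps the feature vector length fixed regardless of image size, which is what makes the method computationally cheap.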
A Survey on Behavioral Pattern Mining from Sensor Data in Internet of Things
- Authors: Rashid, Md Mamunur , Kamruzzaman, Joarder , Hassan, Mohammad , Shahriar Shafin, Sakib , Bhuiyan, Md Zakirul
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 33318-33341
- Full Text:
- Reviewed:
- Description: The deployment of large-scale wireless sensor networks (WSNs) for the Internet of Things (IoT) applications is increasing day-by-day, especially with the emergence of smart city services. The sensor data streams generated from these applications are largely dynamic, heterogeneous, and often geographically distributed over large areas. For high-value use in business, industry and services, these data streams must be mined to extract insightful knowledge, such as about monitoring (e.g., discovering certain behaviors over a deployed area) or network diagnostics (e.g., predicting faulty sensor nodes). However, due to the inherent constraints of sensor networks and application requirements, traditional data mining techniques cannot be directly used to mine IoT data streams efficiently and accurately in real-time. In the last decade, a number of works have been reported in the literature proposing behavioral pattern mining algorithms for sensor networks. This paper presents the technical challenges that need to be considered for mining sensor data. It then provides a thorough review of the mining techniques proposed in the recent literature to mine behavioral patterns from sensor data in IoT, and their characteristics and differences are highlighted and compared. We also propose a behavioral pattern mining framework for IoT and discuss possible future research directions in this area. © 2013 IEEE.
A survey on context awareness in big data analytics for business applications
- Authors: Dinh, Loan , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: Knowledge and Information Systems Vol. 62, no. 9 (2020), p. 3387-3415
- Full Text:
- Reviewed:
- Description: The concept of context awareness has been in existence since the 1990s. Though initially applied exclusively in computer science, over time it has increasingly been adopted by many different application domains such as business, health and the military. Contexts change continuously for objective reasons, such as economic situations, political matters and social issues. The adoption of big data analytics by businesses is facilitating such change at an even faster rate in much more complicated ways. The potential benefits of embedding contextual information into an application are already evidenced by the improved outcomes of the existing context-aware methods in those applications. Since big data is growing very rapidly, context awareness in big data analytics has become more important and timely because of its proven efficiency in big data understanding and preparation, contributing to extracting more accurate value from big data. Many surveys have been published on context-based methods such as context modelling and reasoning, workflow adaptations, computational intelligence techniques and mobile ubiquitous systems. However, to our knowledge, no survey of context-aware methods on big data analytics for business applications supported by enterprise-level software has been published to date. To bridge this research gap, in this paper, first, we present a definition of context, its modelling and evaluation techniques, and highlight the importance of contextual information for big data analytics. Second, the works in three key business application areas that are context-aware and/or exploit big data analytics are thoroughly reviewed. Finally, the paper concludes by highlighting a number of contemporary research challenges, including issues concerning modelling, managing and applying business contexts to big data analytics. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
API based discrimination of ransomware and benign cryptographic programs
- Authors: Black, Paul , Sohail, Ammar , Gondal, Iqbal , Kamruzzaman, Joarder , Vamplew, Peter , Watters, Paul
- Date: 2020
- Type: Text , Conference paper
- Relation: 27th International Conference on Neural Information Processing, ICONIP 2020, Bangkok, 18 to 22 November 2020, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 12533 LNCS, p. 177-188
- Full Text: false
- Reviewed:
- Description: Ransomware is a widespread class of malware that encrypts files in a victim’s computer and extorts victims into paying a fee to regain access to their data. Previous research has proposed methods for ransomware detection using machine learning techniques. However, this research has not examined the precision of ransomware detection. While existing techniques show an overall high accuracy in detecting novel ransomware samples, previous research does not investigate the discrimination of novel ransomware from benign cryptographic programs. This is a critical, practical limitation of current research; machine learning based techniques would be limited in their practical benefit if they generated too many false positives (at best) or deleted/quarantined critical data (at worst). We examine the ability of machine learning techniques based on Application Programming Interface (API) profile features to discriminate novel ransomware from benign-cryptographic programs. This research provides a ransomware detection technique that provides improved detection accuracy and precision compared to other API profile based ransomware detection techniques while using significantly simpler features than previous dynamic ransomware detection research. © 2020, Springer Nature Switzerland AG.
Attacks on self-driving cars and their countermeasures : a survey
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Jolfaei, Alireza , Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of a cooperative ITS or connected vehicles. Both forms use data communications between Vehicle-To-Vehicle (V2V), Vehicle-To-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the compromise of self-driving cars is perceived as a serious threat. Therefore, analyses of the threats and attacks on self-driving cars and ITSs, and of the corresponding countermeasures to reduce those threats and attacks, are needed. For this reason, some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to our knowledge, they have not covered the real attacks that have already occurred on self-driving cars. To bridge this research gap, in this paper, we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks and their impacts on those cars along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by the manufacturers and governments. This survey includes recent works on how a self-driving car can ensure resilient operation even under an ongoing cyber-attack. We also provide further research directions to improve the security issues associated with self-driving cars. © 2013 IEEE.
Cyberattacks detection in iot-based smart city applications using machine learning techniques
- Authors: Rashid, Md Mamunur , Kamruzzaman, Joarder , Hassan, Mohammad , Imam, Tassadduq , Gordon, Steven
- Date: 2020
- Type: Text , Journal article
- Relation: International Journal of Environmental Research and Public Health Vol. 17, no. 24 (2020), p. 1-21
- Full Text:
- Reviewed:
- Description: In recent years, the widespread deployment of the Internet of Things (IoT) applications has contributed to the development of smart cities. A smart city utilizes IoT-enabled technologies, communications and applications to maximize operational efficiency and enhance both the service providers' quality of services and people's wellbeing and quality of life. With the growth of smart city networks, however, comes the increased risk of cybersecurity threats and attacks. IoT devices within a smart city network are connected to sensors linked to large cloud servers and are exposed to malicious attacks and threats. Thus, it is important to devise approaches to prevent such attacks and protect IoT devices from failure. In this paper, we explore an attack and anomaly detection technique based on machine learning algorithms (LR, SVM, DT, RF, ANN and KNN) to defend against and mitigate IoT cybersecurity threats in a smart city. Contrary to existing works that have focused on single classifiers, we also explore ensemble methods such as bagging, boosting and stacking to enhance the performance of the detection system. Additionally, we consider an integration of feature selection, cross-validation and multi-class classification for the discussed domain, which has not been well considered in the existing literature. Experimental results with a recent attack dataset demonstrate that the proposed technique can effectively identify cyberattacks, and the stacking ensemble model outperforms comparable models in terms of accuracy, precision, recall and F1-score, implying the promise of stacking in this domain. © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
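A minimal sketch of the stacking-ensemble idea the abstract describes, using scikit-learn's `StackingClassifier` on synthetic stand-in data; the base-learner set shown here is a simplified assumption, not the paper's full configuration or dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an IoT attack dataset (the paper uses real attack data).
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=42)

# Base learners produce out-of-fold predictions (cv=5) that train the
# meta-learner, rather than any single classifier deciding alone.
stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(random_state=42)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=5,
)
stack.fit(X_train, y_train)
acc = accuracy_score(y_test, stack.predict(X_test))
```

The design point of stacking, as opposed to bagging or boosting, is that heterogeneous base models (linear, tree-based, distance-based) contribute complementary decision boundaries, and the meta-learner learns how to weight them per input.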