A robust forgery detection method for copy-move and splicing attacks in images
- Authors: Islam, Mohammad, Karmakar, Gour, Kamruzzaman, Joarder, Murshed, Manzur
- Date: 2020
- Type: Text, Journal article
- Relation: Electronics Vol. 9, no. 9 (2020), p. 1-22
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) image sensors, social media, and smartphones generate huge volumes of digital images every day. The easy availability and usability of photo editing tools have made forgery attacks, primarily splicing and copy-move attacks, effortless, causing cybercrimes to be on the rise. While several models have been proposed in the literature for detecting these attacks, the robustness of those models has not been investigated when (i) a low number of tampered images are available for model building or (ii) images from IoT sensors are distorted due to image rotation or scaling caused by unwanted or unexpected changes in the sensors' physical set-up. Moreover, further improvement in detection accuracy is needed for real-world security management systems. To address these limitations, in this paper an innovative image forgery detection method is proposed based on Discrete Cosine Transformation (DCT) and Local Binary Pattern (LBP) and a new feature extraction method using the mean operator. First, images are divided into non-overlapping fixed-size blocks and 2D block DCT is applied to capture changes due to image forgery. Then LBP is applied to the magnitude of the DCT array to enhance forgery artifacts. Finally, the mean value of a particular cell across all LBP blocks is computed, which yields a fixed number of features and a more computationally efficient method. Using a Support Vector Machine (SVM), the proposed method has been extensively tested on four well-known publicly available grayscale and color image forgery datasets, and additionally on an IoT-based image forgery dataset that we built. Experimental results reveal the superiority of our proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time, and demonstrate robustness against low availability of forged training samples.
- Description: This research was funded by Research Priority Area (RPA) scholarship of Federation University Australia.
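The feature-extraction pipeline described in the abstract (block-wise 2D DCT, LBP over the DCT magnitudes, then the per-cell mean across all LBP blocks) can be sketched as below. The block size, the basic 8-neighbour LBP variant and all function names are illustrative assumptions rather than the paper's exact configuration, and the final SVM classification stage is omitted.

```python
import numpy as np

def dct2(block):
    # Orthonormal 2D DCT-II via a separable transform matrix.
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def lbp(block):
    # Basic 8-neighbour LBP codes for the interior cells of a 2D array.
    centre = block[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre, dtype=np.uint8)
    h, w = block.shape
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = block[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

def extract_features(image, block_size=16):
    # Divide into non-overlapping blocks, apply DCT then LBP, and average
    # each cell position across all blocks -> fixed-length feature vector.
    h = image.shape[0] - image.shape[0] % block_size
    w = image.shape[1] - image.shape[1] % block_size
    lbp_blocks = [
        lbp(np.abs(dct2(image[y:y + block_size, x:x + block_size].astype(float))))
        for y in range(0, h, block_size)
        for x in range(0, w, block_size)
    ]
    return np.mean(np.stack(lbp_blocks), axis=0).ravel()
```

For a 16x16 block this yields 196 features (the 14x14 LBP interior) regardless of image size, which is what makes the per-cell mean attractive for efficient classification.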
An adaptive approach to opportunistic data forwarding in underwater acoustic sensor networks
- Authors: Nowsheen, Nusrat, Karmakar, Gour, Kamruzzaman, Joarder
- Date: 2014
- Type: Text, Conference proceedings
- Full Text:
- Description: Reliable data transfer for underwater acoustic sensor networks (UASNs) is a major research challenge in applications such as pollution monitoring, oceanic data collection, and surveillance due to the long propagation delay and high error rate of the acoustic channel. To address this issue, an opportunistic data forwarding protocol was proposed which achieves a high packet delivery success ratio with less routing overhead and energy consumption by selecting the next-hop forwarder among a set of candidates based on its link reliability and data transfer reachability. However, the protocol relies on a fixed data hold-time approach, i.e., each node holds data packets for a fixed amount of time before a forwarder discovery process is initiated. Depending on the value of the fixed hold time and the deployment scenario, this may incur large end-to-end delay. Moreover, the lack of consideration of network conditions in the hold time limits its performance. In this paper, we propose an adaptive technique to improve its performance. The adaptive approach calculates the data hold time at each node dynamically, considering a number of node and network metrics including current buffer occupancy, delay experienced by stored data packets, arrival and service rates, and neighbors' data transmissions and reachability. Simulation results show that, compared with the fixed hold-time approach, our adaptive technique reduces end-to-end delay significantly, achieves considerably higher data delivery, and consumes less energy per successful packet delivery.
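A minimal sketch of the adaptive hold-time idea follows: the hold time shrinks as local congestion indicators (buffer occupancy, queuing delay, arrival-to-service load) grow. The weights and the linear combining rule are assumptions for illustration, not the protocol's actual formula.

```python
def adaptive_hold_time(base_hold, buffer_occupancy, mean_queue_delay,
                       arrival_rate, service_rate,
                       w_buf=0.5, w_delay=0.3, w_load=0.2):
    # buffer_occupancy is a fraction in [0, 1]; delays are in seconds;
    # rates are in packets/second. Higher pressure -> shorter hold time.
    load = min(arrival_rate / max(service_rate, 1e-9), 1.0)
    pressure = (w_buf * buffer_occupancy
                + w_delay * min(mean_queue_delay / base_hold, 1.0)
                + w_load * load)
    return base_hold * (1.0 - min(pressure, 0.95))
```

A congested node (full buffer, long queuing delay) thus initiates forwarder discovery much sooner than a lightly loaded one, which is how the adaptive scheme cuts end-to-end delay relative to a fixed hold time.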
QoS support in event detection in WSN through optimal k-coverage
- Authors: Alam, Kh Mahmudul, Kamruzzaman, Joarder, Karmakar, Gour, Murshed, Manzur, Azad, Arman
- Date: 2011
- Type: Text, Conference paper
- Relation: 11th International Conference on Computational Science, ICCS 2011; Singapore, Singapore; 1st-3rd June 2011; published in Procedia Computer Science Vol. 4, p. 499-507
- Full Text:
- Reviewed:
- Description: Wireless sensor networks promise to guarantee accurate, fault-tolerant and timely detection of events in large-scale sensor fields. To achieve this, the notion of k-coverage is widely employed in WSNs, where significant redundancy is introduced in deployment so that an event is expected to be sensed by at least k sensors in the neighborhood. As sensor density increases significantly with k, it is imperative to find the optimal k for the underlying event detection system. In this work, we consider the detection probability, fault tolerance and latency as the Quality of Service (QoS) metrics of an event detection system employing k-coverage, and present a probabilistic model that guarantees a given QoS support with the minimum degree of coverage, taking into account noise-related measurement error, communication interference and sensor fault probability. This work resolves the problem of over- or under-deployment of sensors, increases scalability and provides a well-defined mechanism to tune the degree of coverage according to performance needs.
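As a toy version of the problem the paper solves, the snippet below finds the smallest coverage degree k for which at least one of k independent sensors, each subject to a fault probability, reports an event with at least a target probability. It ignores interference and measurement noise, so it is a simplified stand-in for the paper's full QoS model.

```python
def min_coverage_degree(p_detect, p_fault, target, k_max=100):
    # A sensor reports the event iff it is non-faulty and senses it.
    p = p_detect * (1.0 - p_fault)
    for k in range(1, k_max + 1):
        # P(at least one of k sensors reports) = 1 - (1 - p)^k
        if 1.0 - (1.0 - p) ** k >= target:
            return k
    return None  # target unreachable within k_max sensors
```

With p_detect = 0.8 and p_fault = 0.1, meeting a 0.99 detection probability needs k = 4, illustrating how the required redundancy can be tuned to the QoS target rather than over- or under-provisioned.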
A comprehensive spectrum trading scheme based on market competition, reputation and buyer specific requirements
- Authors: Hassan, Md Rakib, Karmakar, Gour, Kamruzzaman, Joarder, Srinivasan, Bala
- Date: 2015
- Type: Text, Journal article
- Relation: Computer Networks Vol. 84 (2015), p. 17-31
- Full Text:
- Reviewed:
- Description: In the exclusive-use model of spectrum trading, cognitive radio devices or secondary users can buy spectrum resources from licensed users or primary users for a short or long period of time. Considering such spectrum access, a trading model is introduced where a buyer can select a set of candidate sellers based on their reputation and their offers in fulfilling its requirements, namely, offered signal quality, contract duration, coverage and bandwidth. Similarly, a seller can assess a buyer as a potential trading partner considering the buyer's reliability, which the seller can derive from the buyer's reputation and financial profile. In our scheme, seller reputation or buyer reliability can either be obtained from a reputation brokerage service, if one exists, or calculated using our model. Since in a competitive market the price of a seller depends on that of other sellers, game theory is used to model the competition among multiple sellers. An optimization technique is used by a buyer to select the best seller(s) and optimize the purchase to maximize its utility. This may result in buying a certain amount of bandwidth from each of multiple sellers, depending on price, while meeting requirements and budget constraints. Stability of the model is analyzed, and performance evaluation shows that it benefits sellers and buyers in terms of profit and throughput, respectively. © 2015 Elsevier B.V. All rights reserved.
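The buyer-side purchase step can be illustrated with a greedy sketch: buy bandwidth from sellers in decreasing order of reputation per unit price until demand or budget runs out. This only illustrates buying from multiple sellers under budget constraints; the paper itself uses game-theoretic pricing and a formal optimization, and the field names here are invented.

```python
def select_sellers(sellers, demand_mhz, budget):
    # Each seller is a dict with "name", "price" ($/MHz), "reputation"
    # in [0, 1], and available "bandwidth" (MHz). Returns {name: MHz bought}.
    plan = {}
    for s in sorted(sellers, key=lambda s: s["reputation"] / s["price"],
                    reverse=True):
        if demand_mhz <= 0 or budget <= 0:
            break
        # Buy as much as the seller offers, the buyer needs, or can afford.
        buy = min(s["bandwidth"], demand_mhz, budget / s["price"])
        if buy > 0:
            plan[s["name"]] = buy
            demand_mhz -= buy
            budget -= buy * s["price"]
    return plan
```

Note how a cheap seller with modest reputation can still win the whole order when its reputation-per-dollar ratio dominates, mirroring the price competition the game-theoretic model captures.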
Decentralized content sharing among tourists in visiting hotspots
- Authors: Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour, Gondal, Iqbal
- Date: 2017
- Type: Text, Journal article
- Relation: Journal of Network and Computer Applications Vol. 79 (2017), p. 25-40
- Full Text:
- Reviewed:
- Description: Content sharing with smart mobile devices using a decentralized approach enables users to share content without any fixed infrastructure, and thereby offers a free-of-cost platform that does not add to Internet traffic which, in its current state, is approaching a bottleneck in its capacity. Most of the existing decentralized approaches in the literature rely on spatio-temporal regularity in human movement patterns and pre-existing social relationships for the sharing scheme to work. However, such predictable movement patterns and social relationship information are not available in places like tourist spots, where people visit only for a short period of time and usually meet strangers. No work exists in the literature that deals with content sharing in such environments. In this work, we propose a content sharing approach for such environments. The group formation mechanism is based on users' interest scores and stay probability in the individual region of interest (ROI), as well as on the availability and delivery probabilities of content in the group. The administrator of each group is selected by taking into account its probability of stay in the ROI, connectivity with other nodes, trustworthiness, and the computing and energy resources available to serve the group. We have also adopted an incentive mechanism that rewards nodes for sharing and forwarding content. We have used the network simulator NS3 to perform extensive simulation on a popular tourist spot in Australia which facilitates a number of activities. The proposed approach shows promising results in sharing content among tourists, measured in terms of content hit, delivery success rate and latency.
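The administrator selection criterion can be sketched as a weighted score over the attributes the abstract lists (stay probability in the ROI, connectivity, trustworthiness, resources). The weights and dictionary keys are illustrative assumptions, not values from the paper.

```python
def select_administrator(nodes, weights=(0.4, 0.25, 0.2, 0.15)):
    # Each node is a dict with "id" plus four attributes, all in [0, 1].
    # The node with the highest weighted score becomes group administrator.
    w_stay, w_conn, w_trust, w_res = weights

    def score(n):
        return (w_stay * n["stay_prob"] + w_conn * n["connectivity"]
                + w_trust * n["trust"] + w_res * n["resources"])

    return max(nodes, key=score)["id"]
```

Weighting stay probability most heavily reflects the abstract's emphasis: an administrator that leaves the ROI early breaks the group regardless of its other strengths.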
Carry me if you can: A utility based forwarding scheme for content sharing in tourist destinations
- Authors: Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour, Gondal, Iqbal
- Date: 2016
- Type: Text, Conference proceedings
- Relation: 22nd Asia-Pacific Conference on Communications, APCC 2016; Yogyakarta, Indonesia; 25th-27th August 2016 p. 261-267
- Full Text:
- Reviewed:
- Description: Message forwarding is an integral part of the decentralized content sharing process, as content delivery success highly depends on it. Existing literature employs spatio-temporal regularity of human movement patterns and pre-existing social relationships to make message forwarding decisions. However, such approaches are ineffectual in environments where that information is unavailable, such as a tourist spot or camping site. In this study, we explore message forwarding techniques in such environments considering information that is readily available and can be gathered on the fly. We propose a utility-based forwarding scheme to select the appropriate forwarder node based on co-location stay time, connectivity and available resources. A higher co-location stay time reflects that the forwarder and the destination node are likely to have more opportunistic contacts, while the connectivity and available resources ensure that the selected forwarder has sufficient neighbours and resources to carry the message forward. Simulation results suggest that the proposed approach attains high hit and success rates and low latency for successful content delivery, comparable to approaches proposed for workplace-type scenarios with regular movement patterns and pre-existing relationships. © 2016 IEEE.
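One way to read the forwarding criterion is as a product-form utility, so a candidate scores well only if it has all three properties at once: long co-location with the destination, enough neighbours, and enough remaining resources. The normalisation and the product form are assumptions for illustration, not the paper's exact utility function.

```python
def forwarding_utility(candidate, max_stay, max_neighbours):
    # Each factor is normalised to [0, 1]; a zero in any factor vetoes the node.
    stay = candidate["co_location_time"] / max_stay
    conn = candidate["neighbours"] / max_neighbours
    resources = candidate["battery"]  # assumed already a fraction in [0, 1]
    return stay * conn * resources

def pick_forwarder(candidates, max_stay, max_neighbours):
    # Choose the neighbour with the highest utility as the next-hop forwarder.
    return max(candidates,
               key=lambda c: forwarding_utility(c, max_stay, max_neighbours))["id"]
```

The multiplicative form encodes the abstract's "and" directly: a node that stays long but has no neighbours or battery is a poor carrier, unlike in an additive score where one strong factor could mask the others.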
Low-power wide-area networks: design goals, architecture, suitability to use cases and research challenges
- Authors: Buurman, Ben, Kamruzzaman, Joarder, Karmakar, Gour, Islam, Syed
- Date: 2020
- Type: Text, Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 17179-17220
- Full Text:
- Reviewed:
- Description: Previous survey articles on Low-Power Wide-Area Networks (LPWANs) lack a systematic analysis of LPWAN design goals and the design decisions adopted by various commercially available and emerging LPWAN technologies, and no study has analysed how those design decisions affect the technologies' ability to meet the design goals. Assessing a technology's ability to meet design goals is essential in determining suitable technologies for a given application. To address these gaps, we have analysed six prominent design goals and identified the design decisions used to meet each goal in eight LPWAN technologies, ranging from technical considerations to business models, and determined which specific technique in a design decision helps meet each goal to the greatest extent. System architecture and specifications are presented for those LPWAN solutions, and their ability to meet each design goal is evaluated. We outline seventeen use cases across twelve domains that require large low-power network infrastructure and prioritise each design goal's importance to those applications as Low, Moderate, or High. Using these priorities and each technology's suitability for meeting design goals, we suggest appropriate LPWAN technologies for each use case. Finally, a number of research challenges are presented for current and future technologies. © 2013 IEEE.
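The matching of technologies to use cases described above can be sketched as a weighted score: each use case assigns a Low/Moderate/High priority to every design goal, and each technology's per-goal suitability scores are summed under those weights. The goal names, scores and weight values below are invented for illustration, not the paper's tables.

```python
# Map the paper's Low/Moderate/High priorities to numeric weights (assumed).
PRIORITY_WEIGHT = {"Low": 1, "Moderate": 2, "High": 3}

def rank_technologies(tech_goal_scores, use_case_priorities):
    # tech_goal_scores: {technology: {goal: suitability score}}
    # use_case_priorities: {goal: "Low" | "Moderate" | "High"}
    # Returns technologies sorted by total priority-weighted score, best first.
    ranking = [
        (tech, sum(scores[goal] * PRIORITY_WEIGHT[prio]
                   for goal, prio in use_case_priorities.items()))
        for tech, scores in tech_goal_scores.items()
    ]
    return sorted(ranking, key=lambda item: item[1], reverse=True)
```

A technology strong on the goals a use case marks High rises to the top even if it is weak on Low-priority goals, which is the intuition behind the survey's suitability recommendations.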
A survey on context awareness in big data analytics for business applications
- Authors: Dinh, Loan, Karmakar, Gour, Kamruzzaman, Joarder
- Date: 2020
- Type: Text, Journal article
- Relation: Knowledge and Information Systems Vol. 62, no. 9 (2020), p. 3387-3415
- Full Text:
- Reviewed:
- Description: The concept of context awareness has existed since the 1990s. Though initially applied exclusively in computer science, over time it has increasingly been adopted by many different application domains such as business, health and the military. Contexts change continuously because of objective factors such as economic situations, political matters and social issues. The adoption of big data analytics by businesses is facilitating such change at an even faster rate and in much more complicated ways. The potential benefits of embedding contextual information into an application are already evidenced by the improved outcomes of existing context-aware methods in those applications. Since big data is growing very rapidly, context awareness in big data analytics has become more important and timely because of its proven efficiency in big data understanding and preparation, contributing to extracting more accurate value from big data. Many surveys have been published on context-based methods such as context modelling and reasoning, workflow adaptations, computational intelligence techniques and mobile ubiquitous systems. However, to our knowledge, no survey of context-aware methods in big data analytics for business applications supported by enterprise-level software has been published to date. To bridge this research gap, in this paper we first present a definition of context and its modelling and evaluation techniques, and highlight the importance of contextual information for big data analytics. Second, works in three key business application areas that are context-aware and/or exploit big data analytics are thoroughly reviewed. Finally, the paper concludes by highlighting a number of contemporary research challenges, including issues concerning modelling, managing and applying business contexts to big data analytics. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
How much I can rely on you : measuring trustworthiness of a twitter user
- Das, Rajkumar, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Das, Rajkumar , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Dependable and Secure Computing Vol. 18, no. 2 (2021), p. 949-966
- Full Text:
- Reviewed:
- Description: Trustworthiness in an online environment is essential because individuals and organizations can easily be misled by false and malicious information received from untrustworthy users. Though existing methods assess users' trustworthiness by exploiting Twitter account properties, their efficacy is inadequate because of Twitter's restrictions on profile and tweet size, the existence of missing or insufficient profiles, and the ease of creating fake accounts or relationships to feign trustworthiness. In this paper, we present a holistic approach by exploiting ideas perceived from real-world organizations for trust estimation along with available Twitter information. Users' trustworthiness is determined by considering their credentials, recommendations from referees and the quality of the information in their Twitter accounts and tweets. We establish the feasibility of our approach analytically and further devise a multi-objective cost function for the A
Assessing transformer oil quality using deep convolutional networks
- Alam, Mohammad, Karmakar, Gour, Islam, Syed, Kamruzzaman, Joarder, Chetty, Madhu, Lim, Suryani, Appuhamillage, Gayan, Chattopadhyay, Gopi, Wilcox, Steve, Verheyen, Vincent
- Authors: Alam, Mohammad , Karmakar, Gour , Islam, Syed , Kamruzzaman, Joarder , Chetty, Madhu , Lim, Suryani , Appuhamillage, Gayan , Chattopadhyay, Gopi , Wilcox, Steve , Verheyen, Vincent
- Date: 2019
- Type: Text , Conference proceedings , Conference paper
- Relation: 29th Australasian Universities Power Engineering Conference, AUPEC 2019
- Full Text:
- Reviewed:
- Description: Electrical power grids comprise a significantly large number of transformers that interconnect power generation, transmission and distribution. These transformers, having different MVA ratings, are critical assets that require proper maintenance to provide long and uninterrupted electrical service. Mineral oil, an essential component of any transformer, not only provides cooling but also acts as an insulating medium within the transformer. The quality and the key dissolved properties of the insulating mineral oil are critical to the transformer's proper and reliable operation. However, traditional chemical diagnostic methods are expensive and time-consuming. A transformer oil image analysis approach based on the entropy value of the oil is inexpensive, effective and quick. However, the inability of entropy to estimate vital transformer oil properties such as equivalent age, Neutralization Number (NN), dissipation factor (tanδ) and power factor (PF), together with its use of many intuitively derived constants, limits its estimation accuracy. To address this issue, in this paper we introduce an innovative transformer oil analysis using two deep convolutional learning techniques, namely Convolutional Neural Network (ConvNet) and Residual Neural Network (ResNet). These two deep neural networks are chosen for this project as they have superior performance in computer vision. After estimating the equivalent age of the transformer oil from its image by our proposed method, NN, tanδ and PF are computed using that estimated age. Our deep learning based techniques can accurately predict the transformer oil's equivalent age, leading to more accurate calculation of NN, tanδ and PF. The root mean square errors of the estimated equivalent age produced by the entropy, ConvNet and ResNet based methods are 0.718, 0.122 and 0.065, respectively. ConvNet and ResNet based methods have reduced the error of the oil age estimation by 83% and 91%, respectively, compared to that of the entropy method. Our proposed oil image analysis can calculate an equivalent age that is very close to the actual age for all images used in the experiment. © 2019 IEEE.
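The quoted error reductions follow directly from the reported RMSE values. A quick sanity check, using only the figures stated in the abstract:

```python
# RMSE of the estimated equivalent oil age, as reported in the abstract
rmse = {'entropy': 0.718, 'ConvNet': 0.122, 'ResNet': 0.065}
base = rmse['entropy']
for name in ('ConvNet', 'ResNet'):
    cut = 100 * (base - rmse[name]) / base
    print(f'{name}: {cut:.0f}% error reduction vs entropy')
```

This reproduces the 83% and 91% reductions stated above.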
- Description: E1
Passive detection of splicing and copy-move attacks in image forgery
- Islam, Mohammad, Kamruzzaman, Joarder, Karmakar, Gour, Murshed, Manzur, Kahandawa, Gayan
- Authors: Islam, Mohammad , Kamruzzaman, Joarder , Karmakar, Gour , Murshed, Manzur , Kahandawa, Gayan
- Date: 2018
- Type: Text , Conference proceedings , Conference paper
- Relation: 25th International Conference on Neural Information Processing, ICONIP 2018; Siem Reap, Cambodia; 13th-16th December 2018; published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 11304 LNCS, p. 555-567
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) image sensors for surveillance and monitoring, digital cameras, smart phones and social media generate huge volumes of digital images every day. Image splicing and copy-move attacks are the most common types of image forgery and can be done very easily using modern photo editing software. Recently, digital forensics has drawn much attention to detecting such tampering on images. In this paper, we introduce a novel feature extraction technique, namely Sum of Relevant Inter-Cell Values (SRIV), using which we propose a passive (blind) image forgery detection method based on Discrete Cosine Transformation (DCT) and Local Binary Pattern (LBP). First, the input image is divided into non-overlapping blocks and 2D block DCT is applied to capture the changes of a tampered image in the frequency domain. Then the LBP operator is applied to enhance the local changes among the neighbouring DCT coefficients, magnifying the changes in high frequency components resulting from splicing and copy-move attacks. The resulting LBP image is again divided into non-overlapping blocks. Finally, SRIV is applied to the LBP image blocks to extract features, which are then fed into a Support Vector Machine (SVM) classifier to identify forged images from authentic ones. Extensive experiments on four well-known benchmark datasets of tampered images reveal the superiority of our method over recent state-of-the-art methods.
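The DCT→LBP→aggregation pipeline described in this abstract can be sketched in a few lines of NumPy. This is an illustrative simplification, not the authors' implementation: the block size, the LBP neighbourhood, and especially the final aggregation (a plain sum of corresponding cells across blocks, standing in for the SRIV feature defined in the paper) are assumptions.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis matrix (rows = frequencies)
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def block_dct(img, bs=8):
    # apply 2D DCT to each non-overlapping bs x bs block
    D = dct_matrix(bs)
    h, w = (img.shape[0] // bs) * bs, (img.shape[1] // bs) * bs
    out = np.empty((h, w))
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            out[i:i+bs, j:j+bs] = D @ img[i:i+bs, j:j+bs] @ D.T
    return out

def lbp(a):
    # basic 8-neighbour LBP: compare each pixel with its ring of neighbours
    c = a[1:-1, 1:-1]
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (di, dj) in enumerate(shifts):
        nb = a[di:di + c.shape[0], dj:dj + c.shape[1]]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def features(img, bs=8):
    # DCT magnitudes -> LBP codes -> per-cell aggregation across blocks
    m = lbp(np.abs(block_dct(img.astype(float), bs)))
    h, w = (m.shape[0] // bs) * bs, (m.shape[1] // bs) * bs
    blocks = m[:h, :w].astype(int).reshape(h // bs, bs, w // bs, bs)
    # simplified stand-in for SRIV: sum each cell position over all blocks
    return blocks.sum(axis=(0, 2)).ravel()

rng = np.random.default_rng(0)
vec = features(rng.integers(0, 256, (64, 64)))
print(vec.shape)  # -> (64,) : length set by block size, not image size
```

Whatever the exact aggregation, the key property shown here is that the resulting feature vector has a fixed length determined by the block size, independent of image size.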
Detecting splicing and copy-move attacks in color images
- Islam, Mohammad, Karmakar, Gour, Kamruzzaman, Joarder, Murshed, Manzur, Kahandawa, Gayan, Parvin, Nahida
- Authors: Islam, Mohammad , Karmakar, Gour , Kamruzzaman, Joarder , Murshed, Manzur , Kahandawa, Gayan , Parvin, Nahida
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018; Canberra, Australia; 10th-13th December 2018 p. 1-7
- Full Text:
- Reviewed:
- Description: Image sensors are generating limitless digital images every day. Image forgeries like splicing and copy-move are very common types of attacks that are easy to execute using sophisticated photo editing tools. As a result, digital forensics has attracted much attention to identifying such tampering on digital images. In this paper, a passive (blind) image tampering identification method based on Discrete Cosine Transformation (DCT) and Local Binary Pattern (LBP) is proposed. First, the chroma components of an image are divided into fixed-size non-overlapping blocks and 2D block DCT is applied to identify the changes due to forgery in the local frequency distribution of the image. Then a texture descriptor, LBP, is applied to the magnitude component of the 2D-DCT array to enhance the artifacts introduced by the tampering operation. The resulting LBP image is again divided into non-overlapping blocks. Finally, summations of corresponding inter-cell values of all the LBP blocks are computed and arranged as a feature vector. These features are fed into a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel to distinguish forged images from authentic ones. The proposed method has been evaluated extensively on three publicly available, well-known image splicing and copy-move detection benchmark datasets of color images. Results demonstrate the superiority of the proposed method over recently proposed state-of-the-art approaches in terms of well-accepted performance metrics such as accuracy, area under the ROC curve and others.
- Description: 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018
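The classifier stage above uses an SVM with an RBF kernel. As a reminder of what that kernel computes, a minimal NumPy sketch (the `gamma` value is illustrative, not taken from the paper):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.05):
    # K(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
K = rbf_kernel(X, X)
print(np.round(K, 3))  # diagonal = 1, off-diagonal = exp(-0.1) ≈ 0.905
```

In practice one would use a library SVM (e.g. scikit-learn's `SVC(kernel='rbf')`) rather than hand-rolling the kernel; the sketch only shows the similarity measure the classifier relies on.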
A dynamic content distribution scheme for decentralized sharing in tourist hotspots
- Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour
- Date: 2019
- Type: Text , Journal article
- Relation: Journal of Network and Computer Applications Vol. 129, no. (2019), p. 9-24
- Full Text:
- Reviewed:
- Description: Decentralized content sharing (DCS) is emerging as a suitable platform for smart mobile device users to generate and share contents seamlessly without the requirement of a centralized server. This feature is particularly important for places that lack Internet coverage, such as tourist attractions, where users can form an ad-hoc network and communicate opportunistically to share contents. Existing DCS approaches, when applied to such places, suffer from low delivery success rates and high latency. Although a handful of recent approaches have specifically targeted improvement of content delivery service in tourist-spot-like scenarios, these and other DCS approaches do not focus on contents' demand and supply, which vary considerably due to visitor in-and-out flow and the occurrence of influencing events. This is further compounded by the lack of any content distribution (replication) scheme. The content delivery service will be improved if contents can be proactively distributed in strategic positions based on dynamic demand and supply and medium access contention. In this paper, we propose a dynamic content distribution scheme (DCDS) considering these practical issues for sharing contents in tourist attractions. Simulation results show that the proposed approach significantly improves (7
Attacks on self-driving cars and their countermeasures : a survey
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Jolfaei, Alireza, Das, Rajkumar
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Jolfaei, Alireza , Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITS or connected vehicles. Both forms use data communications between Vehicle-To-Vehicle (V2V), Vehicle-To-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the risk of self-driving cars being compromised is perceived as a serious threat. Therefore, analyzing the threats and attacks on self-driving cars and ITSs, and their corresponding countermeasures to reduce those threats and attacks, is needed. For this reason, some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to our knowledge, they have not covered real attacks that have already happened to self-driving cars. To bridge this research gap, in this paper we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks and their impacts on those cars, along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by manufacturers and governments. This survey includes recent works on how a self-driving car can ensure resilient operation even under an ongoing cyber-attack. We also provide further research directions to improve the security issues associated with self-driving cars. © 2013 IEEE.
Decentralized content sharing in mobile ad-hoc networks : a survey
- Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour, Rashid, Md Mamunur
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour , Rashid, Md Mamunur
- Date: 2023
- Type: Text , Journal article , Review
- Relation: Digital Communications and Networks Vol. 9, no. 6 (2023), p. 1363-1398
- Full Text:
- Reviewed:
- Description: The evolution of smart mobile devices has significantly impacted the way we generate and share contents and introduced a huge volume of Internet traffic. To address this issue and take advantage of the short-range communication capabilities of smart mobile devices, the decentralized content sharing approach has emerged as a suitable and promising alternative. Decentralized content sharing uses a peer-to-peer network among co-located smart mobile device users to fulfil content requests. Several articles have been published to date to address its different aspects including group management, interest extraction, message forwarding, participation incentive, and content replication. This survey paper summarizes and critically analyzes recent advancements in decentralized content sharing and highlights potential research issues that need further consideration. © 2022 Chongqing University of Posts and Telecommunications
An evidence theoretic approach for traffic signal intrusion detection
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Das, Rajkumar, Newaz, Shah
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Das, Rajkumar , Newaz, Shah
- Date: 2023
- Type: Text , Journal article
- Relation: Sensors Vol. 23, no. 10 (2023), p. 4646
- Full Text:
- Reviewed:
- Description: The increasing attacks on traffic signals worldwide indicate the importance of intrusion detection. Existing traffic signal Intrusion Detection Systems (IDSs) that rely on inputs from connected vehicles and image analysis techniques can only detect intrusions created by spoofed vehicles. However, these approaches fail to detect intrusions arising from attacks on in-road sensors, traffic controllers, and signals. In this paper, we propose an IDS based on detecting anomalies associated with flow rate, phase time, and vehicle speed, which is a significant extension of our previous work using additional traffic parameters and statistical tools. We theoretically modelled our system using Dempster-Shafer decision theory, considering the instantaneous observations of traffic parameters and their relevant historical normal traffic data. We also used Shannon's entropy to determine the uncertainty associated with the observations. To validate our work, we developed a simulation model based on the traffic simulator SUMO using many real scenarios and data recorded by the Victorian Transportation Authority, Australia. The scenarios for abnormal traffic conditions were generated considering attacks such as jamming, Sybil, and false data injection attacks. The results show that the overall detection accuracy of our proposed system is 79.3% with fewer false alarms.
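The evidence-fusion step described above relies on Dempster's rule of combination. The sketch below uses a two-element frame (N = normal, A = attack) and purely illustrative mass values; in the paper the masses come from observed traffic parameters and the uncertainty mass is derived via Shannon's entropy.

```python
def combine(m1, m2):
    # Dempster's rule of combination over the frame {'N', 'A'}
    # keys: 'N' = normal, 'A' = attack, 'NA' = uncertain (whole frame)
    inter = lambda x, y: ''.join(c for c in 'NA' if c in x and c in y)
    fused, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = inter(b, c)
            if a:
                fused[a] = fused.get(a, 0.0) + mb * mc
            else:
                conflict += mb * mc  # contradictory evidence
    # normalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

# illustrative masses from two traffic parameters (not from the paper)
flow  = {'A': 0.6, 'N': 0.1, 'NA': 0.3}
speed = {'A': 0.5, 'N': 0.2, 'NA': 0.3}
fused = combine(flow, speed)
print({k: round(v, 3) for k, v in fused.items()})
# -> {'A': 0.759, 'N': 0.133, 'NA': 0.108}
```

Note how the combined belief in 'A' exceeds what either source supports alone: agreeing evidence reinforces, while conflicting mass is discarded and the rest renormalised.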