A robust forgery detection method for copy-move and splicing attacks in images
- Islam, Mohammad, Karmakar, Gour, Kamruzzaman, Joarder, Murshed, Manzur
- Authors: Islam, Mohammad , Karmakar, Gour , Kamruzzaman, Joarder , Murshed, Manzur
- Date: 2020
- Type: Text , Journal article
- Relation: Electronics Vol. 9, no. 9 (2020), p. 1-22
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) image sensors, social media, and smartphones generate huge volumes of digital images every day. The easy availability and usability of photo editing tools have made forgery attacks, primarily splicing and copy-move attacks, effortless, causing cybercrimes to be on the rise. While several models have been proposed in the literature for detecting these attacks, the robustness of those models has not been investigated when (i) a low number of tampered images is available for model building or (ii) images from IoT sensors are distorted due to rotation or scaling caused by unwanted or unexpected changes in the sensors' physical set-up. Moreover, further improvement in detection accuracy is needed for real-world security management systems. To address these limitations, this paper proposes an image forgery detection method based on the Discrete Cosine Transformation (DCT), the Local Binary Pattern (LBP), and a new feature extraction method using the mean operator. First, images are divided into non-overlapping, fixed-size blocks and the 2D block DCT is applied to capture changes due to image forgery. LBP is then applied to the magnitude of the DCT array to enhance the forgery artifacts. Finally, the mean value of a particular cell across all LBP blocks is computed, which yields a fixed number of features and a more computationally efficient method. Using a Support Vector Machine (SVM), the proposed method has been extensively tested on four well-known, publicly available grayscale and color image forgery datasets, and additionally on an IoT-based image forgery dataset that we built. Experimental results reveal the superiority of the proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time, and demonstrate robustness against low availability of forged training samples.
- Description: This research was funded by Research Priority Area (RPA) scholarship of Federation University Australia.
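The block-DCT/LBP/mean pipeline described in the abstract is concrete enough to sketch. The following is a minimal illustration, not the authors' implementation: the block size, the basic 8-neighbour LBP variant, and the handling of block borders are all assumptions.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II computed via the DCT matrix (no SciPy needed)."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

def lbp(block):
    """Basic 8-neighbour LBP codes for the interior pixels of a block."""
    centre = block[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = block[1 + dy:block.shape[0] - 1 + dy, 1 + dx:block.shape[1] - 1 + dx]
        code |= (nb >= centre).astype(np.uint8) << bit
    return code

def forgery_features(image, b=8):
    """Fixed-length feature vector: mean of each LBP cell over all blocks."""
    h, w = image.shape
    blocks = [image[y:y + b, x:x + b]
              for y in range(0, h - h % b, b)
              for x in range(0, w - w % b, b)]
    lbp_blocks = [lbp(np.abs(dct2(blk.astype(float)))) for blk in blocks]
    return np.mean(lbp_blocks, axis=0).ravel()  # (b - 2)^2 features per image
```

Because the feature count depends only on the block size, not the image size, the SVM sees a fixed-length input regardless of image resolution, which is what makes the mean-operator step computationally attractive.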
An adaptive approach to opportunistic data forwarding in underwater acoustic sensor networks
- Nowsheen, Nusrat, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Nowsheen, Nusrat , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: Reliable data transfer for underwater acoustic sensor networks (UASNs) is a major research challenge in applications such as pollution monitoring, oceanic data collection, and surveillance, due to the long propagation delay and high error rate of the acoustic channel. To address this issue, an opportunistic data forwarding protocol was proposed which achieves a high packet delivery success ratio with less routing overhead and energy consumption by selecting the next-hop forwarder among a set of candidates based on its link reliability and data transfer reachability. However, the protocol relies on a fixed data hold time approach, i.e., each node holds data packets for a fixed amount of time before a forwarder discovery process is initiated. Depending on the value of the fixed hold time and the deployment scenario, this may incur a large end-to-end delay. Moreover, the lack of consideration of network conditions in setting the hold time limits its performance. In this paper, we propose an adaptive technique to improve its performance. The adaptive approach calculates the data hold time at each node dynamically, considering a number of node and network metrics including current buffer occupancy, delay experienced by stored data packets, arrival and service rates, and neighbors' data transmissions and reachability. Simulation results show that, compared with the fixed hold time approach, our adaptive technique reduces end-to-end delay significantly, achieves considerably higher data delivery, and consumes less energy per successful packet delivery.
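The abstract lists the inputs to the adaptive hold-time computation but not the formula itself; the weighted linear combination below is purely illustrative, with made-up weights, and only conveys the idea that a node under pressure (full buffer, high load, reachable neighbours) should shorten its hold time.

```python
def adaptive_hold_time(base_hold, occupancy, load, reachability,
                       w=(0.4, 0.3, 0.3)):
    """Illustrative adaptive hold time: a node with high buffer occupancy,
    high traffic load, or well-reachable neighbours holds packets for less
    time before initiating forwarder discovery.
    occupancy, load and reachability are assumed normalised to [0, 1]."""
    pressure = w[0] * occupancy + w[1] * load + w[2] * reachability
    return base_hold * (1.0 - min(max(pressure, 0.0), 1.0))
```

Under this sketch, an idle node falls back to the full fixed hold time, while a congested node with good forwarding opportunities releases packets almost immediately, which is the qualitative behaviour the paper's end-to-end delay results describe.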
QoS support in event detection in WSN through optimal k-coverage
- Alam, Kh Mahmudul, Kamruzzaman, Joarder, Karmakar, Gour, Murshed, Manzur, Azad, Arman
- Authors: Alam, Kh Mahmudul , Kamruzzaman, Joarder , Karmakar, Gour , Murshed, Manzur , Azad, Arman
- Date: 2011
- Type: Text , Conference paper
- Relation: 11th International Conference on Computational Science, ICCS 2011; Singapore, Singapore; 1st-3rd June 2011; published in Procedia Computer Science Vol. 4, p. 499-507
- Full Text:
- Reviewed:
- Description: Wireless sensor networks promise to guarantee accurate, fault-tolerant and timely detection of events in large-scale sensor fields. To achieve this, the notion of k-coverage is widely employed in WSNs, where significant redundancy is introduced in deployment so that an event is expected to be sensed by at least k sensors in the neighborhood. As sensor density increases significantly with k, it is imperative to find the optimal k for the underlying event detection system. In this work, we consider the detection probability, fault tolerance and latency as the Quality of Service (QoS) metrics of an event detection system employing k-coverage, and present a probabilistic model to guarantee a given QoS support with the minimum degree of coverage, taking into account noise-related measurement error, communication interference and sensor fault probability. This work resolves the problem of over- or under-deployment of sensors, increases scalability and provides a well-defined mechanism to tune the degree of coverage according to performance needs.
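The core quantity in such a model — the probability that an event is sensed by at least k of the n sensors covering it — can be sketched with a binomial tail. The independence assumption and the way sensor faults are folded into a single per-sensor probability are simplifications of the paper's model, which also accounts for measurement noise and interference.

```python
from math import comb

def detection_probability(n, k, p_sense, p_fault=0.0):
    """P(at least k of n covering sensors detect the event), assuming each
    sensor is independently non-faulty with probability (1 - p_fault) and,
    when working, senses the event with probability p_sense."""
    p = p_sense * (1.0 - p_fault)
    return sum(comb(n, j) * p**j * (1.0 - p)**(n - j) for j in range(k, n + 1))

def min_coverage_degree(target_qos, p_sense, p_fault=0.0, k_required=1, max_n=50):
    """Smallest coverage degree n that meets the detection-probability
    target — the quantity that avoids over- or under-deployment."""
    for n in range(k_required, max_n + 1):
        if detection_probability(n, k_required, p_sense, p_fault) >= target_qos:
            return n
    return None
```

For example, with a 0.9 per-sensor detection probability, 2-coverage already pushes the miss probability down to 0.01, illustrating why the required degree of coverage can be tuned directly from the QoS target.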
Reverse engineering genetic networks using nonlinear saturation kinetics
- Youseph, Ahammed, Chetty, Madhu, Karmakar, Gour
- Authors: Youseph, Ahammed , Chetty, Madhu , Karmakar, Gour
- Date: 2019
- Type: Text , Journal article
- Relation: BioSystems Vol. 182 (2019), p. 30-41
- Full Text:
- Reviewed:
- Description: A gene regulatory network (GRN) represents a set of genes along with their regulatory interactions. Cellular behavior is driven by genetic-level interactions. The dynamics of such systems show nonlinear saturation kinetics, which are best modeled by Michaelis-Menten (MM) and Hill equations. Although the MM equation is widely used for modeling biochemical processes, it has rarely been applied to reverse engineering GRNs. In this paper, we develop a complete framework for a novel GRN inference model using MM kinetics. A set of coupled equations is first proposed for modeling GRNs. In the coupled model, the Michaelis-Menten constant associated with regulation by a gene is kept invariant irrespective of the gene being regulated. The parameter estimation of the proposed model is carried out using an evolutionary optimization method, namely trigonometric differential evolution (TDE). Subsequently, the model is improved further: the regulations of different genes by a given gene are made distinct by allowing varying values of the Michaelis-Menten constant for each regulation. Apart from making the model more biologically relevant, the improvement results in a decoupled GRN model with fast estimation of model parameters. Further, to enhance exploitation of the search, we propose a local search algorithm based on hill-climbing heuristics. A novel mutation operation is also proposed to avoid population stagnation and premature convergence. Real-life benchmark data sets generated in vivo are used for validating the proposed model. We also analyze realistic in silico datasets generated using GeneNetWeaver. The comparison of the proposed model's performance with other existing methods shows its potential.
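The decoupled model's dynamics can be sketched as a system of Michaelis-Menten ODEs with a per-regulation constant. The exact equation form, the linear degradation term, and the parameter names below are assumptions based on the abstract, and a plain Euler step stands in for the paper's TDE-based parameter estimation.

```python
import numpy as np

def grn_step(x, v, K, d, dt=0.01):
    """One Euler step of a Michaelis-Menten GRN model:
        dx_i/dt = sum_j v[i, j] * x_j / (K[i, j] + x_j) - d[i] * x_i
    x: gene expression levels; v[i, j]: maximal rate of regulation of gene i
    by gene j; K[i, j]: per-regulation Michaelis-Menten constant (letting K
    vary with the regulated gene is the 'decoupled' refinement); d: decay."""
    regulation = (v * (x / (K + x))).sum(axis=1)
    return x + dt * (regulation - d * x)
```

The saturation term x_j / (K + x_j) is what gives the model its nonlinear saturation kinetics: regulation grows roughly linearly for small x_j but levels off at the maximal rate v for large x_j.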
A comprehensive spectrum trading scheme based on market competition, reputation and buyer specific requirements
- Hassan, Md Rakib, Karmakar, Gour, Kamruzzaman, Joarder, Srinivasan, Bala
- Authors: Hassan, Md Rakib , Karmakar, Gour , Kamruzzaman, Joarder , Srinivasan, Bala
- Date: 2015
- Type: Text , Journal article
- Relation: Computer Networks Vol. 84 (2015), p. 17-31
- Full Text:
- Reviewed:
- Description: In the exclusive-use model of spectrum trading, cognitive radio devices or secondary users can buy spectrum resources from licensed users or primary users for a short or long period of time. Considering such spectrum access, a trading model is introduced where a buyer can select a set of candidate sellers based on their reputation and their offers in fulfilling its requirements, namely offered signal quality, contract duration, coverage and bandwidth. Similarly, a seller can assess a buyer as a potential trading partner considering the buyer's reliability, which the seller can derive from the buyer's reputation and financial profile. In our scheme, seller reputation or buyer reliability can either be obtained from a reputation brokerage service, if one exists, or calculated using our model. Since, in a competitive market, the price of a seller depends on that of other sellers, game theory is used to model the competition among multiple sellers. An optimization technique is used by a buyer to select the best seller(s) and optimize the purchase to maximize its utility. This may result in buying a certain amount of bandwidth from each of multiple sellers, depending on price, while meeting requirement and budget constraints. Stability of the model is analyzed, and performance evaluation shows that it benefits sellers and buyers in terms of profit and throughput, respectively. © 2015 Elsevier B.V. All rights reserved.
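The paper's buyer-side procedure (game-theoretic pricing plus utility optimisation) cannot be reproduced from the abstract; the greedy sketch below only illustrates its outcome — splitting a bandwidth purchase across multiple reputable sellers under a budget. The reputation floor and the cheapest-first rule are assumptions, not the paper's optimisation.

```python
def allocate_bandwidth(offers, demand, budget, min_reputation=0.5):
    """offers: list of (price_per_unit, available_units, seller_reputation).
    Buy cheapest-first from sellers above the reputation floor until the
    demand is met or the budget runs out.
    Returns (purchase_plan, unmet_demand)."""
    plan = []
    eligible = sorted(o for o in offers if o[2] >= min_reputation)
    for price, units, _rep in eligible:
        if demand <= 0 or budget <= 0:
            break
        buy = min(units, demand, budget / price)
        if buy > 0:
            plan.append((price, buy))
            demand -= buy
            budget -= price * buy
    return plan, demand
```

Even this simple rule reproduces the multi-seller outcome the abstract mentions: when the cheapest reputable seller cannot cover the whole demand, the remainder is bought from the next seller as long as the budget allows.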
Decentralized content sharing among tourists in visiting hotspots
- Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour, Gondal, Iqbal
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour , Gondal, Iqbal
- Date: 2017
- Type: Text , Journal article
- Relation: Journal of Network and Computer Applications Vol. 79 (2017), p. 25-40
- Full Text:
- Reviewed:
- Description: Content sharing with smart mobile devices using a decentralized approach enables users to share contents without any fixed infrastructure, and thereby offers a free-of-cost platform that does not add to Internet traffic which, in its current state, is approaching a capacity bottleneck. Most of the existing decentralized approaches in the literature rely on spatio-temporal regularity in human movement patterns and pre-existing social relationships for the sharing scheme to work. However, such predictable movement patterns and social relationship information are not available in places like tourist spots, where people visit only for a short period of time and usually meet strangers. No work exists in the literature that deals with content sharing in such environments. In this work, we propose a content sharing approach for such environments. The group formation mechanism is based on users' interest scores and stay probabilities in the individual region of interest (ROI), as well as on the availability and delivery probabilities of contents in the group. The administrator of each group is selected by taking into account its probability of stay in the ROI, its connectivity with other nodes, its trustworthiness, and its computing and energy resources to serve the group. We have also adopted an incentive mechanism that rewards nodes for sharing and forwarding contents. We have used the network simulator NS3 to perform extensive simulations of a popular tourist spot in Australia that hosts a number of activities. The proposed approach shows promising results in sharing contents among tourists, measured in terms of content hit, delivery success rate and latency.
Cuboid colour image segmentation using intuitive distance measure
- Tania, Sheikh, Murshed, Manzur, Teng, Shyh, Karmakar, Gour
- Authors: Tania, Sheikh , Murshed, Manzur , Teng, Shyh , Karmakar, Gour
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 2018 International Conference on Image and Vision Computing New Zealand, IVCNZ 2018; Auckland, New Zealand; 19th-21st November 2018 Vol. 2018-November, p. 1-6
- Full Text:
- Reviewed:
- Description: In this paper, an improved algorithm for cuboid image segmentation is proposed. To address the two main limitations of the recently proposed cuboid segmentation algorithm, the improved algorithm replaces colour quantization in the HCL colour space with the infinity-norm distance in the RGB colour space, along with a different way of imposing area thresholding. We also propose a new metric to evaluate segmentation quality. Experimental results show that the proposed algorithm significantly outperforms the existing cuboid segmentation algorithm in terms of segmentation quality.
- Description: International Conference Image and Vision Computing New Zealand
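The substitution at the heart of the improved algorithm — the infinity-norm distance in RGB space — is easy to illustrate. The merge threshold below is a made-up value for illustration; the paper's actual thresholding scheme is not given in the abstract.

```python
def linf_distance(c1, c2):
    """Infinity-norm (Chebyshev) distance between two RGB colours:
    the largest per-channel absolute difference."""
    return max(abs(a - b) for a, b in zip(c1, c2))

def similar_colour(c1, c2, threshold=20):
    """Sketch of a merge test: two colours whose channels all differ by at
    most `threshold` are treated as belonging to the same cuboid."""
    return linf_distance(c1, c2) <= threshold
```

Unlike a quantization step, this distance needs no colour-space conversion, and its "all channels must be close" semantics makes the similarity test intuitive: one badly mismatched channel is enough to keep two colours apart.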
Carry me if you can: A utility-based forwarding scheme for content sharing in tourist destinations
- Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour, Gondal, Iqbal
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour , Gondal, Iqbal
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 22nd Asia-Pacific Conference on Communications, APCC 2016; Yogyakarta, Indonesia; 25th-27th August 2016 p. 261-267
- Full Text:
- Reviewed:
- Description: Message forwarding is an integral part of the decentralized content sharing process, as content delivery success highly depends on it. The existing literature employs spatio-temporal regularity of human movement patterns and pre-existing social relationships to make message forwarding decisions. However, such approaches are ineffectual in environments where that information is unavailable, such as a tourist spot or camping site. In this study, we explore message forwarding techniques in such environments, considering information that is readily available and can be gathered on the fly. We propose a utility-based forwarding scheme to select the appropriate forwarder node based on co-location stay time, connectivity and available resources. A higher co-location stay time reflects that the forwarder and the destination node are likely to have more opportunistic contacts, while the connectivity and available resources ensure that the selected forwarder has sufficient neighbours and resources to carry the message forward. Simulation results suggest that the proposed approach attains high hit and success rates and low latency for successful content delivery, comparable to those of schemes proposed for workplace-type scenarios with regular movement patterns and pre-existing relationships. © 2016 IEEE.
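A minimal sketch of the utility-based forwarder choice described above: each candidate is scored on normalised co-location stay time, connectivity, and available resources, and the top scorer carries the message. The weights and the linear utility form are assumptions; the paper's exact utility function is not stated in the abstract.

```python
def select_forwarder(candidates, w=(0.5, 0.3, 0.2)):
    """candidates: dict node_id -> (co_location_stay, connectivity,
    resources), each attribute assumed normalised to [0, 1].
    Returns the node with the highest weighted utility."""
    def utility(attrs):
        return sum(wi * a for wi, a in zip(w, attrs))
    return max(candidates, key=lambda node: utility(candidates[node]))
```

Weighting co-location stay time most heavily reflects the abstract's reasoning: a forwarder that stays near the destination longer gets more opportunistic contacts, while connectivity and resources act as secondary tie-breakers.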
Low-power wide-area networks: design goals, architecture, suitability to use cases and research challenges
- Buurman, Ben, Kamruzzaman, Joarder, Karmakar, Gour, Islam, Syed
- Authors: Buurman, Ben , Kamruzzaman, Joarder , Karmakar, Gour , Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 17179-17220
- Full Text:
- Reviewed:
- Description: Previous survey articles on Low-Powered Wide-Area Networks (LPWANs) lack a systematic analysis of the design goals of LPWAN and the design decisions adopted by various commercially available and emerging LPWAN technologies, and no study has analysed how their design decisions impact their ability to meet design goals. Assessing a technology's ability to meet design goals is essential in determining suitable technologies for a given application. To address these gaps, we have analysed six prominent design goals and identified the design decisions used to meet each goal in the eight LPWAN technologies, ranging from technical considerations to business models, and determined which specific technique in a design decision will help meet each goal to the greatest extent. System architecture and specifications are presented for those LPWAN solutions, and their ability to meet each design goal is evaluated. We outline seventeen use cases across twelve domains that require large low power network infrastructure and prioritise each design goal's importance to those applications as Low, Moderate, or High. Using these priorities and each technology's suitability for meeting design goals, we suggest appropriate LPWAN technologies for each use case. Finally, a number of research challenges are presented for current and future technologies. © 2013 IEEE.
Hierarchical colour image segmentation by leveraging RGB channels independently
- Tania, Sheikh, Murshed, Manzur, Teng, Shyh, Karmakar, Gour
- Authors: Tania, Sheikh , Murshed, Manzur , Teng, Shyh , Karmakar, Gour
- Date: 2019
- Type: Text , Conference paper
- Relation: 9th Pacific-Rim Symposium on Image and Video Technology, PSIVT 2019 Vol. 11854 LNCS, p. 197-210
- Full Text:
- Reviewed:
- Description: In this paper, we introduce a hierarchical colour image segmentation based on cuboid partitioning using simple statistical features of the pixel intensities in the RGB channels. Estimating the difference between any two colours is a challenging task. As most colour models are not perceptually uniform, investigation of an alternative strategy is highly desirable. To address this issue, for our proposed technique, we present a new concept for colour distance measurement based on the inconsistency of pixel intensities of an image, which is more consistent with human perception. Constructing a reliable set of superpixels from an image is fundamental for further merging. As cuboid partitioning is a superior candidate for producing superpixels, we use agglomerative merging to yield the final segmentation results, exploiting the outcome of our proposed cuboid partitioning. The proposed cuboid segmentation based algorithm significantly outperforms not only quadtree-based segmentation but also existing state-of-the-art segmentation algorithms in terms of quality of segmentation for the benchmark datasets used in image segmentation. © 2019, Springer Nature Switzerland AG.
PCA based population generation for genetic network optimization
- Youseph, Ahammed, Chetty, Madhu, Karmakar, Gour
- Authors: Youseph, Ahammed , Chetty, Madhu , Karmakar, Gour
- Date: 2018
- Type: Text , Journal article
- Relation: Cognitive Neurodynamics Vol. 12, no. 4 (2018), p. 417-429
- Full Text:
- Reviewed:
- Description: A gene regulatory network (GRN) represents a set of genes and their regulatory interactions. The inference of the regulatory interactions between genes is usually carried out using an appropriate mathematical model and the available gene expression profile. Among the various models proposed for GRN inference, our recently proposed Michaelis–Menten based ODE model provides a good trade-off between computational complexity and biological relevance. This model, like other known GRN models, also uses an evolutionary algorithm for parameter estimation. Considering various issues associated with such population based stochastic optimization approaches (e.g. diversity, premature convergence due to local optima, accuracy, etc.), it becomes important to seed the initial population with good individuals which are closer to the optimal solution. In this paper, we exploit the inherent strength of principal component analysis (PCA) in a novel manner to initialize the population for GRN optimization. The benefit of the proposed method is validated by reconstructing in silico and in vivo networks of various sizes. For the same level of accuracy, the approach with PCA based initialization shows improved convergence speed.
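The PCA-seeded initialization idea described in this abstract can be illustrated with a minimal sketch (an illustration only, not the authors' method: here each individual is simply the leading principal direction of the expression data plus Gaussian noise, and the function name, noise model and parameterization are all hypothetical):

```python
import numpy as np

def pca_seed_population(expression, pop_size, noise_scale=0.1, seed=0):
    """Seed an initial population for GRN parameter search near the
    dominant structure of the data, instead of sampling uniformly.

    expression : (samples, genes) array of gene expression values.
    Returns a (pop_size, genes) array of candidate individuals.
    """
    rng = np.random.default_rng(seed)
    # Centre the data and take principal directions via SVD.
    centered = expression - expression.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    leading = vt[0]  # first principal component (unit vector)
    # Each individual = leading direction + small Gaussian perturbation,
    # so the population starts clustered around the main data axis.
    return leading + noise_scale * rng.standard_normal((pop_size, leading.size))
```

In a real evolutionary run, these seeded individuals would replace (or augment) the usual random initial population before the first generation.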
A survey on context awareness in big data analytics for business applications
- Dinh, Loan, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Dinh, Loan , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: Knowledge and Information Systems Vol. 62, no. 9 (2020), p. 3387-3415
- Full Text:
- Reviewed:
- Description: The concept of context awareness has been in existence since the 1990s. Though initially applied exclusively in computer science, over time it has increasingly been adopted by many different application domains such as business, health and military. Contexts change continuously because of objective reasons, such as economic situations, political matters and social issues. The adoption of big data analytics by businesses is facilitating such change at an even faster rate and in much more complicated ways. The potential benefits of embedding contextual information into an application are already evidenced by the improved outcomes of existing context-aware methods in those applications. Since big data is growing very rapidly, context awareness in big data analytics has become more important and timely because of its proven efficiency in big data understanding and preparation, contributing to extracting more accurate value from big data. Many surveys have been published on context-based methods such as context modelling and reasoning, workflow adaptations, computational intelligence techniques and mobile ubiquitous systems. However, to our knowledge, no survey of context-aware methods in big data analytics for business applications supported by enterprise level software has been published to date. To bridge this research gap, in this paper, first, we present a definition of context, its modelling and evaluation techniques, and highlight the importance of contextual information for big data analytics. Second, the works in three key business application areas that are context-aware and/or exploit big data analytics have been thoroughly reviewed. Finally, the paper concludes by highlighting a number of contemporary research challenges, including issues concerning modelling, managing and applying business contexts to big data analytics. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
How much I can rely on you : measuring trustworthiness of a twitter user
- Das, Rajkumar, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Das, Rajkumar , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Dependable and Secure Computing Vol. 18, no. 2 (2021), p. 949-966
- Full Text:
- Reviewed:
- Description: Trustworthiness in an online environment is essential because individuals and organizations can easily be misled by false and malicious information received from untrustworthy users. Though existing methods assess users' trustworthiness by exploiting Twitter account properties, their efficacy is inadequate because of Twitter's restriction on profile and tweet size, the existence of missing or insufficient profiles, and the ease of creating fake accounts or relationships to appear trustworthy. In this paper, we present a holistic approach that exploits ideas perceived from real-world organizations for trust estimation along with available Twitter information. Users' trustworthiness is determined by considering their credentials, recommendations from referees and the quality of the information in their Twitter accounts and tweets. We establish the feasibility of our approach analytically and further devise a multi-objective cost function for the A
Assessing transformer oil quality using deep convolutional networks
- Alam, Mohammad, Karmakar, Gour, Islam, Syed, Kamruzzaman, Joarder, Chetty, Madhu, Lim, Suryani, Appuhamillage, Gayan, Chattopadhyay, Gopi, Wilcox, Steve, Verheyen, Vincent
- Authors: Alam, Mohammad , Karmakar, Gour , Islam, Syed , Kamruzzaman, Joarder , Chetty, Madhu , Lim, Suryani , Appuhamillage, Gayan , Chattopadhyay, Gopi , Wilcox, Steve , Verheyen, Vincent
- Date: 2019
- Type: Text , Conference proceedings , Conference paper
- Relation: 29th Australasian Universities Power Engineering Conference, AUPEC 2019
- Full Text:
- Reviewed:
- Description: Electrical power grids comprise a significantly large number of transformers that interconnect power generation, transmission and distribution. These transformers, which have different MVA ratings, are critical assets that require proper maintenance to provide long and uninterrupted electrical service. Mineral oil, an essential component of any transformer, not only provides cooling but also acts as an insulating medium within the transformer. The quality and the key dissolved properties of the insulating mineral oil are critical to a transformer's proper and reliable operation. However, traditional chemical diagnostic methods are expensive and time-consuming. A transformer oil image analysis approach based on the entropy value of oil is inexpensive, effective and quick. However, the inability of entropy to estimate vital transformer oil properties such as equivalent age, Neutralization Number (NN), dissipation factor (tanδ) and power factor (PF), together with its use of many intuitively derived constants, limits its estimation accuracy. To address this issue, in this paper, we introduce an innovative transformer oil analysis using two deep convolutional learning techniques, namely Convolutional Neural Network (ConvNet) and Residual Neural Network (ResNet). These two deep neural networks are chosen for this project as they have superior performance in computer vision. After estimating the equivalent aging year of transformer oil from its image by our proposed method, NN, tanδ and PF are computed using that estimated age. Our deep learning based techniques can accurately predict the transformer oil equivalent age, enabling NN, tanδ and PF to be calculated more accurately. The root mean square errors of the equivalent age estimated by the entropy, ConvNet and ResNet based methods are 0.718, 0.122 and 0.065, respectively.
ConvNet and ResNet based methods have reduced the error of the oil age estimation by 83% and 91%, respectively, compared to that of the entropy method. Our proposed oil image analysis can calculate an equivalent age that is very close to the actual age for all images used in the experiment. © 2019 IEEE.
- Description: E1
Passive detection of splicing and copy-move attacks in image forgery
- Islam, Mohammad, Kamruzzaman, Joarder, Karmakar, Gour, Murshed, Manzur, Kahandawa, Gayan
- Authors: Islam, Mohammad , Kamruzzaman, Joarder , Karmakar, Gour , Murshed, Manzur , Kahandawa, Gayan
- Date: 2018
- Type: Text , Conference proceedings , Conference paper
- Relation: 25th International Conference on Neural Information Processing, ICONIP 2018; Siem Reap, Cambodia; 13th-16th December 2018; published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 11304 LNCS, p. 555-567
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) image sensors for surveillance and monitoring, digital cameras, smart phones and social media generate huge volumes of digital images every day. Image splicing and copy-move attacks are the most common types of image forgery and can be done very easily using modern photo editing software. Recently, digital forensics has drawn much attention to detecting such tampering on images. In this paper, we introduce a novel feature extraction technique, namely Sum of Relevant Inter-Cell Values (SRIV), using which we propose a passive (blind) image forgery detection method based on Discrete Cosine Transformation (DCT) and Local Binary Pattern (LBP). First, the input image is divided into non-overlapping blocks and 2D block DCT is applied to capture the changes of a tampered image in the frequency domain. Then an LBP operator is applied to enhance the local changes among the neighbouring DCT coefficients, magnifying the changes in high frequency components resulting from splicing and copy-move attacks. The resulting LBP image is again divided into non-overlapping blocks. Finally, SRIV is applied on the LBP image blocks to extract features which are then fed into a Support Vector Machine (SVM) classifier to identify forged images from authentic ones. Extensive experiments on four well-known benchmark datasets of tampered images reveal the superiority of our method over recent state-of-the-art methods.
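The block-DCT and LBP feature extraction steps described in this abstract can be sketched roughly as follows (a minimal illustration under assumed choices — 8×8 blocks, a basic 8-neighbour LBP, and plain per-cell summation standing in for SRIV — not the authors' implementation):

```python
import numpy as np
from scipy.fft import dctn

def block_dct_lbp_features(image, block=8):
    """Sketch of a block-DCT -> LBP -> per-cell aggregation pipeline.

    image : 2D grayscale array. Returns a fixed-length feature vector
    of size block*block, independent of the image dimensions.
    """
    h, w = image.shape
    h, w = h - h % block, w - w % block
    image = image[:h, :w].astype(float)
    # 1) Magnitude of the 2D DCT of each non-overlapping block.
    mag = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            mag[i:i+block, j:j+block] = np.abs(
                dctn(image[i:i+block, j:j+block], norm='ortho'))
    # 2) Basic 8-neighbour LBP over the DCT magnitude array.
    lbp = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = mag[1:-1, 1:-1]
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (di, dj) in enumerate(offsets):
        neighbour = (mag[1+di:h-1+di, 1+dj:w-1+dj] >= centre).astype(np.uint8)
        lbp |= neighbour << np.uint8(bit)
    # 3) Sum corresponding cells across all LBP blocks, giving a
    # fixed number of features regardless of image size.
    bh, bw = lbp.shape[0] // block, lbp.shape[1] // block
    blocks = lbp[:bh*block, :bw*block].reshape(bh, block, bw, block)
    return blocks.sum(axis=(0, 2)).ravel().astype(float)
```

A classifier such as an SVM would then be trained on these vectors, one per image, to separate forged from authentic images.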
Detecting splicing and copy-move attacks in color images
- Islam, Mohammad, Karmakar, Gour, Kamruzzaman, Joarder, Murshed, Manzur, Kahandawa, Gayan, Parvin, Nahida
- Authors: Islam, Mohammad , Karmakar, Gour , Kamruzzaman, Joarder , Murshed, Manzur , Kahandawa, Gayan , Parvin, Nahida
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018; Canberra, Australia; 10th-13th December 2018 p. 1-7
- Full Text:
- Reviewed:
- Description: Image sensors are generating limitless digital images every day. Image forgeries like splicing and copy-move are very common types of attacks that are easy to execute using sophisticated photo editing tools. As a result, digital forensics has attracted much attention to identifying such tampering on digital images. In this paper, a passive (blind) image tampering identification method based on Discrete Cosine Transformation (DCT) and Local Binary Pattern (LBP) has been proposed. First, the chroma components of an image are divided into fixed-size non-overlapping blocks and 2D block DCT is applied to identify the changes due to forgery in the local frequency distribution of the image. Then a texture descriptor, LBP, is applied on the magnitude component of the 2D-DCT array to enhance the artifacts introduced by the tampering operation. The resulting LBP image is again divided into non-overlapping blocks. Finally, summations of corresponding inter-cell values of all the LBP blocks are computed and arranged as a feature vector. These features are fed into a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel to distinguish forged images from authentic ones. The proposed method has been evaluated extensively on three publicly available, well-known image splicing and copy-move detection benchmark datasets of color images. Results demonstrate the superiority of the proposed method over recently proposed state-of-the-art approaches in terms of well-accepted performance metrics such as accuracy, area under ROC curve and others.
- Description: 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018
A dynamic content distribution scheme for decentralized sharing in tourist hotspots
- Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour
- Date: 2019
- Type: Text , Journal article
- Relation: Journal of Network and Computer Applications Vol. 129, no. (2019), p. 9-24
- Full Text:
- Reviewed:
- Description: Decentralized content sharing (DCS) is emerging as a suitable platform for smart mobile device users to generate and share contents seamlessly without the requirement of a centralized server. This feature is particularly important for places that lack Internet coverage, such as tourist attractions, where users can form an ad-hoc network and communicate opportunistically to share contents. Existing DCS approaches, when applied to such places, suffer from low delivery success rates and high latency. Although a handful of recent approaches have specifically targeted improvement of content delivery service in tourist-spot-like scenarios, these and other DCS approaches do not focus on contents' demand and supply, which vary considerably due to visitor in-and-out flow and the occurrence of influencing events. This is further compounded by the lack of any content distribution (replication) scheme. The content delivery service will be improved if contents can be proactively distributed in strategic positions based on dynamic demand and supply and medium access contention. In this paper, we propose a dynamic content distribution scheme (DCDS) considering these practical issues for sharing contents in tourist attractions. Simulation results show that the proposed approach significantly improves (7
Green demand aware fog computing : a prediction-based dynamic resource provisioning approach
- Khadhijah, Pg, Newaz, S., Rahman, Fatin, Lee, Gyu, Karmakar, Gour, Au, Thien-Wan
- Authors: Khadhijah, Pg , Newaz, S. , Rahman, Fatin , Lee, Gyu , Karmakar, Gour , Au, Thien-Wan
- Date: 2022
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 11, no. 4 (2022), p.
- Full Text:
- Reviewed:
- Description: Fog computing could potentially cause the next paradigm shift by extending cloud services to the edge of the network, bringing resources closer to the end-user. With its close proximity to end-users and its distributed nature, fog computing can significantly reduce latency. As ever more latency-stringent applications appear, the near future will see an unprecedented demand for fog computing. Undoubtedly, this will increase the energy footprint of the network edge and access segments. To reduce energy consumption in fog computing without compromising performance, in this paper we propose the Green-Demand-Aware Fog Computing (GDAFC) solution. Our solution uses a prediction technique to identify the working fog nodes (nodes that serve when requests arrive), standby fog nodes (nodes that take over when the computational capacity of the working fog nodes is no longer sufficient), and idle fog nodes in a fog computing infrastructure. Additionally, it assigns an appropriate sleep interval to the fog nodes, taking into account the delay requirements of the applications. Results obtained from the mathematical formulation show that our solution can save up to 65% energy without degrading delay performance. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
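The node-partitioning and sleep-interval assignment described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's formulation: the function name, the 20% standby headroom margin, and the capacity/demand units (requests per second) are all assumptions.

```python
def provision_fog_nodes(nodes, predicted_demand, max_delay_ms, wake_latency_ms):
    """Partition fog nodes into working / standby / idle sets based on
    predicted demand, in the spirit of GDAFC (illustrative sketch only).

    nodes            : list of {"id": ..., "capacity": requests/s}
    predicted_demand : forecast load in requests/s
    max_delay_ms     : application delay bound
    wake_latency_ms  : time a sleeping node needs to resume service
    """
    # Provision the largest nodes first so fewer nodes stay awake.
    nodes = sorted(nodes, key=lambda n: n["capacity"], reverse=True)
    working, standby, idle = [], [], []
    served = 0.0
    for n in nodes:
        if served < predicted_demand:
            working.append(n["id"])            # serves incoming requests
            served += n["capacity"]
        elif served < predicted_demand * 1.2:  # assumed 20% headroom margin
            standby.append(n["id"])            # takes over on overload
            served += n["capacity"]
        else:
            idle.append(n["id"])               # may sleep
    # An idle node's sleep interval must leave time to wake up before the
    # application delay bound would be violated.
    sleep_ms = max(0.0, max_delay_ms - wake_latency_ms)
    return working, standby, idle, sleep_ms
```

For example, five nodes of 100 requests/s each against a predicted demand of 260 requests/s yield three working nodes, one standby node, and one idle node that can sleep for the delay bound minus the wake-up latency.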
Attacks on self-driving cars and their countermeasures : a survey
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Jolfaei, Alireza, Das, Rajkumar
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Jolfaei, Alireza , Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITSs or connected vehicles. Both forms use data communications between Vehicle-To-Vehicle (V2V), Vehicle-To-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the compromise of self-driving cars is perceived as a serious threat. Therefore, analyzing the threats and attacks on self-driving cars and ITSs, and the corresponding countermeasures to reduce those threats and attacks, is needed. For this reason, some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to the best of our knowledge, they have not covered real attacks that have already occurred on self-driving cars. To bridge this research gap, in this paper we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks and their impacts on those cars, along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by the manufacturers and governments. This survey includes recent works on how a self-driving car can ensure resilient operation even under an ongoing cyber-attack. We also provide further research directions to improve the security issues associated with self-driving cars. © 2013 IEEE.
Exploring human mobility for multi-pattern passenger prediction : a graph learning framework
- Kong, Xiangjiea, Wang, Kailai, Hou, Mingliang, Xia, Feng, Karmakar, Gour, Li, Jianxin
- Authors: Kong, Xiangjiea , Wang, Kailai , Hou, Mingliang , Xia, Feng , Karmakar, Gour , Li, Jianxin
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 23, no. 9 (2022), p. 16148-16160
- Full Text:
- Reviewed:
- Description: Traffic flow prediction is an integral part of an intelligent transportation system and thus fundamental for various traffic-related applications. Buses are an indispensable mode of transport for urban residents, with fixed routes and schedules that lead to latent travel regularity. However, human mobility patterns, specifically the complex relationships between bus passengers, are deeply hidden in this fixed mobility mode. Although many models exist to predict traffic flow, human mobility patterns have not been well explored in this regard. To address this research gap and learn human mobility knowledge from these fixed travel behaviors, we propose a multi-pattern passenger flow prediction framework, MPGCN, based on Graph Convolutional Network (GCN). Firstly, we construct a novel sharing-stop network to model relationships between passengers based on bus record data. Then, we employ GCN to extract features from the graph by learning useful topology information and introduce a deep clustering method to recognize mobility patterns hidden in bus passengers. Furthermore, to fully utilize spatio-temporal information, we propose GCN2Flow to predict passenger flow based on various mobility patterns. To the best of our knowledge, this paper is the first work to adopt a multi-pattern approach to predict bus passenger flow by taking advantage of graph learning. We design a case study for optimizing routes. Extensive experiments on a real-world bus dataset demonstrate that MPGCN has potential efficacy in passenger flow prediction and route optimization. © 2000-2011 IEEE.
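The sharing-stop network described above can be sketched as follows: two passengers are linked when they use the same bus stop, and the edge weight counts their shared stops. This is a hypothetical reconstruction from the abstract; the function name, input format, and co-occurrence weighting are assumptions, not the paper's exact construction.

```python
from collections import defaultdict
from itertools import combinations

def sharing_stop_edges(trips):
    """Build a passenger-passenger sharing-stop graph (illustrative sketch).

    trips : dict mapping passenger id -> set of stop ids, e.g. derived
            from smart-card boarding/alighting records.
    Returns a dict mapping (pid_a, pid_b) -> number of stops both used.
    """
    # Invert the records: stop id -> set of passengers who used that stop.
    by_stop = defaultdict(set)
    for pid, stops in trips.items():
        for s in stops:
            by_stop[s].add(pid)
    # Every pair of passengers sharing a stop gains one unit of edge weight.
    weights = defaultdict(int)
    for pids in by_stop.values():
        for a, b in combinations(sorted(pids), 2):
            weights[(a, b)] += 1
    return dict(weights)
```

The resulting weighted adjacency could then feed a GCN layer as its graph input; three passengers where p1 and p2 share stop 2 and p2 and p3 share stop 3 yield exactly those two unit-weight edges.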