A multivariant stream analysis approach to detect and mitigate DDoS attacks in vehicular ad hoc networks
- Authors: Kolandaisamy, Raenu , Md Noor, Rafidah , Ahmedy, Ismail , Ahmad, Iftikhar , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: Wireless Communications and Mobile Computing Vol. 2018 (2018)
- Full Text:
- Reviewed:
- Description: Vehicular Ad Hoc Networks (VANETs) are rapidly gaining attention due to the diversity of services they can potentially offer. However, VANET communication is vulnerable to numerous security threats, such as Distributed Denial of Service (DDoS) attacks, and dealing with these attacks in VANETs is a challenging problem. Most existing DDoS detection techniques suffer from poor accuracy and high computational overhead. To cope with these problems, we present a novel Multivariant Stream Analysis (MVSA) approach. The proposed MVSA approach comprises multiple stages for detecting DDoS attacks in the network. The Multivariant Stream Analysis produces a distinct result based on Vehicle-to-Vehicle communication through the Road Side Unit. The approach observes traffic in different situations and time frames and maintains different rules for the various traffic classes in various time windows. The performance of MVSA is evaluated using the NS2 simulator. Simulation results demonstrate the effectiveness and efficiency of MVSA regarding detection accuracy and reducing the impact on VANET communication. © 2018 Raenu Kolandaisamy et al. **Please note that there are multiple authors for this article; therefore only the names of the first 5, including Federation University Australia affiliate “Muhammad Imran”, are provided in this record**
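The per-class rules over sliding time windows that the abstract describes can be sketched as follows. This is a minimal illustration only: the traffic classes, thresholds, and window length are invented stand-ins, not values from the paper.

```python
from collections import deque

class WindowedRateMonitor:
    """Toy sketch of windowed, per-class traffic analysis in the spirit
    of MVSA: message timestamps are tracked per traffic class over a
    sliding time window, and a class-specific rate rule flags anomalies.
    Class names and thresholds here are illustrative, not from the paper."""

    def __init__(self, window_seconds, thresholds):
        self.window = window_seconds
        self.thresholds = thresholds          # e.g. {"safety": 3, "infotainment": 10}
        self.events = {cls: deque() for cls in thresholds}

    def record(self, traffic_class, timestamp):
        q = self.events[traffic_class]
        q.append(timestamp)
        # Drop events that fell out of the sliding window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        # A class exceeding its per-window rate rule is flagged as suspect.
        return len(q) > self.thresholds[traffic_class]

monitor = WindowedRateMonitor(window_seconds=1.0,
                              thresholds={"safety": 3, "infotainment": 10})
# Six "safety" messages within half a second trip the 3-per-second rule.
alerts = [monitor.record("safety", t / 10) for t in range(6)]
```

A vehicle or Road Side Unit might run one such monitor per neighbor, escalating to heavier multi-stage checks only when a window rule fires.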
Novel one time signatures (NOTS): a compact post-quantum digital signature scheme
- Authors: Shahid, Furqan , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 15895-15906
- Full Text:
- Reviewed:
- Description: The future of hash-based digital signature schemes appears very bright in the upcoming quantum era because of the quantum threats to number-theory-based digital signature schemes. Shor's algorithm allows a sufficiently powerful quantum computer to break the building blocks of number-theory-based signature schemes in polynomial time. Hash-based signature schemes, being quite efficient and provably secure, can fill the gap effectively. However, a drawback of hash-based signature schemes is their larger key and signature sizes, which can prove a barrier to adoption by space-critical applications such as the blockchain. A hash-based signature scheme is constructed from a one-time signature (OTS) scheme, and the underlying OTS scheme plays an important role in determining the key and signature sizes of the resulting scheme. In this article, we propose a novel OTS scheme with minimized key and signature sizes compared to all existing OTS schemes. Our proposed OTS scheme offers an 88% reduction in both key and signature sizes compared to the popular Winternitz OTS scheme. Furthermore, it offers 84% and 86% reductions in the signature and key sizes, respectively, compared to an existing compact variant of the WOTS scheme, WOTS+.
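For context on the Winternitz baseline the comparison is made against, here is a toy hash-chain OTS. It deliberately omits the checksum digits that real WOTS/WOTS+ require for security, and the seed-based key derivation is illustrative only; the sketch conveys the size intuition (one 32-byte chain value per base-w digit of the message digest) rather than a usable scheme.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    """Apply the hash `steps` times (a Winternitz hash chain)."""
    for _ in range(steps):
        x = H(x)
    return x

W = 16  # Winternitz parameter: each chain encodes one base-16 digit

def keygen(seed: bytes, num_digits: int):
    # Toy PRF for per-chain secrets; real schemes use a proper KDF.
    sk = [H(seed + bytes([i])) for i in range(num_digits)]
    pk = [chain(s, W - 1) for s in sk]  # public key = end of each chain
    return sk, pk

def sign(sk, digits):
    # Signature for digit d is the chain advanced d steps from the secret.
    return [chain(sk[i], d) for i, d in enumerate(digits)]

def verify(pk, digits, sig):
    # Advancing each signature value the remaining steps must hit the pk.
    return all(chain(sig[i], W - 1 - d) == pk[i]
               for i, d in enumerate(digits))

sk, pk = keygen(b"seed", 4)
sig = sign(sk, [3, 0, 15, 7])
ok = verify(pk, [3, 0, 15, 7], sig)
```

Each key and signature is `num_digits` hash outputs, which is why shrinking or restructuring the chains (as NOTS and WOTS+ do in different ways) directly shrinks both.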
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Authors: Noreen, Neelum , Palaniappan, Sellappan , Qayyum, Abdul , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumor is a deadly disease, and its classification is a challenging task for radiologists because of the heterogeneous nature of tumor cells. Recently, computer-aided diagnosis-based systems have shown promise, as an assistive technology, for diagnosing brain tumors through magnetic resonance imaging (MRI). In recent applications of pre-trained models, features are normally extracted from the bottom layers, which differ between natural and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumor. Two pre-trained deep learning models, Inception-v3 and DenseNet201, make this model valid. With the help of these two models, two different scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model, concatenated, and passed to a softmax classifier to classify the brain tumor. Second, the pre-trained DenseNet201 was used to extract features from various DenseNet blocks; these features were then concatenated and passed to a softmax classifier. Both scenarios were evaluated on a publicly available three-class brain tumor dataset. The proposed method produced 99.34% and 99.51% testing accuracies with Inception-v3 and DenseNet201, respectively, on testing samples and achieved the highest performance in the detection of brain tumor. As the results indicate, the proposed method based on feature concatenation using pre-trained models outperformed existing state-of-the-art deep learning and machine learning based methods for brain tumor classification. © 2013 IEEE.
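The concatenate-then-softmax step the abstract describes reduces, in the simplest case, to the following sketch. The feature blocks here are tiny placeholder vectors; in the paper they come from intermediate Inception modules or DenseNet blocks, and the classifier weights are learned rather than hand-set as below.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def concat_and_classify(feature_blocks, weights, biases):
    """Concatenate per-block feature vectors, then apply a linear
    softmax classifier. All numbers here are illustrative placeholders."""
    features = [v for block in feature_blocks for v in block]  # concatenation
    logits = [sum(w * f for w, f in zip(row, features)) + b
              for row, b in zip(weights, biases)]
    return softmax(logits)

# Two hypothetical feature blocks (e.g. from two network stages),
# classified into three tumor classes by a hand-set identity-like weight map.
probs = concat_and_classify([[0.2, 0.5], [1.0]],
                            weights=[[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                            biases=[0.0, 0.0, 0.0])
```

The point of the concatenation is that the classifier sees multi-level features jointly, rather than only the final layer's output.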
Characterizing the role of vehicular cloud computing in road traffic management
- Authors: Ahmad, Iftikhar , Noor, Rafidah , Ali, Ihsan , Imran, Muhammad , Vasilakos, Athanasios
- Date: 2017
- Type: Text , Journal article
- Relation: International Journal of Distributed Sensor Networks Vol. 13, no. 5 (2017)
- Full Text:
- Reviewed:
- Description: Vehicular cloud computing is envisioned to deliver services that provide traffic safety and efficiency to vehicles. Vehicular cloud computing has great potential to change the contemporary vehicular communication paradigm. Specifically, the underutilized resources of vehicles can be shared with other vehicles to manage traffic during congestion. These resources include, but are not limited to, storage, computing power, and Internet connectivity. This study reviews current traffic management systems to analyze the role and significance of vehicular cloud computing in road traffic management. First, an abstraction of the vehicular cloud infrastructure in an urban scenario is presented to explore the vehicular cloud computing process. A taxonomy of vehicular clouds that defines the cloud formation, integration types, and services is presented. A taxonomy of vehicular cloud services is also provided to explore the object types involved and their positions within the vehicular cloud. A comparison of the current state-of-the-art traffic management systems is performed in terms of parameters such as vehicular ad hoc network infrastructure, Internet dependency, cloud management, scalability, traffic flow control, and emerging services. Potential future challenges and emerging technologies, such as the Internet of vehicles and its incorporation in traffic congestion control, are also discussed. Vehicular cloud computing is envisioned to have a substantial role in the development of smart traffic management solutions and in the emerging Internet of vehicles. © The Author(s) 2017.
Towards a low complexity scheme for medical images in scalable video coding
- Authors: Shoaib, Muhammad , Imran, Muhammad , Subhan, Fazli , Ahmad, Iftikhar
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 41439-41451
- Full Text:
- Reviewed:
- Description: Medical imaging has become of vital importance for diagnosing diseases and conducting noninvasive procedures. Advances in eHealth applications are challenged by the fact that Digital Imaging and Communications in Medicine (DICOM) requires high-resolution images, thereby increasing their size and the associated computational complexity, particularly when these images are communicated over IP and wireless networks. Therefore, medical research requires an efficient coding technique to achieve high-quality and low-complexity images with error-resilient features. In this study, we propose an improved coding scheme that exploits the content features of encoded videos with low complexity, combined with flexible macroblock ordering for error resilience. We identify the homogeneous regions in which the search for optimal macroblock modes is terminated early. For non-homogeneous regions, the integration of smaller blocks is employed only if the vector difference is less than the threshold. Results confirm that the proposed technique achieves a considerable performance improvement compared with existing schemes in terms of reducing the computational complexity without compromising the bit-rate and peak signal-to-noise ratio. © 2013 IEEE.
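The homogeneity-based early termination can be illustrated with a simple variance test. The threshold value and mode names below are hypothetical stand-ins; the paper's actual decision operates inside the encoder's mode-search machinery.

```python
def block_variance(block):
    """Pixel variance of a macroblock given as a flat list of samples."""
    n = len(block)
    mean = sum(block) / n
    return sum((p - mean) ** 2 for p in block) / n

def choose_mode(block, homogeneity_threshold=25.0):
    """Sketch of early mode-decision termination: for homogeneous
    macroblocks (low pixel variance) skip the exhaustive partition
    search and keep the large mode; otherwise evaluate smaller
    partitions. Threshold and mode names are illustrative only."""
    if block_variance(block) < homogeneity_threshold:
        return "16x16"          # early termination: homogeneous region
    return "search_subblocks"   # non-homogeneous: try smaller partitions
```

Skipping the sub-partition search on flat regions is where the complexity saving comes from, since most medical-image background is homogeneous.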
An efficient network intrusion detection and classification system
- Authors: Ahmad, Iftikhar , Haq, Qazi , Imran, Muhammad , Alassafi, Madini , Alghamdi, Rayed
- Date: 2022
- Type: Text , Journal article
- Relation: Mathematics Vol. 10, no. 3 (2022)
- Full Text:
- Reviewed:
- Description: Intrusion detection in computer networks is of great importance because of its effects on different communication and security domains. Network intrusion detection remains a challenging task, as a massive amount of data is required to train state-of-the-art machine learning models to detect network intrusion threats. Many approaches to network intrusion detection have been proposed recently; however, they face critical challenges owing to the continuous emergence of new threats that current systems do not recognize. This paper compares multiple techniques to develop a network intrusion detection system. Optimum features are selected from the dataset based on the correlation between features. Furthermore, we propose an AdaBoost-based approach for network intrusion detection based on these selected features and present its functionality and performance in detail. Unlike most previous studies, which employ the KDD99 dataset, we use the recent and comprehensive UNSW-NB15 dataset for network anomaly detection. This dataset is a collection of network packets exchanged between hosts. It comprises 49 attributes and nine types of threats: DoS, Fuzzers, Exploits, Worms, Shellcode, Reconnaissance, Generic, Analysis, and Backdoors. In this study, we employ SVM and MLP for comparison. Finally, we propose AdaBoost based on a decision tree classifier to classify normal activity and possible threats. We monitored the network traffic and classified it as either threat or non-threat. The experimental findings show that the proposed method effectively detects different forms of network intrusion and achieves an accuracy of 99.3% on the UNSW-NB15 dataset. The proposed system will be helpful in network security applications and research domains. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
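The boosting idea behind the proposed classifier can be shown with a from-scratch AdaBoost over one-feature decision stumps. The paper boosts decision trees on selected UNSW-NB15 features; the toy data and stump learner below are stand-ins for illustration.

```python
import math

def stump_predict(x, feature, threshold, polarity):
    """A decision stump: one feature, one threshold, +/-1 output."""
    return polarity if x[feature] > threshold else -polarity

def train_adaboost(X, y, rounds=3):
    """Tiny AdaBoost: each round fits the stump that minimizes the
    weighted error, then upweights the samples it got wrong."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in range(len(X[0])):
            for t in sorted({row[f] for row in X}):
                for pol in (1, -1):
                    err = sum(wi for xi, yi, wi in zip(X, y, w)
                              if stump_predict(xi, f, t, pol) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        err = max(err, 1e-10)                      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # stump's vote weight
        ensemble.append((alpha, f, t, pol))
        # Reweight: misclassified samples gain weight, then normalize.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, f, t, pol))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, f, t, pol) for a, f, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy separable data: label -1 = normal traffic, +1 = threat.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [-1, -1, 1, 1]
model = train_adaboost(X, y, rounds=3)
```

Replacing the stump learner with a depth-limited decision tree gives the tree-boosting configuration the abstract describes.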
An optimized hybrid deep intrusion detection model (HD-IDM) for enhancing network security
- Authors: Ahmad, Iftikhar , Imran, Muhammad , Qayyum, Abdul , Ramzan, Muhammad , Alassafi, Madini
- Date: 2023
- Type: Text , Journal article
- Relation: Mathematics Vol. 11, no. 21 (2023)
- Full Text:
- Reviewed:
- Description: Detecting cyber intrusions in network traffic is a tough task for cybersecurity, and current methods struggle with the complexity of the patterns in network data. To address this, we present the Hybrid Deep Learning Intrusion Detection Model (HD-IDM), a new approach that combines GRU and LSTM classifiers: the GRU is good at capturing short-term patterns, while the LSTM handles long-term ones. HD-IDM blends these models using weighted averaging, boosting accuracy, especially on complex patterns. We tested HD-IDM on four datasets: CSE-CIC-IDS2017, CSE-CIC-IDS2018, NSL-KDD, and CIC-DDoS2019. The HD-IDM classifier achieved remarkable performance metrics on all of them: an outstanding accuracy of 99.91%, showcasing its consistency across the datasets; an impressive precision of 99.62%, excelling at correctly categorizing positive cases, which is crucial for minimizing false positives; a high recall of 99.43%, effectively identifying the majority of actual positive cases while minimizing false negatives; and an F1-score of 99.52%, emphasizing its robustness for classification tasks requiring precision and reliability. It performs particularly well on ROC and precision/recall curves, discriminating between normal and harmful network activities. While HD-IDM is promising, it has limits: it needs labeled data and may struggle with new intrusion methods. Future work should find ways to handle unlabeled data and adapt to emerging threats. Making HD-IDM fast enough for real-time use and addressing scalability challenges are also key to its broader use in changing network environments. © 2023 by the authors.
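The weighted-averaging fusion step can be sketched in a few lines. The equal weights and the 0.5 decision threshold below are placeholders; the paper tunes the blend rather than fixing it.

```python
def blend(prob_gru, prob_lstm, w_gru=0.5, w_lstm=0.5):
    """Weighted average of two classifiers' intrusion probabilities,
    the fusion idea HD-IDM describes. Weights are placeholders here."""
    assert abs(w_gru + w_lstm - 1.0) < 1e-9  # weights must sum to 1
    return w_gru * prob_gru + w_lstm * prob_lstm

def classify(prob_gru, prob_lstm, threshold=0.5, **weights):
    # Declare an intrusion when the blended probability crosses the threshold.
    return ("intrusion"
            if blend(prob_gru, prob_lstm, **weights) >= threshold
            else "normal")
```

In practice `prob_gru` and `prob_lstm` would come from the two trained recurrent models scoring the same flow; the blend lets the short-range and long-range views vote.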