Digital forensic techniques for static analysis of NTFS images
- Authors: Alazab, Mamoun , Venkatraman, Sitalakshmi , Watters, Paul
- Date: 2009
- Type: Text , Conference paper
- Relation: Paper presented at 4th International Conference of Information Technology, ICIT 2009, AL-Zaytoonah University, Amman, Jordan : 3rd-5th June 2009
- Full Text:
- Description: Static analysis of the Windows NT File System (NTFS), the standard and most widely used file system, can provide useful information for digital forensics. However, since an NTFS disk image records every event in the system, forensic tools must process an enormous amount of information related to the user/kernel environment, buffer overflows, race conditions, the network stack and other related subsystems. This leads to forensic tools that are practical to implement but neither comprehensive nor effective. This research discusses an analysis technique for detecting data hidden in the boot sector, based on the internal structure of the NTFS file system. Further, it attempts to unearth the vulnerabilities of NTFS disk images and the weaknesses of current forensic techniques. The paper argues that a comprehensive tool with improved techniques is warranted for successful forensic analysis.
- Description: 2003007524
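The boot-sector check this abstract describes can be sketched in a few lines. The field offsets below follow the published NTFS on-disk boot-sector layout; the function names and the "reserved bytes must be zero" heuristic are illustrative assumptions, not the authors' actual tool:

```python
import struct

# Offsets follow the published NTFS boot-sector layout. The regions listed
# here are documented as always-zero on a clean volume, so non-zero bytes
# there are a classic place to stash hidden data.
RESERVED_REGIONS = [(0x0E, 0x10), (0x10, 0x13), (0x13, 0x15),
                    (0x16, 0x18), (0x20, 0x24)]

def parse_boot_sector(sector: bytes) -> dict:
    """Parse the key fields of a 512-byte NTFS boot sector."""
    if len(sector) < 512:
        raise ValueError("need at least 512 bytes")
    return {
        "oem_id": sector[3:11],                                   # b"NTFS    "
        "bytes_per_sector": struct.unpack_from("<H", sector, 0x0B)[0],
        "sectors_per_cluster": sector[0x0D],
        "total_sectors": struct.unpack_from("<Q", sector, 0x28)[0],
        "mft_cluster": struct.unpack_from("<Q", sector, 0x30)[0],
        "signature_ok": sector[0x1FE:0x200] == b"\x55\xAA",
    }

def hidden_data_suspects(sector: bytes) -> list:
    """Flag reserved regions that should be all-zero but are not."""
    return [f"bytes 0x{lo:02X}-0x{hi - 1:02X} non-zero"
            for lo, hi in RESERVED_REGIONS if any(sector[lo:hi])]
```

A forensic tool would run such checks across the image; here they only illustrate the kind of structural inspection the paper argues for.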
Towards automatic image segmentation using optimised region growing technique
- Authors: Nicholson, Ann , Li, Xiaodong , Alazab, Mamoun , Islam, Mofakharul , Venkatraman, Sitalakshmi
- Date: 2009
- Type: Text , Conference paper
- Relation: Paper presented at 22nd Australasian Joint Conference, AI 2009: Advances in Artificial Intelligence, Melbourne, Victoria : 1st-4th December 2009 Vol. 5866, p. 131-139
- Full Text: false
- Description: Image analysis is being adopted extensively in applications such as digital forensics, medical treatment and industrial inspection, primarily for diagnostic purposes. Hence, there is growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour intensive, extremely time consuming and prone to human error, so an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is complex and depends heavily on the domain application. To fill this gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images taken for real-life dental diagnosis.
- Description: 2003007514
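The region-growing step at the core of this abstract can be sketched as a breadth-first flood from a seed pixel. The 4-connectivity and fixed intensity threshold are simplifying assumptions; the paper's optimised false-boundary elimination is not reproduced here:

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a region from `seed` by absorbing 4-connected neighbours
    whose intensity differs from the seed pixel by at most `threshold`.
    `image` is a 2-D list of intensities; returns a set of (row, col)."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - seed_val) <= threshold:
                    region.add((nr, nc))
                    frontier.append((nr, nc))
    return region
```

An automated pipeline like the paper's would seed many such regions and then merge neighbours across false boundaries; this sketch shows only the growth primitive.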
Towards understanding and improving E-government strategies in Jordan
- Authors: Alkhaleefah, Mohammad , Alkhawaldeh, Mahmoud , Venkatraman, Sitalakshmi , Alazab, Mamoun
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at (ICCBS 2010) International Conference on e-Commerce, e-Business and e-Service Vol. 66, p. 1871-1877
- Full Text: false
- Reviewed:
- Description: Electronic government or e-government initiatives in Jordan are facing major challenges that hinder the country's expected economic and social transformation. The aims of this paper are two-fold: firstly, to provide an insight into the understanding of these challenges, and secondly, to propose a four-step improvement plan for a successful implementation of Jordan's e-government project. The proposed pragmatic method, strategies and action plan are envisaged to improve Jordan's potential in developing the capability, resources, law and infrastructure for enhancing e-service delivery to citizens and businesses. Such a method of developing an improvement plan that uniquely aligns with Jordan's e-government strategic pillars would result in the fruitful realization of the e-government vision as a major contributor to economic and social development. The proposed improvement plan could also be adopted by similar developing countries for successfully implementing their own e-government projects.
Skype Traffic Classification Using Cost Sensitive Algorithms
- Authors: Azab, Azab , Layton, Robert , Alazab, Mamoun , Watters, Paul
- Date: 2013
- Type: Text , Conference paper
- Relation: Proceedings - 4th Cybercrime and Trustworthy Computing Workshop, CTC 2013 p. 14-21
- Full Text: false
- Reviewed:
- Description: Voice over IP (VoIP) technologies such as Skype are becoming increasingly popular and widely used in different organisations, so identifying the usage of this service at the network level becomes very important. Reasons for this include applying Quality of Service (QoS), network planning, prohibiting its use in some networks, and lawful interception of communications. Researchers have addressed VoIP traffic classification from different viewpoints, such as classifier accuracy, building time, classification time and online classification. These previous studies tested their models using the same version of a VoIP product that was used for training, giving generalizability only to that version of the product. This means that as new VoIP versions are released, these classifiers become obsolete. In this paper, we address whether this approach is applicable to detecting new, untrained versions of Skype. We suggest that cost-sensitive classifiers can improve the accuracy of detecting untrained versions, and we test them against other algorithms. Our experiment demonstrates promising preliminary results in detecting Skype version 4 by building a cost-sensitive classifier on Skype version 3, achieving an F-measure of 0.57. This is a drastic improvement over not using cost sensitivity, which scores an F-measure of 0. This approach may be enhanced to improve the detection results and extended to other applications that change protocols from version to version.
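The core idea of cost-sensitive classification — penalising a missed Skype flow more heavily than a false alarm — can be illustrated with a minimal expected-cost decision rule. The class names and cost values below are invented for illustration and are not the paper's actual cost matrix:

```python
def min_expected_cost_class(probs, cost):
    """Pick the class that minimises expected misclassification cost.
    probs: {class: P(class | x)} from any probabilistic classifier;
    cost[predicted][actual]: cost of predicting `predicted` when the
    truth is `actual` (diagonal entries are normally zero)."""
    def expected_cost(pred):
        return sum(cost[pred][actual] * p for actual, p in probs.items())
    return min(probs, key=expected_cost)

# Hypothetical costs: missing a Skype flow is 5x worse than a false alarm.
COSTS = {"skype": {"skype": 0, "other": 1},
         "other": {"skype": 5, "other": 0}}
```

With these costs, a flow scored only 30% likely to be Skype is still labelled "skype", which is exactly how cost sensitivity trades precision for recall on the under-detected class.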
Malicious Spam Emails Developments and Authorship Attribution
- Authors: Alazab, Mamoun , Layton, Robert , Broadhurst, Roderic , Bouhours, Brigitte
- Date: 2013
- Type: Text , Conference paper
- Relation: Proceedings - 4th Cybercrime and Trustworthy Computing Workshop, CTC 2013 p. 58-68
- Full Text: false
- Reviewed:
- Description: The Internet is a decentralized structure that offers speedy communication, has a global reach and provides anonymity, a characteristic invaluable for committing illegal activities. In parallel with the spread of the Internet, cybercrime has rapidly evolved from a relatively low volume crime to a common high volume crime. A typical example of such a crime is the spreading of spam emails, where the content of the email tries to entice the recipient to click a URL linking to a malicious Web site or to download a malicious attachment. Analysts attempting to provide intelligence on spam activities quickly find that the volume of spam circulating daily is overwhelming; therefore, any intelligence gathered is representative of only a small sample, not of the global picture. While past studies have looked at automating some of these analyses using topic-based models, i.e. separating email clusters into groups with similar topics, our preliminary research investigates the usefulness of applying authorship-based models for this purpose. In the first phase, we clustered a set of spam emails using an authorship-based clustering algorithm. In the second phase, we analysed those clusters using a set of linguistic, structural and syntactic features. These analyses reveal that emails within each cluster were likely written by the same author, but that it is unlikely we have managed to group together all spam produced by each group. This problem of high purity with low recall has been faced in past authorship research. While it is also a limitation of our research, the clusters themselves are still useful for automating analysis, because they reduce the amount of work to be performed. Our second phase revealed useful information on each group that can be utilized in future research, for example to identify further linkages behind spam campaigns.
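Authorship-based clustering of the kind used in the first phase can be sketched with character n-gram profiles and a greedy similarity pass. The n-gram size, cosine threshold and single-pass strategy here are illustrative stand-ins, not the authors' algorithm:

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-gram frequency profile, a common authorship feature."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    dot = sum(a[g] * b[g] for g in a if g in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_by_author(emails, threshold=0.5):
    """Greedy single-pass clustering: attach each email to the first
    cluster whose founding profile is similar enough, else start a new
    cluster. Favours purity over recall, as the abstract notes."""
    clusters = []
    for text in emails:
        prof = char_ngrams(text)
        for c in clusters:
            if cosine(prof, c["profile"]) >= threshold:
                c["members"].append(text)
                break
        else:
            clusters.append({"profile": prof, "members": [text]})
    return clusters
```

Near-duplicate campaign emails land in one cluster while unrelated text starts a new one, which mirrors the high-purity, low-recall behaviour the paper reports.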
Cybercrime : The case of obfuscated malware
- Authors: Alazab, Mamoun , Venkatraman, Sitalakshmi , Watters, Paul , Alazab, Moutaz , Alazab, Ammar
- Date: 2011
- Type: Text , Conference paper
- Relation: Joint 7th International Conference on Global Security, Safety and Sustainability, ICGS3 2011, and the 4th Conference on e-Democracy Vol. 99 LNICST, p. 204-211
- Full Text: false
- Reviewed:
- Description: Cybercrime has developed rapidly in recent years, and malware, which has existed since the very early days of computing, remains one of the major security threats. There is a lack of understanding of such malware threats, of the mechanisms that can be used to implement preventive security, and of how to detect the threat. The main contribution of this paper is a step towards addressing this by investigating the different techniques adopted by obfuscated malware, which is increasingly widespread and sophisticated, with zero-day exploits. In particular, by adopting certain effective detection methods, our investigations show how cybercriminals make use of file system vulnerabilities to inject hidden malware into the system. The paper also describes recent trends in Zeus botnets and the importance of employing anomaly detection to address the new generation of Zeus malware. © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.
- Description: 2003010650
Mining malware to detect variants
- Authors: Azab, Ahmad , Layton, Robert , Alazab, Mamoun , Oliver, Jonathan
- Date: 2015
- Type: Text , Conference paper
- Relation: 5th Cybercrime and Trustworthy Computing Conference, CTC 2014; Auckland, New Zealand; 24th-25th November 2014 p. 44-53
- Full Text: false
- Reviewed:
- Description: Cybercrime continues to be a growing challenge, and malware, which has existed since the very early days of the Internet, is one of the most serious security threats today. Cybercriminals continue to develop and advance their malicious attacks. Unfortunately, existing techniques for detecting malware and analysing code samples are insufficient and have significant limitations. For example, most malware detection studies have focused only on detection and neglected variants of the code. Investigating malware variants allows antivirus products and governments to detect new attacks more easily, support attribution, predict similar attacks in the future, and enable further analysis. The focus of this paper is measuring similarity between different malware binaries of the same variant, using data mining concepts in conjunction with hashing algorithms. We investigate and evaluate the Trend Locality Sensitive Hashing (TLSH) algorithm for grouping binaries that belong to the same variant together, utilizing the k-NN algorithm. Two Zeus variants, TSPY-ZBOT and MAL-ZBOT, were tested to assess the effectiveness of the proposed approach. We compare TLSH to related hashing methods (SSDEEP, SDHASH and NILSIMSA) currently used for this purpose. Experimental evaluation demonstrates that our method can effectively detect variants of malware and is resilient to common obfuscations used by cybercriminals. Our results show that TLSH and SDHASH provide the highest accuracy, scoring F-measures of 0.989 and 0.999 respectively. © 2014 IEEE.
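The grouping step — k-nearest-neighbour voting over similarity-digest distances — can be sketched as below. A simple byte-histogram distance stands in for TLSH (computing the real digest requires the `tlsh` package), and the binaries and labels are synthetic stand-ins for the paper's Zeus samples:

```python
from collections import Counter

def histogram_distance(a: bytes, b: bytes) -> float:
    """Stand-in for a TLSH-style digest distance: normalised L1 distance
    between byte-value histograms. Similar binaries score near 0."""
    ha, hb = Counter(a), Counter(b)
    return sum(abs(ha[v] - hb[v]) for v in range(256)) / (len(a) + len(b))

def knn_label(sample: bytes, labelled, k: int = 3) -> str:
    """labelled: list of (binary, variant_label) pairs. Assign `sample`
    the majority label among its k nearest neighbours by digest distance."""
    nearest = sorted(labelled,
                     key=lambda item: histogram_distance(sample, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

Swapping `histogram_distance` for a real locality-sensitive digest comparison (TLSH, SSDEEP, SDHASH) gives the pipeline the paper evaluates; the k-NN voting logic stays the same.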
PyHENet : a generic framework for privacy-preserving DL inference based on fully homomorphic encryption
- Authors: Chen, Qian , Yao, Lin , Wu, Yulin , Wang, Xuan , Zhang, Weizhe , Jiang, Zoe , Liu, Yang , Alazab, Mamoun
- Date: 2022
- Type: Text , Conference paper
- Relation: 4th International Conference on Data Intelligence and Security, ICDIS 2022, Shenzhen, China, 24-26 August 2022, Proceedings p. 127-133
- Full Text: false
- Reviewed:
- Description: Deep learning inference lets a service provider holding a model serve a client holding personal input data. Because of the huge commercial value involved, the client's original data and the inference output should be kept secret from everyone else, including the service provider, while the service provider's model should be kept secret, especially from competitors. Current research on privacy-preserving deep learning inference focuses on building models for specific data. This paper proposes PyHENet, a generic framework for privacy-preserving deep learning inference based on PyTorch and lattice-based fully homomorphic encryption (FHE), such that the crypto library can be flexibly embedded into the network. Firstly, raw data is encrypted with lattice-based FHE and uploaded to the service provider. Secondly, the service provider performs convolutional computation over floating-point ciphertext data, executing inference with low accuracy loss, aided by the SIMD parallel method. Thirdly, the inference result, in ciphertext format, is sent back to the client for decryption. To improve efficiency, the inference procedure can be further divided into two phases: all computations in the second phase are performed in plaintext with GPU acceleration, while the first phase is unchanged. Using the same model and parameters, the relative accuracy of PyHENet is almost 100% compared to plaintext inference. This paper is the first to propose a general framework for fully homomorphic cryptographic inference with neural networks built on a mainstream deep learning framework, making it both secure and conducive to development. © 2022 IEEE.
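The client-server flow the abstract walks through (encrypt, upload, ciphertext convolution, return, decrypt) can be sketched with the cryptography stubbed out. The stubs below are identity placeholders purely to show the data flow; PyHENet itself performs these steps under lattice-based FHE, which this sketch does not implement:

```python
def encrypt(values):
    """Client side. PLACEHOLDER: a real deployment would encrypt with a
    lattice-based FHE scheme here; this identity stub only marks the step."""
    return list(values)

def decrypt(ciphertext):
    """Client side. PLACEHOLDER for FHE decryption."""
    return list(ciphertext)

def conv1d_over_ciphertext(ciphertext, kernel):
    """Server side: a 1-D convolution built from additions and
    multiplications only -- the operations an FHE scheme supports natively
    and can batch with SIMD packing."""
    n, k = len(ciphertext), len(kernel)
    return [sum(ciphertext[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

def private_inference(client_input, server_kernel):
    """End-to-end flow: with real FHE, the server never sees the plaintext
    input or output."""
    ct = encrypt(client_input)                             # 1. client encrypts, uploads
    result_ct = conv1d_over_ciphertext(ct, server_kernel)  # 2. server computes
    return decrypt(result_ct)                              # 3. client decrypts
```

The correctness property the framework relies on is visible even in the stubbed version: the decrypted result equals the plaintext convolution, which is what "almost 100% relative accuracy" measures against.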