A consolidated process model for identity management
- Authors: Ng, Alex , Watters, Paul , Chen, Shiping
- Date: 2012
- Type: Text , Journal article
- Relation: Information Resources Management Journal Vol. 25, no. 3 (2012), p. 1-29
- Full Text: false
- Reviewed:
- Description: Recently, identity management has gained increasing attention from both enterprises and government organisations, in terms of security, privacy, and trust. A considerable number of theories and techniques have been developed to deal with identity management issues within and between organisations. In this paper, the authors reviewed, assessed, and consolidated the research and development activities of identity management in 14 privately and publicly funded organisations. Furthermore, the authors developed a taxonomy to characterise and classify these identity management frameworks into two categories: processes and technologies. The authors then studied these frameworks by systematically reviewing the whole lifecycle of an identity management framework, including actors, roles, security, privacy, trust, interoperability, and federation. This paper aims to provide the reader with the state of the art of existing identity management frameworks and a good understanding of the research issues and progress in this area. Copyright © 2012, IGI Global.
- Description: 2003010402
A Gröbner-Shirshov Algorithm for Applications in Internet Security
- Authors: Kelarev, Andrei , Yearwood, John , Watters, Paul , Wu, Xinwen , Ma, Liping , Abawajy, Jemal , Pan, L.
- Date: 2011
- Type: Text , Journal article
- Relation: Southeast Asian Bulletin of Mathematics Vol. 35, no. (2011), p. 807-820
- Full Text: false
- Reviewed:
- Description: The design of multiple classification and clustering systems for the detection of malware is an important problem in internet security. Gröbner-Shirshov bases have been used recently by Dazeley et al. [15] to develop an algorithm for constructions with certain restrictions on the sandwich-matrices. We develop a new Gröbner-Shirshov algorithm which applies to a larger variety of constructions based on combinatorial Rees matrix semigroups without any restrictions on the sandwich-matrices.
A methodology for analyzing the credential marketplace
- Authors: Watters, Paul , McCombie, Stephen
- Date: 2011
- Type: Text , Journal article
- Relation: Journal of Money Laundering Control Vol. 14, no. 1 (2011), p. 32-43
- Full Text: false
- Reviewed:
- Description: Purpose – Cybercrime has rapidly developed in recent years thanks in part to online markets for tools and credentials. Credential trading operates along the lines of a wholesale distribution model, where compromised credentials are bundled together for sale to end-users. Thus, the criminals who specialize in obtaining credentials (through phishing, dumpster diving, etc.) are typically not the same as the end-users. This research aims to propose an initial methodology for further understanding of how credentials are traded in online marketplaces (such as internet relay chat (IRC) channels), including typical amounts charged per credential, and with a view to preliminary profiling, especially based on language identification. Design/methodology/approach – Initial results from a small sample of credential chatroom data are analysed using the proposed technique. Findings – The paper identified five key term categories from the subset of the 100 most frequent terms (bank/payment provider names, supported trading actions, non-cash commodities for trading, targeted countries and times), and demonstrated how actors and processes could be extracted to identify common business processes in credential trading. In turn, these elements could potentially be used to track the specific trading activities of individuals or groups. The hope in the long term is that we may be able to identify named entities in the credential trading world (or a pattern of activity) and cross-reference these with known credential theft attacks, such as phishing. Originality/value – This is the first study to propose a methodology to systematically analyse credential trading on the internet.
Acknowledgements: This work was supported in part by the Australian Federal Police, Westpac Banking Corporation, IBM, the State Government of Victoria and the University of Ballarat.
- Description: 2003011113
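The frequency-analysis step described in the abstract above (ranking the 100 most frequent terms in credential chatroom data before grouping them into categories) can be sketched minimally as follows. This is an illustrative reconstruction, not the authors' code; the sample messages and term counts are invented stand-ins for real IRC data.

```python
from collections import Counter

def top_terms(messages, n=100):
    """Rank the most frequent terms across a set of chat messages,
    mirroring the frequency-analysis step described in the abstract."""
    counts = Counter()
    for msg in messages:
        counts.update(msg.lower().split())
    return counts.most_common(n)

# Hypothetical chat lines standing in for real credential-trading data.
sample = [
    "selling cvv fullz usa uk",
    "selling bank logins usa",
    "icq me for cvv usa",
]
print(top_terms(sample, n=3))
# → [('usa', 3), ('selling', 2), ('cvv', 2)]
```

In the paper's methodology, terms ranked this way would then be manually bucketed into the five categories (provider names, trading actions, commodities, countries, times).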
A new procedure to help system/network administrators identify multiple rootkit infections
- Authors: Lobo, Desmond , Watters, Paul , Wu, Xinwen
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 2nd International Conference on Communication Software and Networks, ICCSN 2010, Singapore : 26th-28th February 2010 p. 124-128
- Full Text:
- Description: Rootkits refer to software that is used to hide the presence of malware from system/network administrators and permit an attacker to take control of a computer. In our previous work, we designed a system that would categorize rootkits based on the hooks that had been created. Focusing on rootkits that use inline function hooking techniques, we showed that our system could successfully categorize a sample of rootkits using unsupervised EM clustering. In this paper, we extend our previous work by outlining a new procedure to help system/network administrators identify the rootkits that have infected their machines. Using a logistic regression model for profiling families of rootkits, we were able to identify at least one of the rootkits that had infected each of the systems that we tested. © 2010 IEEE.
A new stochastic model based approach for object identification and segmentation in textured color image
- Authors: Islam, Mofakharul , Watters, Paul
- Date: 2010
- Type: Text , Book chapter
- Relation: Technological Developments in Networking, Education and Automation p. 309-314
- Full Text: false
- Reviewed:
- Description: We investigate and propose a novel stochastic model based approach to implement a robust unsupervised color image content understanding technique that segments a color textured image into its constituent parts automatically and meaningfully. The aim of this work is the detection and identification of different objects in a color image using image segmentation. Image segments or objects are produced using precise color information, texture information and neighborhood relationships among neighboring image pixels. As a whole, in this particular work, the problem we want to investigate is to implement a robust maximum a posteriori (MAP) based unsupervised color textured image segmentation approach using Cluster Ensembles, an MRF model and the Daubechies wavelet transform for identification and segmentation of image contents or objects. In addition, a Cluster Ensemble has been utilized to introduce a robust technique for finding the number of components in an image automatically. The experimental results reveal that the proposed model is able to find the accurate number of objects or components in a color image and can produce a more accurate and faithful segmentation of different meaningful objects from a relatively complex background. Finally, we have compared our results with another similar existing segmentation approach.
A preliminary profiling of internet money mules : An Australian perspective
- Authors: Aston, Manny , McCombie, Stephen , Reardon, Ben , Watters, Paul
- Date: 2009
- Type: Text , Conference paper
- Relation: Paper presented at 2009 Symposia and Workshops on Ubiquitous, Autonomic and Trusted Computing, UIC-ATC '09, Brisbane, Queensland : 7th-9th July 2009 p. 482-487
- Full Text:
- Description: Along with the massive growth in Internet commerce over the last ten years there has been a corresponding boom in Internet related crime, or cybercrime. According to research recently released by the Australian Bureau of Statistics, in 2006 57,000 Australians aged 15 years and over fell victim to phishing and related Internet scams. Of all the victims of cybercrime, only one group is potentially subject to criminal prosecution: 'Internet money mules', those who, either knowingly or unknowingly, launder money. This paper examines the demographic profile (specifically age, gender and postcode) of 660 confirmed money mule incidents recorded during the calendar year 2007 for a major Australian financial institution. This data is compared to ABS statistics of Internet usage in 2006. There is clear evidence of a strong gender bias towards males, particularly in the older age group. This is directly relevant when considering education and training programs for both corporations and the community on the issues surrounding Internet money mule scams and in ultimately understanding the problem of Internet banking fraud.
- Description: 2003007858
A survey on latest botnet attack and defense
- Authors: Zhang, Lei , Yu, Shui , Wu, Di , Watters, Paul
- Date: 2011
- Type: Text , Conference proceedings
- Full Text: false
- Description: A botnet is a group of compromised computers which are remotely controlled by hackers to launch various network attacks, such as DDoS attacks and information phishing. The botnet has become a popular and productive tool behind many cyber attacks. Recently, the owners of some botnets, such as Storm Worm, Torpig and Conficker, are employing fluxing techniques to evade detection. Therefore, understanding their fluxing tricks is critical to the success of defending against botnet attacks. Motivated by this, we survey the latest botnet attacks and defenses in this paper. We begin by introducing the principles of fast fluxing (FF) and domain fluxing (DF), and explain how these techniques were employed by botnet owners to fly under the radar. Furthermore, we investigate the state-of-the-art research on fluxing detection. We also compare and evaluate those fluxing detection methods by multiple criteria. Finally, we discuss future directions on fighting against botnet based attacks. © 2011 IEEE.
A trust based access control framework for P2P file-sharing systems
- Authors: Tran, H , Hitchens, M , Varadharajan, V , Watters, Paul
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at Hawaii International Conference on System Sciences, HICSS-38, 2005
- Full Text: false
- Reviewed:
Accessibility solutions for visually impaired users of web discussion boards
- Authors: Watters, Paul , Arajuo, A , Hezart, A , Naik, S
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at 3rd IEEE International Conference on Information Technology and Applications 2005 p. 1-10
- Full Text: false
- Reviewed:
Accessible virtual reality therapy using portable media devices
- Authors: Bruck, Susan , Watters, Paul
- Date: 2010
- Type: Text , Journal article
- Relation: Annual Review of CyberTherapy and Telemedicine Vol. 8, no. 1 (2010), p. 69-72
- Full Text: false
- Description: Simulated immersive environments displayed on large screens are a valuable therapeutic asset in the treatment of a range of psychological disorders. Permanent environments are expensive to build and maintain, require specialized clinician training and technical support, and often have limited accessibility for clients. Ideally, virtual reality exposure therapy (VRET) could be accessible to the broader community if we could use inexpensive hardware with specifically designed software. This study tested whether watching a handheld non-immersive media device causes nausea and other cybersickness responses. Using a repeated-measures design, we found that nausea, general discomfort, eyestrain, blurred vision and salivation significantly increased in response to handheld non-immersive media device exposure.
An unsupervised stochastic model for detection and identification of objects in textured color images using segmentation technique
- Authors: Islam, Mofakharul , Watters, Paul
- Date: 2009
- Type: Text , Conference proceedings
- Full Text: false
- Description: The process of meaningful image object identification is the critical first step in the extraction of image information for computer vision and image understanding. The disjoint regions correspond to visually distinct objects in a scene. In this particular work, we investigate and propose a novel stochastic model based approach to implement a robust unsupervised color image content understanding technique that segments a color textured image into its constituent parts automatically and meaningfully. The aim of this work is to produce precise segmentation of different objects in a color image using color information, texture information and neighborhood relationships among neighboring image pixels in terms of their features, using a Markov Random Field (MRF) model to achieve maximum accuracy in segmentation. The results are evaluated by comparing segmentation quality and accuracy with another similar existing method; the comparison demonstrates that the proposed approach outperforms the existing method, achieving better segmentation accuracy and faithful segmentation results.
API based discrimination of ransomware and benign cryptographic programs
- Authors: Black, Paul , Sohail, Ammar , Gondal, Iqbal , Kamruzzaman, Joarder , Vamplew, Peter , Watters, Paul
- Date: 2020
- Type: Text , Conference paper
- Relation: 27th International Conference on Neural Information Processing, ICONIP 2020, Bangkok, 18 to 22 November 2020, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 12533 LNCS, p. 177-188
- Full Text: false
- Reviewed:
- Description: Ransomware is a widespread class of malware that encrypts files in a victim’s computer and extorts victims into paying a fee to regain access to their data. Previous research has proposed methods for ransomware detection using machine learning techniques. However, this research has not examined the precision of ransomware detection. While existing techniques show an overall high accuracy in detecting novel ransomware samples, previous research does not investigate the discrimination of novel ransomware from benign cryptographic programs. This is a critical, practical limitation of current research; machine learning based techniques would be limited in their practical benefit if they generated too many false positives (at best) or deleted/quarantined critical data (at worst). We examine the ability of machine learning techniques based on Application Programming Interface (API) profile features to discriminate novel ransomware from benign cryptographic programs. This research provides a ransomware detection technique that offers improved detection accuracy and precision compared to other API profile based ransomware detection techniques while using significantly simpler features than previous dynamic ransomware detection research. © 2020, Springer Nature Switzerland AG.
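The abstract above describes classifying programs by their API call profiles. A toy sketch of that idea is given below; the API names, traces, and the nearest-centroid rule are all invented stand-ins, not the paper's feature set or model, and real detection would use learned classifiers over far richer traces.

```python
from collections import Counter

# Hypothetical API vocabulary; a real system would track many more calls.
API_SET = ["CryptEncrypt", "FindFirstFile", "DeleteFile", "WriteFile"]

def profile(trace):
    """Turn an API call trace into a fixed-length frequency vector."""
    c = Counter(trace)
    total = len(trace) or 1
    return [c[a] / total for a in API_SET]

def centroid(vectors):
    """Column-wise mean of a list of profile vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(trace, centroids):
    """Nearest-centroid rule over API frequency profiles (a simple
    stand-in for the machine learning models used in the paper)."""
    v = profile(trace)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(v, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Invented training traces: ransomware enumerates, encrypts and deletes
# files, while a benign encryption tool mostly encrypts in place.
centroids = {
    "ransomware": centroid([profile(["FindFirstFile", "CryptEncrypt",
                                     "WriteFile", "DeleteFile"] * 5)]),
    "benign": centroid([profile(["CryptEncrypt", "WriteFile"] * 10)]),
}
print(classify(["FindFirstFile", "CryptEncrypt", "DeleteFile",
                "WriteFile", "DeleteFile"], centroids))
# → ransomware
```

The point of the paper is precisely that this discrimination is hard when the benign program is also cryptographic, which is why profile features and model choice matter.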
Are sparse-coding simple cell receptive field models physiologically plausible?
- Authors: Watters, Paul
- Date: 2006
- Type: Text , Journal article
- Relation: Journal of Integrative Neuroscience Vol. 5, no. 3 (2006), p. 333-353
- Full Text: false
- Reviewed:
- Description: Olshausen and Field (1996) developed a simple cell receptive field model for natural scene processing in V1, based on unsupervised learning and non-orthogonal basis function optimization of an overcomplete representation of visual space. The model was originally tested with an ensemble of whitened natural scenes, simulating pre-cortical filtering in the retinal ganglia and lateral geniculate nucleus, and the basis functions qualitatively resembled the orientation-specific responses of V1 simple cells in the spatial domain. In this study, the quantitative tuning responses of the basis functions in the spectral domain are estimated using a Gaussian model, to determine their goodness-of-fit to the known bandwidths of simple cells in primate V1. Five simulation experiments which examined key features of the model are reported: changing the size of the basis functions; using a complete versus over-complete representation; changing the sparseness factor; using a variable learning rate; and mapping the basis functions with a whitening spatial function. The key finding of this study is that across all image themes, basis function sizes, numbers of basis functions, sparseness factors and learning rates, the spatial-frequency tuning did not closely resemble that of primate area 17; with a single exception, the model results more closely resembled the unclassified cat neurones of area 19, and not area 17 as predicted.
- Description: 2003007801
Authorship analysis of aliases: Does topic influence accuracy?
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2013
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. Online first, no. (2013), p.
- Full Text:
- Reviewed:
- Description: Aliases play an important role in online environments by facilitating anonymity, but also can be used to hide the identity of cybercriminals. Previous studies have investigated this alias matching problem in an attempt to identify whether two aliases are shared by an author, which can assist with identifying users. Those studies create their training data by randomly splitting the documents associated with an alias into two sub-aliases. Models have been built that can regularly achieve over 90% accuracy for recovering the linkage between these ‘random sub-aliases’. In this paper, random sub-alias generation is shown to enable these high accuracies, and thus does not adequately model the real-world problem. In contrast, creating sub-aliases using topic-based splitting drastically reduces the accuracy of all authorship methods tested. We then present a methodology that can be performed on non-topic controlled datasets, to produce topic-based sub-aliases that are more difficult to match. Finally, we present an experimental comparison between many authorship methods to see which methods better match aliases under these conditions, finding that local n-gram methods perform better than others.
Authorship attribution for Twitter in 140 characters or less
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at - 2nd Cybercrime and Trustworthy Computing Workshop, CTC 2010 p. 1-8
- Full Text:
- Reviewed:
- Description: Authorship attribution is a growing field, moving from beginnings in linguistics to recent advances in text mining. Through this change came an increase in the capability of authorship attribution methods, both in their accuracy and in the ability to consider more difficult problems. Research into authorship attribution in the 19th century considered it difficult to determine the authorship of a document of fewer than 1000 words. By the 1990s this value had decreased to less than 500 words, and in the early 21st century it was considered possible to determine the authorship of a document in 250 words. The need for this ever decreasing limit is exemplified by the trend towards many shorter communications rather than fewer longer communications, such as the move from traditional multi-page handwritten letters to shorter, more focused emails. This trend has also been shown in online crime, where many attacks such as phishing or bullying are performed using very concise language. Cybercrime messages have long been hosted on Internet Relay Chats (IRCs), which have allowed members to hide behind screen names and connect anonymously. More recently, Twitter and other short message based web services have been used as a hosting ground for online crimes. This paper presents some evaluations of current techniques and identifies some new preprocessing methods that can be used to enable authorship to be determined at rates significantly better than chance for documents of 140 characters or less, a format popularised by the micro-blogging website Twitter. We show that the SCAP methodology performs extremely well on Twitter messages and, even with restrictions on the types of information allowed, such as the recipient of directed messages, still performs significantly higher than chance. Further to this, we show that 120 tweets per user is an important threshold, at which point adding more tweets per user gives a small but non-significant increase in accuracy. © 2010 IEEE.
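The SCAP methodology mentioned in this abstract is, at its core, profile intersection over each author's most frequent character n-grams. The following minimal sketch illustrates that idea on toy data; the n-gram size, profile length and sample texts are assumptions for illustration, not the paper's parameters.

```python
from collections import Counter

def scap_profile(texts, n=3, L=500):
    """Profile = the set of the L most frequent character n-grams in an
    author's concatenated texts (the core idea behind SCAP)."""
    joined = " ".join(texts)
    grams = Counter(joined[i:i + n] for i in range(len(joined) - n + 1))
    return {g for g, _ in grams.most_common(L)}

def attribute(message, profiles, n=3):
    """Assign a message to the author whose profile shares the most
    n-grams with it (simplified profile intersection)."""
    msg_grams = {message[i:i + n] for i in range(len(message) - n + 1)}
    return max(profiles, key=lambda a: len(profiles[a] & msg_grams))

# Toy corpora standing in for per-user tweet collections.
profiles = {
    "alice": scap_profile(["the quick brown fox", "the lazy dog sleeps"]),
    "bob": scap_profile(["BUY NOW!!! cheap meds", "CLICK HERE!!! free $$$"]),
}
print(attribute("the brown dog", profiles))
# → alice
```

Because character n-grams survive even in 140-character messages, this style of profile remains usable where word-level features become too sparse, which is consistent with the paper's finding that SCAP performs well on tweets.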
Authorship attribution of IRC messages using inverse author frequency
- Authors: Layton, Robert , McCombie, Stephen , Watters, Paul
- Date: 2012
- Type: Text , Conference proceedings
- Full Text: false
- Description: Internet Relay Chat (IRC) is a useful and relatively simple protocol for text based chat online, used in a variety of areas online such as for discussion and technical support. IRC is also used for cybercrime, with online rooms selling stolen credit card details, botnet access and malware. The reasons for the use of IRC in cybercrime include the widespread adoption and ease of use, but also focus around the anonymity granted by the protocol, allowing users to hide behind aliases that can be changed regularly. In this research, we apply authorship analysis techniques to be able to attribute chat messages to known aliases. A preliminary experiment shows that this application is very difficult, due to the short messages and repeated information. To improve the accuracy, we apply inverse-author-frequency (iaf) weighting, which gives higher weights to features used by fewer authors. This research is the first time that iaf has been applied to character n-gram models, previously being applied to word based models of authorship. We find that this improves the accuracy significantly for the RLP method and provides a platform for successful applications of authorship analysis in the future. Overall, the method achieves accuracies of over 55% in a very difficult application domain. © 2012 IEEE.
- Description: 2003011051
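The iaf weighting described in this abstract is analogous to idf, but counted over authors rather than documents: a feature used by fewer authors gets a higher weight. A minimal sketch of that weighting over character n-grams (toy messages, log weighting assumed as the concrete formula):

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Count the character n-grams in a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def iaf_weights(author_docs, n=3):
    """Inverse author frequency: weight each n-gram by how few authors
    use it, analogous to idf but counted over authors."""
    num_authors = len(author_docs)
    author_count = Counter()
    for text in author_docs.values():
        author_count.update(set(char_ngrams(text, n)))
    return {g: math.log(num_authors / c) for g, c in author_count.items()}

# Toy per-author message collections standing in for IRC chat logs.
docs = {
    "a": "free cvv here free cvv",
    "b": "selling bank logins",
    "c": "free shipping worldwide",
}
w = iaf_weights(docs)
# "fre" appears in two authors' messages, "cvv" in only one,
# so "cvv" receives the larger weight.
print(w["cvv"] > w["fre"])
# → True
```

These weights would then scale the n-gram features inside an attribution method such as RLP, down-weighting the "repeated information" the abstract identifies as the main obstacle.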
Automated unsupervised authorship analysis using evidence accumulation clustering
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2013
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 19, no. 1 (2013), p. 95-120
- Full Text:
- Reviewed:
- Description: Authorship Analysis aims to extract information about the authorship of documents from features within those documents. Typically, this is performed as a classification task with the aim of identifying the author of a document, given a set of documents of known authorship. Alternatively, unsupervised methods have been developed primarily as visualisation tools to assist the manual discovery of clusters of authorship within a corpus by analysts. However, there is a need in many fields for more sophisticated unsupervised methods to automate the discovery, profiling and organisation of related information through clustering of documents by authorship. An automated and unsupervised methodology for clustering documents by authorship is proposed in this paper. The methodology is named NUANCE, for n-gram Unsupervised Automated Natural Cluster Ensemble. Testing indicates that the derived clusters have a strong correlation to the true authorship of unseen documents. © 2011 Cambridge University Press.
- Description: 2003010584
Automatically determining phishing campaigns using the USCAP methodology
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at General Members Meeting and eCrime Researchers Summit, eCrime 2010 p. 1-8
- Full Text:
- Reviewed:
- Description: Phishing fraudsters attempt to create an environment which looks and feels like a legitimate institution, while at the same time attempting to bypass filters and the suspicions of their targets. This is a difficult compromise for the phishers and presents a weakness in the process of conducting this fraud. In this research, a methodology is presented that looks at the differences that occur between phishing websites from an authorship analysis perspective and is able to determine the different phishing campaigns undertaken by phishing groups. The methodology is named USCAP, for Unsupervised SCAP, which builds on the SCAP methodology from supervised authorship analysis and extends it for unsupervised learning problems. The phishing website source code is examined to generate a model that gives the size and scope of each of the recognized phishing campaigns. The USCAP methodology is the first to cluster phishing websites by campaign in an automatic and reliable way, in contrast to previous methods which relied on costly expert analysis of phishing websites. Evaluation of these clusters indicates that each cluster is strongly consistent, with high stability and reliability when analyzed using new information about the attacks, such as the dates that the attacks occurred on. The clusters found are indicative of different phishing campaigns, presenting a step towards an automated phishing authorship analysis methodology. © 2010 IEEE.
Automatically generating classifier for phishing email prediction
- Authors: Ma, Liping , Torney, Rosemary , Watters, Paul , Brown, Simon
- Date: 2009
- Type: Text , Conference paper
- Relation: Paper presented at I-SPAN 2009 - The 10th International Symposium on Pervasive Systems, Algorithms, and Networks, Kaohsiung, Taiwan : 14th-16th December 2009 p. 779-783
- Full Text:
- Description: Phishing is a form of online identity theft that employs both social engineering and technical subterfuge to steal consumers' personal identity data and financial account credentials. Phishing email prediction has drawn a lot of attention from many researchers. According to current anti-phishing research, a classifier generated by a decision tree produces the most accurate predictions. However, there appears to be no open source tool available to translate such a decision tree into an implementable classifier. The work presented in this paper builds a decision tree parser which automatically translates a decision tree into an implementable programming language so that the decision tree is useful in real world applications. Experiment results show that the parser performs as well as the original decision tree. © 2009 IEEE.
- Description: 2003007989
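The core idea of this record, walking a learned decision tree and emitting it as executable classifier code, can be sketched briefly. The tree below is hand-written for illustration (the feature names and thresholds are invented, not the paper's learned model), but the translation step mirrors what the paper's parser automates.

```python
# A hand-written tree standing in for one learned from phishing emails:
# internal nodes test a numeric feature against a threshold.
tree = {
    "feature": "num_links", "threshold": 3,
    "left": {"label": "ham"},
    "right": {
        "feature": "has_ip_url", "threshold": 0,
        "left": {"label": "ham"},
        "right": {"label": "phishing"},
    },
}

def emit(node, indent=0):
    """Translate the tree into Python source, echoing the paper's idea
    of parsing a decision tree into an implementable classifier."""
    pad = "    " * indent
    if "label" in node:
        return f'{pad}return "{node["label"]}"\n'
    src = f'{pad}if features["{node["feature"]}"] <= {node["threshold"]}:\n'
    src += emit(node["left"], indent + 1)
    src += f"{pad}else:\n"
    src += emit(node["right"], indent + 1)
    return src

source = "def classify(features):\n" + emit(tree, 1)
namespace = {}
exec(source, namespace)  # compile the generated classifier
print(namespace["classify"]({"num_links": 9, "has_ip_url": 1}))
# → phishing
```

Emitting source code rather than interpreting the tree at prediction time is what makes the resulting classifier directly deployable, which is the gap the paper identifies.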
Automating Open Source Intelligence: Algorithms for OSINT
- Authors: Layton, Robert , Watters, Paul
- Date: 2015
- Type: Text , Book
- Full Text: false
- Reviewed:
- Description: Algorithms for Automating Open Source Intelligence (OSINT) presents information on the gathering of information and extraction of actionable intelligence from openly available sources, including news broadcasts, public repositories, and more recently, social media. As OSINT has applications in crime fighting, state-based intelligence, and social research, this book provides recent advances in text mining, web crawling, and other algorithms that have led to advances in methods that can largely automate this process.