The seven scam types: Mapping the terrain of cybercrime
- Authors: Stabek, Amber, Watters, Paul, Layton, Robert
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: The threat of cybercrime is a growing danger to the economy. Industries and businesses are targeted by cyber-criminals, along with members of the general public. Since cybercrime is often a symptom of more complex criminological regimes such as laundering, trafficking and terrorism, the true damage caused to society is unknown. Dissimilarities in reporting procedures and non-uniform cybercrime classifications lead international reporting bodies to produce incompatible results, which cause difficulties in making valid comparisons. A cybercrime classification framework has been identified as necessary for the development of an inter-jurisdictional, transnational and global approach to identifying, intercepting and prosecuting cyber-criminals. Outlined in this paper is a cybercrime classification framework which has been applied to the incidence of scams. Content analysis was performed on over 250 scam descriptions stemming from more than 35 scamming categories, and over 80 static features were derived. Using hierarchical cluster and discriminant function analysis, the sample was reduced from over 35 ambiguous categories to 7 scam types, and the top four scamming functions, identified as scamming business processes, were revealed. The results of this research have significant ramifications for the current state of scam and cybercrime classification, research and analysis, and offer significant insight into the business processes and applications adopted by scammers and cyber-criminals. © 2010 IEEE.
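The classification pipeline the abstract describes (scam descriptions coded into static binary features, then grouped by hierarchical clustering) can be sketched roughly as follows. This is an illustrative toy only: the feature names, the toy scam data and the single-linkage/Jaccard choices are invented for demonstration and are not the paper's actual features or method details.

```python
# Toy sketch of hierarchical clustering over binary scam features.
# Feature names and data are hypothetical, not from the paper.

def jaccard_distance(a, b):
    """Distance between two binary feature sets (0 = identical)."""
    union = len(a | b)
    return 1.0 - (len(a & b) / union if union else 1.0)

def single_linkage(items, n_clusters):
    """Naive agglomerative clustering: repeatedly merge the two
    closest clusters until n_clusters remain."""
    clusters = [[i] for i in range(len(items))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(jaccard_distance(items[a], items[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]  # merge j into i
        del clusters[j]
    return clusters

# Hypothetical scam descriptions coded as sets of static features.
scams = [
    {"upfront_fee", "email", "urgency"},      # advance-fee style
    {"upfront_fee", "email", "lottery_win"},  # advance-fee style
    {"fake_login", "email", "bank_brand"},    # phishing style
    {"fake_login", "sms", "bank_brand"},      # phishing style
]

groups = single_linkage(scams, n_clusters=2)
print(sorted(sorted(g) for g in groups))  # → [[0, 1], [2, 3]]
```

On this toy data the two advance-fee-style descriptions and the two phishing-style descriptions fall into separate clusters; the paper applies the same idea at the scale of 250+ descriptions and ~80 features to arrive at its 7 scam types.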
Towards understanding malware behaviour by the extraction of API calls
- Authors: Alazab, Mamoun, Venkatraman, Sitalakshmi, Watters, Paul
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: One of the recent trends adopted by malware authors is to use packers, or software tools that instigate code obfuscation, in order to evade detection by antivirus scanners. With evasion techniques such as polymorphism and metamorphism, malware is able to fool current detection techniques. Thus, security researchers and the anti-virus industry face a Herculean task in extracting payloads hidden within packed executables. It is common practice to use manual unpacking, or static unpacking with software tools, and to analyse the application programming interface (API) calls for malware detection. However, extracting these features from unpacked executables for reverse obfuscation is labour intensive and requires deep knowledge of low-level programming, including kernel and assembly language. This paper presents an automated method of extracting API call features and analysing them in order to understand their use for malicious purposes. While some research has been conducted on deriving file birthmarks from API call features and the like, there is a scarcity of work relating to features in malcodes. To address this gap, we attempt to automatically analyse and classify the behaviour of API function calls based on the malicious intent hidden within any packed program. This paper uses a four-step methodology for developing a fully automated system to arrive at six main categories of suspicious behaviour of API call features. © 2010 IEEE.
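The final classification step the abstract describes, mapping extracted API call names to categories of suspicious behaviour, can be sketched as below. The category names and API lists here are illustrative guesses in the spirit of the paper's six categories, not the authors' actual taxonomy; the Win32 API names themselves (e.g. `WriteProcessMemory`, `CreateRemoteThread`) are real functions commonly associated with these behaviours.

```python
# Hypothetical mapping from extracted API call names to broad
# suspicious-behaviour categories (illustrative, not the paper's).

CATEGORIES = {
    "file_tampering": {"CreateFileA", "WriteFile", "DeleteFileA"},
    "registry_tampering": {"RegOpenKeyExA", "RegSetValueExA"},
    "process_injection": {"OpenProcess", "WriteProcessMemory",
                          "CreateRemoteThread"},
    "network_activity": {"InternetOpenA", "connect", "send"},
}

def classify_calls(api_calls):
    """Group a list of API names (e.g. recovered from an unpacked
    executable's import table) under behaviour categories."""
    hits = {}
    for name in api_calls:
        for category, apis in CATEGORIES.items():
            if name in apis:
                hits.setdefault(category, []).append(name)
    return hits

# API names as they might be extracted from an unpacked sample.
sample = ["OpenProcess", "WriteProcessMemory", "CreateRemoteThread",
          "RegSetValueExA", "GetTickCount"]
print(classify_calls(sample))
```

Here the classic remote-thread injection trio is flagged together, while a benign call such as `GetTickCount` matches no category; the paper's contribution is automating the extraction of such call features from packed binaries in the first place.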
Windows rootkits: Attacks and countermeasures
- Authors: Lobo, Desmond, Watters, Paul, Wu, Xin, Sun, Li
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: Windows XP is the dominant operating system in the world today and rootkits have been a major concern for XP users. This paper provides an in-depth analysis of the rootkits that target that operating system, while focusing on those that use various hooking techniques to hide malware on a machine. We identify some of the weaknesses in the Windows XP architecture that rootkits exploit and then evaluate some of the anti-rootkit security features that Microsoft has unveiled in Vista and 7. To reduce the number of rootkit infections in the future, we suggest that Microsoft should take full advantage of Intel's four distinct privilege levels. © 2010 IEEE.
Authorship attribution for Twitter in 140 characters or less
- Authors: Layton, Robert, Watters, Paul, Dazeley, Richard
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at the 2nd Cybercrime and Trustworthy Computing Workshop, CTC 2010, pp. 1-8
- Full Text:
- Reviewed:
- Description: Authorship attribution is a growing field, moving from its beginnings in linguistics to recent advances in text mining. Through this change came an increase in the capability of authorship attribution methods, both in their accuracy and in their ability to consider more difficult problems. Research into authorship attribution in the 19th century considered it difficult to determine the authorship of a document of fewer than 1000 words. By the 1990s this value had decreased to less than 500 words, and in the early 21st century it was considered possible to determine the authorship of a document of 250 words. The need for this ever-decreasing limit is exemplified by the trend towards many shorter communications rather than fewer longer communications, such as the move from traditional multi-page handwritten letters to shorter, more focused emails. This trend has also been shown in online crime, where many attacks such as phishing or bullying are performed using very concise language. Cybercrime messages have long been hosted on Internet Relay Chat (IRC) channels, which have allowed members to hide behind screen names and connect anonymously. More recently, Twitter and other short-message-based web services have been used as a hosting ground for online crimes. This paper presents some evaluations of current techniques and identifies some new preprocessing methods that can be used to enable authorship to be determined at rates significantly better than chance for documents of 140 characters or less, a format popularised by the micro-blogging website Twitter. We show that the SCAP methodology performs extremely well on Twitter messages and, even with restrictions on the types of information allowed, such as the recipient of directed messages, still performs significantly better than chance. Further to this, we show that 120 tweets per user is an important threshold, at which point adding more tweets per user gives a small but non-significant increase in accuracy. © 2010 IEEE.
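The SCAP (Source Code Author Profiles) methodology the abstract evaluates can be sketched in miniature: build a profile of each candidate author's most frequent character n-grams, then attribute a new message to the author whose profile overlaps most with the message's own. The training texts and parameter values (n = 3, top 50 n-grams) below are invented for illustration; the paper's evaluation used real Twitter data.

```python
# Minimal SCAP-style sketch: character n-gram profiles compared by
# intersection size. Author texts here are hypothetical examples.
from collections import Counter

def profile(texts, n=3, top=50):
    """Top `top` character n-grams across an author's texts."""
    counts = Counter()
    for t in texts:
        counts.update(t[i:i + n] for i in range(len(t) - n + 1))
    return {g for g, _ in counts.most_common(top)}

def attribute(message, profiles, n=3, top=50):
    """Pick the author whose profile shares the most n-grams with
    the message's profile (the SCAP similarity measure)."""
    msg = profile([message], n, top)
    return max(profiles, key=lambda a: len(profiles[a] & msg))

authors = {
    "alice": ["heading to the gym again, so keen!!",
              "so keen for the weekend, gym then beach!!"],
    "bob": ["New blog post: measuring latency in distributed systems",
            "New paper summary: consensus protocols, part 2"],
}
profiles = {a: profile(ts) for a, ts in authors.items()}
print(attribute("so keen, beach day again!!", profiles))  # → alice
```

Because the profiles are built from short character sequences rather than whole words, the method still has usable signal in 140-character messages, which is what makes it attractive for the Twitter setting studied in the paper.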
Understanding victims of identity theft: Preliminary insights
- Authors: Turville, Kylie, Yearwood, John, Miller, Charlynn
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: Identity theft is not a new crime; however, changes in society and in the way that business is conducted have made it an easier, more attractive and more lucrative crime. When victims discover the misuse of their identity they must then begin the process of recovery, including fixing any issues that may have been created by the misuse. For some victims this may take only a small amount of time and effort; others may continue to experience issues for many years after the initial moment of discovery. To date, little research has been conducted within Australia or internationally regarding what a victim experiences as they work through the recovery process. This paper presents a summary of the identity theft domain with an emphasis on research conducted within Australia, and identifies a number of issues regarding research in this area. The paper also provides an overview of the research project currently being undertaken by the authors to obtain an understanding of what victims of identity theft experience during the recovery process, particularly their experiences when dealing with organizations. Finally, it reports on some of the preliminary work that has already been conducted for the research project. © 2010 IEEE.