Feature selection using misclassification counts
- Bagirov, Adil, Yatsko, Andrew, Stranieri, Andrew
- Authors: Bagirov, Adil, Yatsko, Andrew, Stranieri, Andrew
- Date: 2011
- Type: Conference proceedings, Unpublished work
- Relation: Proceedings of the 9th Australasian Data Mining Conference (AusDM 2011), 51-62. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 121.
- Full Text:
- Description: Dimensionality reduction of the problem space through detection and removal of variables that contribute little or nothing to classification can relieve the computational load and the data acquisition effort, given that all data attributes are accessed each time around. The approach to feature selection in this paper is based on the concept of coherent accumulation of data about class centers with respect to the coordinates of informative features. Ranking is done on the degree to which different variables exhibit random characteristics. The results are verified using the Nearest Neighbor classifier, which also helps to address feature irrelevance and redundancy, issues that ranking alone does not resolve. Additionally, feature ranking methods from independent sources are brought in for direct comparison.
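The ranking idea in the abstract above can be illustrated with a minimal sketch, under a simplifying assumption: score each feature by the leave-one-out misclassification count of a 1-Nearest-Neighbor classifier restricted to that single feature, so that features behaving randomly accumulate more errors. The function name and the per-feature scheme are illustrative stand-ins, not the authors' algorithm.

```python
# Illustrative sketch: rank features by leave-one-out 1-NN
# misclassification counts, computed on each feature in isolation.
# A simplified stand-in for the paper's method, not a reimplementation.

def rank_features_by_misclassification(X, y):
    """Return feature indices sorted from most to least informative.

    X: list of samples, each a list of numeric feature values.
    y: list of class labels, one per sample.
    """
    n_samples = len(X)
    n_features = len(X[0])
    scores = []
    for f in range(n_features):
        errors = 0
        for i in range(n_samples):
            # Leave-one-out 1-NN using only feature f.
            nearest, best_dist = None, float("inf")
            for j in range(n_samples):
                if j == i:
                    continue
                d = abs(X[j][f] - X[i][f])
                if d < best_dist:
                    best_dist, nearest = d, j
            if y[nearest] != y[i]:
                errors += 1
        scores.append((errors, f))
    # Fewer misclassifications => more informative feature.
    return [f for _, f in sorted(scores)]
```

On a toy set where feature 0 separates the classes and feature 1 is noise, feature 0 is ranked first.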
The seven scam types: Mapping the terrain of cybercrime
- Stabek, Amber, Watters, Paul, Layton, Robert
- Authors: Stabek, Amber, Watters, Paul, Layton, Robert
- Date: 2010
- Type: Text, Conference proceedings
- Full Text:
- Description: The threat of cybercrime is a growing danger to the economy. Industries and businesses are targeted by cyber-criminals along with members of the general public. Since cybercrime is often a symptom of more complex criminological regimes such as laundering, trafficking and terrorism, the true damage caused to society is unknown. Dissimilarities in reporting procedures and non-uniform cybercrime classifications lead international reporting bodies to produce incompatible results, which causes difficulties in making valid comparisons. A cybercrime classification framework has been identified as necessary for the development of an inter-jurisdictional, transnational and global approach to identify, intercept and prosecute cyber-criminals. Outlined in this paper is a cybercrime classification framework which has been applied to the incidence of scams. Content analysis was performed on over 250 scam descriptions stemming from in excess of 35 scamming categories, and over 80 static features were derived. Using hierarchical cluster and discriminant function analysis, the sample was reduced from over 35 ambiguous categories to 7 scam types, and the top four scamming functions, identified as scamming business processes, were revealed. The results of this research have significant ramifications for the current state of scam and cybercrime classification, research and analysis, as well as offering significant insight into the business processes and applications adopted by scammers and cyber-criminals. © 2010 IEEE.
Changing fluxes of sediments and salts as recorded in lower River Murray wetlands, Australia
- Gell, Peter, Fluin, Jennie, Tibby, John, Haynes, Deborah, Khanum, Syeda, Walsh, Brendan, Hancock, Gary, Harrison, Jennifer, Zawadzki, Atun, Little, Fiona
- Authors: Gell, Peter, Fluin, Jennie, Tibby, John, Haynes, Deborah, Khanum, Syeda, Walsh, Brendan, Hancock, Gary, Harrison, Jennifer, Zawadzki, Atun, Little, Fiona
- Date: 2006
- Type: Conference proceedings
- Full Text:
- Description: The River Murray basin, Australia's largest, has been significantly impacted by changed flow regimes and increased fluxes of salts and sediments since settlement in the 1840s. The river's flood plain hosts an array of cut-off meanders, levee lakes and basin depression lakes that archive historical changes. Pre-European sedimentation rates are typically approximately 0.1-1 mm per year, while those in the period after European arrival are typically 10- to 30-fold greater. This increased sedimentation corresponds to a shift in wetland trophic state from submerged macrophytes in clear waters to phytoplankton-dominated, turbid systems. There is evidence for a decline in sedimentation in some natural wetlands after river regulation from the 1920s, but with the maintenance of the phytoplankton state. Fossil diatom assemblages reveal that, while some wetlands had saline episodes before settlement, others became saline afterwards, and as early as the 1880s. The oxidation of sulphurous salts deposited after regulation has induced hyperacidity in a number of wetlands in recent years. While these wetlands are rightly perceived as being heavily impacted, other once open-water systems, which have infilled and now support rich macrophyte beds, are used as interpretive sites. The rate of filling, however, suggests that the lifespan of these wetlands is short. The rate of wetland loss through such increased infilling is unlikely to be matched by future scouring, as regulation has eliminated middle-order floods from the lower catchment.
Adaptive clustering with feature ranking for DDoS attacks detection
- Zi, Lifang, Yearwood, John, Wu, Xin
- Authors: Zi, Lifang, Yearwood, John, Wu, Xin
- Date: 2010
- Type: Text, Conference proceedings
- Full Text:
- Description: Distributed Denial of Service (DDoS) attacks pose an increasing threat to the current internet. The detection of such attacks plays an important role in maintaining the security of networks. In this paper, we propose a novel adaptive clustering method combined with feature ranking for DDoS attack detection. First, based on an analysis of network traffic, preliminary variables are selected. Second, the Modified Global K-means algorithm (MGKM) is used as the basic incremental clustering algorithm to identify the cluster structure of the target data. Third, the linear correlation coefficient is used for feature ranking. Lastly, the feature ranking result is used to inform the recalculation of the clusters. This adaptive process can make worthwhile adjustments to the working feature vector according to different patterns of DDoS attacks, and can improve the quality of the clusters and the effectiveness of the clustering algorithm. The experimental results demonstrate that our method is effective and adaptive in detecting the separate phases of DDoS attacks. © 2010 IEEE.
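The feature-ranking step in the abstract above can be sketched as follows, assuming the linear correlation coefficient is the plain Pearson correlation between each feature and the cluster assignment. The MGKM clustering itself is not reproduced here, and the function names are illustrative.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    # A constant sequence has no defined correlation; score it zero.
    return cov / (sx * sy) if sx and sy else 0.0

def rank_features(X, labels):
    """Rank features by |correlation| with cluster labels, descending.

    X: list of samples, each a list of numeric feature values.
    labels: cluster assignment (one integer per sample).
    """
    n_features = len(X[0])
    scores = [(abs(pearson([row[f] for row in X], labels)), f)
              for f in range(n_features)]
    return [f for _, f in sorted(scores, reverse=True)]
```

The ranking would then drive the reweighting of the working feature vector before the clusters are recalculated.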
Hybrid wrapper-filter approaches for input feature selection using maximum relevance and Artificial Neural Network Input Gain Measurement Approximation (ANNIGMA)
- Huda, Shamsul, Yearwood, John, Stranieri, Andrew
- Authors: Huda, Shamsul, Yearwood, John, Stranieri, Andrew
- Date: 2010
- Type: Text, Conference proceedings
- Full Text:
- Description: Feature selection is an important research problem in machine learning and data mining applications. This paper proposes a hybrid wrapper and filter feature selection algorithm by introducing the filter's feature ranking score in the wrapper stage to speed up the wrapper's search process and thereby find a more compact feature subset. The approach hybridizes a Mutual Information (MI) based Maximum Relevance (MR) filter ranking heuristic with an Artificial Neural Network (ANN) based wrapper approach, where Artificial Neural Network Input Gain Measurement Approximation (ANNIGMA) has been combined with MR (MR-ANNIGMA) to guide the search process in the wrapper. The novelty of our approach is that we use a hybrid of wrapper and filter methods that combines the filter's ranking score with the wrapper heuristic's score to take advantage of both filter and wrapper heuristics. Performance of the proposed MR-ANNIGMA has been verified using benchmark data sets and compared to both independent filter and wrapper based approaches. Experimental results show that MR-ANNIGMA achieves more compact feature sets and higher accuracies than both filter and wrapper approaches alone. © 2010 IEEE.
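The score-combination idea above can be sketched under simplifying assumptions: min-max normalise the filter's per-feature relevance scores and the wrapper's per-feature gain scores, then average them so that neither heuristic dominates. The exact combination used in MR-ANNIGMA may differ; the function name and the averaging scheme are illustrative.

```python
def hybrid_scores(filter_scores, wrapper_scores):
    """Combine a filter ranking score with a wrapper heuristic score.

    Both inputs are per-feature score lists. Each list is min-max
    normalised to [0, 1], and the hybrid score is their average.
    An illustrative sketch, not the MR-ANNIGMA formula itself.
    """
    def normalise(scores):
        lo, hi = min(scores), max(scores)
        if hi == lo:
            # All features tied: no ranking information to contribute.
            return [0.0] * len(scores)
        return [(s - lo) / (hi - lo) for s in scores]

    f, w = normalise(filter_scores), normalise(wrapper_scores)
    return [(fi + wi) / 2.0 for fi, wi in zip(f, w)]
```

The wrapper search would then expand candidate subsets in order of these hybrid scores rather than the wrapper's score alone.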
GOM: New Genetic Optimizing Model for broadcasting tree in MANET
- Elaiwat, Said, Alazab, Ammar, Venkatraman, Sitalakshmi, Alazab, Mamoun
- Authors: Elaiwat, Said, Alazab, Ammar, Venkatraman, Sitalakshmi, Alazab, Mamoun
- Date: 2010
- Type: Text, Conference proceedings
- Full Text:
- Description: Data broadcasting in a mobile ad-hoc network (MANET) is the main method of information dissemination in many applications, in particular for sending critical information to all hosts. Finding an optimal broadcast tree in such networks is a challenging task due to the broadcast storm problem. The aim of this work is to propose a new genetic model using a fitness function with the primary goal of finding an optimal broadcast tree. Our new method, called the Genetic Optimisation Model (GOM), alleviates the broadcast storm problem to a great extent, as the experimental simulations result in an efficient broadcast tree with minimal flooding and minimal hops. The results also show that the model has the ability to give different optimal solutions according to the nature of the network. © 2010 IEEE.
Towards understanding malware behaviour by the extraction of API calls
- Alazab, Mamoun, Venkatraman, Sitalakshmi, Watters, Paul
- Authors: Alazab, Mamoun, Venkatraman, Sitalakshmi, Watters, Paul
- Date: 2010
- Type: Text, Conference proceedings
- Full Text:
- Description: One of the recent trends adopted by malware authors is to use packers, or software tools that instigate code obfuscation, in order to evade detection by antivirus scanners. With evasion techniques such as polymorphism and metamorphism, malware is able to fool current detection techniques. Thus, security researchers and the anti-virus industry are facing a herculean task in extracting payloads hidden within packed executables. It is a common practice to use manual unpacking or static unpacking with software tools and to analyse the application programming interface (API) calls for malware detection. However, extracting these features from unpacked executables for reverse obfuscation is labour intensive and requires deep knowledge of low-level programming that includes kernel and assembly language. This paper presents an automated method of extracting API call features and analysing them in order to understand their use for malicious purposes. While some research has been conducted in arriving at file birthmarks using API call features and the like, there is a scarcity of work that relates to features in malcodes. To address this gap, we attempt to automatically analyse and classify the behavior of API function calls based on the malicious intent hidden within any packed program. This paper uses a four-step methodology for developing a fully automated system to arrive at six main categories of suspicious behavior of API call features. © 2010 IEEE.
Cluster based rule discovery model for enhancement of government's tobacco control strategy
- Huda, Shamsul, Yearwood, John, Borland, Ron
- Authors: Huda, Shamsul, Yearwood, John, Borland, Ron
- Date: 2010
- Type: Text, Conference proceedings
- Full Text:
- Description: Discovery of interesting rules describing the behavioural patterns of smokers' quitting intentions is an important task in the determination of an effective tobacco control strategy. In this paper, we investigate a compact and simplified rule discovery process for predicting smokers' quitting behaviour that can provide feedback to build a scientific evidence-based adaptive tobacco control policy. Standard decision tree (SDT) based rule discovery depends on decision boundaries in the feature space which are orthogonal to the axis of the feature of a particular decision node. This may limit the ability of SDT to learn intermediate concepts for high-dimensional large datasets such as tobacco control. In this paper, we propose a cluster based rule discovery model (CRDM) for the generation of more compact and simplified rules for the enhancement of tobacco control policy. The cluster-based approach builds conceptual groups from which a set of decision trees (a decision forest) are constructed. Experimental results on the tobacco control data set show that decision rules from the decision forest constructed by CRDM are simpler and can predict smokers' quitting intention more accurately than a single decision tree. © 2010 IEEE.
Windows rootkits: Attacks and countermeasures
- Lobo, Desmond, Watters, Paul, Wu, Xin, Sun, Li
- Authors: Lobo, Desmond, Watters, Paul, Wu, Xin, Sun, Li
- Date: 2010
- Type: Text, Conference proceedings
- Full Text:
- Description: Windows XP is the dominant operating system in the world today and rootkits have been a major concern for XP users. This paper provides an in-depth analysis of the rootkits that target that operating system, while focusing on those that use various hooking techniques to hide malware on a machine. We identify some of the weaknesses in the Windows XP architecture that rootkits exploit and then evaluate some of the anti-rootkit security features that Microsoft has unveiled in Vista and 7. To reduce the number of rootkit infections in the future, we suggest that Microsoft should take full advantage of Intel's four distinct privilege levels. © 2010 IEEE.
The potential affordances of enterprise wikis for creating community in research networks
- Johnson, Nicola, Clarke, Rodney, Herrington, Jan
- Authors: Johnson, Nicola, Clarke, Rodney, Herrington, Jan
- Date: 2008
- Type: Text, Conference proceedings
- Full Text:
- Description: In this paper, we describe some of the affordances (specific enabling features or characteristics) of an enterprise wiki to meet the needs of a developing community of practice. The Social Innovation Network (SInet) is a nascent research network that spans the social sciences, education and commerce at the University of Wollongong. It will use the enterprise wiki software Confluence to assist in the development of communities of practice across its groups and sub-groups. This paper describes some of the features of the software and how it might be used to perform some of the common activities identified by Wenger (n.d.) as contributing to the development of community.
Towards an implementation of information flow security using semantic web technologies
- Ureche, Oana, Layton, Robert, Watters, Paul
- Authors: Ureche, Oana, Layton, Robert, Watters, Paul
- Date: 2012
- Type: Text, Conference proceedings
- Full Text:
- Description: Controlling the flow of sensitive data has been widely acknowledged as a critical aspect of securing web information systems. A common limitation of previous approaches to implementing information flow control is their proposal of new scripting languages. This makes them infeasible to apply to existing systems written in traditional programming languages, as these systems would need to be redeveloped in the proposed scripting language. This paper proposes a methodology that offers a common interlingua, through the use of Semantic Web technologies, for securing web information systems independently of their programming language. © 2012 IEEE.
High definition 3D telemedicine: The next frontier?
- Stranieri, Andrew, Collmann, Richard, Borda, Ann
- Authors: Stranieri, Andrew, Collmann, Richard, Borda, Ann
- Date: 2012
- Type: Text, Conference proceedings
- Relation: Studies in Health Technology and Informatics, 182, p.133-41.
- Full Text:
- Description: Evidence from the literature indicates that the degree of immersion, often referred to as the "sense of being there", experienced by clinicians and patients is a factor in the success of tele-health installations. High definition and 3D telemedicine offer a compelling mechanism to achieve a sense of immersion and contribute to an enhanced quality of use. This article surveys HD3D trials in tele-health and concludes that the way HD3D is integrated into telemedicine depends on the clinical, organisational and technological context. In some settings real-time HD3D is not so desirable, whereas asynchronous transmission of HD3D images and videos is highly desirable. © 2012 The authors and IOS Press.
An evaluation of emergency plans and procedures in fitness facilities in Australia: Implications for policy and practice
- Sekendiz, Betul, Norton, Kevin, Keyzer, Patrick, Dietrich, Joachim, Coyle, Ian, Jones, Veronica, Finch, Caroline
- Authors: Sekendiz, Betul , Norton, Kevin , Keyzer, Patrick , Dietrich, Joachim , Coyle, Ian , Jones, Veronica , Finch, Caroline
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: In 2007-08, fitness facilities contributed $872.9 million to the Australian economy and provided savings in direct health care costs estimated at up to $107.9 million through their positive impact on physical inactivity and associated diseases (1). In 2011-12, more than 4.3 million Australians participated in sport and physical recreation at indoor sports or fitness facilities (2). However, research across Queensland (3) and in Victoria (4) showed low compliance with emergency plans and safety practices in fitness facilities. The aim of this study was to analyse emergency plans and procedures in fitness facilities in Australia. A nationwide online risk management survey of fitness professionals (n=1178, mean age=39.9), and observational audits at randomly selected regional and metropolitan fitness facilities (n=11) in New South Wales, South Australia, Victoria and Queensland were conducted. The findings indicated that most of the fitness professionals (68.1%) rated the emergency evacuation plans and other emergency procedures in their facilities as extremely/very good (n=640). Yet, more than one-quarter (27.4%) of fitness professionals were somewhat aware (n=152), or very unaware/not at all aware (n=49), of the emergency evacuation plans and other emergency procedures in their facilities. The observational audits showed that most of the fitness facilities did not clearly display their emergency response plans (73%, n=8), emergency evacuation procedures (55%, n=6) or emergency telephone numbers (91%, n=10). Many fitness facilities (36.4%, n=4) did not have an appropriate first aid kit accessible by all staff. Our study shows a lack of emergency preparedness in many fitness facilities in Australia. Emergency response capability is crucial for fitness facility managers to satisfy their duty of care to manage risks of medical emergencies and disasters such as fire, explosion and floods.
Our study has implications for policy development and education of fitness facility managers to improve emergency plans and procedures in fitness facilities in Australia.
A surrogate model for evaluation of maximum normalized dynamic load factor in moving load model for pipeline spanning due to slug flow
- Sultan, Ibrahim, Reda, Ahmed, Forbes, Gareth
- Authors: Sultan, Ibrahim , Reda, Ahmed , Forbes, Gareth
- Date: 2012
- Type: Text , Conference proceedings
- Full Text:
- Description: Understanding the problem of slug-flow-induced fatigue damage is of particular importance to the reliable operation of pipelines. Slug flow, across unsupported pipeline spans, produces dynamic vibrations in the pipeline resulting in cyclical fatigue stresses. These dynamic effects will cause the pipeline to fail at a point of stress concentration if proper design procedure is not followed. The response of a pipeline span, under the passage of slug flow, can be represented by dynamic load factors that are functions of the speed ratio and damping characteristics of the span. The aspects of these functional relationships are investigated in this paper by conducting multiple simulations at different speed ratios and damping factors. The data obtained from the steady state Fourier expansion will, consequently, be used to produce a surrogate model with a level of accuracy that adequately qualifies it for use in determining dynamic loading of pipelines. The closed-form surrogate model can be used to eliminate the need to employ costly mathematical procedures or finite element packages for the analysis. The model will also provide a solid ground for optimization studies and help designers gain an insight into how various model parameters impact the system response. This paper will demonstrate the aspects of a proposed surrogate model and endeavor to obtain parameter domains within which the model's reliability is ensured. A numerical example will be demonstrated to prove the concepts presented in the paper and confirm the validity of the proposed model. Copyright © 2012 by ASME.
- Description: C1
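The surrogate idea described in this abstract, fitting a cheap closed-form function to simulation outputs over speed ratio and damping, can be sketched as follows. This is an illustrative least-squares polynomial fit, not the authors' actual model; the function names, the polynomial form and the parameter names are assumptions.

```python
import numpy as np

def fit_surrogate(speed_ratios, damping_factors, responses, degree=2):
    """Fit a polynomial surrogate f(speed_ratio, damping) by least squares.

    `responses` holds simulated dynamic load factors on the
    (speed_ratio, damping) grid; all names here are illustrative.
    """
    a, z = np.meshgrid(speed_ratios, damping_factors, indexing="ij")
    a, z = a.ravel(), z.ravel()
    # Design matrix with all monomials a^i * z^j of total degree <= `degree`.
    cols = [(a ** i) * (z ** j)
            for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    X = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(X, responses.ravel(), rcond=None)

    def surrogate(alpha, zeta):
        # Evaluate the fitted closed form at a new (speed ratio, damping) point.
        terms = [(alpha ** i) * (zeta ** j)
                 for i in range(degree + 1)
                 for j in range(degree + 1 - i)]
        return float(np.dot(coeffs, terms))

    return surrogate
```

Once fitted, the returned closed form can replace the expensive simulation inside an optimisation loop, which is the practical benefit the abstract highlights.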
Virtual teams : Worlds apart
- Authors: Knox, Ian , Wilmott, Deirdre
- Date: 2008
- Type: Text , Conference proceedings
- Full Text:
- Description: Virtual teams are a relatively new phenomenon. A number of studies have focused on the description of team development and the group process of virtual learning teams as they form. This paper is a study of how Australian and American undergraduates worked together in virtual teams to respond to ethical and business practice problems for a given scenario. The study specifically examined the communication methods, task completion methodology and cultural differences exhibited by two undergraduate classes from the University of Ballarat, Ballarat, Australia and Jacksonville State University, Jacksonville, Alabama, United States. Both synchronous and asynchronous communication methods were used with differing levels of enthusiasm and acceptance. Although the study was based on a small sample, which limits its generalisability, there are implications to inform those who are considering similar methods in their teaching. © 2008 Ian Knox and Deirdre Wilmott.
- Description: 2003010647
MapReduce neural network framework for efficient content based image retrieval from large datasets in the cloud
- Venkatraman, Sitalakshmi, Kulkarni, Siddhivinayak
- Authors: Venkatraman, Sitalakshmi , Kulkarni, Siddhivinayak
- Date: 2012
- Type: Text , Conference proceedings
- Full Text:
- Description: Recently, content based image retrieval (CBIR) has gained active research focus due to wide applications such as crime prevention, medicine, historical research and digital libraries. With the digital explosion, image collections in databases at distributed locations over the Internet pose a challenge to retrieving images relevant to user queries efficiently and accurately. It becomes increasingly important to develop new CBIR techniques that are effective and scalable for real-time processing of very large image collections. To address this, the paper proposes a novel MapReduce neural network framework for CBIR from large data collections in a cloud environment. We adopt natural language queries that use a fuzzy approach to classify the colour images based on their content, and apply Map and Reduce functions that can operate in cloud clusters to arrive at accurate results in real-time. Preliminary experimental results for classifying and retrieving images from large data sets were convincing enough to warrant further experimental evaluation. © 2012 IEEE.
- Description: 2003010699
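The Map and Reduce split described in this abstract can be sketched in miniature: mappers score each image against the query and a reducer keeps the best matches. This is a single-process simulation for illustration only; the histogram-intersection scorer stands in for the paper's fuzzy neural network classifier, and all names are assumptions.

```python
from collections import defaultdict

def map_phase(image_id, histogram, query):
    """Map: score one image's colour histogram against the query.

    Histogram intersection is an illustrative stand-in for the
    paper's fuzzy colour classification.
    """
    score = sum(min(h, q) for h, q in zip(histogram, query))
    yield ("result", (image_id, score))

def reduce_phase(key, values, top_k=3):
    """Reduce: keep the top-k scoring images for the query."""
    return sorted(values, key=lambda iv: iv[1], reverse=True)[:top_k]

def cbir_query(images, query, top_k=3):
    """Run the map, shuffle/sort and reduce steps over an image set."""
    grouped = defaultdict(list)  # shuffle/sort: group mapper output by key
    for image_id, hist in images.items():
        for key, value in map_phase(image_id, hist, query):
            grouped[key].append(value)
    return {k: reduce_phase(k, v, top_k) for k, v in grouped.items()}
```

In an actual cloud deployment the two phases would be distributed across cluster nodes by the MapReduce runtime; only the per-record logic above would carry over.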
Understanding victims of identity theft: Preliminary insights
- Turville, Kylie, Yearwood, John, Miller, Charlynn
- Authors: Turville, Kylie , Yearwood, John , Miller, Charlynn
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: Identity theft is not a new crime; however, changes in society and in the way business is conducted have made it an easier, more attractive and more lucrative crime. When a victim discovers the misuse of their identity they must then begin the process of recovery, including fixing any issues that may have been created by the misuse. For some victims this may take only a small amount of time and effort; others may continue to experience issues for many years after the initial moment of discovery. To date, little research has been conducted within Australia or internationally regarding what a victim experiences as they work through the recovery process. This paper presents a summary of the identity theft domain with an emphasis on research conducted within Australia, and identifies a number of issues regarding research in this area. The paper also provides an overview of the research project currently being undertaken by the authors to obtain an understanding of what victims of identity theft experience during the recovery process, particularly their experiences when dealing with organizations. Finally, it reports on some of the preliminary work that has already been conducted for the research project. © 2010 IEEE.
Fusion of LiDAR data and multispectral imagery for effective building detection based on graph and connected component analysis
- Gilani, Alinaqi, Awrangjeb, Mohammad, Lu, Guojun
- Authors: Gilani, Alinaqi , Awrangjeb, Mohammad , Lu, Guojun
- Date: 2015
- Type: Text , Conference proceedings
- Full Text:
- Description: Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate the building boundaries in urban and densely vegetated areas. In the first phase, LiDAR data is divided into two groups: ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees), while the white region describes the ground (earth). The second phase begins with the process of Connected Component Analysis (CCA), where the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of a unit distance is defined between a black pixel and a neighbouring black pixel, if any. An edge does not exist from a black pixel to a neighbouring white pixel. This produces a disconnected components graph, where each component represents a prospective building or dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from multispectral imagery, around the graph components, where possible. In the fourth step, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and isolated occluded parts of buildings. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards building edges.
The proposed technique is evaluated using two Australian data sets: Aitkenvale and Hervey Bay, for object-based and pixel-based completeness, correctness, and quality. The proposed technique detects buildings larger than 50 m2 and 10 m2 in the Aitkenvale site with 100% and 91% accuracy, respectively, while in the Hervey Bay site it performs better with 100% accuracy for buildings larger than 10 m2 in area.
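The graph view in the abstract, black mask pixels as nodes with unit-distance edges between neighbours, is equivalent to labelling connected components of a binary mask. A minimal sketch of that second phase, using 4-connectivity and breadth-first search (the connectivity choice and function name are assumptions, not details from the paper):

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected components of black pixels (value 1) in a
    binary building mask: each black pixel is a node, and edges join
    neighbouring black pixels, as in the disconnected components graph."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and labels[r][c] == 0:
                count += 1  # new component: flood-fill it via BFS
                queue = deque([(r, c)])
                labels[r][c] = count
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return count, labels
```

Each resulting component is then a candidate building or vegetation block, to be discriminated later using NDVI, entropy and LiDAR cues as the abstract describes.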
Optimal operation of a multi-quality water distribution system with changing turbidity and salinity levels in source reservoirs
- Mala-Jetmarova, Helena, Barton, Andrew, Bagirov, Adil
- Authors: Mala-Jetmarova, Helena , Barton, Andrew , Bagirov, Adil
- Date: 2014
- Type: Text , Conference proceedings
- Relation: http://purl.org/au-research/grants/arc/LP0990908
- Relation: 16th International Conference on Water Distribution System Analysis, WDSA 2014; Bari, Italy; 14th-17th July 2014
- Full Text:
- Description: The impact of water quality conditions in sources on the optimal operation of a regional multi-quality water distribution system is analysed. Three operational objectives are concurrently minimised: pump energy costs, and turbidity and salinity deviations at customer nodes. The optimisation problem is solved using GANetXL (NSGA-II) linked with EPANet. The example network incorporates scenarios with different water quality in sources. It was discovered that two types of trade-offs, competing and non-competing, exist between the objectives, and that the type of trade-off is not unique between a particular pair of objectives across scenarios. The findings may be used for system operational planning.
Unsupervised authorship analysis of phishing webpages
- Layton, Robert, Watters, Paul, Dazeley, Richard
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2012
- Type: Text , Conference proceedings
- Full Text:
- Description: Authorship analysis of phishing websites enables the investigation of phishing attacks beyond basic analysis. In authorship analysis, salient features from documents are used to determine properties about the author, such as which of a set of candidate authors wrote a given document. In unsupervised authorship analysis, the aim is to group documents such that all documents by one author are grouped together. Applying this to cyber-attacks shows the size and scope of attacks from specific groups. This in turn allows investigators to focus their attention on specific attacking groups rather than trying to profile multiple independent attackers. In this paper, we analyse phishing websites using the current state-of-the-art unsupervised authorship analysis method, NUANCE. The results indicate that the approach produces clusters that correlate strongly with authorship, evaluated using expert knowledge and external information, as well as showing an improvement over a previous approach with known flaws. © 2012 IEEE.
- Description: 2003010678
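The general shape of unsupervised authorship analysis, extract stylistic features per document, then group documents with similar profiles, can be sketched as below. This is a deliberately simplified stand-in (character n-gram counts, cosine similarity, greedy grouping) and is not the NUANCE algorithm; all names and the similarity threshold are assumptions.

```python
from collections import Counter
from math import sqrt

def ngram_profile(text, n=3):
    """Character n-gram counts, a common authorship feature set."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram count profiles."""
    dot = sum(c * q.get(g, 0) for g, c in p.items())
    norm = (sqrt(sum(c * c for c in p.values()))
            * sqrt(sum(c * c for c in q.values())))
    return dot / norm if norm else 0.0

def group_by_author(pages, threshold=0.5):
    """Greedily group pages whose profiles are similar enough,
    approximating unsupervised authorship clustering."""
    clusters = []  # list of (representative_profile, [page indices])
    for i, text in enumerate(pages):
        profile = ngram_profile(text)
        for rep, members in clusters:
            if cosine(profile, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((profile, [i]))
    return [members for _, members in clusters]
```

Applied to scraped phishing page text, clusters of this kind are what let investigators attribute many attacks to one group rather than profiling each attack independently.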