Biometrics in banking security : A case study
- Venkatraman, Sitalakshmi, Delpachitra, Indika
- Authors: Venkatraman, Sitalakshmi , Delpachitra, Indika
- Date: 2008
- Type: Text , Journal article
- Relation: Information Management and Computer Security Vol. 16, no. 4 (2008), p. 415-430
- Full Text:
- Reviewed:
- Description: Purpose - To identify and discuss the issues and success factors surrounding biometrics, especially in the context of user authentication and controls in the banking sector, using a case study. Design/methodology/approach - The literature survey and analysis of the security models of the present information systems and biometric technologies in the banking sector provide the theoretical and practical background for this work. The impact of adopting biometric solutions in banks was analysed by considering the various issues and challenges from technological, managerial, social and ethical angles. These explorations led to identifying the success factors that serve as possible guidelines for a viable implementation of a biometric-enabled authentication system in banking organisations, in particular for a major bank in New Zealand. Findings - As the level of security breaches and transaction fraud increases day by day, the need for highly secure identification and personal verification information systems is becoming extremely important, especially in the banking and finance sector. Biometric technology appeals to many banking organisations as a near-perfect solution to such security threats. Though biometric technology has gained traction in areas like healthcare and criminology, its application in banking security is still in its infancy. Due to the close association of biometrics with human physical and behavioural aspects, such technologies pose a multitude of social, ethical and managerial challenges. The key success factors proposed through the case study served as a guideline for a biometric-enabled security project called Bio-Sec, which is envisaged in a large banking organisation in New Zealand. This pilot study reveals that, more than coping with the technology issues of integrating biometrics into the existing information systems, formulating a viable security plan that addresses user privacy fears, human tolerance levels, organisational change and legal issues is of prime importance. Originality/value - Though biometric systems have been successfully adopted in areas such as immigration control and criminology, there is a paucity of implementation and research pertaining to banking environments. Not all banks venture into biometric solutions to enhance their security systems, due to the associated socio-technological issues. This paper fulfils the need for a guideline to identify the various issues and success factors for a viable biometric implementation in a bank's access control system. This work is only a starting point for academics to conduct more research into the application of biometrics in the various facets of banking businesses.
The Impact of Biometric Systems on Communities: Perspectives and Challenges
- Venkatraman, Sitalakshmi, Kulkarni, Siddhivinayak
- Authors: Venkatraman, Sitalakshmi , Kulkarni, Siddhivinayak
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at ACKMIDS 2008: Harnessing Knowledge Management to Build Communities, 11th Annual Australian Conference on Knowledge Management and Intelligent Decision Support p. 1-17
- Full Text:
- Reviewed:
An adaptive framework for biometric systems
- Authors: Venkatraman, Sitalakshmi
- Date: 2009
- Type: Text , Conference paper
- Relation: Paper presented at 2009 International Conference on Computer Engineering and Technology, ICCET 2009, Singapore Vol. 2, p. 371-375
- Full Text:
- Description: This paper provides guidelines to classify biometric systems based on the level of privacy and security risks associated with their transactions. The classification of biometric systems as Basic, Medium or Advanced details how the transactions make use of biometric information for one or more purposes, such as authorisation, accountability and analysis of sensitive data. The adaptive framework proposed here takes this classification as the fundamental building block of a step-wise procedure for implementing biometric systems. It is believed that by adopting such an adaptive framework, societies, businesses and governments would be able to harness the benefits of biometrics. This would pave the way for a significantly faster diffusion of biometric systems in many everyday life scenarios. © 2009 IEEE.
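A minimal sketch of the Basic/Medium/Advanced classification described in the record above, assuming the tier simply grows with the number of purposes (authorisation, accountability, analysis) that biometric information serves; the paper's exact criteria may differ, so the mapping below is illustrative only.

```python
from typing import Set

# Hypothetical mapping: risk tier grows with the number of purposes served.
TIERS = {1: "Basic", 2: "Medium", 3: "Advanced"}

def classify_biometric_system(purposes: Set[str]) -> str:
    """Classify a biometric system by the purposes its transactions serve."""
    recognised = purposes & {"authorisation", "accountability", "analysis"}
    if not recognised:
        raise ValueError("no recognised biometric purpose supplied")
    return TIERS[len(recognised)]

if __name__ == "__main__":
    print(classify_biometric_system({"authorisation"}))                    # Basic
    print(classify_biometric_system({"authorisation", "accountability"}))  # Medium
    print(classify_biometric_system({"authorisation", "accountability",
                                     "analysis"}))                         # Advanced
```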
Autonomic context-dependent architecture for malware detection
- Authors: Venkatraman, Sitalakshmi
- Date: 2009
- Type: Text , Conference paper
- Relation: Paper presented at e-Tech 2009, International Conference on e-Technology, 2009. Singapore : 8-10 January 2009 p. 2927-2947
- Full Text:
Digital forensic techniques for static analysis of NTFS images
- Alazab, Mamoun, Venkatraman, Sitalakshmi, Watters, Paul
- Authors: Alazab, Mamoun , Venkatraman, Sitalakshmi , Watters, Paul
- Date: 2009
- Type: Text , Conference paper
- Relation: Paper presented at 4th International Conference of Information Technology, ICIT 2009, AL-Zaytoonah University, Amman, Jordan : 3rd-5th June 2009
- Full Text:
- Description: Static analysis of the Windows NT File System (NTFS), the standard and most commonly used Windows file system, can provide useful information for digital forensics. However, since the NTFS disk image records every event in the system, forensic tools need to process an enormous amount of information related to the user/kernel environment, buffer overflows, race conditions, the network stack and other related subsystems. This leads to imperfect forensic tools that are practical to implement but not comprehensive and effective. This research discusses an analysis technique to detect hidden data based on the internal structure of the NTFS file system in the boot sector. Further, it attempts to unearth the vulnerabilities of the NTFS disk image and the weaknesses of current forensic techniques. The paper argues that a comprehensive tool with improved techniques is warranted for a successful forensic analysis.
- Description: 2003007524
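A hedged sketch of the kind of boot-sector consistency check the record above describes, assuming a raw NTFS image file. The field offsets follow the published NTFS boot-sector layout; the specific checks (volume size versus image size, backup boot sector comparison) are illustrative, not the authors' exact technique, and the image path is hypothetical.

```python
import struct

def inspect_ntfs_boot_sector(image_path: str) -> None:
    """Flag simple NTFS boot-sector inconsistencies that may conceal hidden data."""
    with open(image_path, "rb") as img:
        boot = img.read(512)
        img.seek(0, 2)
        image_size = img.tell()

    if boot[3:11] != b"NTFS    ":      # OEM ID at offset 3
        print("OEM ID is not 'NTFS    ': not an NTFS volume or a tampered boot sector")
        return

    bytes_per_sector, = struct.unpack_from("<H", boot, 0x0B)   # BPB: bytes per sector
    sectors_per_cluster = boot[0x0D]                            # BPB: sectors per cluster
    total_sectors, = struct.unpack_from("<Q", boot, 0x28)       # BPB: total sectors

    # Conventionally the backup boot sector sits just past the last counted sector.
    volume_bytes = (total_sectors + 1) * bytes_per_sector
    print(f"bytes/sector={bytes_per_sector}, sectors/cluster={sectors_per_cluster}")
    print(f"volume size per boot sector: {volume_bytes} bytes, image size: {image_size} bytes")

    # Space beyond the reported volume boundary is a classic place to hide data.
    if image_size > volume_bytes:
        print(f"WARNING: {image_size - volume_bytes} bytes lie beyond the NTFS volume boundary")

    # A mismatch with the backup boot sector can indicate boot-sector manipulation.
    with open(image_path, "rb") as img:
        img.seek(total_sectors * bytes_per_sector)
        backup = img.read(512)
    if backup != boot:
        print("WARNING: backup boot sector differs from the primary boot sector")

if __name__ == "__main__":
    inspect_ntfs_boot_sector("ntfs_volume.img")  # hypothetical image path
```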
Role of mobile technology in the construction industry - A case study
- Venkatraman, Sitalakshmi, Yoong, Pak
- Authors: Venkatraman, Sitalakshmi , Yoong, Pak
- Date: 2009
- Type: Text , Journal article
- Relation: International Journal of Business Information Systems Vol. 4, no. 2 (2009), p. 195-209
- Full Text:
- Description: The construction industry is facing a number of pressures to decrease costs, improve productivity and have a competitive edge in terms of quality of service and customer satisfaction. Recent advancements in mobile technology provide new avenues for addressing this situation. This paper presents the role of emerging mobile technologies and, in particular, the development of a mobile facsimile solution that assists collaborative communications between parties on or away from the construction site. This paper first identifies potential use cases for mobile technologies in the construction industry and highlights the issues that would hamper their adoption. It discusses the modelling of the problems related to the workflow of a construction process with the aid of a focus group formed with various construction industry representatives in New Zealand. The various problem-solving processes adopted by the industry practitioners at different functional levels are analysed and the findings summarised. Finally, this paper describes the development of one such mobile solution, called ClikiFax, which could address some of the issues and pressures prevailing in the context of the New Zealand construction industry. Copyright © 2009, Inderscience Publishers.
The development of an information systems strategic plan : An e-government perspective
- Venkatraman, Sitalakshmi, Hughes, Stephen
- Authors: Venkatraman, Sitalakshmi , Hughes, Stephen
- Date: 2009
- Type: Text , Journal article
- Relation: International Journal of Business Excellence Vol. 2, no. 1 (2009), p. 50-64
- Full Text:
- Reviewed:
- Description: Information and Communications Technologies (ICTs) have been playing a major role in governments across the world to improve the efficiency, effectiveness and quality of public services. However, ICT has been utilised only in a piecemeal fashion by different public sectors and, hence, many governments are keen to incorporate an integrated e-government strategy to achieve business and service excellence. This paper presents the development of an Information Systems Strategic Plan (ISSP) for a large government organisation as a case study. We discuss the evolution of an e-government strategy and examine its influence in the development of an information systems strategy within a state sector organisation in New Zealand. The findings from the case study analysis are used to measure the degree of alignment between the objectives of the e-government strategy and the organisation's ISSP strategy. We identify the challenges that face the organisation and propose a ten-point framework for the improvement of the ISSP alignment with the e-government strategy. Finally, we conclude with a summary of the outcomes of this study and the future research directions. © 2009 Inderscience Enterprises Ltd.
GOM: New Genetic Optimizing Model for broadcasting tree in MANET
- Elaiwat, Said, Alazab, Ammar, Venkatraman, Sitalakshmi, Alazab, Mamoun
- Authors: Elaiwat, Said , Alazab, Ammar , Venkatraman, Sitalakshmi , Alazab, Mamoun
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: Data broadcasting in a mobile ad-hoc network (MANET) is the main method of information dissemination in many applications, in particular for sending critical information to all hosts. Finding an optimal broadcast tree in such networks is a challenging task due to the broadcast storm problem. The aim of this work is to propose a new genetic model using a fitness function with the primary goal of finding an optimal broadcast tree. Our new method, called Genetic Optimisation Model (GOM) alleviates the broadcast storm problem to a great extent as the experimental simulations result in efficient broadcast tree with minimal flood and minimal hops. The result of this model also shows that it has the ability to give different optimal solutions according to the nature of the network. © 2010 IEEE.
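A minimal sketch of a fitness function over candidate broadcast trees, in the spirit of the GOM model described above: lower scores for trees with fewer forwarding (rebroadcasting) nodes and smaller maximum hop count. The weighted-sum form, the weights and the tree encoding are assumptions; the paper defines its own fitness function, and a full genetic algorithm (selection, crossover, mutation over trees) is not reproduced here.

```python
from typing import Dict, Hashable, Optional

def broadcast_tree_fitness(parent: Dict[Hashable, Optional[Hashable]],
                           w_flood: float = 0.5,
                           w_hops: float = 0.5) -> float:
    """Score a candidate broadcast tree (lower is better).

    `parent` maps every node to its parent; the source node maps to None.
    Flooding cost: fraction of nodes that forward (internal nodes).
    Latency cost: maximum hop count from the source, normalised by node count.
    """
    forwarding = {p for p in parent.values() if p is not None}

    def depth(node: Hashable) -> int:
        d = 0
        while parent[node] is not None:
            node = parent[node]
            d += 1
        return d

    max_hops = max(depth(n) for n in parent)
    n = len(parent)
    return w_flood * (len(forwarding) / n) + w_hops * (max_hops / n)

if __name__ == "__main__":
    # Source A broadcasts to B and C; C forwards to D and E.
    tree = {"A": None, "B": "A", "C": "A", "D": "C", "E": "C"}
    print(round(broadcast_tree_fitness(tree), 3))
```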
Risk-based neuro-grid architecture for multimodal biometrics
- Venkatraman, Sitalakshmi, Kulkarni, Siddhivinayak
- Authors: Venkatraman, Sitalakshmi , Kulkarni, Siddhivinayak
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: Recent research indicates that multimodal biometrics is the way forward for a highly reliable adoption of biometric identification systems in various applications, such as banks, businesses, governments
Towards understanding malware behaviour by the extraction of API calls
- Alazab, Mamoun, Venkatraman, Sitalakshmi, Watters, Paul
- Authors: Alazab, Mamoun , Venkatraman, Sitalakshmi , Watters, Paul
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: One of the recent trends adopted by malware authors is to use packers, or software tools that instigate code obfuscation, in order to evade detection by antivirus scanners. With evasion techniques such as polymorphism and metamorphism, malware is able to fool current detection techniques. Thus, security researchers and the anti-virus industry are facing a herculean task in extracting payloads hidden within packed executables. It is a common practice to use manual unpacking or static unpacking using software tools and to analyse the application programming interface (API) calls for malware detection. However, extracting these features from the unpacked executables for reverse obfuscation is labour intensive and requires deep knowledge of low-level programming, including kernel and assembly language. This paper presents an automated method of extracting API call features and analysing them in order to understand their use for malicious purposes. While some research has been conducted on arriving at file birthmarks using API call features and the like, there is a scarcity of work that relates to features in malcodes. To address this gap, we attempt to automatically analyse and classify the behaviour of API function calls based on the malicious intent hidden within any packed program. This paper uses a four-step methodology for developing a fully automated system to arrive at six main categories of suspicious behaviour of API call features. © 2010 IEEE.
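A minimal sketch of static API-call feature extraction from an already-unpacked PE executable, illustrating the kind of pipeline the record above describes. It uses the third-party pefile library; the behaviour categories, the API-to-category mapping and the sample filename are illustrative assumptions, not the paper's six categories.

```python
from collections import defaultdict

import pefile  # third-party: pip install pefile

# Hypothetical mapping of imported API names to suspicious-behaviour categories.
API_CATEGORIES = {
    "CreateRemoteThread": "process injection",
    "WriteProcessMemory": "process injection",
    "RegSetValueExA": "registry persistence",
    "RegSetValueExW": "registry persistence",
    "InternetOpenUrlA": "network activity",
    "URLDownloadToFileA": "network activity",
    "GetAsyncKeyState": "keylogging",
}

def extract_api_features(path: str) -> dict:
    """Return imported API names grouped by (assumed) behaviour category."""
    pe = pefile.PE(path)
    findings = defaultdict(list)
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode(errors="ignore")
        for imp in entry.imports:
            if imp.name is None:        # import by ordinal, no name available
                continue
            name = imp.name.decode(errors="ignore")
            category = API_CATEGORIES.get(name)
            if category:
                findings[category].append(f"{dll}!{name}")
    return dict(findings)

if __name__ == "__main__":
    for category, calls in extract_api_features("sample_unpacked.exe").items():
        print(category, "->", calls)
```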
Zero-day malware detection based on supervised learning algorithms of API call signatures
- Alazab, Mamoun, Venkatraman, Sitalakshmi, Watters, Paul, Alazab, Moutaz
- Authors: Alazab, Mamoun , Venkatraman, Sitalakshmi , Watters, Paul , Alazab, Moutaz
- Date: 2011
- Type: Text , Conference proceedings
- Full Text:
- Description: Zero-day or unknown malware are created using code obfuscation techniques that can modify the parent code to produce offspring copies which have the same functionality but with different signatures. Current techniques reported in the literature lack the capability of detecting zero-day malware with the required accuracy and efficiency. In this paper, we have proposed and evaluated a novel method of employing several data mining techniques to detect and classify zero-day malware with high levels of accuracy and efficiency based on the frequency of Windows API calls. This paper describes the methodology employed for the collection of large data sets to train the classifiers, and analyses the performance results of the various data mining algorithms adopted for the study using a fully automated tool developed in this research to conduct the various experimental investigations and evaluations. Through the performance results of these algorithms from our experimental analysis, we are able to evaluate and discuss the advantages of one data mining algorithm over another for accurately detecting zero-day malware. The data mining framework employed in this research learns through analysing the behavior of existing malicious and benign codes in large datasets. We have employed robust classifiers, namely the Naïve Bayes (NB) algorithm, the k-Nearest Neighbor (kNN) algorithm, the Sequential Minimal Optimization (SMO) algorithm with four different kernels (SMO - Normalized PolyKernel, SMO - PolyKernel, SMO - Puk, and SMO - Radial Basis Function (RBF)), the Backpropagation Neural Networks algorithm, and the J48 decision tree, and have evaluated their performance. Overall, the automated data mining system implemented for this study has achieved a high true positive (TP) rate of more than 98.5% and a low false positive (FP) rate of less than 0.025, which has not been achieved in the literature so far. This is much higher than the required commercial acceptance level, indicating that our novel technique is a major leap forward in detecting zero-day malware. This paper also offers future directions for researchers in exploring different aspects of obfuscation that are affecting the IT world today. © 2011, Australian Computer Society, Inc.
- Description: 2003009506
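A hedged scikit-learn sketch of classifying executables from Windows API call frequencies, as the record above describes. The feature matrix and labels are random placeholders (the paper trains on a large corpus of real malicious and benign executables), and the classifier set below is only a rough analogue of those evaluated (Naïve Bayes, kNN, SMO/SVM variants, backpropagation networks, J48).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier  # closest sklearn analogue of J48 (C4.5)

# Placeholder data: rows are executables, columns are frequencies of selected
# Windows API calls; y = 1 marks a malicious sample. Replace with real features.
rng = np.random.default_rng(0)
X = rng.integers(0, 50, size=(200, 30))
y = rng.integers(0, 2, size=200)

classifiers = {
    "Naive Bayes": MultinomialNB(),
    "kNN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(),
}

# 5-fold cross-validated accuracy for each classifier on the API-frequency features.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```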
MapReduce neural network framework for efficient content based image retrieval from large datasets in the cloud
- Venkatraman, Sitalakshmi, Kulkarni, Siddhivinayak
- Authors: Venkatraman, Sitalakshmi , Kulkarni, Siddhivinayak
- Date: 2012
- Type: Text , Conference proceedings
- Full Text:
- Description: Recently, content based image retrieval (CBIR) has gained active research focus due to wide applications such as crime prevention, medicine, historical research and digital libraries. With the digital explosion, image collections in databases at distributed locations over the Internet pose a challenge to retrieving images that are relevant to user queries efficiently and accurately. It becomes increasingly important to develop new CBIR techniques that are effective and scalable for real-time processing of very large image collections. To address this, the paper proposes a novel MapReduce neural network framework for CBIR from large data collections in a cloud environment. We adopt natural language queries that use a fuzzy approach to classify the colour images based on their content, and apply Map and Reduce functions that can operate in cloud clusters to arrive at accurate results in real time. Preliminary experimental results for classifying and retrieving images from large data sets were sufficiently convincing to warrant further experimental evaluation. © 2012 IEEE.
- Description: 2003010699
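A minimal sketch of the map/reduce split for colour-based retrieval described in the record above: mappers score each image's colour histogram against the query, and the reducer keeps the best matches. The histogram feature and histogram-intersection similarity are assumptions; the paper's fuzzy natural-language classification and neural network components are not reproduced here.

```python
from typing import Dict, List, Tuple

import numpy as np

def colour_histogram(pixels: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalised joint RGB histogram of an image given as an (N, 3) pixel array."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def map_score(image_id: str, pixels: np.ndarray, query_hist: np.ndarray) -> Tuple[str, float]:
    """Map step: emit (image_id, similarity-to-query) using histogram intersection."""
    sim = float(np.minimum(colour_histogram(pixels), query_hist).sum())
    return image_id, sim

def reduce_top_k(scored: List[Tuple[str, float]], k: int = 5) -> List[Tuple[str, float]]:
    """Reduce step: keep the k most similar images."""
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    images: Dict[str, np.ndarray] = {f"img{i}": rng.integers(0, 256, size=(1000, 3))
                                     for i in range(10)}
    query = colour_histogram(rng.integers(0, 256, size=(1000, 3)))
    scored = [map_score(name, px, query) for name, px in images.items()]  # run mappers
    print(reduce_top_k(scored))
```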
Novel data mining techniques for incompleted clinical data in diabetes management
- Jelinek, Herbert, Yatsko, Andrew, Stranieri, Andrew, Venkatraman, Sitalakshmi
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi
- Date: 2014
- Type: Text , Journal article
- Relation: British Journal of Applied Science & Technology Vol. 4, no. 33 (2014), p. 4591-4606
- Relation: https://doi.org/10.9734/BJAST/2014/11744
- Full Text:
- Reviewed:
- Description: An important part of health care involves the upkeep and interpretation of medical databases containing patient records for clinical decision making, diagnosis and follow-up treatment. Missing clinical entries make it difficult to apply data mining algorithms for clinical decision support. This study demonstrates that higher predictive accuracy is possible using conventional data mining algorithms if missing values are dealt with appropriately. We propose a novel algorithm using a convolution of sub-problems to stage a super problem, where classes are defined by the Cartesian product of class values of the underlying problems, and Incomplete Information Dismissal and Data Completion techniques are applied for reducing features and imputing missing values. Predictive accuracies using Decision Branch, Nearest Neighborhood and Naïve Bayesian classifiers were compared to predict diabetes, cardiovascular disease and hypertension. Data is derived from the Diabetes Screening Complications Research Initiative (DiScRi) conducted at a regional Australian university, involving more than 2400 patient records with more than one hundred clinical risk factors (attributes). The results show substantial improvements in the accuracy achieved with each classifier for an effective diagnosis of diabetes, cardiovascular disease and hypertension as compared to those achieved without substituting missing values. The gain in improvement is 7% for diabetes, 21% for cardiovascular disease and 24% for hypertension, and our integrated novel approach has resulted in more than 90% accuracy for the diagnosis of any of the three conditions. This work advances data mining research towards achieving an integrated and holistic management of diabetes.
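A hedged sketch of the before/after comparison reported in the record above: classify records with missing values discarded versus imputed. Off-the-shelf scikit-learn median imputation and a Gaussian Naïve Bayes classifier stand in for the paper's Incomplete Information Dismissal and Data Completion techniques, and the clinical data is a synthetic placeholder.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Placeholder data: rows are patients, columns are clinical risk factors,
# y is the target condition (e.g. diabetes yes/no); NaN marks missing entries.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)
mask = rng.random(X.shape) < 0.05          # ~5% of entries missing at random
X_missing = np.where(mask, np.nan, X)

# "Before": discard every record containing any missing value.
complete = ~np.isnan(X_missing).any(axis=1)
before = cross_val_score(GaussianNB(), X_missing[complete], y[complete], cv=5).mean()

# "After": impute missing values, then classify (median imputation here; the
# paper uses its own completion method).
pipeline = make_pipeline(SimpleImputer(strategy="median"), GaussianNB())
after = cross_val_score(pipeline, X_missing, y, cv=5).mean()

print(f"accuracy, complete cases only: {before:.3f}")
print(f"accuracy, after imputation:    {after:.3f}")
```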
Data-analytically derived flexible HbA1c thresholds for type 2 diabetes mellitus diagnostic
- Stranieri, Andrew, Yatsko, Andrew, Jelinek, Herbert, Venkatraman, Sitalakshmi
- Authors: Stranieri, Andrew , Yatsko, Andrew , Jelinek, Herbert , Venkatraman, Sitalakshmi
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 5, no. 1 (2015), p. 111-134
- Full Text:
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is now more commonly used as an alternative to the fasting plasma glucose and oral glucose tolerance tests for the identification of Type 2 Diabetes Mellitus (T2DM), because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. According to WHO guidelines, HbA1c values of 6.5% or above are required for a diagnosis of T2DM. However, outcomes of a large number of trials with HbA1c have been inconsistent across the clinical spectrum, and further research is required to determine the efficacy of HbA1c testing in the identification of T2DM. Medical records from a diabetes screening program in Australia illustrate that many patients could be classified as diabetic if other clinical indicators are included, even though the HbA1c result does not exceed 6.5%. This suggests that a single 6.5% cutoff for the general population may be too simple and may miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms have been applied to identify markers that can be used with HbA1c. The results indicate that T2DM is best classified by an HbA1c cutoff of 6.2% - lower than the currently recommended level - and that, given this threshold flexibility, the cutoff can be lowered further when, in addition to HbA1c being high, the rule is conditioned on oxidative stress or inflammation being present, atherogenicity or adiposity being high, or hypertension being diagnosed.
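A minimal rule-based sketch of the flexible threshold idea in the record above. The 6.2% base cutoff is quoted from the abstract; the lowered conditional cutoff and the particular co-markers below are placeholders, not the paper's mined rules.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    hba1c: float                # HbA1c in %
    inflammation: bool = False  # e.g. elevated inflammatory marker (placeholder)
    hypertension: bool = False
    high_adiposity: bool = False

def likely_t2dm(p: Patient,
                base_cutoff: float = 6.2,
                conditional_cutoff: float = 6.0) -> bool:
    """Flag likely Type 2 diabetes from HbA1c with a flexible threshold.

    At or above `base_cutoff` the flag is raised unconditionally; between
    `conditional_cutoff` and `base_cutoff` it is raised only when other risk
    markers are present (the conditional cutoff value is an assumption).
    """
    if p.hba1c >= base_cutoff:
        return True
    other_risk = p.inflammation or p.hypertension or p.high_adiposity
    return p.hba1c >= conditional_cutoff and other_risk

if __name__ == "__main__":
    print(likely_t2dm(Patient(hba1c=6.3)))                     # True
    print(likely_t2dm(Patient(hba1c=6.1, hypertension=True)))  # True
    print(likely_t2dm(Patient(hba1c=6.1)))                     # False
```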
Diagnostic with incomplete nominal/discrete data
- Jelinek, Herbert, Yatsko, Andrew, Stranieri, Andrew, Venkatraman, Sitalakshmi, Bagirov, Adil
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic/classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and is helpful in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist, or can be reformulated, for this representation. Some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match to the true data; rather, they are intended to cause the least hindrance to classification. The proposed techniques find their application particularly in medical diagnostics. Where clinical data represent a number of related conditions, taking the Cartesian product of class values of the underlying sub-problems allows the selection of missing value substitutes to be narrowed down. Real-world data examples, some publicly available, are used for testing. The proposed and benchmark methods are compared by classifying the data before and after missing value imputation, indicating a significant improvement.
Personalised measures of obesity using waist to height ratios from an Australian health screening program
- Jelinek, Herbert, Stranieri, Andrew, Yatsko, Anderw, Venkatraman, Sitalakshmi
- Authors: Jelinek, Herbert , Stranieri, Andrew , Yatsko, Anderw , Venkatraman, Sitalakshmi
- Date: 2019
- Type: Text , Journal article
- Relation: Digital Health Vol. 5, no. (2019), p. 1-8
- Full Text:
- Reviewed:
- Description: Objectives: The aim of the current study is to generate waist circumference to height ratio cut-off values for obesity categories from a model of the relationship between body mass index and the waist circumference to height ratio. We compare the waist circumference to height ratio cut-offs discovered in this way with the cut-off values currently prevalent in practice, which were originally derived using pragmatic criteria. Method: Personalized data including age, gender, height, weight, waist circumference and the presence of diabetes, hypertension and cardiovascular disease for 847 participants over eight years were assembled from participants attending a rural Australian health review clinic (DiabHealth). Obesity was classified based on the conventional body mass index measure (weight/height²) and compared to the waist circumference to height ratio. Correlations between the measures were evaluated on the screening data and, independently, on data from the National Health and Nutrition Examination Survey that included age categories. Results: This article recommends waist circumference to height ratio cut-off values, based on an Australian rural sample and verified using the National Health and Nutrition Examination Survey database, that facilitate the classification of obesity in clinical practice. Gender-independent cut-off values are provided for the waist circumference to height ratio that identify the healthy (waist circumference to height ratio >= 0.45), overweight (0.53) and three obese (0.60, 0.68, 0.75) categories, verified on the National Health and Nutrition Examination Survey dataset. A strong linearity between the waist circumference to height ratio and the body mass index measure is demonstrated. Conclusion: The recommended waist circumference to height ratio cut-off values provide a useful index for assessing stages of obesity and risk of chronic disease for improved healthcare in clinical practice.
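The recommended cut-offs lend themselves to a direct lookup; a minimal sketch using the values quoted in the record above (0.45, 0.53, 0.60, 0.68, 0.75). Treating each value as the lower bound of its category, with ratios below 0.45 labelled "underweight" and the obese stages named class I-III, is an interpretive assumption rather than the paper's exact boundary semantics.

```python
def whtr_category(waist_cm: float, height_cm: float) -> str:
    """Classify obesity stage from the waist-to-height ratio (WHtR)."""
    ratio = waist_cm / height_cm
    if ratio < 0.45:
        return "underweight"
    if ratio < 0.53:
        return "healthy"
    if ratio < 0.60:
        return "overweight"
    if ratio < 0.68:
        return "obese (class I)"
    if ratio < 0.75:
        return "obese (class II)"
    return "obese (class III)"

if __name__ == "__main__":
    print(whtr_category(waist_cm=85, height_cm=175))   # ratio ~0.49 -> healthy
    print(whtr_category(waist_cm=110, height_cm=170))  # ratio ~0.65 -> obese (class I)
```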
Online dispute resolution in mediating EHR disputes : a case study on the impact of emotional intelligence
- Bellucci, Emilia, Venkatraman, Sitalakshmi, Stranieri, Andrew
- Authors: Bellucci, Emilia , Venkatraman, Sitalakshmi , Stranieri, Andrew
- Date: 2020
- Type: Text , Journal article
- Relation: Behaviour and Information Technology Vol. 39, no. 10 (2020), p. 1124-1139
- Full Text:
- Reviewed:
- Description: An Electronic Health Record (EHR) is an individual's record of all health events that enables critical information to be documented and shared electronically amongst health care providers and patients. The introduction of an EHR, particularly a patient-accessible EHR, can be expected to lead to an escalation of enquiries, complaints and, ultimately, disputes. Prevailing opinion is that Online Dispute Resolution (ODR) systems can help with the mediation of certain types of disputes electronically, particularly systems which deploy Artificial Intelligence (AI) to reduce the need for a human mediator. However, disputes regarding health tend to invoke emotional responses from patients that may conceivably impact ODR efficacy. This raises an interesting question about the influence of emotional intelligence (EI) in the process of mediation. Using a phenomenological research methodology simulating doctor–patient disputes mediated with an AI Smart ODR system in place of a human mediator, we found an association between EI and the propensity for a participant to change their previously asserted claims. Our results indicate that participants with lower EI tend to prolong resolution compared to those with higher EI. Future research directions, including trialling larger-scale ODR systems for specific cohorts of patients in the area of health-related dispute resolution, are advanced. © 2019 Informa UK Limited, trading as Taylor & Francis Group.
Emerging point of care devices and artificial intelligence : prospects and challenges for public health
- Stranieri, Andrew, Venkatraman, Sitalakshmi, Minicz, John, Zarnegar, Armita, Firmin, Sally, Balasubramanian, Venki, Jelinek, Herbert
- Authors: Stranieri, Andrew , Venkatraman, Sitalakshmi , Minicz, John , Zarnegar, Armita , Firmin, Sally , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2022
- Type: Text , Journal article
- Relation: Smart Health Vol. 24, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Risk assessments for numerous conditions can now be performed cost-effectively and accurately using emerging point of care devices coupled with machine learning algorithms. In this article, the case is advanced that point of care testing, in combination with risk assessments generated with artificial intelligence algorithms and applied to the universal screening of the general public for multiple conditions at one session, represents a new kind of inexpensive screening that can lead to the early detection of disease and other public health benefits. A case study of a diabetes screening clinic in a rural area of Australia is presented to illustrate its benefits. Universal, poly-aetiological screening is shown to meet the ten World Health Organisation criteria for screening programmes. © Elsevier Inc.