Six sigma approach to improve quality in e-services: An empirical study in Jordan
- Authors: Alhyari, Salah , Alazab, Moutaz , Venkatraman, Sitalakshmi , Alazab, Mamoun , Alazab, Ammar
- Date: 2012
- Type: Text , Journal article
- Relation: International Journal of Electronic Government Research Vol. 8, no. 2 (April, 2012), p. 57-74
- Full Text: false
- Reviewed:
- Description: This paper investigates the application of the Six Sigma approach to improving quality in electronic services (e-services) as more countries adopt e-services as a means of providing services to their people through the Web. This paper presents a case study on the use of the Six Sigma model to measure customer satisfaction and the quality levels achieved in e-services recently launched by public sector organisations in a developing country, Jordan. An empirical study of 280 customers of Jordan's e-services is conducted, and problems are identified through the DMAIC phases of Six Sigma. The service quality levels are measured and analysed using six main criteria: Website Design, Reliability, Responsiveness, Personalization, Information Quality, and System Quality. The study indicates that a customer satisfaction level of 74%, corresponding to a Six Sigma level of 2.12, has enabled the Greater Amman Municipality to identify the usability issues associated with the e-services offered by public sector organisations. The aim of the paper is not only to apply Six Sigma as a measurement-based strategy for improving e-customer service in a newly launched e-service programme, but also to widen its scope by investigating other service dimensions and performing comparative studies in other developing countries.
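For readers unfamiliar with the conversion quoted above (74% satisfaction mapping to a sigma level of roughly 2.12), the figure can be reproduced approximately with the conventional defects-per-million-opportunities (DPMO) calculation and 1.5-sigma shift. This is a generic illustration of the standard arithmetic, not necessarily the authors' exact computation:

```python
from statistics import NormalDist

# Assumed convention: treat each dissatisfied customer response as a "defect".
satisfaction = 0.74                    # 74% customer satisfaction reported in the study
defect_rate = 1 - satisfaction
dpmo = defect_rate * 1_000_000         # defects per million opportunities = 260,000

# Conventional short-term sigma level: z-score of the yield plus the 1.5-sigma shift.
sigma_level = NormalDist().inv_cdf(satisfaction) + 1.5
print(f"DPMO = {dpmo:,.0f}, sigma level ~ {sigma_level:.2f}")   # ~2.14, close to the reported 2.12
```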
Zero-day malware detection based on supervised learning algorithms of API call signatures
- Authors: Alazab, Mamoun , Venkatraman, Sitalakshmi , Watters, Paul , Alazab, Moutaz
- Date: 2011
- Type: Text , Conference proceedings
- Full Text:
- Description: Zero-day or unknown malware is created using code obfuscation techniques that modify the parent code to produce offspring copies which have the same functionality but different signatures. Current techniques reported in the literature lack the capability to detect zero-day malware with the required accuracy and efficiency. In this paper, we propose and evaluate a novel method that employs several data mining techniques to detect and classify zero-day malware with high accuracy and efficiency based on the frequency of Windows API calls. This paper describes the methodology employed for the collection of large data sets to train the classifiers, and analyses the performance results of the various data mining algorithms adopted for the study using a fully automated tool developed in this research to conduct the experimental investigations and evaluation. Through the performance results of these algorithms from our experimental analysis, we evaluate and discuss the advantages of one data mining algorithm over another for accurately detecting zero-day malware. The data mining framework employed in this research learns by analysing the behaviour of existing malicious and benign code in large datasets. We have employed robust classifiers, namely the Naïve Bayes (NB) algorithm, the k-Nearest Neighbour (kNN) algorithm, the Sequential Minimal Optimization (SMO) algorithm with four different kernels (SMO - Normalized PolyKernel, SMO - PolyKernel, SMO - Puk, and SMO - Radial Basis Function (RBF)), the Backpropagation Neural Network algorithm, and the J48 decision tree, and have evaluated their performance. Overall, the automated data mining system implemented for this study has achieved a high true positive (TP) rate of more than 98.5% and a low false positive (FP) rate of less than 0.025, which has not been achieved in the literature so far. This is much higher than the required commercial acceptance level, indicating that our novel technique is a major leap forward in detecting zero-day malware. This paper also offers future directions for researchers in exploring different aspects of the obfuscations that are affecting the IT world today. © 2011, Australian Computer Society, Inc.
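The classifiers listed above all have widely available implementations; a minimal, illustrative sketch of feeding API-call frequency vectors to a few of them follows. The data is synthetic, and the scikit-learn estimators only stand in for the WEKA algorithms named in the paper (SVC for SMO, DecisionTreeClassifier for J48, MLPClassifier for the backpropagation network):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Hypothetical data: rows are executables, columns are frequencies of Windows API calls.
rng = np.random.default_rng(0)
X = rng.poisson(3, size=(200, 50)).astype(float)   # API-call frequency vectors (placeholder)
y = rng.integers(0, 2, size=200)                   # 1 = malicious, 0 = benign (placeholder labels)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM (SMO analogue, polynomial kernel)": SVC(kernel="poly", degree=2),
    "SVM (SMO analogue, RBF kernel)": SVC(kernel="rbf"),
    "Decision tree (J48 analogue)": DecisionTreeClassifier(),
    "Backpropagation network": MLPClassifier(max_iter=500),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```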
Information security governance: The art of detecting hidden malware
- Authors: Alazab, Mamoun , Venkatraman, Sitalakshmi , Watters, Paul
- Date: 2013
- Type: Text , Book chapter
- Relation: IT Security governance innovations: Theory and research p. 293-315
- Full Text: false
- Reviewed:
- Description: Detecting malicious software, or malware, is one of the major concerns in information security governance, as malware authors pose a major challenge to digital forensics by using a variety of highly sophisticated stealth techniques to hide malicious code in computing systems, including smartphones. Current detection techniques are futile, as forensic analysis of infected devices is unable to identify all the hidden malware, thereby resulting in zero-day attacks. This chapter takes a key step forward to address this issue and lays the foundation for deeper investigations in digital forensics. The goal of this chapter is, firstly, to unearth the recent obfuscation strategies employed to hide malware. Secondly, this chapter proposes innovative techniques that are implemented as a fully automated tool and experimentally tested to exhaustively detect hidden malware that leverages system vulnerabilities. Based on these research investigations, the chapter also arrives at an information security governance plan that would aid in addressing current and future cybercrime situations.
Risk-based neuro-grid architecture for multimodal biometrics
- Authors: Venkatraman, Sitalakshmi , Kulkarni, Siddhivinayak
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: Recent research indicates that multimodal biometrics is the way forward for a highly reliable adoption of biometric identification systems in various applications, such as banks, businesses, governments
Self-learning framework for intrusion detection
- Authors: Venkatraman, Sitalakshmi
- Date: 2010
- Type: Text , Conference proceedings
- Full Text: false
- Description: Present intrusion detection systems (IDS), both network-based (NIDS) and host-based (HIDS), lack the ability to sense signs of intrusions at the early stages of an attack, before any damage occurs. They are unable to cope with new attack strategies as they predominantly rely on matching patterns of known behaviour (known signatures). In addition, they are unable to take automatic action in the event of multiple intrusions, as they typically resort to manual or semi-manual identification mechanisms that operate on the network or the host separately, rather than collectively. Hence, there is a need for more research to focus on i) automatically identifying new possible intrusions through self-learning methods in order to address zero-day attacks, and ii) integrating observed anomalies from NIDS as well as HIDS. With these two objectives, this paper presents a framework that postulates a self-learning monitoring mechanism with the aid of agents to integrate existing knowledge with new behaviour patterns observed from the network and host collectively. It also illustrates the working of an agent-based self-learning mechanism in detecting intrusions effectively.
Does the business size matter on corporate sustainable performance? The Australian business case
- Authors: Nayak, Ravi , Venkatraman, Sitalakshmi
- Date: 2011
- Type: Text , Journal article
- Relation: World Review of Entrepreneurship, Management and Sustainable Development Vol. 7, no. 3 (2011), p. 281-301
- Full Text: false
- Reviewed:
- Description: While a growing majority of research studies have concentrated on triple bottom line public reporting in large organisations, the review of past research suggests that there has been limited support for, and importance given to, small and medium-sized businesses. This paper attempts to examine whether business size matters when it comes to corporate sustainability. To achieve this, we have conducted an empirical study to investigate sustainable business practices in small, medium and large organisations. With a sample of 80 different Australia-based firms, we have examined various parameters contributing to corporate sustainability and have arrived at three kinds of performance outcomes (factors) that concur with triple bottom line principles, which we term: 1) corporate environmental performance outcome (CEPO); 2) corporate social performance outcome (CSPO); and 3) corporate financial performance outcome (CFPO). The results of the ANOVA analysis of these factors against business size are discussed, and the significantly higher CEPO in large businesses than in small or medium-sized businesses is explored. This paper also unearths the implications of these results for corporate sustainability and recommends possible improvements to increase the focus on environmental sustainability.
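As a small illustration of the analysis step described above, a one-way ANOVA of a performance outcome against business size can be run as follows; the scores are made up for the sketch, not taken from the study:

```python
from scipy import stats

# Hypothetical CEPO scores grouped by business size (placeholders, not the study's data).
small = [3.1, 2.8, 3.4, 2.9, 3.0]
medium = [3.3, 3.5, 3.1, 3.6, 3.2]
large = [4.1, 4.4, 3.9, 4.2, 4.0]

f_stat, p_value = stats.f_oneway(small, medium, large)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p-value suggests CEPO differs by business size
```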
An optimal transportation routing approach using GIS-based dynamic traffic flows
- Authors: Alazab, Ammar , Venkatraman, Sitalakshmi , Abawajy, Jemal , Alazab, Mamoun
- Date: 2010
- Type: Text , Conference proceedings
- Full Text: false
- Description: This paper examines the value of real-time traffic information gathered through Geographic Information Systems (GIS) for achieving optimal vehicle routing within a dynamically stochastic transportation network. We present a systematic approach to determining the dynamically varying parameters and implementation attributes that were used for the development of a Web-based transportation routing application integrated with real-time GIS services. We propose and implement an optimal routing algorithm by modifying Dijkstra's algorithm to incorporate stochastically changing traffic flows. We describe the significant features of our Web application in making use of real-time dynamic traffic flow information from GIS services to achieve total cost savings and reduced vehicle usage. These features help users and vehicle drivers improve their service levels and productivity, as the Web application enables them to interactively find the optimal path and identify destinations effectively.
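A minimal sketch of the underlying idea, looking up edge costs from live (time-varying) traffic information inside Dijkstra's algorithm, is shown below. The cost callback stands in for a real-time GIS feed and the graph is a toy example, so this is an assumption-laden illustration rather than the authors' implementation:

```python
import heapq

def dynamic_dijkstra(graph, source, target, edge_cost):
    """graph: {node: [neighbour, ...]}; edge_cost(u, v, t) returns the travel time
    of edge (u, v) when departing u at time t (e.g. from a live traffic feed)."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v in graph.get(u, []):
            nd = d + edge_cost(u, v, d)          # cost depends on the current arrival time at u
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path from target back to source.
    path, node = [], target
    while node in prev or node == source:
        path.append(node)
        if node == source:
            break
        node = prev[node]
    return dist.get(target, float("inf")), list(reversed(path))

# Toy example: congestion doubles the A->C cost after time 5.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
base = {("A", "B"): 4, ("A", "C"): 2, ("B", "D"): 1, ("C", "D"): 5}
cost = lambda u, v, t: base[(u, v)] * (2 if (u, v) == ("A", "C") and t >= 5 else 1)
print(dynamic_dijkstra(graph, "A", "D", cost))   # -> (5.0, ['A', 'B', 'D'])
```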
Malware detection based on structural and behavioural features of API calls
- Authors: Alazab, Mamoun , Layton, Robert , Venkatraman, Sitalakshmi , Watters, Paul
- Date: 2010
- Type: Text , Conference proceedings
- Full Text: false
- Description: In this paper, we propose a five-step approach to detect obfuscated malware by investigating the structural and behavioural features of API calls. We have developed a fully automated system to disassemble executables and extract API call features effectively. Using n-gram statistical analysis of binary content, we are able to classify whether an executable file is malicious or benign. Our experimental results with a dataset of 242 malware samples and 72 benign files have shown a promising accuracy of 96.5% for the unigram model. We also provide a preliminary analysis of our approach using a support vector machine (SVM); by varying n from 1 to 5, we have analysed performance measures including accuracy, false positives and false negatives. By applying SVM, we propose to train the classifier and derive an optimum n-gram model for detecting both known and unknown malware efficiently.
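A compact, illustrative sketch of the n-gram side of such a pipeline is shown below: token sequences extracted from binaries are turned into n-gram count vectors and scored with an SVM. The hex strings and labels are placeholders, and scikit-learn's CountVectorizer stands in for whatever feature extractor the authors' tool uses:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Placeholder "binary content" rendered as space-separated hex bytes.
samples = [
    "6a 00 68 00 30 40 00 e8",   # pretend-malicious
    "55 8b ec 83 ec 08 c7 45",   # pretend-benign
    "6a 00 68 10 30 40 00 e8",
    "55 8b ec 51 c7 45 fc 00",
]
labels = [1, 0, 1, 0]            # 1 = malicious, 0 = benign

# Unigram model; raise ngram_range to (1, 5) to mimic the n = 1..5 comparison.
model = make_pipeline(
    CountVectorizer(analyzer="word", token_pattern=r"\S+", ngram_range=(1, 1)),
    SVC(kernel="linear"),
)
model.fit(samples, labels)
print(model.predict(["6a 00 68 20 30 40 00 e8"]))   # -> [1] on this toy data
```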
The Impact of Biometric Systems on Communities: Perspectives and Challenges
- Authors: Venkatraman, Sitalakshmi , Kulkarni, Siddhivinayak
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at ACKMIDS 2008: Harnessing Knowledge Management to Build Communities, 11th Annual Australian Conference on Knowledge Management and Intelligent Decision Support p. 1-17
- Full Text:
- Reviewed:
Cloud computing: A research roadmap in coalescence with software engineering
- Authors: Venkatraman, Sitalakshmi , Wadhwa, Bimlesh
- Date: 2012
- Type: Text , Journal article
- Relation: Software Engineering Vol. 2, no. 2 (2012), p. 7-17
- Full Text: false
- Reviewed:
Malicious code detection using penalized splines on OPcode frequency
- Authors: Alazab, Mamoun , Al Kadiri, Mohammad , Venkatraman, Sitalakshmi , Al-Nemrat, Ameer
- Date: 2012
- Type: Text , Conference proceedings
- Full Text: false
- Description: Recently, malicious software has been growing exponentially due to the innumerable obfuscations of extended x86 IA-32 instructions (OPcodes) that are employed to evade traditional detection methods. In this paper, we design a novel distinguisher to separate malware from benign software that combines a multivariate logistic regression model using kernel HS in penalized splines with an OPcode frequency feature selection technique for efficiently detecting obfuscated malware. The main advantage of our penalized-splines-based feature selection technique is its performance, achieved through the efficient filtering and identification of the most important OPcodes used in the obfuscation of malware. This is demonstrated through the successful implementation and experimental results of our proposed model on large malware datasets. The presented approach is effective at identifying previously examined malware and non-malware to assist in reverse engineering. © 2012 IEEE.
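As a rough illustration of the general idea (a spline basis expansion of an opcode-frequency covariate fed to a penalized logistic regression), the sketch below uses a truncated power basis, a ridge penalty, and synthetic data; it does not reproduce the paper's specific kernel or penalty:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def spline_basis(x, knots, degree=3):
    """Truncated power basis for a penalized spline in one covariate."""
    cols = [x ** d for d in range(1, degree + 1)]
    cols += [np.clip(x - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

# Hypothetical covariate: frequency of one OPcode per executable (placeholder data).
rng = np.random.default_rng(1)
freq = rng.uniform(0, 1, 300)
y = (freq + 0.1 * rng.standard_normal(300) > 0.5).astype(int)   # placeholder labels

knots = np.quantile(freq, [0.25, 0.5, 0.75])
X = spline_basis(freq, knots)

# The ridge (L2) penalty on the spline coefficients plays the role of the smoothing penalty.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.3f}")
```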
Analysis of firewall log-based detection scenarios for evidence in digital forensics
- Authors: Mukhtar, Rubiu , Al-Nemrat, Ameer , Alazab, Mamoun , Venkatraman, Sitalakshmi , Jahankhani, Hamid
- Date: 2012
- Type: Text , Journal article
- Relation: International Journal of Electronic Security and Digital Forensics Vol. 4, no. 4 (2012), p. 261-279
- Full Text: false
- Reviewed:
- Description: With the recent escalating rise in cybercrime, firewall logs have attained much research focus in assessing their capability to serve as evidence in digital forensics. Even though the main aim of firewalls is to screen or filter part or all network traffic, firewall logs can provide rich traffic information that could be used as evidence to prove or disprove the occurrence of online attack events for legal purposes. Since courts have a definition of what may be presented to them as evidence, this research investigates the determinants of the acceptability of firewall logs as suitable evidence. Two commonly used determinants are tested using three different firewall-protected network scenarios: 1) admissibility, which requires the evidence to satisfy certain legal requirements stipulated by the courts; and 2) weight, which represents the sufficiency and extent to which the evidence convinces the establishment of a cybercrime attack. Copyright © 2012 Inderscience Enterprises Ltd.
The role of emotional intelligence on the resolution of disputes involving the electronic health record
- Authors: Bellucci, Emilia , Venkatraman, Sitalakshmi , Muecke, Nial , Stranieri, Andrew
- Date: 2012
- Type: Text , Conference paper
- Relation: Fifth Australasian workshop on health informatics and knowledge management p. 3-12
- Full Text: false
- Reviewed:
Transforming web and grid services to cloud services - can it be a success?
- Authors: Ramathan, Vengkat , Venkatraman, Sitalakshmi
- Date: 2010
- Type: Text , Conference proceedings
- Full Text: false
- Description: The coming transformation of computing services into the cloud is a major change for the industry. Cloud services offer on-demand computing as a utility that promises to reduce costs and improve service quality levels. However, there are concerns related to privacy, security, control and governance of the cloud. With the main goal of addressing these risks, this paper provides deep insights and practical guidelines for cloud architects, service providers and consumers for a smooth migration into the cloud.
Modeling of secured cloud network: The case of an educational institute
- Authors: Bevinakoppa, Savitri , Sharma, Geetu , Venkatraman, Sitalakshmi
- Date: 2013
- Type: Text , Conference paper
- Relation: Recent Researches in Information Science & Applications p. 150-155
- Full Text: false
- Reviewed:
Cybercrime : The case of obfuscated malware
- Authors: Alazab, Mamoun , Venkatraman, Sitalakshmi , Watters, Paul , Alazab, Moutaz , Alazab, Ammar
- Date: 2011
- Type: Text , Conference paper
- Relation: Joint 7th International Conference on Global Security, Safety and Sustainability, ICGS3 2011, and the 4th Conference on e-Democracy Vol. 99 LNICST, p. 204-211
- Full Text: false
- Reviewed:
- Description: Cybercrime has developed rapidly in recent years, and malware is one of the major security threats in computing, having been in existence since the very early days. There is a lack of understanding of such malware threats and of the mechanisms that can be used to implement preventive security as well as to detect the threat. The main contribution of this paper is a step towards addressing this by investigating the different techniques adopted by obfuscated malware, as it is becoming increasingly widespread and sophisticated, with zero-day exploits. In particular, by adopting certain effective detection methods, our investigations show how cybercriminals make use of file system vulnerabilities to inject hidden malware into the system. The paper also describes the recent trends of Zeus botnets and the importance of employing anomaly detection in addressing the new Zeus generation of malware. © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.
Corporate sustainability : An IS approach for integrating triple bottom line elements
- Authors: Venkatraman, Sitalakshmi , Nayak, Ravi
- Date: 2015
- Type: Text , Journal article
- Relation: Social Responsibility Journal Vol. 11, no. 3 (2015), p. 482-501
- Full Text: false
- Reviewed:
- Description: Purpose - The purpose of this paper is to investigate the inter-relationships among three triple bottom line (TBL) outcomes of corporate sustainability, namely, corporate environmental performance outcome (CEPO), corporate social performance outcome (CSPO) and corporate financial performance outcome (CFPO), with the aid of an empirical study conducted in Australian businesses. The paper also aims to provide a roadmap for integrating sustainable business practices using information systems (IS) approach of continuous improvement lifecycle. Current business practices try to achieve economic, social and ecological goals independently as silos due to the individual operational challenges posed by each of these TBL principles. Design/methodology/approach - The research design mainly adopts a quantitative research methodology with data collected by means of a survey questionnaire that included both descriptive and exploratory flavour. The empirical study examines the relationships of TBL elements as perceived by 85 different Australian-based large, medium as well as small business organisations. The data collected were analysed by performing factor analysis on 21 items, resulting in three latent factors that were aligned to TBL outcomes and the correlations among them were analysed to assess their inter-relationships. Findings - The results of the study report weak and positive relationships existing between the TBL elements, with insights gained through the study leading towards useful implications that are well-supported by the qualitative feedback. The empirical study has also resulted in providing practical recommendations and an implementation framework consisting of a four-step roadmap with the participation of quality circles within an IS approach. Practical implications - The study focuses on inter-relationships and integration of TBL elements in Australian businesses. This could be extended to other businesses in different countries. The proposed roadmap with a continuous improvement cycle of system implementation steps facilitates any organisation to adopt an incremental integration of the social responsibility and environment protection practices within its core business operations for achieving corporate sustainability. Originality/value - While most of the TBL studies conducted worldwide focus on predominantly assessing large organisations towards responsible and sustainable business practices, this paper considers large, medium and small businesses. The research methodology adopted in this study as well as the proposed IS approach with quality circles add value to a growing body of literature with a recent increasing focus on integrated approaches for corporate sustainability.
Diagnostic with incomplete nominal/discrete data
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic/classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and is helpful in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist or can be reformulated for this representation. Some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour, and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match of the true data; however, they are intended to cause the least hindrance to classification. The proposed techniques find their application particularly in medical diagnostics. Where clinical data represents a number of related conditions, taking the Cartesian product of class values of the underlying sub-problems allows narrowing down the selection of missing value substitutes. Real-world data examples, some publicly available, are used for testing. The proposed and benchmark methods are compared by classifying the data before and after missing value imputation, indicating a significant improvement.
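A minimal sketch of the idea of reusing a classifier as a data completion method, treating the attribute with missing entries as the prediction target, is given below; the nominal toy data and the simple integer encoding are assumptions of the sketch, not the paper's procedure:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def impute_nominal(df, target_col):
    """Fill missing values of one nominal column by predicting them from the other columns."""
    known = df[df[target_col].notna()]
    unknown = df[df[target_col].isna()]
    if unknown.empty:
        return df
    features = df.columns.drop(target_col)
    # Encode nominal predictors as integer codes (simple placeholder encoding).
    encoded = df[features].apply(lambda c: c.astype("category").cat.codes)
    clf = DecisionTreeClassifier().fit(encoded.loc[known.index], known[target_col])
    df = df.copy()
    df.loc[unknown.index, target_col] = clf.predict(encoded.loc[unknown.index])
    return df

# Toy nominal dataset with one missing entry.
data = pd.DataFrame({
    "symptom": ["a", "a", "b", "b", "a"],
    "test":    ["pos", "pos", "neg", "neg", "pos"],
    "status":  ["ill", "ill", "well", "well", None],
})
print(impute_nominal(data, "status"))   # the missing 'status' is filled with the predicted class
```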
Data analytics identify glycated haemoglobin co-markers for type 2 diabetes mellitus diagnosis
- Authors: Jelinek, Herbert , Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi
- Date: 2016
- Type: Text , Journal article
- Relation: Computers in Biology and Medicine Vol. 75, no. (2016), p. 90-97
- Full Text: false
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is increasingly used as an alternative test for the identification of type 2 diabetes mellitus (T2DM), or to supplement fasting blood glucose level and oral glucose tolerance test results, because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. HbA1c cut-off values of 6.5% or above have been recommended for clinical use based on the presence of diabetic comorbidities in population studies. However, outcomes of large trials with an HbA1c of 6.5% as a cut-off have been inconsistent for a diagnosis of T2DM. This suggests that an HbA1c cut-off of 6.5% as a single marker may not be sensitive enough, or may be too simple and miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms have been applied to a large clinical dataset to identify an optimal cut-off value for HbA1c and to identify whether additional biomarkers can be used together with HbA1c to enhance the diagnostic accuracy of T2DM. T2DM classification accuracy increased from 78.71% for HbA1c at 6.5% alone to 86.64% when 8-hydroxy-2-deoxyguanosine (8-OHdG), an oxidative stress marker, was included in the algorithm. A similar result was obtained when interleukin-6 (IL-6) was included (accuracy = 85.63%), but with a lower optimal HbA1c range between 5.73% and 6.22%. The application of data analytics to medical records from the Diabetes Screening programme demonstrates that data analytics, combined with large clinical datasets, can be used to identify clinically appropriate cut-off values and novel biomarkers that, when included, improve the accuracy of T2DM diagnosis even when HbA1c levels are below or equal to the current cut-off of 6.5%. © 2016 Elsevier Ltd.
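As a small illustration of deriving a data-driven cut-off together with a co-marker, a shallow decision tree can be fitted to HbA1c plus a second marker and its learned thresholds inspected. The data below is synthetic and the tree is only an analogue of the algorithms used in the study:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical screening data: HbA1c (%) and an oxidative-stress marker (arbitrary units).
rng = np.random.default_rng(2)
hba1c = np.concatenate([rng.normal(5.4, 0.4, 150), rng.normal(7.0, 0.8, 150)])
marker = np.concatenate([rng.normal(1.0, 0.3, 150), rng.normal(1.8, 0.4, 150)])
y = np.array([0] * 150 + [1] * 150)        # 1 = T2DM, 0 = no diabetes (placeholder labels)

X = np.column_stack([hba1c, marker])
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned split thresholds are the data-driven analogue of a fixed 6.5% cut-off.
print(export_text(tree, feature_names=["HbA1c", "oxidative-stress marker"]))
print(f"training accuracy: {tree.score(X, y):.3f}")
```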
Missing data imputation for individualised CVD diagnostic and treatment
- Authors: Venkatraman, Sitalakshmi , Yatsko, Andrew , Stranieri, Andrew , Jelinek, Herbert
- Date: 2016
- Type: Text , Conference paper
- Relation: Computing in Cardiology, 2016 Vol. 43, IEEE Computer Society
- Full Text: false
- Reviewed:
- Description: Cardiac health screening standards require increasingly more clinical tests, consisting of blood, urine and anthropometric measures as well as an extensive clinical and medication history. To ensure optimal screening referrals, diagnostic determinants need to be highly accurate to reduce false positives and the ensuing stress to individual patients. However, the data from individual patients taking part in population screening is often incomplete. The current study provides an imputation algorithm that has been applied to patient-centered cardiac health screening. Missing values are iteratively imputed in conjunction with combinations of values on subsets of selected features. The approach was evaluated on the DiabHealth dataset containing 2800 records with over 180 attributes. The results for predicting CVD after data completion showed a sensitivity and specificity of 94% and 99%, respectively. Removing variables that directly define cardiac events and associated conditions left 'age', followed by use of antihypertensive and anti-cholesterol medication (especially statins), among the best predictors.
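A minimal sketch of an imputation-then-classification pipeline, with sensitivity and specificity reported on held-out records, is shown below; the synthetic data and scikit-learn's IterativeImputer are stand-ins, not the study's algorithm or the DiabHealth dataset:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Hypothetical screening records with values missing at random (placeholders, not DiabHealth).
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)   # 1 = CVD event
X[rng.random(X.shape) < 0.15] = np.nan        # knock out roughly 15% of the entries

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
imputer = IterativeImputer(random_state=0).fit(X_train)
clf = LogisticRegression(max_iter=1000).fit(imputer.transform(X_train), y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(imputer.transform(X_test))).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```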