A count data model for heart rate variability forecasting and premature ventricular contraction detection
- Authors: Allami, Ragheed , Stranieri, Andrew , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2017
- Type: Text , Journal article
- Relation: Signal Image and Video Processing Vol. 11, no. 8 (2017), p. 1427-1435
- Full Text:
- Reviewed:
- Description: Heart rate variability (HRV) measures, including the standard deviation of inter-beat variations (SDNN), require at least 5 min of ECG recording for accurate measurement. In this paper, we predict the 5-min SDNN using count data derived from a 3-min ECG recording, and also detect premature ventricular contraction (PVC) beats with a high degree of accuracy. The approach uses count data combined with a Poisson-generated function that requires minimal computational resources and is well suited to remote patient monitoring with wearable sensors that have limited power, storage and processing capacity. The ease of use and accuracy of the algorithm provide an opportunity for accurate assessment of HRV and reduce the time taken to review patients in real time. PVC beat detection is implemented using the same count data model together with knowledge-based rules derived from clinical knowledge.
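As a concrete illustration of the quantities the abstract refers to, a minimal sketch (not the authors' implementation) of SDNN and of binning inter-beat intervals into counts; the 50 ms bin width is an illustrative assumption:

```python
import statistics

def sdnn(nn_intervals_ms):
    # SDNN: standard deviation of normal-to-normal inter-beat intervals (ms)
    return statistics.stdev(nn_intervals_ms)

def interval_counts(nn_intervals_ms, bin_width_ms=50):
    # Bin inter-beat intervals into counts, the raw material of a count
    # data model (the bin width here is illustrative, not the paper's)
    counts = {}
    for nn in nn_intervals_ms:
        edge = int(nn // bin_width_ms) * bin_width_ms
        counts[edge] = counts.get(edge, 0) + 1
    return counts
```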
Data-analytically derived flexible HbA1c thresholds for type 2 diabetes mellitus diagnostic
- Authors: Stranieri, Andrew , Yatsko, Andrew , Jelinek, Herbert , Venkatraman, Sitalakshmi
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 5, no. 1 (2015), p. 111-134
- Full Text:
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is now commonly used as an alternative to the fasting plasma glucose and oral glucose tolerance tests for identifying Type 2 Diabetes Mellitus (T2DM), because it is easily obtained with point-of-care technology and reflects long-term blood sugar levels. According to WHO guidelines, an HbA1c value of 6.5% or above is required for a diagnosis of T2DM. However, outcomes of a large number of trials with HbA1c have been inconsistent across the clinical spectrum, and further research is required to determine the efficacy of HbA1c testing in identifying T2DM. Medical records from a diabetes screening program in Australia show that many patients could be classified as diabetic if other clinical indicators are included, even though their HbA1c result does not exceed 6.5%. This suggests that a single population-wide cutoff of 6.5% may be too simple and may miss individuals at risk or with overt, undiagnosed diabetes. In this study, data mining algorithms have been applied to identify markers that can be used alongside HbA1c. The results indicate that T2DM is best classified by an HbA1c cutoff of 6.2%, lower than the currently recommended level. Under a flexible-threshold assumption, the cutoff can be lowered further if, in addition to HbA1c being high, the rule is conditioned on oxidative stress or inflammation being present, atherogenicity or adiposity being high, or hypertension being diagnosed.
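The flexible-threshold idea can be sketched as a simple rule. The 6.2% base cutoff is from the abstract; the lower 6.0% conditional cutoff and the marker names are illustrative assumptions, not the study's mined rules:

```python
def t2dm_flag(hba1c_pct, markers):
    # Base cutoff of 6.2% as reported in the study
    if hba1c_pct >= 6.2:
        return True
    # Hypothetical lower cutoff when another risk marker is present
    # (marker names and the 6.0% figure are illustrative, not from the paper)
    risk_markers = ("oxidative_stress", "inflammation",
                    "high_atherogenicity", "high_adiposity", "hypertension")
    if hba1c_pct >= 6.0 and any(markers.get(m) for m in risk_markers):
        return True
    return False
```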
Diagnostic with incomplete nominal/discrete data
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic or classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and helps in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist, or can be reformulated, for this representation, and some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour and Naive Bayesian methods are shown to have the required aptness. An approach is implemented whereby the imputed missing values are not necessarily a close match to the true data; rather, they are chosen to cause the least hindrance to classification. The proposed techniques find their application particularly in medical diagnostics. Where clinical data represents a number of related conditions, taking the Cartesian product of the class values of the underlying sub-problems narrows down the selection of missing-value substitutes. Real-world data examples, some publicly available, are used for testing. The proposed and benchmark methods are compared by classifying the data before and after missing value imputation, and the comparison indicates a significant improvement.
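One way a classifier can double as a completion method, in the spirit the abstract describes, is to fill a missing nominal value with the value most frequent among records of the same class. A minimal sketch under that assumption (record layout and field names are hypothetical):

```python
from collections import Counter

def impute_by_class_mode(records, attr, label):
    # Collect, per class label, the frequency of observed values of `attr`
    modes = {}
    for r in records:
        if r[attr] is not None:
            modes.setdefault(r[label], Counter())[r[attr]] += 1
    # Replace each missing value with the modal value for that record's class
    for r in records:
        if r[attr] is None and r[label] in modes:
            r[attr] = modes[r[label]].most_common(1)[0][0]
    return records
```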
Patient admission prediction using a pruned fuzzy min-max neural network with rule extraction
- Authors: Wang, Jin , Lim, Cheepeng , Creighton, Douglas , Khosravi, Abbas , Nahavandi, Saeid , Ugon, Julien , Vamplew, Peter , Stranieri, Andrew , Martin, Laura , Freischmidt, Anton
- Date: 2015
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 26, no. 2 (2015), p. 277-289
- Full Text: false
- Reviewed:
- Description: A patient admission prediction model that helps the emergency department of a hospital admit patients efficiently is of great importance: it not only improves the quality of care provided by the emergency department but also reduces patients' waiting time. This paper proposes an automatic prediction method for patient admission based on a fuzzy min-max neural network (FMM) with rule extraction. The FMM neural network forms a set of hyperboxes by learning from data samples, and the learned knowledge is used for prediction. In addition to providing predictions, decision rules are extracted from the FMM hyperboxes to explain each prediction. To simplify the structure of the FMM and of the decision rules, an optimization method is proposed that simultaneously maximizes prediction accuracy and minimizes the number of FMM hyperboxes; specifically, a genetic algorithm is formulated to find the optimal configuration of the decision rules. Experimental results on a large data set of 450,740 real patient records show that the proposed method achieves prediction accuracy comparable to, or better than, state-of-the-art classifiers, with the additional ability to extract a set of explanatory rules that justify its predictions.
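The hyperbox idea behind FMM, and why rules fall out of it directly, can be sketched as follows. This is a common Simpson-style min-based membership variant, not necessarily the paper's exact formulation; `gamma` is the usual sensitivity parameter:

```python
def hyperbox_membership(x, vmin, vmax, gamma=4.0):
    # Membership is 1 inside the box and decays with distance outside it,
    # taking the worst-fitting dimension (one common min-based FMM variant)
    m = 1.0
    for xi, lo, hi in zip(x, vmin, vmax):
        below = max(0.0, min(1.0, gamma * (lo - xi)))
        above = max(0.0, min(1.0, gamma * (xi - hi)))
        m = min(m, 1.0 - max(below, above))
    return m

def hyperbox_to_rule(names, vmin, vmax):
    # A learned hyperbox reads directly as an interval decision rule
    return " AND ".join(f"{lo:g} <= {n} <= {hi:g}"
                        for n, lo, hi in zip(names, vmin, vmax))
```

Pruning hyperboxes with the genetic algorithm then shortens exactly this list of interval rules.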
Using meta-regression data mining to improve predictions of performance based on heart rate dynamics for Australian football
- Authors: Jelinek, Herbert , Kelarev, Andrei , Robinson, Dean , Stranieri, Andrew , Cornforth, David
- Date: 2014
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 14, no. PART A (2014), p. 81-87
- Full Text: false
- Reviewed:
- Description: This work investigates the effectiveness of computer-based machine learning regression algorithms and meta-regression methods for predicting performance data of Australian football players from parameters collected during daily physiological tests. Three experiments are described. The first uses all available data with a variety of regression techniques. The second uses a subset of features selected from the available data using the Random Forest method. The third uses meta-regression with the selected feature subset. Our experiments demonstrate that feature selection and meta-regression improve the accuracy of match-performance predictions based on daily medical test data, compared to regression methods alone, obtaining prediction outcomes with significant correlation coefficients. The best results were obtained by additive regression based on isotonic regression over the set of most influential features selected by Random Forest; this model predicted athlete performance with a correlation coefficient of 0.86 (p < 0.05).
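Isotonic regression, the base learner behind the best-performing model above, fits the closest nondecreasing sequence to the targets. A minimal pool-adjacent-violators sketch (not the study's implementation) for already-ordered inputs:

```python
def isotonic_fit(y):
    # Pool Adjacent Violators: merge adjacent blocks whose means decrease,
    # yielding the least-squares nondecreasing fit to y
    blocks = []  # each block holds [sum, count]
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and (blocks[-2][0] / blocks[-2][1]
                                   > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted
```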
Structured reasoning to support deliberative dialogue
- Authors: Macfadyen, Alyx , Stranieri, Andrew , Yearwood, John
- Date: 2005
- Type: Text , Journal article
- Relation: Lecture Notes in Artificial Intelligence 3681: Knowledge-Based Intelligent Information and Engineering Systems, 9th International Conference, KES 2005, Melbourne, Australia, September 2005, Proceedings, Part 1 Vol. 1 (2005), p. 283-289
- Full Text:
- Reviewed:
- Description: Deliberative dialogue is a form of dialogue in which participants advance claims and, without power plays or posturing, deliberate on the claims of others until a consensus decision is reached. This paper describes a deliberative support system that facilitates and encourages participants to engage in a discussion deliberatively. A knowledge representation framework is deployed to generate a strong domain model of the reasoning structure. This structure, coupled with a deliberative dialogue protocol, results in a web-based system that regulates a discussion so as to avoid combative, non-deliberative exchanges. The system has been designed for online dispute resolution between husband and wife in divorce proceedings involving property.
Argumentation structures that integrate dialectical and non-dialectical reasoning
- Authors: Stranieri, Andrew , Zeleznikow, John , Yearwood, John
- Date: 2001
- Type: Text , Journal article
- Relation: Knowledge Engineering Review Vol. 16, no. 4 (Dec 2001), p. 331-348
- Full Text:
- Reviewed:
- Description: Argumentation concepts have been applied to numerous knowledge engineering endeavours in recent years. For example, a variety of logics have been developed to represent argumentation in the context of a dialectical situation such as a dialogue. In contrast to the dialectical approach, argumentation has also been used to structure knowledge. This can be seen as a non-dialectical approach. The Toulmin argument structure has often been used to structure knowledge non-dialectically yet most studies that apply the Toulmin structure do not use the original structure but vary one or more components. Variations to the Toulmin structure can be understood as different ways to integrate a dialectical perspective with a non-dialectical one. Drawing the dialectical/non-dialectical distinction enables the specification of a framework called the generic actual argument model that is expressly non-dialectical. The framework enables the development of knowledge-based systems that integrate a variety of inference procedures, combine information retrieval with reasoning and facilitate automated document drafting. Furthermore, the non-dialectical framework provides the foundation for simple dialectical models. Systems based on our approach have been developed in family law, refugee law, determining eligibility for government legal aid, copyright law and e-tourism.
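Used non-dialectically, the Toulmin structure amounts to a data structure for knowledge. A minimal sketch of its classic six components (field names follow Toulmin's terminology, not the paper's generic actual argument model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToulminArgument:
    claim: str                                     # the assertion advanced
    data: List[str] = field(default_factory=list)  # grounds offered for the claim
    warrant: str = ""                              # licence linking data to claim
    backing: str = ""                              # support for the warrant itself
    rebuttal: str = ""                             # conditions defeating the claim
    qualifier: str = ""                            # strength, e.g. "presumably"
```

Variations of the kind the abstract discusses correspond to adding, dropping or reinterpreting these fields.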
Tools for placing legal decision support systems on the world wide web
- Authors: Stranieri, Andrew , Yearwood, John , Zeleznikow, John
- Date: 2001
- Type: Text , Conference paper
- Relation: Paper presented at Eighth International Conference on Artificial Intelligence and Law, ICAIL 2001, St. Louis, USA : 21st-25th May 2001
- Full Text: false