Data analytics to select markers and cut-off values for clinical scoring
- Authors: Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi , Jelinek, Herbert
- Date: 2018
- Type: Text , Conference proceedings
- Relation: ACSW '18: Proceedings of the Australasian Computer Science Week Multiconference; Brisbane; 29th January - 2nd February 2018, p. 1-6
- Full Text: false
- Reviewed:
- Description: Scoring systems, such as the Glasgow Coma Scale used to assess consciousness and AusDrisk used to assess the risk of diabetes, are prevalent in clinical practice. Scoring systems typically include relevant variables with ordinal values, where each value is assigned a weight. Weights for the selected values are summed and compared to thresholds so that health care professionals can rapidly generate a score. Such systems are widely used because they are easy and quick to apply. However, most scoring systems comprise many variables and require some time to calculate a final score. Further, expensive population-wide studies are required to validate a scoring system. In this article, we present a new approach for the generation of a scoring system. The approach uses a search procedure invoking iterative decision tree induction to identify a suite of scoring rules, each of which requires values on only two variables. Twelve scoring rules were discovered by applying the approach to data from an Australian screening program for the assessment of Type 2 Diabetes risk. However, classifications from the 12 rules can conflict. In this paper we argue that a simple rule preference relation is sufficient for the resolution of rule conflicts.
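The two-variable rule suite with a preference relation that this abstract describes can be sketched roughly as follows. The variables, thresholds and risk labels below are invented for illustration; they are not the twelve published rules.

```python
# Hypothetical two-variable scoring rules resolved by a preference relation:
# list order encodes priority, so the first rule that fires wins.

def make_rule(var_a, thr_a, var_b, thr_b, label):
    """Build a rule that fires when both variables meet their cut-offs."""
    def rule(record):
        if record.get(var_a, 0) >= thr_a and record.get(var_b, 0) >= thr_b:
            return label
        return None
    return rule

RULES = [  # preference order: earlier rules override later ones
    make_rule("HbA1c", 6.5, "age", 50, "high risk"),
    make_rule("BMI", 30, "age", 40, "moderate risk"),
]

def classify(record, default="low risk"):
    for rule in RULES:  # the first firing rule resolves any conflict
        outcome = rule(record)
        if outcome is not None:
            return outcome
    return default
```

Ordering the rules is the simplest possible preference relation: when two rules fire on the same patient, the earlier rule's classification stands.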
Data analytics identify glycated haemoglobin co-markers for type 2 diabetes mellitus diagnosis
- Authors: Jelinek, Herbert , Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi
- Date: 2016
- Type: Text , Journal article
- Relation: Computers in Biology and Medicine Vol. 75 (2016), p. 90-97
- Full Text: false
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is increasingly used as an alternative test for the identification of type 2 diabetes mellitus (T2DM), or to supplement fasting blood glucose level and oral glucose tolerance test results, because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. HbA1c cut-off values of 6.5% or above have been recommended for clinical use based on the presence of diabetic comorbidities in population studies. However, outcomes of large trials using an HbA1c of 6.5% as a cut-off have been inconsistent for a diagnosis of T2DM. This suggests that an HbA1c cut-off of 6.5% as a single marker may not be sensitive enough, or may be too simple, and may miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms were applied to a large clinical dataset to identify an optimal cut-off value for HbA1c and to determine whether additional biomarkers can be used together with HbA1c to enhance the diagnostic accuracy of T2DM. T2DM classification accuracy increased from 78.71% for HbA1c at 6.5% alone to 86.64% when 8-hydroxy-2-deoxyguanosine (8-OHdG), an oxidative stress marker, was included in the algorithm. A similar result was obtained when interleukin-6 (IL-6) was included (accuracy = 85.63%), but with a lower optimal HbA1c range between 5.73% and 6.22%. The application of data analytics to medical records from the diabetes screening programme demonstrates that data analytics, combined with large clinical datasets, can be used to identify clinically appropriate cut-off values and novel biomarkers that, when included, improve the accuracy of T2DM diagnosis even when HbA1c levels are below or equal to the current cut-off of 6.5%. © 2016 Elsevier Ltd.
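The cut-off search underlying results like these can be sketched as a scan over candidate thresholds, each scored by classification accuracy, with an optional co-marker clause in the spirit of the 8-OHdG and IL-6 findings. The records and the co-marker cut-off below are synthetic, purely for illustration.

```python
# Scan HbA1c cut-off candidates and score each by accuracy on labelled
# records of the form (hba1c, co_marker_value, has_t2dm). Synthetic data.

def accuracy(records, cutoff, use_co_marker=False, co_cutoff=0.5):
    correct = 0
    for hba1c, co_value, has_t2dm in records:
        predicted = hba1c >= cutoff or (use_co_marker and co_value >= co_cutoff)
        correct += predicted == has_t2dm
    return correct / len(records)

data = [(7.1, 0.2, True), (6.0, 0.9, True), (5.4, 0.1, False),
        (6.6, 0.3, True), (5.9, 0.2, False), (6.1, 0.8, True)]

# Best (accuracy, cutoff) pair over candidates from 5.50% to 6.95%
best = max((accuracy(data, c / 100), c / 100) for c in range(550, 700, 5))
```

On this toy data the scan settles on a cut-off below 6.5%, and conditioning on the co-marker lets an even lower HbA1c threshold reach the same accuracy, mirroring the pattern the abstract reports.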
Missing data imputation for individualised CVD diagnostic and treatment
- Authors: Venkatraman, Sitalakshmi , Yatsko, Andrew , Stranieri, Andrew , Jelinek, Herbert
- Date: 2016
- Type: Text , Conference paper
- Relation: Computing in Cardiology 2016, Vol. 43, IEEE Computer Society
- Full Text: false
- Reviewed:
- Description: Cardiac health screening standards require increasingly many clinical tests, consisting of blood, urine and anthropometric measures as well as an extensive clinical and medication history. To ensure optimal screening referrals, diagnostic determinants need to be highly accurate to reduce false positives and the ensuing stress to individual patients. However, the data from individual patients partaking in population screening are often incomplete. The current study provides an imputation algorithm that has been applied to patient-centered cardiac health screening. Missing values are iteratively imputed in conjunction with combinations of values on subsets of selected features. The approach was evaluated on the DiabHealth dataset containing 2800 records with over 180 attributes. The results for predicting CVD after data completion showed sensitivity and specificity of 94% and 99% respectively. Once variables that directly define cardiac events and associated conditions were removed, 'age' remained among the best predictors, followed by use of antihypertensive and anti-cholesterol medication, especially statins.
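A minimal sketch of imputing a missing value in conjunction with values on a subset of selected features: fill each gap with the mean over records that agree on the grouping features, falling back to the global mean. This is an assumed simplification of the published algorithm, with invented field names.

```python
# Group-conditioned mean imputation; None marks a missing value.
from statistics import mean

def impute(records, target, group_keys):
    """Fill missing `target` values with the mean over records sharing
    values on `group_keys`; fall back to the global mean."""
    known = [r for r in records if r[target] is not None]
    global_mean = mean(r[target] for r in known)
    for r in records:
        if r[target] is None:
            peers = [k[target] for k in known
                     if all(k[g] == r[g] for g in group_keys)]
            r[target] = mean(peers) if peers else global_mean

data = [{"sex": "F", "sbp": 120}, {"sex": "F", "sbp": 130},
        {"sex": "M", "sbp": 140}, {"sex": "F", "sbp": None},
        {"sex": "X", "sbp": None}]
impute(data, "sbp", ["sex"])
```

Iterating this over several target features, each conditioned on a different feature subset, gives the flavour of the iterative completion the abstract describes.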
Data-analytically derived flexible HbA1c thresholds for type 2 diabetes mellitus diagnostic
- Authors: Stranieri, Andrew , Yatsko, Andrew , Jelinek, Herbert , Venkatraman, Sitalakshmi
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 5, no. 1 (2015), p. 111-134
- Full Text:
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is now more commonly used as an alternative to the fasting plasma glucose and oral glucose tolerance tests for the identification of Type 2 Diabetes Mellitus (T2DM), because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. According to WHO guidelines, HbA1c values of 6.5% or above are required for a diagnosis of T2DM. However, outcomes of a large number of trials with HbA1c have been inconsistent across the clinical spectrum, and further research is required to determine the efficacy of HbA1c testing in the identification of T2DM. Medical records from a diabetes screening program in Australia illustrate that many patients could be classified as diabetic if other clinical indicators are included, even though the HbA1c result does not exceed 6.5%. This suggests that a single cutoff of 6.5% for the general population may be too simple and may miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms were applied to identify markers that can be used with HbA1c. The results indicate that T2DM is best classified by HbA1c at 6.2%, a cutoff lower than the currently recommended one. If the threshold is allowed to flex, it can be lowered further when, in addition to HbA1c being high, the rule is conditioned on oxidative stress or inflammation being present, atherogenicity or adiposity being high, or hypertension being diagnosed.
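The flexible-threshold idea reduces to a conditional rule: the cut-off sits at 6.2% and drops when a co-condition holds. The relaxed value of 5.7% below is an assumption for illustration, not a figure from the paper.

```python
# Toy rendering of a flexible HbA1c threshold conditioned on co-markers.

def t2dm_positive(hba1c, oxidative_stress=False, hypertension=False):
    """Classify T2DM with a cut-off that flexes on co-conditions."""
    cutoff = 6.2                       # data-derived base cut-off
    if oxidative_stress or hypertension:
        cutoff = 5.7                   # assumed relaxed threshold
    return hba1c >= cutoff
```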
Diagnostic with incomplete nominal/discrete data
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic/classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and is helpful in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist, or can be reformulated, for this representation. Some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour, and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match to the true data; rather, they are intended to cause the least hindrance to classification. The proposed techniques find their application particularly in medical diagnostics. Where clinical data represent a number of related conditions, taking the Cartesian product of the class values of the underlying sub-problems allows the selection of missing value substitutes to be narrowed down. Real-world data examples, some publicly available, are used for testing. The proposed and benchmark methods are compared by classifying the data before and after missing value imputation, indicating a significant improvement.
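The idea that a classifier can be "reinvented" as a completion method can be sketched in its simplest form for nominal data: substitute the value most frequent among records of the same class, a degenerate Naive-Bayesian estimate that tends to hinder classification least. This is an assumed reading, not the paper's implementation, and the field names are invented.

```python
# Class-conditional modal completion for nominal attributes.
from collections import Counter

def complete(records, feature, label):
    """Replace None in `feature` with the modal value among records
    of the same class."""
    modes = {}
    for r in records:
        if r[feature] is not None:
            modes.setdefault(r[label], Counter())[r[feature]] += 1
    for r in records:
        if r[feature] is None and r[label] in modes:
            r[feature] = modes[r[label]].most_common(1)[0][0]

rows = [{"bp": "high", "cls": "cvd"}, {"bp": "high", "cls": "cvd"},
        {"bp": "normal", "cls": "healthy"}, {"bp": None, "cls": "cvd"}]
complete(rows, "bp", "cls")
```

Because discretization has made every attribute nominal, the same mechanism serves all attribute types uniformly, which is the convenience the abstract points out.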
Novel data mining techniques for incomplete clinical data in diabetes management
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi
- Date: 2014
- Type: Text , Journal article
- Relation: British Journal of Applied Science & Technology Vol. 4, no. 33 (2014), p. 4591-4606
- Relation: https://doi.org/10.9734/BJAST/2014/11744
- Full Text:
- Reviewed:
- Description: An important part of health care involves the upkeep and interpretation of medical databases containing patient records for clinical decision making, diagnosis and follow-up treatment. Missing clinical entries make it difficult to apply data mining algorithms for clinical decision support. This study demonstrates that higher predictive accuracy is possible using conventional data mining algorithms if missing values are dealt with appropriately. We propose a novel algorithm using a convolution of sub-problems to stage a super problem, where classes are defined by the Cartesian product of the class values of the underlying problems, and Incomplete Information Dismissal and Data Completion techniques are applied for reducing features and imputing missing values. Predictive accuracies using Decision Tree, Nearest Neighbour and Naïve Bayesian classifiers were compared to predict diabetes, cardiovascular disease and hypertension. Data are derived from the Diabetes Screening Complications Research Initiative (DiScRi) conducted at a regional Australian university, involving more than 2400 patient records with more than one hundred clinical risk factors (attributes). The results show substantial improvements in the accuracy achieved with each classifier for an effective diagnosis of diabetes, cardiovascular disease and hypertension, as compared to the accuracy achieved without substituting missing values. The gain is 7% for diabetes, 21% for cardiovascular disease and 24% for hypertension, and our integrated novel approach has resulted in more than 90% accuracy for the diagnosis of any of the three conditions. This work advances data mining research towards an integrated and holistic management of diabetes.
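The super-problem construction is concrete enough to sketch: combined classes are tuples drawn from the Cartesian product of the class values of the related sub-problems. The sub-problem names and binary labels below are illustrative.

```python
# Staging a super problem whose classes are the Cartesian product of
# the class values of related sub-problems.
from itertools import product

sub_classes = {
    "diabetes": ["no", "yes"],
    "cvd": ["no", "yes"],
    "hypertension": ["no", "yes"],
}

# 2 x 2 x 2 = 8 combined classes for the staged super problem
super_classes = list(product(*sub_classes.values()))

def combine(record):
    """Map a record's sub-problem labels onto one super-problem class."""
    return tuple(record[k] for k in sub_classes)
```

Because the combined class carries information about all three conditions at once, it constrains which missing-value substitutes remain plausible, which is how the construction narrows down imputation.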
Capped K-NN Editing in definition lacking environments
- Authors: Stranieri, Andrew , Yatsko, Andrew , Golden, Isaac , Mammadov, Musa , Bagirov, Adil
- Date: 2013
- Type: Text , Journal article
- Relation: Journal of Pattern Recognition Research Vol. 8, no. 1 (2013), p. 39-58
- Full Text: false
- Reviewed:
- Description: While any input may contribute noise, imprecise specification of the class of data subdivided into classes is identified as a rather common source. The misrepresentation may be characteristic of the data or may be caused by forcing a regression problem into the classification type. Consideration is given to examples of this nature, and an alternative is proposed. In the main part, the approach is based on a well-known k-NN technique for treating noise in data. The paper advances an editing technique designed around the idea of a variable number of authenticating instances. Test runs performed on publicly available and proprietary data demonstrate the high retention ability of the new procedure without loss of classification accuracy. Noise reduction methods in the broader classification context are extensively surveyed.
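The editing idea can be sketched with a cap on the number of authenticating neighbours: an instance is retained when at least min(need, k) of its k nearest neighbours share its class, rather than demanding a fixed majority. The 1-D distance and the cap value below are illustrative, not the published procedure.

```python
# Capped k-NN editing over 1-D points; returns indices of retained instances.

def edit(points, labels, k=3, need=2):
    """Keep instances authenticated by at least min(need, k) of their
    k nearest neighbours."""
    keep = []
    for i, (p, c) in enumerate(zip(points, labels)):
        others = sorted((abs(p - q), labels[j])
                        for j, q in enumerate(points) if j != i)
        agree = sum(1 for _, lab in others[:k] if lab == c)
        if agree >= min(need, k):
            keep.append(i)
    return keep
```

On a toy sample where one 'b'-labelled point sits inside an 'a' cluster, the procedure drops only that noisy point and retains everything else, which is the high-retention behaviour the abstract claims.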
Feature selection using misclassification counts
- Authors: Bagirov, Adil , Yatsko, Andrew , Stranieri, Andrew
- Date: 2011
- Type: Conference proceedings , Unpublished work
- Relation: Proceedings of the 9th Australasian Data Mining Conference (AusDM 2011), 51-62. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 121.
- Full Text:
- Description: Dimensionality reduction of the problem space through the detection and removal of variables contributing little or nothing to classification can relieve the computational load and the instance acquisition effort, considering that all data attributes are accessed each time around. The approach to feature selection in this paper is based on the concept of coherent accumulation of data about class centers with respect to the coordinates of informative features. Ranking is done on the degree to which different variables exhibit random characteristics. The results are verified using the Nearest Neighbor classifier. This also helps to address feature irrelevance and redundancy, which ranking alone does not decide. Additionally, feature ranking methods from different independent sources are brought in for direct comparison.
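One simple way to realise ranking by misclassification counts, consistent with the Nearest Neighbor verification mentioned above, is to score each feature by how often a leave-one-out 1-NN classifier errs when that feature is used alone; variables behaving randomly accumulate errors and rank last. This is an assumed reading of the method, shown on toy data where feature 0 is informative and feature 1 is noise.

```python
# Rank features by leave-one-out 1-NN misclassification counts.

def errors_for_feature(rows, labels, f):
    """Count leave-one-out 1-NN errors using feature f only."""
    count = 0
    for i, row in enumerate(rows):
        nearest = min((abs(row[f] - other[f]), labels[j])
                      for j, other in enumerate(rows) if j != i)
        count += nearest[1] != labels[i]
    return count

def rank_features(rows, labels):
    """Order feature indices from fewest to most 1-NN errors."""
    return sorted(range(len(rows[0])),
                  key=lambda f: errors_for_feature(rows, labels, f))

rows = [(0.0, 5.0), (0.1, 1.0), (5.0, 5.1), (5.1, 0.9)]
labels = ["a", "a", "b", "b"]
```

Ranking alone cannot expose redundancy between two equally informative features, which is why the abstract pairs it with a classifier-based check.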