Emerging point of care devices and artificial intelligence: prospects and challenges for public health
- Authors: Stranieri, Andrew , Venkatraman, Sitalakshmi , Minicz, John , Zarnegar, Armita , Firmin, Sally , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2022
- Type: Text , Journal article
- Relation: Smart Health Vol. 24, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Risk assessments for numerous conditions can now be performed cost-effectively and accurately using emerging point of care devices coupled with machine learning algorithms. In this article, the case is advanced that point of care testing, in combination with risk assessments generated by artificial intelligence algorithms and applied to the universal screening of the general public for multiple conditions in one session, represents a new kind of inexpensive screening that can lead to the early detection of disease and other public health benefits. A case study of a diabetes screening clinic in a rural area of Australia is presented to illustrate its benefits. Universal, poly-aetiological screening is shown to meet the ten World Health Organisation criteria for screening programmes. © Elsevier Inc.
Comparing Pixel N-grams and bag of visual word features for the classification of diabetic retinopathy
- Authors: Kulkarni, Pradnya , Stranieri, Andrew , Jelinek, Herbert
- Date: 2019
- Type: Text , Conference proceedings
- Relation: ACSW 2019: Australasian Computer Science Week 2019;Sydney NSW Australia; January 29 - 31, 2019; published in Proceedings of the Australasian Computer Science Week Multiconference p. 1-7
- Full Text: false
- Reviewed:
- Description: The extraction of Bag of Visual Words (BoVW) features from retinal images for automated classification has been shown to be effective but computationally expensive. Histogram and co-variance matrix features do not generally result in models that have the same predictive accuracy as BoVW and are still computationally expensive. The discovery of features that result in accurate image classification on computationally constrained devices such as smartphones would enable new and promising applications for image classification. For example, smartphone retinal cameras could conceivably make diabetic retinopathy screening widely available and potentially reduce undiagnosed retinopathy, if classification could be achieved with computationally simple algorithms. A novel image feature extraction technique inspired by N-grams in text mining, called 'Pixel N-grams', is described that can serve this purpose. Results on mammogram and texture classification have shown high accuracy despite the reduced computational complexity. However, retinal scan classification results using Pixel N-grams lag behind BoVW approaches. An explanation for the relatively poor performance of Pixel N-grams with diabetic retinopathy, drawing on concepts associated with the No Free Lunch theorem, is presented.
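The abstract does not give an implementation; the following is a minimal sketch of what a horizontal Pixel N-gram histogram might look like, assuming intensities are first quantised to a few grey levels and runs of n consecutive pixels per row are counted, analogous to character n-grams in text.

```python
import numpy as np

def pixel_ngrams(image, n=2, levels=4):
    """Histogram of horizontal n-grams of quantised pixel intensities.

    Sketch of the 'Pixel N-grams' idea: quantise the image to `levels`
    grey levels, then count each run of n consecutive pixels (per row)
    as one n-gram. The feature vector has levels**n bins.
    """
    # Quantise intensities (assumed 0-255) into `levels` bins.
    q = (np.asarray(image, dtype=np.float64) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    hist = np.zeros(levels ** n, dtype=int)
    for row in q:
        for i in range(len(row) - n + 1):
            # Encode the n-gram as a single index in base `levels`.
            idx = 0
            for v in row[i:i + n]:
                idx = idx * levels + v
            hist[idx] += 1
    return hist

img = [[0, 0, 128, 255], [64, 64, 64, 192]]  # tiny illustrative "image"
features = pixel_ngrams(img, n=2, levels=4)
print(features.sum())  # 6 bigrams: 3 per row x 2 rows
```

The small feature vector (here 16 bins) is what makes the approach attractive for constrained devices, compared with the visual-word vocabularies BoVW requires.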
Integrating biological heuristics and gene expression data for gene regulatory network inference
- Authors: Zarnegar, Armita , Jelinek, Herbert , Vamplew, Peter , Stranieri, Andrew
- Date: 2019
- Type: Text , Conference proceedings , Conference paper
- Relation: 2019 Australasian Computer Science Week Multiconference, ACSW 2019; Sydney, Australia; 29th-31st January 2019 p. 1-10
- Full Text: false
- Reviewed:
- Description: Gene Regulatory Networks (GRNs) offer enhanced insight into the biological functions and biochemical pathways of cells associated with gene regulatory mechanisms. However, obtaining accurate GRNs that explain gene expressions and functional associations remains a difficult task. Only a few studies have incorporated heuristics into a GRN discovery process. Doing so has the potential to improve accuracy and reduce the search space and computational time. A technique for GRN discovery that integrates heuristic information into the discovery process is advanced. The approach incorporates three elements: 1) a novel 2D visualized co-expression function that measures the association between genes; 2) a post-processing step that improves detection of up-, down- and self-regulation; and 3) the application of heuristics to generate a Hub network as the backbone of the GRN. Using available microarray and next generation sequencing data from Escherichia coli, six synthetic benchmark GRN datasets were generated with the neighborhood addition and cluster addition methods available in SynTReN. Results of the novel 2D-visualized co-expression function were compared with results obtained using Pearson's correlation and mutual information. The performance of the biological genetics-based heuristics consisting of the 2D-visualized co-expression function, post-processing and Hub network was then evaluated by comparison with the GRNs obtained by ARACNe and CLR. The 2D-visualized co-expression function significantly improved gene-gene association matching compared to Pearson's correlation coefficient (t = 3.46, df = 5, p = 0.02) and mutual information (t = 4.42, df = 5, p = 0.007). The heuristics model gave a 60% improvement against ARACNe (p = 0.02) and CLR (p = 0.019).
Analysis of Escherichia coli data suggests that the GRN discovery technique proposed is capable of identifying significant transcriptional regulatory interactions and the corresponding regulatory networks.
Personalised measures of obesity using waist to height ratios from an Australian health screening program
- Authors: Jelinek, Herbert , Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi
- Date: 2019
- Type: Text , Journal article
- Relation: Digital Health Vol. 5, no. (2019), p. 1-8
- Full Text:
- Reviewed:
- Description: Objectives: The aim of the current study is to generate waist circumference to height ratio cut-off values for obesity categories from a model of the relationship between body mass index and waist circumference to height ratio. We compare the waist circumference to height ratio cut-offs discovered in this way with cut-off values currently prevalent in practice that were originally derived using pragmatic criteria. Method: Personalized data including age, gender, height, weight, waist circumference and presence of diabetes, hypertension and cardiovascular disease for 847 participants over eight years were assembled from participants attending a rural Australian health review clinic (DiabHealth). Obesity was classified based on the conventional body mass index measure (weight/height²) and compared to the waist circumference to height ratio. Correlations between the measures were evaluated on the screening data, and independently on data from the National Health and Nutrition Examination Survey that included age categories. Results: This article recommends waist circumference to height ratio cut-off values, based on an Australian rural sample and verified using the National Health and Nutrition Examination Survey database, that facilitate the classification of obesity in clinical practice. Gender-independent cut-off values are provided for the waist circumference to height ratio that identify the healthy (waist circumference to height ratio >= 0.45), overweight (0.53) and the three obese (0.60, 0.68, 0.75) categories, verified on the National Health and Nutrition Examination Survey dataset. A strong linearity between the waist circumference to height ratio and the body mass index measure is demonstrated. Conclusion: The recommended waist circumference to height ratio cut-off values provide a useful index for assessing stages of obesity and risk of chronic disease for improved healthcare in clinical practice.
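The cut-off values quoted in the abstract (0.45, 0.53, 0.60, 0.68, 0.75) can be sketched as a classification function. Treating each value as the upper bound of its category is an interpretation for illustration, not something the abstract states explicitly:

```python
def whtr_category(waist_cm, height_cm):
    """Map a waist-to-height ratio (WHtR) onto the obesity stages named
    in the abstract, using its gender-independent cut-off values.
    The band boundaries are an assumed reading of those values."""
    ratio = waist_cm / height_cm
    for upper, label in [(0.45, "healthy"), (0.53, "overweight"),
                         (0.60, "obese I"), (0.68, "obese II"),
                         (0.75, "obese III")]:
        if ratio <= upper:
            return label
    return "obese III"  # ratios above 0.75 stay in the top stage

print(whtr_category(76, 170))  # WHtR = 0.447
```

Unlike BMI, this index needs only a tape measure, which is why the paper positions it for screening-clinic use.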
Data analytics to select markers and cut-off values for clinical scoring
- Authors: Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi , Jelinek, Herbert
- Date: 2018
- Type: Text , Conference proceedings
- Relation: ACSW '18: Proceedings of the Australasian Computer Science Week Multiconference; Brisbane; 29th January -2nd February 2018 p. 1-6
- Full Text: false
- Reviewed:
- Description: Scoring systems, such as the Glasgow Coma Scale used to assess consciousness and AUSDRISK used to assess the risk of diabetes, are prevalent in clinical practice. Scoring systems typically include relevant variables with ordinal values where each value is assigned a weight. Weights for selected values are summed and compared to thresholds so that health care professionals can rapidly generate a score. Scoring systems are prevalent because they are easy and quick to use. However, most scoring systems comprise many variables and require some time to calculate a final score. Further, expensive population-wide studies are required to validate a scoring system. In this article, we present a new approach for the generation of a scoring system. The approach uses a search procedure invoking iterative decision tree induction to identify a suite of scoring rules, each of which requires values on only two variables. Twelve scoring rules were discovered using the approach, from an Australian screening program for the assessment of type 2 diabetes risk. However, classifications from the 12 rules can conflict. In this paper we argue that a simple rule preference relation is sufficient for the resolution of rule conflicts.
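The rule structure the abstract describes (each rule on only two variables, conflicts resolved by a preference order) can be sketched as follows. The rules, variable names and thresholds below are illustrative placeholders, not the twelve rules found in the paper:

```python
# Each scoring rule tests values on only two variables; a simple
# preference order resolves conflicting classifications.
RULES = [  # (name, two-variable predicate, classification)
    ("r1", lambda p: p["age"] >= 55 and p["bmi"] >= 30, "high risk"),
    ("r2", lambda p: p["waist"] < 90 and p["bmi"] < 25, "low risk"),
]
PREFERENCE = ["r1", "r2"]  # earlier rules win when classifications conflict

def classify(patient):
    """Fire all rules; if more than one fires, the preference relation
    picks the winner. Returns 'unclassified' when no rule fires."""
    fired = [(name, label) for name, pred, label in RULES if pred(patient)]
    if not fired:
        return "unclassified"
    fired.sort(key=lambda entry: PREFERENCE.index(entry[0]))
    return fired[0][1]
```

A total order over rules is the "simple rule preference relation" in miniature: it guarantees a unique classification without re-weighting or re-validating the individual rules.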
A count data model for heart rate variability forecasting and premature ventricular contraction detection
- Authors: Allami, Ragheed , Stranieri, Andrew , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2017
- Type: Text , Journal article
- Relation: Signal Image and Video Processing Vol. 11, no. 8 (2017), p. 1427-1435
- Full Text:
- Reviewed:
- Description: Heart rate variability (HRV) measures, including the standard deviation of inter-beat variations (SDNN), require at least 5 min of ECG recording for accurate measurement. In this paper, we predict the 5-min SDNN using count data derived from a 3-min ECG recording, and also detect premature ventricular contraction (PVC) beats with a high degree of accuracy. The approach uses count data combined with a Poisson-generated function that requires minimal computational resources and is well suited to remote patient monitoring with wearable sensors that have limited power, storage and processing capacity. The ease of use and accuracy of the algorithm provide an opportunity for accurate assessment of HRV and reduce the time taken to review patients in real time. PVC beat detection is implemented using the same count data model together with knowledge-based rules derived from clinical knowledge.
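The Poisson count-data model itself is not specified in the abstract; as background, the SDNN measure it forecasts is simply the sample standard deviation of the normal-to-normal inter-beat intervals:

```python
import statistics

def sdnn(rr_intervals_ms):
    """SDNN: standard deviation of normal-to-normal inter-beat (RR)
    intervals, the HRV measure the paper forecasts from a shorter
    recording. Input is a list of RR intervals in milliseconds."""
    return statistics.stdev(rr_intervals_ms)

rr = [812, 790, 845, 801, 830, 795]  # illustrative RR intervals (ms)
print(sdnn(rr))
```

The computation is O(n) over beat intervals, which is why it suits the power- and storage-constrained wearable sensors the paper targets.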
Atrial fibrillation analysis for real time patient monitoring
- Authors: Allami, Ragheed , Stranieri, Andrew , Marzbanrad, Faezeh , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2017
- Type: Text , Conference proceedings , Conference paper
- Relation: 44th Computing in Cardiology Conference, CinC 2017 Vol. 44, p. 1-4
- Full Text: false
- Reviewed:
- Description: Atrial fibrillation (AF) can lead to life-threatening conditions such as stroke and heart failure. The instant recognition of life-threatening cardiac arrhythmias from a 3-lead ECG recording a Lead II configuration for a few seconds is a challenging problem of clinical significance. Five consecutive ECG beats identified by a cardiologist as characterising an AF episode, and five consecutive heartbeat intervals representing an irregular RR-interval episode, were analysed. The detection and analysis of P waves as the morphological feature of AF were carried out using two template matching methods. An AF detector was developed by combining the correlation coefficients from the template matching methods with the standard deviation of the RR intervals. The detector was then applied to classify 5 consecutive beats as AF or non-AF by thresholding the calculated irregularity. The proposed algorithm was tested on the MIT-BIH Atrial Fibrillation and the Challenge 2017 databases. The proposed method achieved improved sensitivity, specificity and accuracy of 97.60%, 98.20% and 99%, respectively, compared to recently published methods. In addition, the method is suitable for real-time patient monitoring as it is computationally simple and requires only a few seconds of ECG recording to detect an AF rhythm. © 2017 IEEE Computer Society. All rights reserved.
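The decision logic of combining RR-interval irregularity with P-wave template correlation can be sketched as below. Both thresholds are illustrative assumptions; the abstract does not report the paper's actual values:

```python
import statistics

def detect_af(rr_intervals_s, p_wave_corr,
              rr_std_threshold=0.08, corr_threshold=0.5):
    """Flag a 5-beat window as AF when the RR intervals are irregular
    (high standard deviation, in seconds) AND the P wave correlates
    poorly with a normal-beat template. Thresholds are placeholders."""
    irregular = statistics.stdev(rr_intervals_s) > rr_std_threshold
    p_wave_absent = p_wave_corr < corr_threshold
    return irregular and p_wave_absent
```

Requiring both conditions mirrors the clinical picture of AF (irregular rhythm plus absent P waves), and each feature costs only a few arithmetic operations per window, consistent with the real-time claim.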
A heuristic gene regulatory networks model for cardiac function and pathology
- Authors: Zarnegar, Armita , Vamplew, Peter , Stranieri, Andrew , Jelinek, Herbert
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 Computing in Cardiology Conference (CinC); Vancouver; 11-14th Sept, 2016
- Full Text: false
- Reviewed:
- Description: Genome-wide association studies (GWAS) and next-generation sequencing (NGS) have led to an increase in information about the human genome and cardiovascular disease. Understanding the role of genes in cardiac function and pathology requires modeling gene interactions and identification of regulatory genes as part of a gene regulatory network (GRN). Feature selection and data reduction are not sufficient on their own and require domain knowledge to deal with large datasets. We propose three novel innovations in constructing a GRN based on heuristics: first, a 2D visualised co-regulation function; second, a post-processing step to identify gene-gene interactions; and finally, a threshold algorithm applied to identify the hub genes that provide the backbone of the GRN. The 2D visualised co-regulation function performed significantly better than Pearson's correlation for measuring pairwise associations (t = 3.46, df = 5, p = 0.018). The F-measure improved from 0.11 to 0.12. The hub network provided a 60% improvement over that reported in the literature. The performance of the hub network was also significantly better than that of ARACNe (p = 0.024). We conclude that a heuristic approach to developing GRNs has the potential to improve our understanding of gene regulation and interaction in diverse biological functions and disease.
Data analytics identify glycated haemoglobin co-markers for type 2 diabetes mellitus diagnosis
- Authors: Jelinek, Herbert , Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi
- Date: 2016
- Type: Text , Journal article
- Relation: Computers in Biology and Medicine Vol. 75, no. (2016), p. 90-97
- Full Text: false
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is being more commonly used as an alternative test for the identification of type 2 diabetes mellitus (T2DM), or to add to fasting blood glucose level and oral glucose tolerance test results, because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. HbA1c cut-off values of 6.5% or above have been recommended for clinical use based on the presence of diabetic comorbidities in population studies. However, outcomes of large trials with an HbA1c of 6.5% as a cut-off have been inconsistent for a diagnosis of T2DM. This suggests that an HbA1c cut-off of 6.5% as a single marker may not be sensitive enough, or may be too simple and miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms have been applied to a large clinical dataset to identify an optimal cut-off value for HbA1c and to determine whether additional biomarkers can be used together with HbA1c to enhance the diagnostic accuracy of T2DM. T2DM classification accuracy increased from 78.71% for HbA1c at 6.5% to 86.64% when 8-hydroxy-2-deoxyguanosine (8-OHdG), an oxidative stress marker, was included in the algorithm. A similar result was obtained when interleukin-6 (IL-6) was included (accuracy = 85.63%), but with a lower optimal HbA1c range between 5.73% and 6.22%. The application of data analytics to medical records from the diabetes screening programme demonstrates that data analytics, combined with large clinical datasets, can be used to identify clinically appropriate cut-off values and novel biomarkers that, when included, improve the accuracy of T2DM diagnosis even when HbA1c levels are below or equal to the current cut-off of 6.5%. © 2016 Elsevier Ltd.
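The shape of a co-marker rule can be sketched as follows: keep the conventional 6.5% HbA1c cut-off, but also flag patients in the lower HbA1c range reported for IL-6 co-marking when IL-6 is elevated. The IL-6 cut-point here is a placeholder, not a value from the paper:

```python
def t2dm_flag(hba1c_pct, il6=None, il6_cutpoint=2.0,
              flexible_range=(5.73, 6.22)):
    """Illustrative two-marker diagnostic rule. hba1c_pct is HbA1c in %;
    flexible_range is the lower optimal HbA1c range quoted in the
    abstract for IL-6 co-marking; il6_cutpoint is hypothetical."""
    if hba1c_pct >= 6.5:          # conventional single-marker cut-off
        return True
    lo, hi = flexible_range        # co-marker widens the flagged range
    return il6 is not None and il6 >= il6_cutpoint and lo <= hba1c_pct <= hi
```

This captures why the co-marker raises sensitivity: patients whose HbA1c sits just below 6.5% can still be flagged when an independent marker of inflammation is elevated.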
ECG reduction for wearable sensor
- Authors: Allami, Ragheed , Stranieri, Andrew , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS); Naples, Italy; 28th November-1st December 2016 p. 520-525
- Full Text:
- Reviewed:
- Description: The transmission, storage and analysis of electrocardiogram (ECG) data in real time is essential for remote patient monitoring with wearable ECG devices and in mobile ECG contexts. However, this remains a challenge to achieve within the processing power and storage capacity of mobile devices. ECG reduction algorithms have an important role to play in reducing the processing requirements for mobile devices; however, many existing ECG reduction and compression algorithms are computationally expensive to execute on mobile devices and have not been designed for real-time computation and incremental data arrival. In this paper, we describe a computationally naive, yet effective, algorithm that achieves high ECG reduction rates while maintaining key diagnostic features including the PR, QRS, ST, QT and RR intervals. While reduction does not enable ECG waves to be reproduced, the ability to transmit key indicators (diagnostic features) using minimal computational resources is particularly useful in mobile health contexts involving power-constrained sensors and devices. Results indicate that the proposed algorithm outperforms other ECG reduction algorithms at a reduction/compression ratio (CR) of 5:1. If power or processing capacity is low, the algorithm can readily switch to a compression ratio of up to 10:1 while still maintaining an error rate below 10%.
Missing data imputation for individualised CVD diagnostic and treatment
- Authors: Venkatraman, Sitalakshmi , Yatsko, Andrew , Stranieri, Andrew , Jelinek, Herbert
- Date: 2016
- Type: Text , Conference paper
- Relation: Computing in Cardiology, 2016 Vol. 43 I E E E Computer Society
- Full Text: false
- Reviewed:
- Description: Cardiac health screening standards require increasingly more clinical tests consisting of blood, urine and anthropometric measures as well as an extensive clinical and medication history. To ensure optimal screening referrals, diagnostic determinants need to be highly accurate to reduce false positives and the ensuing stress to individual patients. However, the data from individual patients partaking in population screening are often incomplete. The current study provides an imputation algorithm that has been applied to patient-centered cardiac health screening. Missing values are iteratively imputed in conjunction with combinations of values on subsets of selected features. The approach was evaluated on the DiabHealth dataset containing 2800 records with over 180 attributes. The results for predicting CVD after data completion showed sensitivity and specificity of 94% and 99% respectively. Removing variables that define cardiac events and associated conditions directly left 'age', followed by use of antihypertensive and anti-cholesterol medication (especially statins), among the best predictors.
Addressing the complexities of big data analytics in healthcare: The diabetes screening case
- Authors: De Silva, Daswin , Burstein, Frada , Jelinek, Herbert , Stranieri, Andrew
- Date: 2015
- Type: Text , Journal article
- Relation: Australasian Journal of Information Systems Vol. 19, no. (2015), p. S99-S115
- Full Text:
- Reviewed:
- Description: The healthcare industry generates a high throughput of medical, clinical and omics data of varying complexity and features. Clinical decision support is gaining widespread attention as medical institutions and governing bodies turn towards better management of this data for effective and efficient healthcare delivery and quality-assured outcomes. A mass of data across all stages, from disease diagnosis to palliative care, is a further indication of the opportunities and challenges for effective data management, analysis, prediction and optimization techniques as part of knowledge management in clinical environments. Big Data analytics (BDA) presents the potential to advance this industry with reforms in clinical decision support and translational research. However, adoption of big data analytics has been slow due to complexities posed by the nature of healthcare data. The success of these systems is hard to predict, so further research is needed to provide a robust framework to ensure that investment in BDA is justified. In this paper we investigate these complexities from the perspective of updated Information Systems (IS) participation theory. We present a case study on a large diabetes screening project to integrate, converge and derive expedient insights from such an accumulation of data, and make recommendations for a successful BDA implementation grounded in a participatory framework and the specificities of big data in the healthcare context. © 2015 De Silva, Burstein, Jelinek, Stranieri.
Data-analytically derived flexible HbA1c thresholds for type 2 diabetes mellitus diagnostic
- Authors: Stranieri, Andrew , Yatsko, Andrew , Jelinek, Herbert , Venkatraman, Sitalakshmi
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 5, no. 1 (2015), p. 111-134
- Full Text:
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is now more commonly used as an alternative test to the fasting plasma glucose and oral glucose tolerance tests for the identification of Type 2 Diabetes Mellitus (T2DM) because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. According to WHO guidelines, HbA1c values of 6.5% or above are required for a diagnosis of T2DM. However, outcomes of a large number of trials with HbA1c have been inconsistent across the clinical spectrum, and further research is required to determine the efficacy of HbA1c testing in the identification of T2DM. Medical records from a diabetes screening program in Australia illustrate that many patients could be classified as diabetic if other clinical indicators are included, even though the HbA1c result does not exceed 6.5%. This suggests that a cut-off for the general population of 6.5% may be too simple and miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms have been applied to identify markers that can be used with HbA1c. The results indicate that T2DM is best classified by HbA1c at 6.2%, a cut-off lower than the currently recommended one. Assuming threshold flexibility, the cut-off can be even lower if, in addition to HbA1c being high, the rule is conditioned on oxidative stress or inflammation being present, atherogenicity or adiposity being high, or hypertension being diagnosed.
Diagnostic with incomplete nominal/discrete data
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic/classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and is helpful in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist or can be reformulated for this representation. Some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour, and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match to the true data; rather, they are chosen to cause the least hindrance to classification. The proposed techniques find their application particularly in medical diagnostics. Where clinical data represent a number of related conditions, taking the Cartesian product of class values of the underlying sub-problems allows narrowing down the selection of missing value substitutes. Real-world data examples, some publicly available, are used for testing. The proposed and benchmark methods are compared by classifying the data before and after missing value imputation, indicating a significant improvement.
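A simple stand-in for the classifier-driven completion the abstract describes is to fill a missing nominal value with the value most frequent among records that share the same class label (the class-conditional mode, which is what a Naive Bayesian completion reduces to for a single attribute). The data below are hypothetical:

```python
from collections import Counter, defaultdict

def impute_by_class(rows, target):
    """Fill missing nominal values (None) with the most frequent value
    observed among records of the same class. rows is a list of dicts;
    target names the class attribute. Returns completed copies."""
    counts = defaultdict(Counter)  # (class, attribute) -> value counts
    for row in rows:
        for attr, val in row.items():
            if attr != target and val is not None:
                counts[(row[target], attr)][val] += 1
    completed = []
    for row in rows:
        filled = dict(row)  # leave the input rows unmodified
        for attr, val in row.items():
            if val is None and counts[(row[target], attr)]:
                filled[attr] = counts[(row[target], attr)].most_common(1)[0][0]
        completed.append(filled)
    return completed

rows = [
    {"cvd": "yes", "smoker": "y"},
    {"cvd": "yes", "smoker": "y"},
    {"cvd": "yes", "smoker": None},   # to be imputed as "y"
    {"cvd": "no", "smoker": "n"},
]
```

Conditioning on the class is what makes the substitutes "least hindrance for classification": the filled value is the one that best preserves the class-attribute association.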
Novel data mining techniques for incomplete clinical data in diabetes management
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi
- Date: 2014
- Type: Text , Journal article
- Relation: British Journal of Applied Science & Technology Vol. 4, no. 33 (2014), p. 4591-4606
- Relation: https://doi.org/10.9734/BJAST/2014/11744
- Full Text:
- Reviewed:
- Description: An important part of health care involves the upkeep and interpretation of medical databases containing patient records for clinical decision making, diagnosis and follow-up treatment. Missing clinical entries make it difficult to apply data mining algorithms for clinical decision support. This study demonstrates that higher predictive accuracy is possible using conventional data mining algorithms if missing values are dealt with appropriately. We propose a novel algorithm using a convolution of sub-problems to stage a super-problem, where classes are defined by the Cartesian product of class values of the underlying problems, and Incomplete Information Dismissal and Data Completion techniques are applied for reducing features and imputing missing values. Predictive accuracies using Decision Branch, Nearest Neighborhood and Naïve Bayesian classifiers were compared to predict diabetes, cardiovascular disease and hypertension. Data are derived from the Diabetes Screening Complications Research Initiative (DiScRi) conducted at a regional Australian university, involving more than 2400 patient records with more than one hundred clinical risk factors (attributes). The results show substantial improvements in the accuracy achieved with each classifier for an effective diagnosis of diabetes, cardiovascular disease and hypertension as compared to those achieved without substituting missing values. The gain in improvement is 7% for diabetes, 21% for cardiovascular disease and 24% for hypertension, and our integrated novel approach has resulted in more than 90% accuracy for the diagnosis of any of the three conditions. This work advances data mining research towards achieving an integrated and holistic management of diabetes.
Using meta-regression data mining to improve predictions of performance based on heart rate dynamics for Australian football
- Authors: Jelinek, Herbert , Kelarev, Andrei , Robinson, Dean , Stranieri, Andrew , Cornforth, David
- Date: 2014
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 14, no. PART A (2014), p. 81-87
- Full Text: false
- Reviewed:
- Description: This work investigates the effectiveness of using computer-based machine learning regression algorithms and meta-regression methods to predict performance data for Australian football players based on parameters collected during daily physiological tests. Three experiments are described. The first uses all available data with a variety of regression techniques. The second uses a subset of features selected from the available data using the Random Forest method. The third uses meta-regression with the selected feature subset. Our experiments demonstrate that feature selection and meta-regression methods improve the accuracy of predictions for match performance of Australian football players based on daily data of medical tests, compared to regression methods alone. Meta-regression methods and feature selection were able to obtain performance prediction outcomes with significant correlation coefficients. The best results were obtained by additive regression based on isotonic regression for a set of most influential features selected by Random Forest. This model was able to predict athlete performance data with a correlation coefficient of 0.86 (p < 0.05). © 2013 Published by Elsevier B.V. All rights reserved.
An approach for Ewing test selection to support the clinical assessment of cardiac autonomic neuropathy
- Authors: Stranieri, Andrew , Abawajy, Jemal , Kelarev, Andrei , Huda, Shamsul , Chowdhury, Morshed , Jelinek, Herbert
- Date: 2013
- Type: Text , Journal article
- Relation: Artificial Intelligence in Medicine Vol. 58, no. 3 (2013), p. 185-193
- Full Text:
- Reviewed:
- Description: Objective: This article addresses the problem of determining optimal sequences of tests for the clinical assessment of cardiac autonomic neuropathy (CAN). We investigate the accuracy of using only one of the recommended Ewing tests to classify CAN and the additional accuracy obtained by adding the remaining tests of the Ewing battery. This is important as not all five Ewing tests can always be applied in each situation in practice. Methods and material: We used a new and unique database from the diabetes screening research initiative project, which is more than ten times larger than the data set used by Ewing in his original investigation of CAN. We utilized decision trees and the optimal decision path finder (ODPF) procedure for identifying optimal sequences of tests. Results: We present experimental results on the accuracy of using each one of the recommended Ewing tests to classify CAN and the additional accuracy that can be achieved by adding the remaining tests of the Ewing battery. We found the best sequences of tests for a cost-function equal to the number of tests. The accuracies achieved by the initial segments of the optimal sequences for 2, 3 and 4 categories of CAN are 80.80, 91.33, 93.97 and 94.14; 79.86, 89.29, 91.16 and 91.76; and 78.90, 86.21, 88.15 and 88.93, respectively. They show significant improvement compared to the sequence considered previously in the literature and to the mathematical expectations of the accuracies of a random sequence of tests. The complete outcomes obtained for all subsets of the Ewing features are required for determining optimal sequences of tests for any cost-function with the use of the ODPF procedure. We have also found the two most significant additional features that can increase the accuracy when some of the Ewing attributes cannot be obtained. Conclusions: The outcomes obtained can be used to determine the optimal sequence of tests for each individual cost-function by following the ODPF procedure. The results show that the best single Ewing test for diagnosing CAN is the deep breathing heart rate variation test. Optimal sequences found for the cost-function equal to the number of tests guarantee that the best accuracy is achieved after any number of tests and provide an improvement in comparison with the previous ordering of tests or a random sequence. © 2013 Elsevier B.V.
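The greedy core of ordering tests by marginal accuracy, given accuracies computed for every subset of tests, can be sketched as follows. The subset accuracies and test names below are hypothetical; the actual ODPF procedure generalizes to arbitrary cost-functions:

```python
def best_test_sequence(tests, subset_accuracy):
    """Greedily order tests so that each added test maximizes
    classification accuracy, for cost = number of tests.
    subset_accuracy maps frozensets of test names to accuracies and
    must cover every subset the greedy path can reach."""
    chosen, remaining = [], set(tests)
    while remaining:
        best = max(remaining,
                   key=lambda t: subset_accuracy[frozenset(chosen + [t])])
        chosen.append(best)
        remaining.remove(best)
    return chosen

acc = {  # hypothetical accuracies for subsets of two Ewing-style tests
    frozenset(["deep_breathing"]): 0.80,
    frozenset(["valsalva"]): 0.70,
    frozenset(["deep_breathing", "valsalva"]): 0.85,
}
print(best_test_sequence(["deep_breathing", "valsalva"], acc))
```

Such an ordering guarantees that, however many tests can actually be performed on a patient, the initial segment already carried out is the most informative one of that length.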
Association of ankle brachial pressure index with heart rate variability in a rural screening clinic
- Authors: Jelinek, Herbert , De Silva, Daswin , Burstein, Frada , Stranieri, Andrew , Khalaf, Kinda , Khandoker, Ahsan , Al-Aubaidy, Hayder
- Date: 2013
- Type: Text , Conference paper
- Relation: 40th Computing in Cardiology Conference, CinC 2013; Vol. 40, p. 755-758
- Full Text: false
- Reviewed:
- Description: Peripheral vascular disease (PVD) can be associated with atherosclerosis and/or peripheral neuropathy, which can be characterized by impairment of the sensory, motor or autonomic nervous system. A noninvasive test to detect PVD is the ankle brachial pressure index (ABPI). Autonomic nervous system function can be determined by assessing heart rate variability from an ECG recording. No clear association between PVD and cardiac autonomic dysfunction has been demonstrated to date. © 2013 CCAL.
Empirical investigation of decision tree ensembles for monitoring cardiac complications of diabetes
- Authors: Kelarev, Andrei , Abawajy, Jemal , Stranieri, Andrew , Jelinek, Herbert
- Date: 2013
- Type: Text , Journal article
- Relation: International Journal of Data Warehousing and Mining Vol. 9, no. 4 (2013), p. 1-18
- Full Text: false
- Reviewed:
- Description: Cardiac complications of diabetes require continuous monitoring since they may lead to increased morbidity or sudden death of patients. In order to monitor clinical complications of diabetes using wearable sensors, a small set of features has to be identified and effective algorithms for their processing need to be investigated. This article focuses on detecting and monitoring cardiac autonomic neuropathy (CAN) in diabetes patients. The authors investigate and compare the effectiveness of classifiers based on the following decision trees: ADTree, J48, NBTree, RandomTree, REPTree, and SimpleCart. The authors perform a thorough study comparing these decision trees as well as several decision tree ensembles created by applying the following ensemble methods: AdaBoost, Bagging, Dagging, Decorate, Grading, MultiBoost, Stacking, and two multi-level combinations of AdaBoost and MultiBoost with Bagging, for the processing of data from diabetes patients for pervasive health monitoring of CAN. This paper concentrates on the particular task of applying decision tree ensembles to the detection and monitoring of cardiac autonomic neuropathy using these features. Experimental outcomes presented here show that the authors' application of decision tree ensembles for the detection and monitoring of CAN in diabetes patients achieved better performance parameters than the results obtained previously in the literature.
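The ensemble methods listed in this abstract (Bagging, AdaBoost, etc.) combine the outputs of several base decision trees into a single prediction. A minimal sketch of the majority-vote combination rule underlying bagging-style ensembles, using invented class labels for illustration:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the class labels predicted by several base classifiers
    for one instance via simple majority voting, the combination rule
    used by bagging-style decision tree ensembles."""
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# e.g. three base decision trees vote on one patient record
# (labels are hypothetical, for illustration only)
majority_vote(["CAN", "no-CAN", "CAN"])  # → "CAN"
```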
Multivariate data-driven decision guidance for clinical scientists
- Authors: Burstein, Frada , De Silva, Daswin , Jelinek, Herbert , Stranieri, Andrew
- Date: 2013
- Type: Text , Conference paper
- Relation: 29th International Conference on Data Engineering Workshops, ICDEW 2013; Proceedings - International Conference on Data Engineering p. 193-199
- Full Text:
- Reviewed:
- Description: Clinical decision-support is gaining widespread attention as medical institutions and governing bodies turn towards utilising better information management for effective and efficient healthcare delivery and quality-assured outcomes. A mass of data across all stages, from disease diagnosis to palliative care, is further indication of the opportunities and challenges created for effective data management, analysis, prediction and optimization techniques as part of knowledge management in clinical environments. A Data-driven Decision Guidance Management System (DD-DGMS) architecture can encompass these solutions in a single closed-loop integrated platform, empowering clinical scientists to seamlessly explore a multivariate data space in search of novel patterns and correlations to inform their research and practice. The paper describes the components of such an architecture, which includes a robust data warehouse as an infrastructure for comprehensive clinical knowledge management. The proposed DD-DGMS architecture incorporates the dynamic dimensional data model as its elemental core. Given the heterogeneous nature of clinical contexts and corresponding data, the dimensional data model presents itself as an adaptive model that facilitates knowledge discovery, distribution and application, which is essential for clinical decision support. The paper reports on a trial of the DD-DGMS system prototype conducted on diabetes screening data, which further establishes the relevance of the proposed architecture to a clinical context.