Cardiovascular data analytics for real time patient monitoring
- Authors: Allami, Ragheed
- Date: 2017
- Type: Text , Thesis , PhD
- Full Text:
- Description: Improvements in wearable sensor devices make it possible to constantly monitor physiological parameters such as electrocardiograph (ECG) signals for long periods. Remote patient monitoring with wearable sensors has an important role to play in health care, particularly given the prevalence of chronic conditions such as cardiovascular disease (CVD), one of the prominent causes of morbidity and mortality worldwide. Approximately 4.2 million Australians suffer from long-term CVD, with one death every 12 minutes. The assessment of ECG features, especially heart rate variability (HRV), is a non-invasive technique that provides an indication of autonomic nervous system (ANS) function. Conditions such as sudden cardiac death, hypertension, heart failure, myocardial infarction, ischaemia, and coronary heart disease can be detected from HRV analysis. The analysis of ECG features can also be used to diagnose many types of life-threatening arrhythmias, including ventricular fibrillation and ventricular tachycardia. Non-cardiac conditions, such as diabetes, obesity, metabolic syndrome, insulin resistance, irritable bowel syndrome, dyspepsia, anorexia nervosa, anxiety, and major depressive disorder, have also been shown to be associated with HRV. Analysing ECG features from real-time ECG signals generated by wearable sensors presents distinct challenges: the sensors that receive and process the signals have limited power, storage and processing capacity. Consequently, algorithms that process ECG signals need to be lightweight, use minimal storage and accurately detect abnormalities so that alarms can be raised. The existing literature details only a few algorithms that operate within the constraints of wearable sensor networks. This research presents four novel techniques that enable ECG signals to be processed within these resource constraints to detect key abnormalities in heart function:
- The first technique is a real-time ECG data reduction algorithm that detects and transmits only those key points critical for generating the ECG features used in diagnosis.
- The second technique accurately predicts the five-minute HRV measure from only three minutes of data, with an algorithm that executes in real time using minimal computational resources.
- The third technique introduces a real-time ECG feature recognition system that can be applied to diagnose life-threatening conditions such as premature ventricular contractions (PVCs).
- The fourth technique advances a classification algorithm that enhances automated ECG classification of arrhythmic heart beats from noisy ECG signals.
The four techniques are evaluated against benchmark algorithms for each task on the standard MIT-BIH Arrhythmia Database and on data generated from patients in a major hospital using Shimmer3 wearable ECG sensors. The four techniques are integrated to demonstrate that remote patient monitoring of ECG using HRV and ECG features is feasible in real time using minimal computational resources. The evaluation shows that the ECG reduction algorithm is significantly better than existing algorithms that can be applied within sensor nodes, such as time-domain methods, transformation methods and compressed sensing methods. Furthermore, the proposed ECG reduction is computationally less complex for resource-constrained sensors and achieves higher compression ratios than existing algorithms. The prediction of a common HRV measure, the five-minute standard deviation of inter-beat (NN) intervals (SDNN), and the accurate detection of PVC beats were achieved using a count data model combined with a Poisson-generated function from three-minute ECG recordings. This was achieved with minimal computational resources and is well suited to remote patient monitoring with wearable sensors. PVC beat detection was implemented using the same count data model together with knowledge-based rules derived from clinical knowledge. A real-time cardiac patient monitoring system was implemented using an ECG sensor and smartphone to detect PVC beats within a few seconds using artificial neural networks (ANN), and it was shown to provide highly accurate results. Automated detection and classification were implemented using a new wrapper-based hybrid approach that combines t-distributed stochastic neighbour embedding (t-SNE) with self-organizing maps (SOM) to improve classification performance. The t-SNE-SOM hybrid achieved improved sensitivity, specificity and accuracy compared to the most common hybrid methods in the presence of noise. It also provided more accurate identification of many types of arrhythmias from the ECG recordings, supporting more timely diagnosis and treatment. (An illustrative SDNN computation sketch follows this record.)
- Description: Doctor of Philosophy
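The thesis above centres on the five-minute SDNN measure of HRV. As a point of reference only, the Python sketch below computes SDNN from a series of inter-beat (NN) intervals; it is not the thesis's count-data prediction algorithm, and the simulated intervals are purely hypothetical.

```python
# Minimal SDNN sketch: sample standard deviation of normal-to-normal (NN) intervals.
# Assumes ectopic beats (e.g. PVCs) have already been removed from the series.
import numpy as np

def sdnn(nn_intervals_ms):
    """Return SDNN in milliseconds for a sequence of NN intervals."""
    nn = np.asarray(nn_intervals_ms, dtype=float)
    return float(np.std(nn, ddof=1))

if __name__ == "__main__":
    # Hypothetical three-minute stream of NN intervals (milliseconds).
    rng = np.random.default_rng(0)
    nn = rng.normal(loc=800.0, scale=40.0, size=220)
    print(f"SDNN over the segment: {sdnn(nn):.1f} ms")
```

A five-minute SDNN is computed the same way over a longer window; the thesis's contribution is predicting that five-minute value from only three minutes of data, which this sketch does not attempt.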
Data analytics identify glycated haemoglobin co-markers for type 2 diabetes mellitus diagnosis
- Authors: Jelinek, Herbert , Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi
- Date: 2016
- Type: Text , Journal article
- Relation: Computers in Biology and Medicine Vol. 75, no. (2016), p. 90-97
- Full Text: false
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is increasingly used as an alternative test for the identification of type 2 diabetes mellitus (T2DM), or to complement fasting blood glucose and oral glucose tolerance test results, because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. HbA1c cut-off values of 6.5% or above have been recommended for clinical use based on the presence of diabetic comorbidities in population studies. However, outcomes of large trials using an HbA1c of 6.5% as a cut-off have been inconsistent for a diagnosis of T2DM. This suggests that an HbA1c cut-off of 6.5% as a single marker may not be sensitive enough, or may be too simplistic, and may miss individuals at risk or with overt but undiagnosed diabetes. In this study, data mining algorithms were applied to a large clinical dataset to identify an optimal cut-off value for HbA1c and to determine whether additional biomarkers can be used together with HbA1c to enhance the diagnostic accuracy of T2DM. T2DM classification accuracy increased from 78.71% for HbA1c at 6.5% alone to 86.64% when 8-hydroxy-2-deoxyguanosine (8-OHdG), an oxidative stress marker, was included in the algorithm. A similar result was obtained when interleukin-6 (IL-6) was included (accuracy = 85.63%), but with a lower optimal HbA1c range between 5.73% and 6.22%. The application of data analytics to medical records from the Diabetes Screening programme demonstrates that data analytics, combined with large clinical datasets, can be used to identify clinically appropriate cut-off values and novel biomarkers that, when included, improve the accuracy of T2DM diagnosis even when HbA1c levels are at or below the current cut-off of 6.5%. © 2016 Elsevier Ltd. (An illustrative cut-off learning sketch follows this record.)
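To make the idea of a data-driven cut-off concrete, the sketch below fits a shallow decision tree to synthetic HbA1c and 8-OHdG values; the paper's cohort, algorithm and thresholds are not reproduced, and the labels here are invented for illustration.

```python
# Sketch only: learning a joint HbA1c / 8-OHdG cut-off with a shallow decision tree.
# The data below are synthetic; the paper's cohort and cut-offs are not reproduced.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 500
hba1c = rng.normal(5.9, 0.8, n)    # HbA1c (%)
ohdg = rng.normal(12.0, 4.0, n)    # 8-OHdG, hypothetical arbitrary units
# Invented labels: risk rises when both markers are elevated.
t2dm = ((hba1c > 6.2) & (ohdg > 13.0)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(np.column_stack([hba1c, ohdg]), t2dm)

# The learned split thresholds act as data-driven candidate cut-off values.
print(export_text(tree, feature_names=["HbA1c", "8-OHdG"]))
```

On real data the thresholds printed here would still need the clinical validation discussed in the paper.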
Biopsychosocial Data Analytics and Modeling
- Authors: Santhanagopalan, Meena
- Date: 2021
- Type: Text , Thesis , PhD
- Full Text:
- Description: Sustained customisation of digital health intervention (DHI) programs, in the context of community health engagement, requires strong integration of multi-sourced, interdisciplinary biopsychosocial health data. The biopsychosocial model is built on the idea that biological, psychological and social processes are integrally and interactively involved in physical health and illness. One of the longstanding challenges of dealing with healthcare data is the wide variety of data generated from different sources and the increasing need to learn actionable insights that drive performance improvement. The growth of information and communication technology has led to the increased use of DHI programs. These programs use an observational methodology that helps researchers study the everyday behaviour of participants during the program by analysing data generated from digital tools such as wearables, online surveys and ecological momentary assessment (EMA). Combined with data reported from biological and psychological tests, this provides rich and unique biopsychosocial data. There is a strong need to review and apply novel approaches to combining biopsychosocial data from a methodological perspective. Although some studies have applied data analytics to clinical trial data generated from digital interventions, data analytics on biopsychosocial data generated from DHI programs is limited. This thesis develops and implements innovative approaches for analysing the unique and rich biopsychosocial data generated from the wellness study, a DHI program conducted by the School of Science, Psychology and Sport at Federation University. The characteristics of variety, value and veracity that usually describe big data are also relevant to the biopsychosocial data handled in this thesis. These historical, retrospective, real-life biopsychosocial data provide fertile ground for research through the use of data analytics to discover patterns hidden in the data and to obtain new knowledge. The thesis is organised around three aspects of biopsychosocial research: the first chapter presents the salient traits of the three components of biopsychosocial research - biological, psychological and social; the second chapter investigates the challenges of pre-processing biopsychosocial data, with special emphasis on the time-series data generated from wearable sensor devices; and the third chapter applies statistical and machine learning (ML) tools to integrate variables from the biopsychosocial disciplines to build a predictive model. Among its other analyses and results, the key contributions of the research described in this thesis include the following:
1. using a gamma distribution to model neurocognitive reaction-time data, which exhibits marked skewness and kurtosis (see the illustrative sketch after this record);
2. using a novel ‘peak heart-rate’ count metric to quantify ‘biological’ stress;
3. using ML approaches to evaluate DHIs;
4. using a recurrent neural network (RNN) with long short-term memory (LSTM) to predict Difficulties in Emotion Regulation Scale (DERS) scores and primary emotion (PE) from wearable sensor data.
- Description: Doctor of Philosophy
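The first listed contribution models reaction times with a gamma distribution. The sketch below shows one conventional way to do this with SciPy on simulated, right-skewed reaction times; the wellness-study data and the thesis's exact modelling choices are not reproduced here.

```python
# Sketch only: fitting a gamma distribution to simulated, right-skewed reaction times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical neurocognitive reaction times in milliseconds.
rt_ms = rng.gamma(shape=4.0, scale=90.0, size=1000)

# Fit a gamma distribution; fixing loc=0 keeps the support on the positive axis.
shape, loc, scale = stats.gamma.fit(rt_ms, floc=0)
print(f"fitted gamma: shape={shape:.2f}, scale={scale:.1f} ms")
print(f"sample skewness={stats.skew(rt_ms):.2f}, excess kurtosis={stats.kurtosis(rt_ms):.2f}")
```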
Data analytics to select markers and cut-off values for clinical scoring
- Authors: Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi , Jelinek, Herbert
- Date: 2018
- Type: Text , Conference proceedings
- Relation: ACSW '18: Proceedings of the Australasian Computer Science Week Multiconference; Brisbane; 29th January -2nd February 2018 p. 1-6
- Full Text: false
- Reviewed:
- Description: Scoring systems, such as the Glasgow Coma Scale used to assess consciousness and AusDrisk used to assess the risk of diabetes, are prevalent in clinical practice. Scoring systems typically include relevant variables with ordinal values, where each value is assigned a weight. Weights for selected values are summed and compared to thresholds so that health care professionals can rapidly generate a score. Scoring systems are popular because they are easy and quick to use. However, most scoring systems comprise many variables and require some time to calculate a final score. Further, expensive population-wide studies are required to validate a scoring system. In this article, we present a new approach for generating a scoring system. The approach uses a search procedure, invoking iterative decision tree induction, to identify a suite of scoring rules, each of which requires values on only two variables (see the illustrative sketch after this record). Using the approach, twelve scoring rules were discovered from an Australian screening program for the assessment of type 2 diabetes risk. However, classifications from the twelve rules can conflict, and we argue that a simple rule preference relation is sufficient to resolve such conflicts.
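As a rough illustration of the two-variable scoring-rule search described above, the sketch below exhaustively pairs synthetic risk variables and induces a depth-limited decision tree for each pair; the variable names, data and thresholds are invented, and the paper's rule preference relation is omitted.

```python
# Sketch only: searching variable pairs for depth-limited decision-tree scoring rules.
# Variable names, data and thresholds are invented; the paper's preference relation is omitted.
from itertools import combinations
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 800
variables = {
    "age": rng.integers(20, 80, n).astype(float),
    "bmi": rng.normal(27, 5, n),
    "waist_cm": rng.normal(95, 12, n),
}
# Hypothetical risk label used only to make the example runnable.
label = ((variables["age"] > 55) & (variables["bmi"] > 30)).astype(int)

candidate_rules = []
for a, b in combinations(variables, 2):
    features = np.column_stack([variables[a], variables[b]])
    rule = DecisionTreeClassifier(max_depth=2, random_state=0)
    accuracy = cross_val_score(rule, features, label, cv=5).mean()
    candidate_rules.append(((a, b), accuracy))

# Keep the best two-variable rules as candidates for the scoring system.
for (a, b), accuracy in sorted(candidate_rules, key=lambda r: -r[1])[:3]:
    print(f"rule on ({a}, {b}): mean cross-validated accuracy {accuracy:.2f}")
```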
An Observation of research complexity in top universities based on research publications
- Authors: Lee, Ivan , Xia, Feng , Roos, Goran
- Date: 2017
- Type: Text , Conference proceedings
- Relation: WWW '17 Companion; Perth, Australia; 3-7 April, 2017 ; Published in Proceedings of the 26th International Conference on World Wide Web Companion April 2017 p. 1259-1265
- Full Text: false
- Reviewed:
- Description: This paper investigates the research specialisation of top-ranked universities around the world. The revealed comparative advantage in different research fields is determined from the number of research articles published. Subsequently, measures of research ubiquity and diversity, and the research complexity index (RCI) of each university, are obtained and discussed. The study is conducted on top-ranked universities according to the Shanghai Jiao Tong Academic Ranking of World Universities, with bibliographical details extracted from the Microsoft Academic Graph data set and the research fields of journals labelled using the SCImago Journal Classification. Diversity-ubiquity distributions, the relationship between RCI and university rank, and geographical RCI distributions are examined in this paper (an illustrative revealed-comparative-advantage sketch follows this record).
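The revealed comparative advantage referred to above is conventionally computed as a Balassa-style index. The sketch below applies that standard formula to made-up publication counts; the paper's field taxonomy and any additional normalisation it uses may differ.

```python
# Sketch only: Balassa-style revealed comparative advantage from publication counts.
# Counts are made up; the paper's field taxonomy and normalisation may differ.
import pandas as pd

# Rows: universities; columns: research fields; values: article counts.
pubs = pd.DataFrame(
    {"medicine": [1200, 300, 150], "physics": [200, 900, 100], "cs": [100, 250, 700]},
    index=["Uni A", "Uni B", "Uni C"],
)

field_share_per_uni = pubs.div(pubs.sum(axis=1), axis=0)    # each university's field mix
field_share_overall = pubs.sum(axis=0) / pubs.values.sum()  # global field mix
rca = field_share_per_uni / field_share_overall             # RCA > 1 indicates specialisation

print(rca.round(2))
```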
Big data analytics for preventive medicine
- Authors: Razzak, Muhammad , Imran, Muhammad , Xu, Guandong
- Date: 2020
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 32, no. 9 (2020), p. 4417-4451
- Full Text:
- Reviewed:
- Description: Medical data is among the most rewarding, and yet most complicated, data to analyze. How can healthcare providers use modern data analytics tools and technologies to analyze and create value from complex data? Data analytics promises to efficiently discover valuable patterns by analyzing large amounts of unstructured, heterogeneous, non-standard and incomplete healthcare data. It not only forecasts but also supports decision making, and it is increasingly seen as a breakthrough whose goal is to improve the quality of patient care and reduce healthcare costs. The aim of this study is to provide a comprehensive and structured overview of the extensive research on data analytics methods for disease prevention. The review first introduces disease prevention and its challenges, followed by traditional prevention methodologies. We then summarize state-of-the-art data analytics algorithms used for disease classification, clustering (for example, detecting an unusually high incidence of a particular disease), anomaly detection (see the illustrative sketch after this record) and association analysis, together with their respective advantages, drawbacks and guidelines for selecting a specific model, followed by a discussion of recent developments and successful applications of disease prevention methods. The article concludes with open research challenges and recommendations. © 2019, Springer-Verlag London Ltd., part of Springer Nature.
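As one small example of the anomaly-detection family surveyed in the review, the sketch below flags unusual patient records with an isolation forest on simulated vital-sign features; the model choice and data are illustrative only and are not drawn from the review.

```python
# Sketch only: flagging anomalous patient records with an isolation forest.
# Features and values are simulated; the model choice is illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)
# Columns: systolic BP, diastolic BP, HbA1c (%) - all simulated.
typical = rng.normal([120, 80, 5.5], [10, 8, 0.4], size=(500, 3))
unusual = rng.normal([180, 110, 9.0], [10, 8, 0.5], size=(10, 3))
records = np.vstack([typical, unusual])

model = IsolationForest(contamination=0.02, random_state=0).fit(records)
flags = model.predict(records)  # -1 marks records flagged as anomalous
print(f"flagged {int((flags == -1).sum())} of {len(records)} records")
```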
- Dai, Hong-Ning, Wang, Hao, Xu, Guangquan, Wan, Jiafu, Imran, Muhammad
- Authors: Dai, Hong-Ning , Wang, Hao , Xu, Guangquan , Wan, Jiafu , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: Enterprise Information Systems Vol. 14, no. 9-10 (2020), p. 1279-1303
- Full Text: false
- Reviewed:
- Description: Data analytics on massive manufacturing data can extract substantial business value, but it also poses research challenges due to the heterogeneous data types, enormous volume and real-time velocity of manufacturing data. This paper provides an overview of big data analytics in the manufacturing Internet of Things (MIoT). It first discusses the necessity and challenges of big data analytics for manufacturing data in MIoT, then surveys the enabling technologies for analysing such data, and finally outlines future directions in this promising area. © 2019 Informa UK Limited, trading as Taylor & Francis Group.
- Stranieri, Andrew, Venkatraman, Sitalakshmi, Minicz, John, Zarnegar, Armita, Firmin, Sally, Balasubramanian, Venki, Jelinek, Herbert
- Authors: Stranieri, Andrew , Venkatraman, Sitalakshmi , Minicz, John , Zarnegar, Armita , Firmin, Sally , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2022
- Type: Text , Journal article
- Relation: Smart Health Vol. 24, no. (2022), p.
- Full Text: false
- Reviewed:
- Description: Risk assessments for numerous conditions can now be performed cost-effectively and accurately using emerging point-of-care devices coupled with machine learning algorithms. In this article, the case is advanced that point-of-care testing, combined with risk assessments generated by artificial intelligence algorithms and applied to the universal screening of the general public for multiple conditions in one session, represents a new kind of inexpensive screening that can lead to the early detection of disease and other public health benefits. A case study of a diabetes screening clinic in a rural area of Australia is presented to illustrate the benefits. Universal, poly-aetiological screening is shown to meet the ten World Health Organisation criteria for screening programmes. © Elsevier Inc.