Emerging point of care devices and artificial intelligence : prospects and challenges for public health
- Authors: Stranieri, Andrew , Venkatraman, Sitalakshmi , Minicz, John , Zarnegar, Armita , Firmin, Sally , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2022
- Type: Text , Journal article
- Relation: Smart Health Vol. 24, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Risk assessments for numerous conditions can now be performed cost-effectively and accurately using emerging point of care devices coupled with machine learning algorithms. In this article, the case is advanced that point of care testing in combination with risk assessments generated with artificial intelligence algorithms, applied to the universal screening of the general public for multiple conditions at one session, represents a new kind of inexpensive screening that can lead to the early detection of disease and other public health benefits. A case study of a diabetes screening clinic in a rural area of Australia is presented to illustrate its benefits. Universal, poly-aetiological screening is shown to meet the ten World Health Organisation criteria for screening programmes. © Elsevier Inc.
Online dispute resolution in mediating EHR disputes : a case study on the impact of emotional intelligence
- Authors: Bellucci, Emilia , Venkatraman, Sitalakshmi , Stranieri, Andrew
- Date: 2020
- Type: Text , Journal article
- Relation: Behaviour and Information Technology Vol. 39, no. 10 (2020), p. 1124-1139
- Full Text:
- Reviewed:
- Description: An Electronic Health Record (EHR) is an individual’s record of all health events that enables critical information to be documented and shared electronically amongst health care providers and patients. The introduction of an EHR, particularly a patient-accessible EHR, can be expected to lead to an escalation of enquiries, complaints and ultimately, disputes. Prevailing opinion is that Online Dispute Resolution (ODR) systems can help with the mediation of certain types of disputes electronically, particularly systems which deploy Artificial Intelligence (AI) to reduce the need for a human mediator. However, disputes regarding health tend to invoke emotional responses from patients that may conceivably impact ODR efficacy. This raises an interesting question on the influence of emotional intelligence (EI) in the process of mediation. Using a phenomenological research methodology simulating doctor–patient disputes mediated with an AI Smart ODR system in place of a human mediator, we found an association between EI and the propensity for a participant to change their previously asserted claims. Our results indicate participants with lower EI tend to prolong resolution compared to those with higher EI. Directions for future research, including trialling larger-scale ODR systems for specific cohorts of patients in health-related dispute resolution, are advanced. © 2019 Informa UK Limited, trading as Taylor & Francis Group.
Towards smart online dispute resolution for medical disputes
- Authors: Bellucci, Emilia , Stranieri, Andrew , Venkatraman, Sitalakshmi
- Date: 2020
- Type: Text , Conference proceedings , Conference paper
- Relation: Proceedings of the Australasian Computer Science Week Multiconference (ACSW 2020); Melbourne, Australia; 3rd-7th February 2020. p. 1-5
- Full Text: false
- Reviewed:
- Description: With advancements in technology, digitization of health records in the healthcare industry is undergoing a rapid revolution. This is further fuelled by the advent of the Internet of Things (IoT), where mobile health devices have resulted in an explosion of health data and increased accessibility via wireless communications and sensor networks. With the introduction of an Electronic Health Record (EHR) system as an important venture for the general health and wellbeing of a country's citizens, privacy issues and medical disputes are expected to rise. In addition to critical health information being documented and shared electronically, integrating data from diverse smart medical IoT devices is leading towards increasingly complex disputes that require immense time and effort to resolve. Online dispute resolution (ODR) programs have been successfully applied to cost-effectively help disputants resolve commercial, insurance and other legal disputes, but as yet have not been applied to healthcare. This paper takes a modest step in this direction: firstly, to identify the drivers of medical disputes, which include patient empowerment and technology advancements and trends; secondly, to explore dispute resolution models and identify the status and limitations of current ODR systems.
- Description: This work was funded by the University of Ballarat Deakin University Collaborative Fund. 160134
Personalised measures of obesity using waist to height ratios from an Australian health screening program
- Authors: Jelinek, Herbert , Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi
- Date: 2019
- Type: Text , Journal article
- Relation: Digital Health Vol. 5, no. (2019), p. 1-8
- Full Text:
- Reviewed:
- Description: Objectives: The aim of the current study is to generate waist circumference to height ratio cut-off values for obesity categories from a model of the relationship between body mass index and waist circumference to height ratio. We compare the waist circumference to height ratio cut-off values discovered in this way with those currently prevalent in practice, which were originally derived using pragmatic criteria. Method: Personalized data including age, gender, height, weight, waist circumference and presence of diabetes, hypertension and cardiovascular disease for 847 participants over eight years were assembled from participants attending a rural Australian health review clinic (DiabHealth). Obesity was classified based on the conventional body mass index measure (weight/height^2) and compared to the waist circumference to height ratio. Correlations between the measures were evaluated on the screening data, and independently on data from the National Health and Nutrition Examination Survey that included age categories. Results: This article recommends waist circumference to height ratio cut-off values based on an Australian rural sample and verified using the National Health and Nutrition Examination Survey database, facilitating the classification of obesity in clinical practice. Gender-independent cut-off values are provided for the waist circumference to height ratio that identify healthy (waist circumference to height ratio >= 0.45), overweight (0.53) and the three obese (0.60, 0.68, 0.75) categories, verified on the National Health and Nutrition Examination Survey dataset. A strong linearity between the waist circumference to height ratio and the body mass index measure is demonstrated. Conclusion: The recommended waist circumference to height ratio cut-off values provide a useful index for assessing stages of obesity and risk of chronic disease for improved healthcare in clinical practice.
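The cut-off scheme reported above maps directly onto a small lookup. The sketch below is a minimal Python illustration, not code from the paper: it reads each reported value as the lower bound of its category, following the abstract's ">= 0.45" convention for the healthy band, and includes the conventional BMI formula for comparison.

```python
def bmi(weight_kg, height_m):
    """Conventional body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def whtr_category(waist_cm, height_cm):
    """Classify obesity stage from the waist-to-height ratio using the
    gender-independent cut-offs reported in the abstract (0.45, 0.53,
    0.60, 0.68, 0.75), each read as the lower bound of its category."""
    whtr = waist_cm / height_cm
    if whtr >= 0.75:
        return "obese III"
    if whtr >= 0.68:
        return "obese II"
    if whtr >= 0.60:
        return "obese I"
    if whtr >= 0.53:
        return "overweight"
    if whtr >= 0.45:
        return "healthy"
    return "below healthy range"
```

For example, a waist of 95 cm at a height of 170 cm gives a ratio of about 0.56 and falls in the overweight band under this reading.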
Data analytics to select markers and cut-off values for clinical scoring
- Authors: Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi , Jelinek, Herbert
- Date: 2018
- Type: Text , Conference proceedings
- Relation: ACSW '18: Proceedings of the Australasian Computer Science Week Multiconference; Brisbane; 29th January - 2nd February 2018, p. 1-6
- Full Text: false
- Reviewed:
- Description: Scoring systems, such as the Glasgow Coma Scale used to assess consciousness and AUSDRISK used to assess the risk of diabetes, are prevalent in clinical practice. Scoring systems typically include relevant variables with ordinal values where each value is assigned a weight. Weights for selected values are summed and compared to thresholds, enabling health care professionals to rapidly generate a score. Scoring systems are prevalent in clinical practice because they are easy and quick to use. However, most scoring systems comprise many variables and require some time to calculate a final score. Further, expensive population-wide studies are required to validate a scoring system. In this article, we present a new approach for the generation of a scoring system. The approach uses a search procedure invoking iterative decision tree induction to identify a suite of scoring rules, each of which requires values on only two variables. Twelve scoring rules were discovered using the approach, from an Australian screening program for the assessment of Type 2 Diabetes risk. However, classifications from the 12 rules can conflict. In this paper we argue that a simple rule preference relation is sufficient for the resolution of rule conflicts.
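As a sketch of how a suite of two-variable rules plus a preference relation might operate: the fragment below uses entirely hypothetical variable names and thresholds (none come from the paper), and conflicts among fired rules are resolved by preferring the rule with the smallest priority number.

```python
# Hypothetical two-variable scoring rules: (priority, verdict, predicate).
# Variables and thresholds are illustrative only, not the published rules.
rules = [
    (1, "high risk", lambda r: r["hba1c"] >= 6.5 and r["age"] >= 50),
    (2, "high risk", lambda r: r["bmi"] >= 35 and r["waist"] >= 110),
    (3, "low risk",  lambda r: r["hba1c"] < 5.7 and r["bmi"] < 25),
]

def classify(record):
    """Fire all matching two-variable rules; resolve conflicts with a
    simple preference relation (lowest priority number wins)."""
    fired = [(priority, verdict) for priority, verdict, when in rules
             if when(record)]
    return min(fired)[1] if fired else "indeterminate"
```

The point of the preference relation is that when both a priority-1 and a priority-2 rule fire with different verdicts, the priority-1 verdict is returned without any further arbitration machinery.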
Data analytics identify glycated haemoglobin co-markers for type 2 diabetes mellitus diagnosis
- Authors: Jelinek, Herbert , Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi
- Date: 2016
- Type: Text , Journal article
- Relation: Computers in Biology and Medicine Vol. 75, no. (2016), p. 90-97
- Full Text: false
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is being more commonly used as an alternative test for the identification of type 2 diabetes mellitus (T2DM), or to add to fasting blood glucose level and oral glucose tolerance test results, because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. HbA1c cut-off values of 6.5% or above have been recommended for clinical use based on the presence of diabetic comorbidities in population studies. However, outcomes of large trials with an HbA1c of 6.5% as a cut-off have been inconsistent for a diagnosis of T2DM. This suggests that an HbA1c cut-off of 6.5% as a single marker may not be sensitive enough, or may be too simple and miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms have been applied to a large clinical dataset to identify an optimal cut-off value for HbA1c and to determine whether additional biomarkers can be used together with HbA1c to enhance the diagnostic accuracy of T2DM. T2DM classification accuracy increased from 78.71% for HbA1c alone at 6.5% to 86.64% if 8-hydroxy-2-deoxyguanosine (8-OHdG), an oxidative stress marker, was included in the algorithm. A similar result was obtained when interleukin-6 (IL-6) was included (accuracy = 85.63%), but with a lower optimal HbA1c range between 5.73 and 6.22%. The application of data analytics to medical records from the Diabetes Screening programme demonstrates that data analytics, combined with large clinical datasets, can be used to identify clinically appropriate cut-off values and novel biomarkers that, when included, improve the accuracy of T2DM diagnosis even when HbA1c levels are below or equal to the current cut-off of 6.5%. © 2016 Elsevier Ltd.
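The co-marker logic can be sketched as a decision rule. Note that the 8-OHdG condition and the 5.7-6.5% band below are hypothetical placeholders: the abstract reports only that adding 8-OHdG or IL-6 raised accuracy and that the optimal HbA1c range can sit below 6.5%, not the exact rule form.

```python
def t2dm_positive(hba1c, ohdg_high):
    """Illustrative co-marker rule (thresholds below 6.5% are hypothetical).

    hba1c     -- glycated haemoglobin, percent
    ohdg_high -- True if the oxidative stress marker 8-OHdG is elevated
    """
    if hba1c >= 6.5:
        # conventional single-marker cut-off
        return True
    # below the conventional cut-off, an elevated oxidative-stress marker
    # can still flag risk (co-marker condition, illustrative 5.7-6.5% band)
    return hba1c >= 5.7 and ohdg_high
```

Under a rule of this shape, a patient at HbA1c 6.0% with elevated 8-OHdG would be flagged, capturing cases a bare 6.5% cut-off misses.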
Missing data imputation for individualised CVD diagnostic and treatment
- Authors: Venkatraman, Sitalakshmi , Yatsko, Andrew , Stranieri, Andrew , Jelinek, Herbert
- Date: 2016
- Type: Text , Conference paper
- Relation: Computing in Cardiology 2016, Vol. 43, IEEE Computer Society
- Full Text: false
- Reviewed:
- Description: Cardiac health screening standards require increasingly more clinical tests consisting of blood, urine and anthropometric measures as well as an extensive clinical and medication history. To ensure optimal screening referrals, diagnostic determinants need to be highly accurate to reduce false positives and the ensuing stress to individual patients. However, the data from individual patients partaking in population screening is often incomplete. The current study provides an imputation algorithm that has been applied to patient-centered cardiac health screening. Missing values are iteratively imputed in conjunction with combinations of values on subsets of selected features. The approach was evaluated on the DiabHealth dataset containing 2800 records with over 180 attributes. The results for predicting CVD after data completion showed sensitivity and specificity of 94% and 99% respectively. Removing variables that directly define cardiac events and associated conditions left ‘age’, followed by use of antihypertensive and anti-cholesterol medication (especially statins), among the best predictors.
Corporate sustainability : An IS approach for integrating triple bottom line elements
- Authors: Venkatraman, Sitalakshmi , Nayak, Ravi
- Date: 2015
- Type: Text , Journal article
- Relation: Social Responsibility Journal Vol. 11, no. 3 (2015), p. 482-501
- Full Text: false
- Reviewed:
- Description: Purpose - The purpose of this paper is to investigate the inter-relationships among three triple bottom line (TBL) outcomes of corporate sustainability, namely, corporate environmental performance outcome (CEPO), corporate social performance outcome (CSPO) and corporate financial performance outcome (CFPO), with the aid of an empirical study conducted in Australian businesses. The paper also aims to provide a roadmap for integrating sustainable business practices using information systems (IS) approach of continuous improvement lifecycle. Current business practices try to achieve economic, social and ecological goals independently as silos due to the individual operational challenges posed by each of these TBL principles. Design/methodology/approach - The research design mainly adopts a quantitative research methodology with data collected by means of a survey questionnaire that included both descriptive and exploratory flavour. The empirical study examines the relationships of TBL elements as perceived by 85 different Australian-based large, medium as well as small business organisations. The data collected were analysed by performing factor analysis on 21 items, resulting in three latent factors that were aligned to TBL outcomes, and the correlations among them were analysed to assess their inter-relationships. Findings - The results of the study report weak and positive relationships existing between the TBL elements, with insights gained through the study leading towards useful implications that are well-supported by the qualitative feedback. The empirical study has also resulted in providing practical recommendations and an implementation framework consisting of a four-step roadmap with the participation of quality circles within an IS approach. Practical implications - The study focuses on inter-relationships and integration of TBL elements in Australian businesses. This could be extended to other businesses in different countries. The proposed roadmap with a continuous improvement cycle of system implementation steps facilitates any organisation to adopt an incremental integration of the social responsibility and environment protection practices within its core business operations for achieving corporate sustainability. Originality/value - While most of the TBL studies conducted worldwide focus on predominantly assessing large organisations towards responsible and sustainable business practices, this paper considers large, medium and small businesses. The research methodology adopted in this study as well as the proposed IS approach with quality circles add value to a growing body of literature with a recent increasing focus on integrated approaches for corporate sustainability.
Data-analytically derived flexible HbA1c thresholds for type 2 diabetes mellitus diagnostic
- Authors: Stranieri, Andrew , Yatsko, Andrew , Jelinek, Herbert , Venkatraman, Sitalakshmi
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 5, no. 1 (2015), p. 111-134
- Full Text:
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is now more commonly used as an alternative test to the fasting plasma glucose and oral glucose tolerance tests for the identification of Type 2 Diabetes Mellitus (T2DM) because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. According to WHO guidelines, HbA1c values of 6.5% or above are required for a diagnosis of T2DM. However, outcomes of a large number of trials with HbA1c have been inconsistent across the clinical spectrum, and further research is required to determine the efficacy of HbA1c testing in the identification of T2DM. Medical records from a diabetes screening program in Australia illustrate that many patients could be classified as diabetic if other clinical indicators are included, even though the HbA1c result does not exceed 6.5%. This suggests that a cutoff for the general population of 6.5% may be too simple and miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms have been applied to identify markers that can be used with HbA1c. The results indicate that T2DM is best classified by HbA1c at 6.2%, a cutoff level lower than the currently recommended one. Assuming threshold flexibility, the cutoff can be lower still if, in addition to HbA1c being high, the rule is conditioned on oxidative stress or inflammation being present, atherogenicity or adiposity being high, or hypertension being diagnosed.
Diagnostic with incomplete nominal/discrete data
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic / classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and is helpful in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist or can be reformulated for this representation. Some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour, and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match of the true data; however, they are intended to cause the least hindrance for classification. The proposed techniques find their application particularly in medical diagnostics. Where clinical data represents a number of related conditions, taking the Cartesian product of class values of the underlying sub-problems allows narrowing down of the selection of missing value substitutes. Real-world data examples, some publicly available, are used for testing. The proposed and benchmark methods are compared by classifying the data before and after missing value imputation, indicating a significant improvement.
Novel data mining techniques for incompleted clinical data in diabetes management
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi
- Date: 2014
- Type: Text , Journal article
- Relation: British Journal of Applied Science & Technology Vol. 4, no. 33 (2014), p. 4591-4606
- Relation: https://doi.org/10.9734/BJAST/2014/11744
- Full Text:
- Reviewed:
- Description: An important part of health care involves the upkeep and interpretation of medical databases containing patient records for clinical decision making, diagnosis and follow-up treatment. Missing clinical entries make it difficult to apply data mining algorithms for clinical decision support. This study demonstrates that higher predictive accuracy is possible using conventional data mining algorithms if missing values are dealt with appropriately. We propose a novel algorithm using a convolution of sub-problems to stage a super-problem, where classes are defined by the Cartesian product of class values of the underlying problems, and Incomplete Information Dismissal and Data Completion techniques are applied for reducing features and imputing missing values. Predictive accuracies using Decision Tree, Nearest Neighbour and Naïve Bayesian classifiers were compared to predict diabetes, cardiovascular disease and hypertension. Data is derived from the Diabetes Screening Complications Research Initiative (DiScRi) conducted at a regional Australian university, involving more than 2400 patient records with more than one hundred clinical risk factors (attributes). The results show substantial improvements in the accuracy achieved with each classifier for an effective diagnosis of diabetes, cardiovascular disease and hypertension as compared to those achieved without substituting missing values. The gain in improvement is 7% for diabetes, 21% for cardiovascular disease and 24% for hypertension, and our integrated novel approach has resulted in more than 90% accuracy for the diagnosis of any of the three conditions. This work advances data mining research towards achieving an integrated and holistic management of diabetes.
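The Cartesian-product idea can be illustrated with a deliberately simple imputation sketch. This is not the paper's algorithm (which also applies Incomplete Information Dismissal and iterative completion); it only shows the super-problem construction: the joint class label, formed from the sub-problem labels, narrows the pool from which a missing value's substitute (here, the within-class mode) is drawn.

```python
from collections import Counter

def impute_by_joint_class(records, class_keys, feature_keys):
    """Impute missing (None) feature values with the modal value observed
    within each super-problem class, i.e. the Cartesian product of the
    sub-problem class labels (e.g. diabetes x CVD x hypertension)."""
    joint = lambda r: tuple(r[k] for k in class_keys)
    filled = [dict(r) for r in records]  # leave the input untouched
    for f in feature_keys:
        # tally observed values of f separately per joint class
        modes = {}
        for r in filled:
            if r.get(f) is not None:
                modes.setdefault(joint(r), Counter())[r[f]] += 1
        # fill gaps with the within-class mode, where one exists
        for r in filled:
            if r.get(f) is None and joint(r) in modes:
                r[f] = modes[joint(r)].most_common(1)[0][0]
    return filled
```

A record missing blood pressure status, for instance, inherits the most common status among records sharing its diabetes/CVD labels rather than the global mode.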
Information security governance: The art of detecting hidden malware
- Authors: Alazab, Mamoun , Venkatraman, Sitalakshmi , Watters, Paul
- Date: 2013
- Type: Text , Book chapter
- Relation: IT Security governance innovations: Theory and research p. 293-315
- Full Text: false
- Reviewed:
- Description: Detecting malicious software, or malware, is one of the major concerns in information security governance, as malware authors pose a major challenge to digital forensics by using a variety of highly sophisticated stealth techniques to hide malicious code in computing systems, including smartphones. Current detection techniques are futile, as forensic analysis of infected devices is unable to identify all the hidden malware, thereby resulting in zero-day attacks. This chapter takes a key step forward to address this issue and lays the foundation for deeper investigations in digital forensics. The goal of this chapter is, firstly, to unearth the recent obfuscation strategies employed to hide malware. Secondly, this chapter proposes innovative techniques that are implemented as a fully-automated tool, and experimentally tested, to exhaustively detect hidden malware that leverages system vulnerabilities. Based on these research investigations, the chapter also arrives at an information security governance plan that would aid in addressing current and future cybercrime situations.
Modeling of secured cloud network : the case of an educational institute
- Authors: Bevinakoppa, Savitri , Sharma, Geetu , Venkatraman, Sitalakshmi
- Date: 2013
- Type: Text , Conference paper
- Relation: Recent researches in Information Science & Applications p. 150-155
- Full Text: false
- Reviewed:
Analysis of firewall log-based detection scenarios for evidence in digital forensics
- Authors: Mukhtar, Rubiu , Al-Nemrat, Ameer , Alazab, Mamoun , Venkatraman, Sitalakshmi , Jahankhani, Hamid
- Date: 2012
- Type: Text , Journal article
- Relation: International Journal of Electronic Security and Digital Forensics Vol. 4, no. 4 (2012), p. 261-279
- Full Text: false
- Reviewed:
- Description: With the recent escalating rise in cybercrime, firewall logs have attracted much research focus in assessing their capability to serve as excellent evidence in digital forensics. Even though the main aim of firewalls is to screen or filter part or all network traffic, firewall logs can provide rich traffic information that could be used as evidence to prove or disprove the occurrence of online attack events for legal purposes. Since courts have a definition of what may be presented to them as evidence, this research investigates the determinants for the acceptability of firewall logs as suitable evidence. Two commonly used determinants are tested using three different firewall-protected network scenarios: (1) admissibility, which requires the evidence to satisfy certain legal requirements stipulated by the courts; and (2) weight, which represents the sufficiency and extent to which the evidence convinces the establishment of a cybercrime attack. Copyright © 2012 Inderscience Enterprises Ltd.
- Description: 2003010400
Cloud computing: A research roadmap in coalescence with software engineering
- Authors: Venkatraman, Sitalakshmi , Wadhwa, Bimlesh
- Date: 2012
- Type: Text , Journal article
- Relation: Software Engineering Vol. 2, no. 2 (2012), p. 7-17
- Full Text: false
- Reviewed:
Malicious code detection using penalized splines on OPcode frequency
- Authors: Alazab, Mamoun , Al Kadiri, Mohammad , Venkatraman, Sitalakshmi , Al-Nemrat, Ameer
- Date: 2012
- Type: Text , Conference proceedings
- Full Text: false
- Description: Recently, malicious software has seen exponential growth due to the innumerable obfuscations of extended x86 IA-32 opcodes (OPcodes) being employed to evade traditional detection methods. In this paper, we design a novel distinguisher to separate malware from benign software that combines a Multivariate Logistic Regression model using kernel HS in Penalized Splines with an OPcode frequency feature selection technique for efficiently detecting obfuscated malware. The main advantage of our penalized-splines-based feature selection technique is its performance capability, achieved through the efficient filtering and identification of the most important OPcodes used in the obfuscation of malware. This is demonstrated through our successful implementation and the experimental results of our proposed model on large malware datasets. The presented approach is effective at identifying previously examined malware and non-malware to assist in reverse engineering. © 2012 IEEE.
- Description: 2003011056
MapReduce neural network framework for efficient content based image retrieval from large datasets in the cloud
- Authors: Venkatraman, Sitalakshmi , Kulkarni, Siddhivinayak
- Date: 2012
- Type: Text , Conference proceedings
- Full Text:
- Description: Recently, content based image retrieval (CBIR) has gained active research focus due to wide applications such as crime prevention, medicine, historical research and digital libraries. With the digital explosion, image collections in databases at distributed locations over the Internet pose a challenge to retrieving images that are relevant to user queries efficiently and accurately. It becomes increasingly important to develop new CBIR techniques that are effective and scalable for real-time processing of very large image collections. To address this, the paper proposes a novel MapReduce neural network framework for CBIR from large data collections in a cloud environment. We adopt natural language queries that use a fuzzy approach to classify colour images based on their content, and apply Map and Reduce functions that can operate in cloud clusters to arrive at accurate results in real time. Preliminary experimental results for classifying and retrieving images from large data sets were sufficiently convincing to warrant further experimental evaluation. © 2012 IEEE.
- Description: 2003010699
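The Map/Reduce split described in the abstract above can be sketched in miniature. Everything here is an illustrative stand-in: the dominant-colour "classifier" replaces the paper's fuzzy neural classifier, and cluster distribution is simulated with plain Python generators rather than a real cloud runtime. Mappers emit (label, image_id) pairs; a reducer assembles them into a retrieval index.

```python
from collections import defaultdict

def map_phase(images):
    """Each mapper classifies its shard of images and emits
    (colour_label, image_id) pairs; here images are (r, g, b) triples."""
    for image_id, (r, g, b) in images.items():
        label = ("red", "green", "blue")[max(range(3), key=(r, g, b).__getitem__)]
        yield label, image_id

def reduce_phase(pairs):
    """The reducer groups image ids by label, yielding a retrieval
    index that a colour query can be answered from directly."""
    index = defaultdict(list)
    for label, image_id in pairs:
        index[label].append(image_id)
    return dict(index)
```

In a real deployment the map and reduce stages would run on separate cluster nodes, with the framework handling the shuffle of (label, id) pairs between them; the toy preserves only that data flow.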
Six sigma approach to improve quality in e-services: An empirical study in Jordan
- Authors: Alhyari, Salah , Alazab, Moutaz , Venkatraman, Sitalakshmi , Alazab, Mamoun , Alazab, Ammar
- Date: 2012
- Type: Text , Journal article
- Relation: International Journal of Electronic Government Research Vol. 8, no. 2 (April, 2012), p. 57-74
- Full Text: false
- Reviewed:
- Description: This paper investigates the application of the Six Sigma approach to improve quality in electronic services (e-services), as more countries are adopting e-services as a means of providing services to their people through the Web. This paper presents a case study about the use of the Six Sigma model to measure customer satisfaction and quality levels achieved in e-services that were recently launched by public sector organisations in a developing country, Jordan. An empirical study consisting of 280 customers of Jordan's e-services is conducted, and problems are identified through the DMAIC phases of Six Sigma. The service quality levels are measured and analysed using six main criteria: Website Design, Reliability, Responsiveness, Personalization, Information Quality, and System Quality. The study indicates that a customer satisfaction rate of 74%, corresponding to a Six Sigma level of 2.12, has enabled the Greater Amman Municipality to identify the usability issues associated with the e-services offered by public sector organisations. The aim of the paper is not only to implement Six Sigma as a measurement-based strategy for improving e-customer service in a newly launched e-service programme, but also to widen its scope by investigating other service dimensions and performing comparative studies in other developing countries.
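The reported figures can be roughly cross-checked with the conventional yield-to-sigma conversion (z-score of the process yield plus the customary 1.5-sigma long-term shift). Treating the 74% satisfaction rate as process yield gives a sigma level of about 2.14, in line with the reported 2.12; the small difference plausibly stems from rounding or a DPMO table lookup. A sketch, assuming this standard conversion is the one the authors used:

```python
from statistics import NormalDist

def sigma_level(yield_fraction):
    """Sigma level from process yield: the z-score of the yield
    plus the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(yield_fraction) + 1.5
```

With `yield_fraction = 0.74` this evaluates to roughly 2.14.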
The role of emotional intelligence on the resolution of disputes involving the electronic health record
- Authors: Bellucci, Emilia , Venkatraman, Sitalakshmi , Muecke, Nial , Stranieri, Andrew
- Date: 2012
- Type: Text , Conference paper
- Relation: Fifth Australasian workshop on health informatics and knowledge management p. 3-12
- Full Text: false
- Reviewed:
Unification of electronic health records and holistic medicine
- Authors: Venkatraman, Sitalakshmi , Stranieri, Andrew
- Date: 2012
- Type: Text , Journal article
- Relation: ICHM 2012 (2012), p. 53-59
- Full Text: false
- Reviewed:
- Description: Recent trends in the increasing use of complementary and alternative medicine (CAM) as "holistic medicine" by patients in technologically advanced nations have prompted the need to integrate their CAM information into their Electronic Health Records (EHR). Studies indicate that over 70% of the public in Australia used at least one form of CAM, including nutritional products such as vitamins, supplements and herbal medicines, and alternative medicines such as homoeopathic, Ayurvedic and Chinese medicines. There is also a growing acceptance of CAM among healthcare providers, and patients are increasingly visiting CAM practitioners. In this paper, we argue that by unifying patients' information about their CAM history with their EHR, healthcare quality and the accuracy of measurements could be improved, and we identify six key benefits for healthcare and CAM practitioners as well as consumers. On the other hand, we also foresee certain issues, such as the availability of electronic data and the standardised practice of different forms of CAM, and we have unearthed six main issues that require prime attention. We discuss these issues and provide recommendations on the way forward for integrating automated CAM software components into EHR systems.