Argumentation structures that integrate dialectical and non-dialectical reasoning
- Stranieri, Andrew, Zeleznikow, John, Yearwood, John
- Authors: Stranieri, Andrew , Zeleznikow, John , Yearwood, John
- Date: 2001
- Type: Text , Journal article
- Relation: Knowledge Engineering Review Vol. 16, no. 4 (Dec 2001), p. 331-348
- Full Text:
- Reviewed:
- Description: Argumentation concepts have been applied to numerous knowledge engineering endeavours in recent years. For example, a variety of logics have been developed to represent argumentation in the context of a dialectical situation such as a dialogue. In contrast to the dialectical approach, argumentation has also been used to structure knowledge. This can be seen as a non-dialectical approach. The Toulmin argument structure has often been used to structure knowledge non-dialectically, yet most studies that apply the Toulmin structure do not use the original structure but vary one or more components. Variations to the Toulmin structure can be understood as different ways to integrate a dialectical perspective with a non-dialectical one. Drawing the dialectical/non-dialectical distinction enables the specification of a framework called the generic actual argument model that is expressly non-dialectical. The framework enables the development of knowledge-based systems that integrate a variety of inference procedures, combine information retrieval with reasoning and facilitate automated document drafting. Furthermore, the non-dialectical framework provides the foundation for simple dialectical models. Systems based on our approach have been developed in family law, refugee law, determining eligibility for government legal aid, copyright law and e-tourism.
- Description: C1
- Description: 2003002516
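The abstract above centres on the Toulmin argument structure and variations of its components. As a rough, hypothetical illustration only (the field names, rendering and legal-aid example are invented, not taken from the paper), the original structure can be sketched as a small data class:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ToulminArgument:
    """One argument in Toulmin's original layout: data support a claim
    via a warrant; the warrant rests on backing; a qualifier expresses
    certainty and rebuttals state exceptions."""
    claim: str
    data: List[str]
    warrant: str
    backing: Optional[str] = None
    qualifier: str = "presumably"
    rebuttals: List[str] = field(default_factory=list)

    def as_statement(self) -> str:
        # Render the non-dialectical reading: data -> claim, via the warrant.
        return (f"Given {', '.join(self.data)}, {self.qualifier} "
                f"{self.claim} (since {self.warrant})")

arg = ToulminArgument(
    claim="the applicant qualifies for legal aid",
    data=["income below threshold", "case has merit"],
    warrant="aid guidelines grant assistance to low-income applicants with meritorious cases",
)
print(arg.as_statement())
```

Varying one or more of these components (dropping the backing, replacing the qualifier with an inference procedure, and so on) is how, on the paper's account, a dialectical perspective gets folded into the non-dialectical structure.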
Structured reasoning to support deliberative dialogue
- Macfadyen, Alyx, Stranieri, Andrew, Yearwood, John
- Authors: Macfadyen, Alyx , Stranieri, Andrew , Yearwood, John
- Date: 2005
- Type: Text , Journal article
- Relation: Lecture Notes in Artificial Intelligence 3681: Knowledge-Based Intelligent Information and Engineering Systems, 9th International Conference, KES 2005, Melbourne, Australia, September 2005, Proceedings, Part 1 Vol. 1, no. (2005), p. 283-289
- Full Text:
- Reviewed:
- Description: Deliberative dialogue is a form of dialogue that involves participants advancing claims and, without power plays or posturing, deliberating on the claims of others until a consensus decision is reached. This paper describes a deliberative support system to facilitate and encourage participants to engage in a discussion deliberatively. A knowledge representation framework is deployed to generate a strong domain model of reasoning structure. The structure, coupled with a deliberative dialogue protocol, results in a web-based system that regulates a discussion to avoid combative, non-deliberative exchanges. The system has been designed for online dispute resolution between husband and wife in divorce proceedings involving property.
- Description: C1
- Description: 2003001381
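A dialogue protocol that regulates exchanges, as the abstract describes, can be modelled very loosely as a table of licensed replies. The move names and transcript below are hypothetical placeholders, not the paper's actual protocol:

```python
# Allowed move types and the replies each move licenses; combative moves
# (attack, dismiss) simply have no place in the table, so a regulated
# discussion can never contain them.
PROTOCOL = {
    "open": {"assert"},
    "assert": {"question", "concur", "propose_revision"},
    "question": {"justify"},
    "justify": {"concur", "propose_revision"},
    "propose_revision": {"concur", "question"},
    "concur": {"assert", "close"},
}

def is_deliberative(moves):
    """Check that a transcript of (speaker, move_type) turns only ever
    uses replies the protocol licenses after the preceding move."""
    for (_, prev), (_, cur) in zip(moves, moves[1:]):
        if cur not in PROTOCOL.get(prev, set()):
            return False
    return True

transcript = [
    ("wife", "open"), ("wife", "assert"), ("husband", "question"),
    ("wife", "justify"), ("husband", "concur"),
]
ok = is_deliberative(transcript)
```

A web front end built over such a table would offer each participant only the licensed moves, which is one simple way to enforce deliberative turn-taking.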
Group structured reasoning for coalescing group decisions
- Yearwood, John, Stranieri, Andrew
- Authors: Yearwood, John , Stranieri, Andrew
- Date: 2009
- Type: Text , Journal article
- Relation: Group Decision and Negotiation Vol. , no. (2009), p. 1-29
- Full Text:
- Reviewed:
- Description: In this paper we present the notion of structured reasoning through a model called the Generic/Actual Argument Model (GAAM). The model, which has been used as a computational representation for machine modelling of reasoning and for hybrid combinations of human and machine reasoning, can be used as a coalescent framework for decision making. Whilst the notion of structuring reasoning is not new, structured reasoning is advanced as a technique where group consensus on reasoning structures at various levels can be used to facilitate the comprehension of complex reasoning, particularly where there are multiple perspectives. For an issue, the approach provides a scaffolding structure for cognitive co-operation and a normative reasoning structure against which group participants can identify points of difference and points in common, as well as the nature of the differences and similarities. Intra-group transparency, characterized by the ability to recognise points in common and understand the nature of differences, is important to the process of coalescing group decisions that carry maximum group support. © 2009 Springer Science+Business Media B.V.
Supporting discretionary decision-making with information technology
- Hall, Mary Jean, Calabro, Domenico, Sourdin, Tania, Stranieri, Andrew, Zeleznikow, John
- Authors: Hall, Mary Jean , Calabro, Domenico , Sourdin, Tania , Stranieri, Andrew , Zeleznikow, John
- Date: 2005
- Type: Text , Journal article
- Relation: University of Ottawa Law & Technology Journal Vol. 2, no. 1 (2005), p. 1-36
- Full Text:
- Reviewed:
- Description: A number of increasingly sophisticated technologies are now being used to support complex decision-making in a range of contexts. This paper reports on a project undertaken to provide decision support in discretionary legal domains by referring to a recently created model that involves the interplay and weighting of relevant rule-based and discretionary factors used in a decision-making process. The case study used in the modelling process is the Criminal Jurisdiction of the Victorian Magistrate’s Court (Australia), where the handing down of an appropriate custodial or non-custodial sentence requires the consideration of many factors. Tools and techniques used to capture relevant expert knowledge and to display it both as a paper model and as an online prototype application are discussed. Models of sentencing decision-making with rule-based and discretionary elements are presented and analyzed. This paper concludes by discussing the benefits and disadvantages of such technology and considers some potential appropriate uses of the model and web-based prototype application.
- Description: C1
- Description: 2003001431
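The interplay and weighting of rule-based and discretionary factors that the abstract mentions might be approximated as a weighted aggregation. The factor names, weights and threshold below are illustrative placeholders, not the model from the paper:

```python
def sentence_score(factors, weights):
    """Weighted aggregation of factor assessments, each in [-1, 1]:
    negative values mitigate, positive values aggravate."""
    return sum(weights[name] * value for name, value in factors.items())

# Hypothetical sentencing factors: a rule-based component (offence
# seriousness) alongside discretionary ones the magistrate weighs.
weights = {"seriousness": 0.5, "prior_record": 0.3, "remorse": 0.2}
factors = {"seriousness": 0.8, "prior_record": 0.5, "remorse": -1.0}

score = sentence_score(factors, weights)          # 0.4 + 0.15 - 0.2 = 0.35
recommendation = "custodial" if score > 0.5 else "non-custodial"
```

The point of such a sketch is only that discretion can be surfaced as explicit, inspectable weights, which is the kind of transparency the paper's paper model and web prototype aim at.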
Discovering interesting association rules from legal databases
- Ivkovic, Sasha, Yearwood, John, Stranieri, Andrew
- Authors: Ivkovic, Sasha , Yearwood, John , Stranieri, Andrew
- Date: 2002
- Type: Text , Journal article
- Relation: Information & Communication Technology Law Vol. 11, no. 1 (2002), p. 35-47
- Full Text:
- Reviewed:
- Description: The Knowledge Discovery from Databases (KDD) technique called 'association rules' is applied to a large data set representing applicants for government-funded legal aid. Results indicate that KDD can be an invaluable tool for legal analysts. Association rules discovered identify associations between variables that are present in the data set though are not necessarily causal. Interesting rules can prompt analysts to formulate hypotheses for further investigation. The identification of interesting rules is typically performed using an objective measure of 'interesting' although this measure is often not sufficiently accurate to eliminate all uninteresting rules. In this article, a subjective measure of interestingness is adopted in conjunction with the objective measures. This leads to the ability to focus more accurately on those rules that surprise the analyst and are therefore more likely to be interesting. In general, KDD techniques have not been applied to law despite possible benefits because data is often stored in narrative form rather than in structured databases. However, the impending introduction of data warehouses that collect data from a number of organizations across a legal system presents invaluable opportunities for analysts using KDD.
- Description: C1
- Description: 2003000037
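Association rule mining of the kind described can be sketched with a naive single-antecedent miner. Support and lift stand in here for the objective interestingness measures the article discusses; the legal-aid style items and thresholds are invented for illustration:

```python
from itertools import combinations
from collections import defaultdict

def mine_rules(transactions, min_support=0.3, min_lift=1.2):
    """Naive miner: a rule A -> B is kept when support(A, B) meets
    min_support and its lift (confidence over the consequent's base
    rate, an objective interestingness measure) exceeds min_lift."""
    n = len(transactions)
    item_count, pair_count = defaultdict(int), defaultdict(int)
    for t in transactions:
        items = set(t)
        for i in items:
            item_count[i] += 1
        for a, b in combinations(sorted(items), 2):
            pair_count[(a, b)] += 1
    rules = []
    for (a, b), c in pair_count.items():
        support = c / n
        if support < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            confidence = c / item_count[ante]
            lift = confidence / (item_count[cons] / n)
            if lift >= min_lift:
                rules.append((ante, cons, round(support, 2), round(lift, 2)))
    return rules

# Hypothetical applicant records as attribute=value items.
data = [
    {"income=low", "aid=granted"},
    {"income=low", "aid=granted"},
    {"income=low", "aid=granted"},
    {"income=high", "aid=refused"},
    {"income=high", "aid=refused"},
]
rules = mine_rules(data)
```

The article's further step, a subjective measure, would then filter this list down to the rules that actually surprise the analyst; that filter is a human judgement and is not modelled here.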
Predicting cardiac autonomic neuropathy category for diabetic data with missing values
- Abawajy, Jemal, Kelarev, Andrei, Chowdhury, Morshed, Stranieri, Andrew, Jelinek, Herbert
- Authors: Abawajy, Jemal , Kelarev, Andrei , Chowdhury, Morshed , Stranieri, Andrew , Jelinek, Herbert
- Date: 2013
- Type: Text , Journal article
- Relation: Computers in Biology and Medicine Vol. 43, no. 10 (2013), p. 1328-1333
- Full Text:
- Reviewed:
- Description: Cardiovascular autonomic neuropathy (CAN) is a serious and well known complication of diabetes. Previous articles circumvented the problem of missing values in CAN data by deleting all records and fields with missing values and applying classifiers trained on different sets of features that were complete. Most of them also added alternative features to compensate for the deleted ones. Here we introduce and investigate a new method for classifying CAN data with missing values. In contrast to all previous papers, our new method does not delete attributes with missing values, does not use classifiers, and does not add features. Instead it is based on regression and meta-regression combined with the Ewing formula for identifying the classes of CAN. This is the first article using the Ewing formula and regression to classify CAN. We carried out extensive experiments to determine the best combination of regression and meta-regression techniques for classifying CAN data with missing values. The best outcomes have been obtained by the additive regression meta-learner based on M5Rules and combined with the Ewing formula. It has achieved the best accuracy of 99.78% for two classes of CAN, and 98.98% for three classes of CAN. These outcomes are substantially better than previous results obtained in the literature by deleting all missing attributes and applying traditional classifiers to different sets of features without regression. Another advantage of our method is that it does not require practitioners to perform more tests collecting additional alternative features. © 2013 Elsevier Ltd.
- Description: C1
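The regression-plus-Ewing-formula idea can be illustrated in miniature: a least-squares fit predicts a missing Ewing test value from another test, and a threshold rule in the spirit of Ewing's criteria assigns a CAN category. All numbers and thresholds below are invented placeholders, not clinical values and not the paper's M5Rules-based meta-regression:

```python
import statistics

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one predictor."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def ewing_category(deep_breathing_variation):
    """Illustrative threshold rule in the spirit of Ewing's criteria:
    heart-rate variation on deep breathing (beats/min) -> CAN class."""
    if deep_breathing_variation >= 15:
        return "normal"
    if deep_breathing_variation >= 11:
        return "borderline"
    return "abnormal"

# Complete records: (valsalva_ratio, deep_breathing_variation).
complete = [(1.5, 20.0), (1.3, 16.0), (1.1, 10.0), (1.0, 8.0)]
a, b = fit_line([v for v, _ in complete], [d for _, d in complete])

# A record missing its deep-breathing result: regress it, then classify
# via the formula instead of training a classifier.
imputed = a * 1.4 + b
category = ewing_category(imputed)
```

This mirrors the paper's key design choice: missing attributes are predicted by regression rather than deleted, and the final class comes from a fixed formula rather than a trained classifier.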
A comparison of machine learning algorithms for multilabel classification of CAN
- Kelarev, Andrei, Stranieri, Andrew, Yearwood, John, Jelinek, Herbert
- Authors: Kelarev, Andrei , Stranieri, Andrew , Yearwood, John , Jelinek, Herbert
- Date: 2012
- Type: Text , Journal article
- Relation: Advances in Computer Science and Engineering Vol. 9, no. 1 (2012), p. 1-4
- Full Text:
- Reviewed:
- Description: This article is devoted to the investigation and comparison of several important machine learning algorithms in their ability to obtain multilabel classifications of the stages of cardiac autonomic neuropathy (CAN). Data was collected by the Diabetes Complications Screening Research Initiative at Charles Sturt University. Our experiments have achieved better results than those published previously in the literature for similar CAN identification tasks.
Rule-based classifiers and meta classifiers for identification of cardiac autonomic neuropathy progression
- Jelinek, Herbert, Kelarev, Andrei, Stranieri, Andrew, Yearwood, John
- Authors: Jelinek, Herbert , Kelarev, Andrei , Stranieri, Andrew , Yearwood, John
- Date: 2012
- Type: Text , Journal article
- Relation: International Journal of Information Science and Computer Mathematics Vol. 5, no. 2 (2012), p. 49-53
- Full Text:
- Reviewed:
- Description: We investigate and compare several rule-based classifiers and meta classifiers in their ability to obtain multi-class classifications of cardiac autonomic neuropathy (CAN) and its progression. The best results obtained in our experiments are significantly better than the outcomes published previously in the literature for analogous CAN identification tasks or simpler binary classification tasks.
An approach for Ewing test selection to support the clinical assessment of cardiac autonomic neuropathy
- Stranieri, Andrew, Abawajy, Jemal, Kelarev, Andrei, Huda, Shamsul, Chowdhury, Morshed, Jelinek, Herbert
- Authors: Stranieri, Andrew , Abawajy, Jemal , Kelarev, Andrei , Huda, Shamsul , Chowdhury, Morshed , Jelinek, Herbert
- Date: 2013
- Type: Text , Journal article
- Relation: Artificial Intelligence in Medicine Vol. 58, no. 3 (2013), p. 185-193
- Full Text:
- Reviewed:
- Description: Objective: This article addresses the problem of determining optimal sequences of tests for the clinical assessment of cardiac autonomic neuropathy (CAN). We investigate the accuracy of using only one of the recommended Ewing tests to classify CAN and the additional accuracy obtained by adding the remaining tests of the Ewing battery. This is important as not all five Ewing tests can always be applied in each situation in practice. Methods and material: We used a new and unique database of the diabetes screening research initiative project, which is more than ten times larger than the data set used by Ewing in his original investigation of CAN. We utilized decision trees and the optimal decision path finder (ODPF) procedure for identifying optimal sequences of tests. Results: We present experimental results on the accuracy of using each one of the recommended Ewing tests to classify CAN and the additional accuracy that can be achieved by adding the remaining tests of the Ewing battery. We found the best sequences of tests for a cost-function equal to the number of tests. The accuracies achieved by the initial segments of the optimal sequences for 2, 3 and 4 categories of CAN are 80.80, 91.33, 93.97 and 94.14; respectively, 79.86, 89.29, 91.16 and 91.76; and 78.90, 86.21, 88.15 and 88.93. They show significant improvement compared to the sequence considered previously in the literature and the mathematical expectations of the accuracies of a random sequence of tests. The complete outcomes obtained for all subsets of the Ewing features are required for determining optimal sequences of tests for any cost-function with the use of the ODPF procedure. We have also found the two most significant additional features that can increase the accuracy when some of the Ewing attributes cannot be obtained. Conclusions: The outcomes obtained can be used to determine the optimal sequences of tests for each individual cost-function by following the ODPF procedure. The results show that the best single Ewing test for diagnosing CAN is the deep breathing heart rate variation test. Optimal sequences found for the cost-function equal to the number of tests guarantee that the best accuracy is achieved after any number of tests and provide an improvement in comparison with the previous ordering of tests or a random sequence. © 2013 Elsevier B.V.
- Description: 2003011130
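As a toy stand-in for the ODPF procedure (not the authors' algorithm), a greedy search can order tests so that each prefix of the sequence maximizes an accuracy estimate. The records, labels and accuracy estimator below are fabricated, although on this toy data deep breathing does come out first, as in the paper's finding for the best single test:

```python
from collections import Counter, defaultdict

def subset_accuracy(records, labels, tests):
    """Crude accuracy estimate: within each combination of results on
    the chosen tests, predict the majority label."""
    groups = defaultdict(list)
    for rec, lab in zip(records, labels):
        groups[tuple(rec[t] for t in tests)].append(lab)
    correct = sum(Counter(labs).most_common(1)[0][1]
                  for labs in groups.values())
    return correct / len(labels)

def greedy_test_sequence(records, labels, all_tests):
    """Order tests so each added test gives the best accuracy reachable
    at that cost (number of tests performed so far)."""
    chosen, remaining = [], list(all_tests)
    while remaining:
        best = max(remaining,
                   key=lambda t: subset_accuracy(records, labels, chosen + [t]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy records of Ewing-style test outcomes (0 = normal, 1 = abnormal).
records = [
    {"deep_breathing": 0, "valsalva": 0, "lying_standing": 1},
    {"deep_breathing": 0, "valsalva": 1, "lying_standing": 0},
    {"deep_breathing": 1, "valsalva": 0, "lying_standing": 0},
    {"deep_breathing": 1, "valsalva": 1, "lying_standing": 1},
]
labels = ["no_CAN", "no_CAN", "CAN", "CAN"]
sequence = greedy_test_sequence(
    records, labels, ["deep_breathing", "valsalva", "lying_standing"])
```

The paper's ODPF procedure works over decision trees and arbitrary cost-functions; the greedy loop above only conveys the shape of the problem, choosing which test to run next given those already run.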
Group decision making in health care : A case study of multidisciplinary meetings
- Sharma, Vishakha, Stranieri, Andrew, Burstein, Frada, Warren, Jim, Daly, Sharon, Patterson, Louise, Yearwood, John, Wolff, Alan
- Authors: Sharma, Vishakha , Stranieri, Andrew , Burstein, Frada , Warren, Jim , Daly, Sharon , Patterson, Louise , Yearwood, John , Wolff, Alan
- Date: 2016
- Type: Text , Journal article
- Relation: Journal of Decision Systems Vol. 25, no. (2016), p. 476-485
- Full Text:
- Reviewed:
- Description: Recent studies have demonstrated that Multi-Disciplinary Meetings (MDMs) practiced in some medical contexts can contribute to positive health care outcomes. The group reasoning and decision-making in MDMs has been found to be most effective when deliberations revolve around the patient’s needs, comprehensive information is available during the meeting, core members attend and the MDM is effectively facilitated. This article presents a case study of the MDMs in cancer care in a region of Australia. The case study draws on a group reasoning model called the Reasoning Community model to analyse MDM deliberations to illustrate that many factors are important to support group reasoning, not solely the provision of pertinent information. The case study has implications for the use of data analytics in any group reasoning context. © 2016 Informa UK Limited, trading as Taylor & Francis Group.
Diagnostic with incomplete nominal/discrete data
- Jelinek, Herbert, Yatsko, Andrew, Stranieri, Andrew, Venkatraman, Sitalakshmi, Bagirov, Adil
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic/classification purposes but compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and is helpful in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist or can be reformulated for this representation. Some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour, and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match of the true data; however, they are intended to cause the least hindrance for classification. The proposed techniques find their application particularly in medical diagnostics. Where clinical data represents a number of related conditions, taking the Cartesian product of class values of the underlying sub-problems allows narrowing down of the selection of missing value substitutes. Real-world data examples, some publicly available, are used for testing. The proposed and benchmark methods are compared by classifying the data before and after missing value imputation, indicating a significant improvement.
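As an illustration of the abstract's idea of reinventing a classifier as a data completion method, the sketch below fills a missing nominal value by copying it from the most similar complete row (Hamming similarity over nominal attributes). The function and the toy data are invented for illustration; the paper's actual algorithms are not reproduced here.

```python
def nn_impute(rows, target_idx):
    """Fill missing (None) entries in column target_idx by copying the value
    from the most similar complete row, where similarity is the number of
    matching nominal attributes (Hamming similarity)."""
    complete = [r for r in rows if r[target_idx] is not None]
    for row in rows:
        if row[target_idx] is None:
            best = max(complete, key=lambda c: sum(
                1 for i, v in enumerate(row)
                if i != target_idx and v == c[i]))
            row[target_idx] = best[target_idx]
    return rows

# Toy nominal data: the last row is missing its class value.
rows = [["a", "x", "p"], ["a", "y", "p"], ["b", "y", "q"], ["a", "x", None]]
nn_impute(rows, target_idx=2)
print(rows[3])  # the missing value is copied from the closest complete row
```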
Continuous patient monitoring with a patient centric agent : A block architecture
- Uddin, Ashraf, Stranieri, Andrew, Gondal, Iqbal, Balasubramanian, Venki
- Authors: Uddin, Ashraf , Stranieri, Andrew , Gondal, Iqbal , Balasubramanian, Venki
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 32700-32726
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) has facilitated services without human intervention for a wide range of applications, including continuous remote patient monitoring (RPM). However, the complexity of RPM architectures, the size of the data sets generated and the limited power capacity of devices make RPM challenging. In this paper, we propose a tier-based end-to-end architecture for continuous patient monitoring that has a patient centric agent (PCA) as its centrepiece. The PCA manages a blockchain component to preserve privacy when data streaming from body area sensors needs to be stored securely. The PCA-based architecture includes a lightweight communication protocol to enforce security of data through different segments of a continuous, real-time patient monitoring architecture. The architecture includes the insertion of data into a personal blockchain to facilitate data sharing amongst healthcare professionals and integration into electronic health records while ensuring privacy is maintained. The blockchain is customized for RPM with modifications that include having the PCA select a Miner to reduce computational effort, enabling the PCA to manage multiple blockchains for the same patient, and the modification of each block with a prefix tree to minimize energy consumption and incorporate secure transaction payments. Simulation results demonstrate that security and privacy can be enhanced in RPM with the PCA-based end-to-end architecture.
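The blockchain component described in the abstract above is not specified in detail there; the fragment below is only a generic sketch of how blocks chain together via hashes. The paper's Miner selection, prefix trees and payment handling are not modelled, and the payload fields are invented.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Minimal linked-block sketch: each block commits to its payload and to
    the hash of the previous block, so tampering with any block breaks the
    chain of hashes that follows it."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    payload = json.dumps({k: block[k] for k in ("time", "data", "prev")},
                         sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Hypothetical sensor readings stored in two linked blocks.
genesis = make_block({"heart_rate": 72}, "0" * 64)
nxt = make_block({"heart_rate": 75}, genesis["hash"])
print(nxt["prev"] == genesis["hash"])  # True: the blocks are linked
```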
A count data model for heart rate variability forecasting and premature ventricular contraction detection
- Allami, Ragheed, Stranieri, Andrew, Balasubramanian, Venki, Jelinek, Herbert
- Authors: Allami, Ragheed , Stranieri, Andrew , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2017
- Type: Text , Journal article
- Relation: Signal Image and Video Processing Vol. 11, no. 8 (2017), p. 1427-1435
- Full Text:
- Reviewed:
- Description: Heart rate variability (HRV) measures including the standard deviation of inter-beat variations (SDNN) require at least 5 min of ECG recordings to accurately measure HRV. In this paper, we predict, using counts data derived from a 3-min ECG recording, the 5-min SDNN and also detect premature ventricular contraction (PVC) beats with a high degree of accuracy. The approach uses counts data combined with a Poisson-generated function that requires minimal computational resources and is well suited to remote patient monitoring with wearable sensors that have limited power, storage and processing capacity. The ease of use and accuracy of the algorithm provide an opportunity for accurate assessment of HRV and reduce the time taken to review patients in real time. The PVC beat detection is implemented using the same count data model together with knowledge-based rules derived from clinical knowledge.
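For reference, the SDNN measure discussed in the abstract above is simply the standard deviation of the inter-beat (RR/NN) intervals; a minimal illustration follows. The sample intervals are invented, and the paper's Poisson count model is not reproduced here.

```python
import statistics

def sdnn(rr_intervals_ms):
    """SDNN: the sample standard deviation of inter-beat (RR/NN) intervals,
    in milliseconds."""
    return statistics.stdev(rr_intervals_ms)

# Hypothetical short run of RR intervals (ms); clinical use requires
# minutes of ECG recording, as the abstract notes.
rr = [812, 798, 805, 790, 820, 801, 795, 810]
print(round(sdnn(rr), 2))  # 9.85
```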
Addressing the complexities of big data analytics in healthcare : The diabetes screening case
- De Silva, Daswin, Burstein, Frada, Jelinek, Herbert, Stranieri, Andrew
- Authors: De Silva, Daswin , Burstein, Frada , Jelinek, Herbert , Stranieri, Andrew
- Date: 2015
- Type: Text , Journal article
- Relation: Australasian Journal of Information Systems Vol. 19, no. (2015), p. S99-S115
- Full Text:
- Reviewed:
- Description: The healthcare industry generates a high throughput of medical, clinical and omics data of varying complexity and features. Clinical decision-support is gaining widespread attention as medical institutions and governing bodies turn towards better management of this data for effective and efficient healthcare delivery and quality assured outcomes. A mass of data across all stages, from disease diagnosis to palliative care, further indicates the opportunities and challenges for effective data management, analysis, prediction and optimization techniques as parts of knowledge management in clinical environments. Big Data analytics (BDA) presents the potential to advance this industry with reforms in clinical decision-support and translational research. However, adoption of big data analytics has been slow due to complexities posed by the nature of healthcare data. The success of these systems is hard to predict, so further research is needed to provide a robust framework to ensure investment in BDA is justified. In this paper we investigate these complexities from the perspective of updated Information Systems (IS) participation theory. We present a case study on a large diabetes screening project to integrate, converge and derive expedient insights from such an accumulation of data, and make recommendations for a successful BDA implementation grounded in a participatory framework and the specificities of big data in the healthcare context. © 2015 De Silva, Burstein, Jelinek, Stranieri.
Data-analytically derived flexible HbA1c thresholds for type 2 diabetes mellitus diagnostic
- Stranieri, Andrew, Yatsko, Andrew, Jelinek, Herbert, Venkatraman, Sitalakshmi
- Authors: Stranieri, Andrew , Yatsko, Andrew , Jelinek, Herbert , Venkatraman, Sitalakshmi
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 5, no. 1 (2015), p. 111-134
- Full Text:
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is now more commonly used as an alternative to the fasting plasma glucose and oral glucose tolerance tests for the identification of Type 2 Diabetes Mellitus (T2DM), because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. According to WHO guidelines, HbA1c values of 6.5% or above are required for a diagnosis of T2DM. However, outcomes of a large number of trials with HbA1c have been inconsistent across the clinical spectrum, and further research is required to determine the efficacy of HbA1c testing in the identification of T2DM. Medical records from a diabetes screening program in Australia illustrate that many patients could be classified as diabetic if other clinical indicators are included, even though the HbA1c result does not exceed 6.5%. This suggests that a single cutoff of 6.5% for the general population may be too simple and may miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms have been applied to identify markers that can be used with HbA1c. The results indicate that T2DM is best classified by HbA1c at 6.2% - a cutoff lower than the currently recommended one - and that, if threshold flexibility is assumed, the cutoff can be lower still when the rule is additionally conditioned on oxidative stress or inflammation being present, atherogenicity or adiposity being high, or hypertension being diagnosed.
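The flexible-threshold idea in the abstract above can be illustrated with a toy rule. The 6.2% cut-off is the value reported in the study; the comorbidity flag and the relaxed 6.0% threshold applied when it is present are hypothetical placeholders, not the published rule.

```python
def t2dm_flag(hba1c_pct, comorbid_marker=False,
              base_cutoff=6.2, relaxed_cutoff=6.0):
    """Flag possible T2DM from HbA1c. 6.2% is the data-derived cut-off
    reported in the abstract; the relaxed 6.0% value used when a comorbidity
    marker (e.g. inflammation, hypertension) is present is a hypothetical
    illustration of conditioning the rule on other clinical indicators."""
    cutoff = relaxed_cutoff if comorbid_marker else base_cutoff
    return hba1c_pct >= cutoff

print(t2dm_flag(6.3))                        # True: above the 6.2% cut-off
print(t2dm_flag(6.1))                        # False: below 6.2%
print(t2dm_flag(6.1, comorbid_marker=True))  # True: relaxed threshold applies
```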
Personalised measures of obesity using waist to height ratios from an Australian health screening program
- Jelinek, Herbert, Stranieri, Andrew, Yatsko, Andrew, Venkatraman, Sitalakshmi
- Authors: Jelinek, Herbert , Stranieri, Andrew , Yatsko, Andrew , Venkatraman, Sitalakshmi
- Date: 2019
- Type: Text , Journal article
- Relation: Digital Health Vol. 5, no. (2019), p. 1-8
- Full Text:
- Reviewed:
- Description: Objectives: The aim of the current study is to generate waist circumference to height ratio cut-off values for obesity categories from a model of the relationship between body mass index and waist circumference to height ratio. We compare the waist circumference to height ratios discovered in this way with cut-off values currently prevalent in practice that were originally derived using pragmatic criteria. Method: Personalized data including age, gender, height, weight, waist circumference and presence of diabetes, hypertension and cardiovascular disease for 847 participants over eight years were assembled from participants attending a rural Australian health review clinic (DiabHealth). Obesity was classified based on the conventional body mass index measure (weight/height²) and compared to the waist circumference to height ratio. Correlations between the measures were evaluated on the screening data, and independently on data from the National Health and Nutrition Examination Survey that included age categories. Results: This article recommends waist circumference to height ratio cut-off values based on an Australian rural sample and verified using the National Health and Nutrition Examination Survey database, facilitating the classification of obesity in clinical practice. Gender-independent cut-off values are provided for the waist circumference to height ratio that identify the healthy (waist circumference to height ratio >= 0.45), overweight (0.53) and three obese (0.60, 0.68, 0.75) categories, verified on the National Health and Nutrition Examination Survey dataset. A strong linearity between the waist circumference to height ratio and the body mass index measure is demonstrated. Conclusion: The recommended waist circumference to height ratio cut-off values provide a useful index for assessing stages of obesity and risk of chronic disease for improved healthcare in clinical practice.
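The cut-off values (0.45, 0.53, 0.60, 0.68, 0.75) in the sketch below come from the abstract above, but treating each value as the upper bound of its category, and the "obese I/II/III" labels themselves, are assumptions made purely for illustration.

```python
import bisect

# Cut-offs reported in the abstract; the interval interpretation and the
# category labels are illustrative assumptions, not the published mapping.
CUTOFFS = [0.45, 0.53, 0.60, 0.68, 0.75]
LABELS = ["healthy", "overweight", "obese I", "obese II", "obese III"]

def whtr_category(waist_cm, height_cm):
    """Map a waist-to-height ratio to a category via the cut-off list."""
    ratio = waist_cm / height_cm
    idx = bisect.bisect_left(CUTOFFS, ratio)
    return LABELS[min(idx, len(LABELS) - 1)]

print(whtr_category(80, 180))   # ratio ~0.44 -> "healthy"
print(whtr_category(100, 170))  # ratio ~0.59 -> "obese I"
```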
Visual character N-grams for classification and retrieval of radiological images
- Kulkarni, Pradnya, Stranieri, Andrew, Kulkarni, Siddhivinayak, Ugon, Julien, Mittal, Manish
- Authors: Kulkarni, Pradnya , Stranieri, Andrew , Kulkarni, Siddhivinayak , Ugon, Julien , Mittal, Manish
- Date: 2014
- Type: Text , Journal article
- Relation: International Journal of Multimedia & Its Applications Vol. 6, no. 2 (April 2014), p. 35-49
- Full Text:
- Reviewed:
- Description: Diagnostic radiology struggles to maintain high interpretation accuracy. Retrieval of past similar cases would help the inexperienced radiologist in the interpretation process. The character n-gram model has been effective in text retrieval contexts in languages such as Chinese where there are no clear word boundaries. We propose the use of a visual character n-gram model to represent images for classification and retrieval purposes. Regions of interest in mammographic images are represented with the character n-gram features. These features are then used as input to a back-propagation neural network for classification of regions into normal and abnormal categories. Experiments on the miniMIAS database show that character n-gram features are useful in classifying the regions into normal and abnormal categories. Promising classification accuracies are observed (83.33%) for fatty background tissue, warranting further investigation. We argue that classifying regions of interest would reduce the number of comparisons necessary for finding similar images in the database and hence would reduce the time required for retrieval of past similar cases.
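The "visual character n-gram" idea in the abstract above can be sketched roughly as follows: quantize pixel intensities into a small character alphabet and count n-grams over the resulting string. The quantization scheme and alphabet here are illustrative guesses, not the paper's encoding.

```python
from collections import Counter

def visual_char_ngrams(pixels, n=3, levels=8):
    """Quantize 0-255 intensities into `levels` bins, map each bin to a
    character, and count character n-grams over the resulting string."""
    chars = "".join(chr(ord("a") + p * levels // 256) for p in pixels)
    return Counter(chars[i:i + n] for i in range(len(chars) - n + 1))

# Toy 1-D "region of interest": a dark run followed by a bright run.
features = visual_char_ngrams([0, 10, 20, 250, 255, 245], n=2)
print(features.most_common(2))
```

In the paper the resulting feature counts feed a back-propagation neural network; here they are simply printed.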
Rapid health data repository allocation using predictive machine learning
- Uddin, Ashraf, Stranieri, Andrew, Gondal, Iqbal, Balasubramanian, Venki
- Authors: Uddin, Ashraf , Stranieri, Andrew , Gondal, Iqbal , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: Health Informatics Journal Vol. 26, no. 4 (2020), p. 3009-3036
- Full Text:
- Reviewed:
- Description: Health-related data is stored in a number of repositories that are managed and controlled by different entities. For instance, Electronic Health Records are usually administered by governments. Electronic Medical Records are typically controlled by health care providers, whereas Personal Health Records are managed directly by patients. Recently, Blockchain-based health record systems largely regulated by technology have emerged as another type of repository. Repositories for storing health data differ from one another based on cost, level of security and quality of performance. Not only have the types of repository increased in recent years, but the quantum of health data to be stored has also increased. For instance, the advent of wearable sensors that capture physiological signs has resulted in an exponential growth in digital health data. The increase in the types of repository and amount of data has driven a need for intelligent processes to select appropriate repositories as data is collected. However, the storage allocation decision is complex and nuanced. The challenges are exacerbated when health data are continuously streamed, as is the case with wearable sensors. Although patients are not always solely responsible for determining which repository should be used, they typically have some input into this decision. Patients can be expected to have idiosyncratic preferences regarding storage decisions depending on their unique contexts. In this paper, we propose a predictive model for the storage of health data that can meet patient needs and make storage decisions rapidly, in real time, even with data streaming from wearable sensors. The model is built with a machine learning classifier that learns the mapping between characteristics of health data and features of storage repositories from a training set generated synthetically from correlations evident from small samples of experts. Results from the evaluation demonstrate the viability of the machine learning technique used. © The Author(s) 2020.
Blockchain leveraged decentralized IoT eHealth framework
- Uddin, Ashraf, Stranieri, Andrew, Gondal, Iqbal, Balasubramanian, Venki
- Authors: Uddin, Ashraf , Stranieri, Andrew , Gondal, Iqbal , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: Internet of Things Vol. 9, no. (March 2020), p. 100159
- Full Text:
- Reviewed:
- Description: Blockchain technologies, recently emerging for eHealth, can facilitate a secure, decentralized and patient-driven record management system. However, Blockchain technologies cannot accommodate the storage of data generated from IoT devices in remote patient management (RPM) settings, as this application requires a fast consensus mechanism, careful management of keys and enhanced protocols for privacy. In this paper, we propose a Blockchain leveraged decentralized eHealth architecture which comprises three layers: (1) the Sensing layer - Body Area Sensor Networks, including medical sensors typically on or in a patient's body transmitting data to a smartphone; (2) the NEAR processing layer - Edge Networks, consisting of devices at one hop from data sensing IoT devices; and (3) the FAR processing layer - Core Networks, comprising Cloud or other high-performance computing servers. A Patient Agent (PA) software replicated on the three layers processes medical data to ensure reliable, secure and private communication. The PA executes a lightweight Blockchain consensus mechanism and utilizes a Blockchain leveraged task-offloading algorithm to ensure the patient's privacy while outsourcing tasks. Performance analysis of the decentralized eHealth architecture has been conducted to demonstrate the feasibility of the system in the processing and storage of RPM data.
Criteria to measure social media value in health care settings : narrative literature review
- Ukoha, Chukwuma, Stranieri, Andrew
- Authors: Ukoha, Chukwuma , Stranieri, Andrew
- Date: 2019
- Type: Text , Journal article , Review
- Relation: Journal of Medical Internet Research Vol. 21, no. 12 (2019), p.
- Full Text:
- Reviewed:
- Description: Background: With the growing use of social media in health care settings, there is a need to measure outcomes resulting from its use to ensure continuous performance improvement. Despite the need for measurement, a unified approach for measuring the value of social media used in health care remains elusive. Objective: This study aimed to elucidate how the value of social media in health care settings can be ascertained and to taxonomically identify steps and techniques in social media measurement from a review of relevant literature. Methods: A total of 65 relevant articles drawn from 341 articles on the subject of measuring social media in health care settings were qualitatively analyzed and synthesized. The articles were selected from the literature of diverse disciplines including business, information systems, medical informatics, and medicine. Results: The review of the literature showed different levels and foci of analysis when measuring the value of social media in health care settings. It also showed that there are various metrics for measurement, levels of measurement, approaches to measurement, and scales of measurement. Each may be relevant, depending on the use case of social media in health care. Conclusions: A comprehensive yardstick is required to simplify the measurement of outcomes resulting from the use of social media in health care. At the moment, there is neither a consensus on what indicators to measure nor on how to measure them. We hope that this review is used as a starting point to create comprehensive measurement criteria for social media used in health care. © 2019 Chukwuma Ukoha, Andrew Stranieri.