Significance level of a query for enterprise data
- Authors: Thi Ngoc Dinh, Loan, Karmakar, Gour, Kamruzzaman, Joarder, Stranieri, Andrew, Das, Rajkumar
- Date: 2017
- Type: Text, Conference proceedings
- Relation: 30th International Business Information Management Association Conference - Vision 2020: Sustainable Economic development, Innovation Management, and Global Growth, IBIMA 2017; Madrid, Spain; 8th-9th November 2017 Vol. 2017-January, p. 4494-4504
- Full Text: false
- Reviewed:
- Description: To operate enterprise activities, a large number of queries need to be processed every day through an enterprise system. Consequently, such a system frequently faces a huge information overload and incurs high delays in producing query responses for big data. This is because traditional queries are normally treated with equal importance. With the advent of big data and its use in enterprise systems, and with the growth of process complexity, the traditional approach to query processing is no longer suitable: it does not consider semantic information and captures all data irrespective of their relevance to a business organization, which eventually increases the computational time in both big data collection and analysis. The significance level of a query can make a trade-off between query response delay and the extent of data collection and analysis. This motivates us to concentrate on determining the significance level of a query considering its importance to an enterprise system. To our knowledge, no such approach is available in the literature. To bridge this research gap, this paper, for the first time, proposes an approach to determine the significance level of a query in order to prioritize queries according to their relevance to a business organization. As business processes play key roles in any enterprise system and not all business processes are equally important, this is done by determining the semantic similarity between a query and the processes of a business organization, and the importance of a business process to that organization. In a case study on the enterprise system of a retail company, the results produced by our proposed approach show that the significance level is higher for more important queries than for less important ones.
A genetic algorithm-neural network wrapper approach for bundle branch block detection
- Authors: Allami, Ragheed, Stranieri, Andrew, Balasubramanian, Venki
- Date: 2016
- Type: Text, Conference paper
- Relation: Computing in Cardiology Conference (CinC), 2016; Vancouver, BC; 11-14 Sept. 2016, published in Computing in Cardiology p. 461-464
- Full Text: false
- Reviewed:
- Description: An electrocardiogram (ECG) records the electrical impulses of the heart and indicates rhythm anomalies for diagnostic purposes [1], [2]. A typical ECG tracing of the cardiac cycle consists of a P wave, QRS complex, and T wave [3]. Good performance of an ECG analyzing system depends heavily upon the accurate and reliable detection of the QRS complex, as well as the T and P waves [4]. A Bundle Branch Block (BBB) is a delay or obstruction along the electrical impulse pathways of the heart, manifesting in a prolonged QRS interval usually greater than 120 ms. The automated detection and classification of a BBB is important for prompt, accurate diagnosis and treatment to reduce morbidity and mortality.
A heuristic gene regulatory networks model for cardiac function and pathology
- Authors: Zarnegar, Armita, Vamplew, Peter, Stranieri, Andrew, Jelinek, Herbert
- Date: 2016
- Type: Text, Conference proceedings
- Relation: 2016 Computing in Cardiology Conference (CinC); Vancouver; 11-14 Sept. 2016
- Full Text: false
- Reviewed:
- Description: Genome-wide association studies (GWAS) and next-generation sequencing (NGS) have led to an increase in information about the human genome and cardiovascular disease. Understanding the role of genes in cardiac function and pathology requires modeling gene interactions and identifying regulatory genes as part of a gene regulatory network (GRN). Feature selection and data reduction are not sufficient on their own and require domain knowledge to deal with large data. We propose three novel innovations in constructing a GRN based on heuristics: a 2D Visualised Co-regulation function; post-processing to identify gene-gene interactions; and, finally, a threshold algorithm to identify the hub genes that provide the backbone of the GRN. The 2D Visualised Co-regulation function performed significantly better than Pearson's correlation for measuring pairwise associations (t=3.46, df=5, p=0.018). The F-measure improved from 0.11 to 0.12. The hub network provided a 60% improvement over that reported in the literature. The performance of the hub network was then also compared against ARACNe and was significantly better (p=0.024). We conclude that a heuristic approach to developing GRNs has the potential to improve our understanding of gene regulation and interaction in diverse biological functions and disease.
A model for the introduction of Ayurvedic and Allopathic Electronic Health Records in Sri Lanka
- Authors: Stranieri, Andrew, Sahama, Tony, Butler-Henderson, Kerryn, Perera, Kamal
- Date: 2016
- Type: Text, Conference proceedings
- Relation: 2016 IEEE International Symposium on Technology and Society; Trivandrum, Kerala, India; 20th-22nd October 2016 p. 56-61
- Full Text:
- Reviewed:
- Description: Fully integrated electronic health records (EHRs) provide healthcare providers and patients access to records across a health care system and promise efficient and effective provision of health care. However, fully integrated records have proven to be very expensive and difficult to establish. Currently, EHRs have been developed largely to accommodate Western medicine events. These barriers impact the introduction of EHRs in Sri Lanka, where health budgets are already stretched and Ayurvedic medicine is routinely practiced alongside Allopathic medicine. This article identifies requirements for EHRs in the Sri Lankan context and advances a model for the introduction of EHRs that suits that context. The model is justified by drawing on insights and experiences with EHRs in Western nations.
A taxonomy for mHealth
- Authors: Edirisinghe, Ruwini, Stranieri, Andrew, Wickramasinghe, Nilmini
- Date: 2016
- Type: Text, Book chapter
- Relation: Handbook of Research on Healthcare Administration and Management Chapter 36 p. 596-615
- Full Text: false
- Reviewed:
- Description: Recently, we have been witnessing an exponential growth in remote monitoring and mobile applications for healthcare. These solutions are all designed to ultimately enable the consumer to enjoy better healthcare delivery and/or wellness. In order to understand this growing area, we believe it is necessary to develop a framework to analyse and evaluate these solutions. The purpose of this chapter, then, is to offer a suitable taxonomy to systematically analyse and evaluate the existing solutions based on a number of dimensions, including technological, clinical, social, and economic.
Cost-analysis of teledentistry in residential aged care facilities
- Authors: Mariño, Rodrigo, Tonmukayakul, Utsana, Manton, David, Stranieri, Andrew, Clarke, Ken
- Date: 2016
- Type: Text, Journal article
- Relation: Journal of Telemedicine and Telecare Vol. 22, no. 6 (2016), p. 326-332
- Full Text: false
- Reviewed:
- Description: Introduction: The purpose of this research was to conduct a cost-analysis, from a public healthcare perspective, comparing the costs and benefits of face-to-face patient examination assessments conducted by a dentist at a residential aged care facility (RACF) situated in rural areas of the Australian state of Victoria, with two teledentistry approaches utilizing virtual oral examination. Methods: The costs associated with implementing and operating the teledentistry approach were identified and measured using 2014 prices in Australian dollars. Costs were measured as direct intervention costs and programme costs. A population of 100 RACF residents was used as a basis to estimate the cost of oral examination and treatment plan development for the traditional face-to-face model vs. two teledentistry models: an asynchronous review and treatment plan preparation; and real-time communication with a remotely located oral health professional. Results: It was estimated that if 100 residents received an asynchronous oral health assessment and treatment plan, the net cost from a healthcare perspective would be AU$32.35 (AU$27.19–AU$38.49) per resident. The total cost of conventional face-to-face examinations by a dentist would be AU$36.59 (AU$30.67–AU$42.98) per resident using realistic assumptions. Meanwhile, the total cost of real-time remote oral examination would be AU$41.28 (AU$34.30–AU$48.87) per resident. Discussion: Teledental asynchronous patient assessments were the lowest-cost service model. Access to oral health professionals is generally low in RACFs; however, real-time consultation could potentially achieve better outcomes due to two-way communication between the nurse and a remote oral health professional via health promotion/disease prevention delivered in conjunction with the oral examination.
Data analytics identify glycated haemoglobin co-markers for type 2 diabetes mellitus diagnosis
- Authors: Jelinek, Herbert, Stranieri, Andrew, Yatsko, Andrew, Venkatraman, Sitalakshmi
- Date: 2016
- Type: Text, Journal article
- Relation: Computers in Biology and Medicine Vol. 75, no. (2016), p. 90-97
- Full Text: false
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is being more commonly used as an alternative test for the identification of type 2 diabetes mellitus (T2DM), or to add to fasting blood glucose level and oral glucose tolerance test results, because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. HbA1c cut-off values of 6.5% or above have been recommended for clinical use based on the presence of diabetic comorbidities in population studies. However, outcomes of large trials with an HbA1c of 6.5% as a cut-off have been inconsistent for a diagnosis of T2DM. This suggests that an HbA1c cut-off of 6.5% as a single marker may not be sensitive enough, or may be too simple and miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms have been applied to a large clinical dataset to identify an optimal cut-off value for HbA1c and to identify whether additional biomarkers can be used together with HbA1c to enhance the diagnostic accuracy of T2DM. T2DM classification accuracy increased from 78.71% for HbA1c at 6.5% to 86.64% if 8-hydroxy-2-deoxyguanosine (8-OHdG), an oxidative stress marker, was included in the algorithm. A similar result was obtained when interleukin-6 (IL-6) was included (accuracy = 85.63%), but with a lower optimal HbA1c range, between 5.73% and 6.22%. The application of data analytics to medical records from the Diabetes Screening programme demonstrates that data analytics, combined with large clinical datasets, can be used to identify clinically appropriate cut-off values and to identify novel biomarkers that, when included, improve the accuracy of T2DM diagnosis even when HbA1c levels are below or equal to the current cut-off of 6.5%.
ECG reduction for wearable sensor
- Authors: Allami, Ragheed, Stranieri, Andrew, Balasubramanian, Venki, Jelinek, Herbert
- Date: 2016
- Type: Text, Conference proceedings
- Relation: 2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS); Naples, Italy; 28th November-1st December 2016 p. 520-525
- Full Text:
- Reviewed:
- Description: The transmission, storage and analysis of electrocardiogram (ECG) data in real time is essential for remote patient monitoring with wearable ECG devices and mobile ECG contexts. However, this remains a challenge to achieve within the processing power and storage capacity of mobile devices. ECG reduction algorithms have an important role to play in reducing the processing requirements for mobile devices; however, many existing ECG reduction and compression algorithms are computationally expensive to execute on mobile devices and have not been designed for real-time computation and incremental data arrival. In this paper, we describe a computationally naive, yet effective, algorithm that achieves high ECG reduction rates while maintaining key diagnostic features, including the PR, QRS, ST, QT and RR intervals. While reduction does not enable ECG waves to be reproduced, the ability to transmit key indicators (diagnostic features) using minimal computational resources is particularly useful in mobile health contexts involving power-constrained sensors and devices. Results indicate that the proposed reduction algorithm outperforms other ECG reduction algorithms at a reduction/compression ratio (CR) of 5:1. If power or processing capacity is low, the algorithm can readily switch to a compression ratio of up to 10:1 while still maintaining an error rate below 10%.
Group decision making in health care : A case study of multidisciplinary meetings
- Authors: Sharma, Vishakha, Stranieri, Andrew, Burstein, Frada, Warren, Jim, Daly, Sharon, Patterson, Louise, Yearwood, John, Wolff, Alan
- Date: 2016
- Type: Text, Journal article
- Relation: Journal of Decision Systems Vol. 25, no. (2016), p. 476-485
- Full Text:
- Reviewed:
- Description: Recent studies have demonstrated that Multi-Disciplinary Meetings (MDMs) practiced in some medical contexts can contribute to positive health care outcomes. The group reasoning and decision-making in MDMs has been found to be most effective when deliberations revolve around the patient's needs, comprehensive information is available during the meeting, core members attend and the MDM is effectively facilitated. This article presents a case study of MDMs in cancer care in a region of Australia. The case study draws on a group reasoning model called the Reasoning Community model to analyse MDM deliberations and illustrate that many factors are important to support group reasoning, not solely the provision of pertinent information. The case study has implications for the use of data analytics in any group reasoning context.
Missing data imputation for individualised CVD diagnostic and treatment
- Authors: Venkatraman, Sitalakshmi, Yatsko, Andrew, Stranieri, Andrew, Jelinek, Herbert
- Date: 2016
- Type: Text, Conference paper
- Relation: Computing in Cardiology, 2016 Vol. 43, IEEE Computer Society
- Full Text: false
- Reviewed:
- Description: Cardiac health screening standards require increasingly more clinical tests consisting of blood, urine and anthropometric measures, as well as an extensive clinical and medication history. To ensure optimal screening referrals, diagnostic determinants need to be highly accurate to reduce false positives and the ensuing stress to individual patients. However, the data from individual patients partaking in population screening are often incomplete. The current study provides an imputation algorithm that has been applied to patient-centered cardiac health screening. Missing values are iteratively imputed in conjunction with combinations of values on subsets of selected features. The approach was evaluated on the DiabHealth dataset containing 2800 records with over 180 attributes. The results for predicting CVD after data completion showed sensitivity and specificity of 94% and 99%, respectively. Removing variables that directly define cardiac events and associated conditions left 'age', followed by use of antihypertensive and anti-cholesterol medication (especially statins), among the best predictors.
Remote monitoring and mobile Apps
- Authors: Stranieri, Andrew, Edirisinghe, Ruwini, Wickramasinghe, Nilmini
- Date: 2016
- Type: Text, Book chapter
- Relation: Contemporary Consumer Health Informatics Chapter 16 p. 297-318
- Full Text: false
- Reviewed:
- Description: Recently, we have been witnessing an exponential growth in mobile health (mHealth) for health care. These solutions are all designed ultimately to enable the consumer to enjoy better health-care delivery and/or wellness. In order to understand this growing area, we believe it is necessary to develop a framework to analyse and evaluate these solutions. The purpose of this chapter is to proffer a suitable taxonomy to do this.
Texture image classification using pixel N-grams
- Authors: Kulkarni, Pradnya, Stranieri, Andrew, Ugon, Julien
- Date: 2016
- Type: Text, Conference proceedings
- Relation: 2016 IEEE International Conference on Signal and Image Processing (ICSIP); Beijing, China; 13-15 Aug. 2016 p. 137-141
- Full Text: false
- Reviewed:
- Description: Various statistical methods, such as the co-occurrence matrix and local binary patterns, and spectral approaches, such as Gabor filters, have been used for generating global features for image classification. However, global image features fail to distinguish between local variations within an image. The bag-of-visual-words (BoVW) model does capture local variations in an image, but typically does not consider spatial relationships between the visual words. Here, a novel image representation, 'Pixel N-grams', inspired by the character N-gram concept in text retrieval, has been applied for texture classification. Texture is an important property for image classification. Experiments on the benchmark texture database (UIUC) demonstrate that the overall classification accuracy resulting from the Pixel N-gram approach (89.5%) is comparable with that achieved using the BoVW approach (84.4%), with the added advantages of simplicity and reduced computational cost.
A scalable cloud Platform for Active healthcare monitoring applications
- Authors: Balasubramanian, Venki, Stranieri, Andrew
- Date: 2015
- Type: Text, Conference paper
- Relation: 2014 IEEE Conference on e-Learning, e-Management and e-Services, IC3e 2014; Melbourne, Australia; 10th-12th December 2014 p. 93-98
- Full Text:
- Reviewed:
- Description: Continuous, remote monitoring of patients using wearable sensors can facilitate early detection of many conditions and can help to manage the growing healthcare crisis worldwide. A remote patient monitoring application consists of many emerging services, such as wireless wearable sensor configuration, patient registration and authentication, collaborative consultation of doctors, and storage and maintenance of electronic health records. The provision of these services requires the development and maintenance of a remote healthcare monitoring application (HMA) that includes a body area wireless sensor network (BASWN) and Health Applications (HA) to detect specific health issues. In addition, the deployment of HMAs for different hospitals is not easily scalable owing to the heterogeneous nature of the hardware and software involved. Cloud computing overcomes this by allowing simple and easy maintenance of ICT infrastructure. In this work, we report a real-time-like cloud-based architecture known as the Assistive Patient monitoring cloud Platform for Active healthcare applications (AppA), using a delegate pattern. The built AppA is highly scalable, capable of spawning new instances based on monitoring requirements from the health care providers, and is aligned with scalable economic models.
Addressing the complexities of big data analytics in healthcare : The diabetes screening case
- Authors: De Silva, Daswin, Burstein, Frada, Jelinek, Herbert, Stranieri, Andrew
- Date: 2015
- Type: Text, Journal article
- Relation: Australasian Journal of Information Systems Vol. 19, no. (2015), p. S99-S115
- Full Text:
- Reviewed:
- Description: The healthcare industry generates a high throughput of medical, clinical and omics data of varying complexity and features. Clinical decision-support is gaining widespread attention as medical institutions and governing bodies turn towards better management of this data for effective and efficient healthcare delivery and quality-assured outcomes. A mass of data across all stages, from disease diagnosis to palliative care, is further indication of the opportunities and challenges for effective data management, analysis, prediction and optimization techniques as part of knowledge management in clinical environments. Big Data analytics (BDA) presents the potential to advance this industry with reforms in clinical decision-support and translational research. However, adoption of big data analytics has been slow due to complexities posed by the nature of healthcare data. The success of these systems is hard to predict, so further research is needed to provide a robust framework to ensure investment in BDA is justified. In this paper we investigate these complexities from the perspective of updated Information Systems (IS) participation theory. We present a case study on a large diabetes screening project to integrate, converge and derive expedient insights from such an accumulation of data, and make recommendations for a successful BDA implementation grounded in a participatory framework and the specificities of big data in the healthcare context.
Analysis and comparison of co-occurrence matrix and pixel n-gram features for mammographic images
- Authors: Kulkarni, Pradnya, Stranieri, Andrew, Kulkarni, Sid, Ugon, Julien, Mittal, Manish
- Date: 2015
- Type: Text, Conference paper
- Relation: International Conference on Communication and Computing p. 7-14
- Full Text: false
- Reviewed:
- Description: Mammography is a proven way of detecting breast cancer at an early stage. Various feature extraction techniques, such as histograms, the co-occurrence matrix, local binary patterns, Gabor filters and wavelet transforms, are used for analysing mammograms. The novel pixel N-gram feature extraction technique is inspired by the character N-gram concept of text retrieval. In this paper, we compare the novel N-gram feature extraction technique with the co-occurrence matrix feature extraction technique. The experiments were conducted on the benchmark miniMIAS mammography database. Classification of mammograms into normal and abnormal categories using N-gram features showed promising results, with greater classification accuracy, sensitivity and specificity compared to classification using co-occurrence matrix features. Moreover, computation of N-gram features was found to be considerably faster than computation of co-occurrence matrix features.
AppA : Assistive patient monitoring cloud platform for active healthcare applications
- Authors: Balasubramanian, Venki, Stranieri, Andrew, Kaur, Ranjit
- Date: 2015
- Type: Text, Conference paper
- Relation: 9th International Conference on Ubiquitous Information Management and Communication, ACM IMCOM 2015; Bali, Indonesia; 8th-10th January 2015
- Full Text:
- Reviewed:
- Description: Continuous, remote monitoring of patients using wearable sensors can facilitate early detection of many conditions and can help to manage the growing healthcare crisis worldwide. A remote patient monitoring application consists of many emerging services, such as wireless wearable sensor configuration, patient registration and authentication, collaborative consultation of doctors, and storage and maintenance of electronic health records. The provision of these services requires the development and maintenance of a remote healthcare monitoring application (HMA) that includes a body area wireless sensor network (BASWN) and Health Applications (HA) to detect specific health issues. In addition, the deployment of HMAs for different hospitals is not easily scalable owing to the heterogeneous nature of the hardware and software involved. Cloud computing overcomes this by allowing simple and easy maintenance of ICT infrastructure. In this work, we report a real-time-like cloud-based architecture known as the Assistive Patient monitoring cloud Platform for Active healthcare applications (AppA), using a delegate pattern. The built AppA is highly scalable and capable of spawning new instances based on the monitoring requirements from the health care providers, and is aligned with scalable economic models.
Business context in big data analytics
- Authors: Dinh, Loan, Karmakar, Gour, Kamruzzaman, Joarder, Stranieri, Andrew
- Date: 2015
- Type: Text, Conference proceedings
- Relation: 10th International Conference on Information, Communications and Signal Processing, ICICS 2015; Singapore; 2nd-4th December 2015
- Full Text: false
- Reviewed:
- Description: Big data are generated from a variety of sources with different representation forms and formats, which raises the research question of how data relevant to a business context can be captured and analyzed more accurately to represent deep and relevant business insight. A number of existing big data analytic methods in the literature consider contextual information, such as the context of a query and its users, or the context of a query-driven recommendation system. However, these methods still face many challenges, and none of them has considered the context of a business in either the data collection or the analysis process. To address this research gap, we introduce a big data analytic technique which embeds a business context, in terms of the significance level of a query, into the bedrock of its data collection and analysis process. We implemented our proposed model under the Hadoop framework, considering the context of a grocery shop. The results exhibit that our method substantially increases the amount of data collected, and the depth of insight, as the significance level value increases.
Data-analytically derived flexible HbA1c thresholds for type 2 diabetes mellitus diagnostic
- Authors: Stranieri, Andrew, Yatsko, Andrew, Jelinek, Herbert, Venkatraman, Sitalakshmi
- Date: 2015
- Type: Text, Journal article
- Relation: Artificial Intelligence Research Vol. 5, no. 1 (2015), p. 111-134
- Full Text:
- Reviewed:
- Description: Glycated haemoglobin (HbA1c) is now more commonly used as an alternative test to the fasting plasma glucose and oral glucose tolerance tests for the identification of Type 2 Diabetes Mellitus (T2DM) because it is easily obtained using point-of-care technology and represents long-term blood sugar levels. According to WHO guidelines, HbA1c values of 6.5% or above are required for a diagnosis of T2DM. However, outcomes of a large number of trials with HbA1c have been inconsistent across the clinical spectrum, and further research is required to determine the efficacy of HbA1c testing in the identification of T2DM. Medical records from a diabetes screening program in Australia illustrate that many patients could be classified as diabetic if other clinical indicators are included, even though the HbA1c result does not exceed 6.5%. This suggests that a cutoff for the general population of 6.5% may be too simple and may miss individuals at risk or with already overt, undiagnosed diabetes. In this study, data mining algorithms have been applied to identify markers that can be used with HbA1c. The results indicate that T2DM is best classified by HbA1c at 6.2%, a cutoff level lower than the currently recommended one. Assuming threshold flexibility, the cutoff can be lowered even further if, in addition to HbA1c being high, the rule is conditioned on oxidative stress or inflammation being present, atherogenicity or adiposity being high, hypertension being diagnosed, and so on.
Diagnostic with incomplete nominal/discrete data
- Authors: Jelinek, Herbert, Yatsko, Andrew, Stranieri, Andrew, Venkatraman, Sitalakshmi, Bagirov, Adil
- Date: 2015
- Type: Text, Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic/classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and is helpful in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist or can be reformulated for this representation. Some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour, and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match to the true data; instead, they are intended to cause the least hindrance to classification. The proposed techniques find their application particularly in medical diagnostics. Where clinical data represent a number of related conditions, taking the Cartesian product of class values of the underlying sub-problems allows narrowing down the selection of missing value substitutes. Real-world data examples, some publicly available, are enlisted for testing. The proposed and benchmark methods are compared by classifying the data before and after missing value imputation, indicating a significant improvement.
Patient admission prediction using a pruned fuzzy min-max neural network with rule extraction
- Authors: Wang, Jin, Lim, Cheepeng, Creighton, Douglas, Khorsavi, Abbas, Nahavandi, Saeid, Ugon, Julien, Vamplew, Peter, Stranieri, Andrew, Martin, Laura, Freischmidt, Anton
- Date: 2015
- Type: Text, Journal article
- Relation: Neural Computing and Applications Vol. 26, no. 2 (2015), p. 277-289
- Full Text: false
- Reviewed:
- Description: A useful patient admission prediction model that helps the emergency department of a hospital admit patients efficiently is of great importance. It not only improves the quality of care provided by the emergency department but also reduces patients' waiting time. This paper proposes an automatic prediction method for patient admission based on a fuzzy min-max neural network (FMM) with rule extraction. The FMM neural network forms a set of hyperboxes by learning through data samples, and the learned knowledge is used for prediction. In addition to providing predictions, decision rules are extracted from the FMM hyperboxes to provide an explanation for each prediction. In order to simplify the structure of the FMM and the decision rules, an optimization method that simultaneously maximizes prediction accuracy and minimizes the number of FMM hyperboxes is proposed. Specifically, a genetic algorithm is formulated to find the optimal configuration of the decision rules. The experimental results using a large data set consisting of 450,740 real patient records reveal that the proposed method achieves comparable or even better prediction accuracy than state-of-the-art classifiers, with the additional ability to extract a set of explanatory rules to justify its predictions.