A count data model for heart rate variability forecasting and premature ventricular contraction detection
- Allami, Ragheed, Stranieri, Andrew, Balasubramanian, Venki, Jelinek, Herbert
- Authors: Allami, Ragheed , Stranieri, Andrew , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2017
- Type: Text , Journal article
- Relation: Signal Image and Video Processing Vol. 11, no. 8 (2017), p. 1427-1435
- Full Text:
- Reviewed:
- Description: Heart rate variability (HRV) measures, including the standard deviation of inter-beat variations (SDNN), require at least 5 min of ECG recordings to accurately measure HRV. In this paper, we predict the 5-min SDNN using counts data derived from a 3-min ECG recording and also detect premature ventricular contraction (PVC) beats with a high degree of accuracy. The approach uses counts data combined with a Poisson-generated function that requires minimal computational resources and is well suited to remote patient monitoring with wearable sensors that have limited power, storage and processing capacity. The ease of use and accuracy of the algorithm provide an opportunity for accurate assessment of HRV and reduce the time taken to review patients in real time. The PVC beat detection is implemented using the same count data model together with knowledge-based rules derived from clinical knowledge.
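The two quantities this abstract turns on, the SDNN statistic and the "counts data", can be sketched in a few lines. This is an illustrative reconstruction only: the window length and the Poisson-based mapping from counts to 5-min SDNN are not specified here, and `sdnn`/`beat_counts` are hypothetical helper names, not the paper's code.

```python
import math

def sdnn(rr_intervals_ms):
    """Standard deviation of normal-to-normal inter-beat intervals (ms)."""
    n = len(rr_intervals_ms)
    mean = sum(rr_intervals_ms) / n
    var = sum((x - mean) ** 2 for x in rr_intervals_ms) / (n - 1)
    return math.sqrt(var)

def beat_counts(rr_intervals_ms, window_ms=10_000):
    """Counts data: number of beats falling in successive fixed windows,
    i.e. the per-window counts a Poisson model would be fitted to."""
    counts, t, edge, c = [], 0.0, window_ms, 0
    for rr in rr_intervals_ms:
        t += rr
        while t > edge:           # beat falls past the current window
            counts.append(c)
            c, edge = 0, edge + window_ms
        c += 1
    counts.append(c)
    return counts
```

A perfectly regular rhythm gives SDNN of zero and evenly filled windows; irregular rhythms raise SDNN and spread the counts.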
Building roof plane extraction from LIDAR data
- Awrangjeb, Mohammad, Lu, Guojun
- Authors: Awrangjeb, Mohammad , Lu, Guojun
- Date: 2013
- Type: Text , Conference paper
- Relation: 2013 International Conference on Digital Image Computing: Techniques and Applications (DICTA)
- Full Text:
- Reviewed:
- Description: This paper presents a new segmentation technique to use LIDAR point cloud data for automatic extraction of building roof planes. The raw LIDAR points are first classified into two major groups: ground and non-ground points. The ground points are used to generate a 'building mask' in which the black areas represent the ground where there are no laser returns below a certain height. The non-ground points are segmented to extract the planar roof segments. First, the building mask is divided into small grid cells. The cells containing the black pixels are clustered such that each cluster represents an individual building or tree. Second, the non-ground points within a cluster are segmented based on their coplanarity and neighbourhood relations. Third, the planar segments are refined using a rule-based procedure that assigns the common points among the planar segments to the appropriate segments. Finally, another rule-based procedure is applied to remove tree planes which are generally small in size and randomly oriented. Experimental results on three Australian sites have shown that the proposed method offers high building detection and roof plane extraction rates.
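The first two steps described above, separating ground from non-ground returns and grouping mask cells into clusters that each represent an individual building or tree, can be sketched as follows. This is a simplified illustration (a flat-terrain height threshold and a 4-connected flood fill), not the paper's exact procedure, and the function names are hypothetical.

```python
def split_ground(points, height_threshold):
    """Classify raw LIDAR returns (x, y, z) into ground and non-ground
    points by height above a datum (flat-terrain simplification)."""
    ground = [p for p in points if p[2] <= height_threshold]
    non_ground = [p for p in points if p[2] > height_threshold]
    return ground, non_ground

def cluster_cells(black_cells):
    """Flood-fill 4-connected mask cells (row, col) into clusters, each
    cluster a candidate individual building or tree."""
    remaining, clusters = set(black_cells), []
    while remaining:
        stack, cluster = [remaining.pop()], set()
        while stack:
            r, c = stack.pop()
            cluster.add((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    stack.append(nb)
        clusters.append(cluster)
    return clusters
```

The per-cluster non-ground points would then be segmented into planes by coplanarity, which is the part the paper's contribution focuses on.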
A performance review of recent corner detectors
- Awrangjeb, Mohammad, Lu, Guojun
- Authors: Awrangjeb, Mohammad , Lu, Guojun
- Date: 2013
- Type: Text , Conference paper
- Relation: International Conference on Digital Image Computing: Techniques and Applications, 26 November 2013 to 28 November 2013 p. 157-164
- Full Text:
- Reviewed:
- Description: Contour-based corner detectors directly or indirectly estimate a significance measure (e.g., curvature) on the points of a planar curve and select the curvature extrema points as corners. A number of promising contour-based corner detectors have recently been proposed. They differ mainly in how the curvature is estimated at each point of the given curve. As the curvature on a digital curve can only be approximated, it is important to estimate a curvature that remains stable against significant noise on the curve, for example, from geometric transformations and compression. Moreover, in many applications, for instance in content-based image retrieval, a fast corner detector is a prerequisite, so the time a detector takes to find corners in a given image is also a primary characteristic. In addition, different authors have evaluated their detectors on different platforms using different evaluation systems. Evaluation systems that depend on human judgement and visual identification of corners are manual and too subjective, and applying a manual system to a large test database is expensive. Therefore, it is important to evaluate the detectors on a common platform using an automatic evaluation system. This paper first reviews six of the most recent and best-performing corner detectors and analyses their theoretical running times. It then uses an automatic evaluation system to analyse their performance. Both robustness to noise and efficiency are estimated to rank the detectors.
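The common scheme the review covers, estimating a curvature measure along a planar curve and keeping its extrema as corners, can be sketched as follows. The turning-angle approximation used here is one simple choice, not any of the six reviewed detectors, and the threshold value is illustrative.

```python
import math

def curvature(curve):
    """Approximate unsigned curvature at each point of a planar curve via
    the turning angle between successive segments (endpoints get 0)."""
    ks = [0.0]
    for i in range(1, len(curve) - 1):
        (x0, y0), (x1, y1), (x2, y2) = curve[i - 1], curve[i], curve[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = abs(a2 - a1)
        ks.append(min(d, 2 * math.pi - d))   # wrap the angle difference
    ks.append(0.0)
    return ks

def corners(curve, threshold=0.5):
    """Keep local curvature maxima above a threshold as corner candidates."""
    ks = curvature(curve)
    return [i for i in range(1, len(ks) - 1)
            if ks[i] >= threshold and ks[i] >= ks[i - 1] and ks[i] >= ks[i + 1]]
```

On an L-shaped polyline the single 90-degree bend is the only point whose turning angle survives the threshold, which is exactly the extremum-selection idea the reviewed detectors refine.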
An L-2-Boosting Algorithm for Estimation of a Regression Function
- Bagirov, Adil, Clausen, Conny, Kohler, Michael
- Authors: Bagirov, Adil , Clausen, Conny , Kohler, Michael
- Date: 2010
- Type: Text , Journal article
- Relation: IEEE Transactions on Information Theory Vol. 56, no. 3 (2010), p. 1417-1429
- Full Text:
- Reviewed:
- Description: An L-2-boosting algorithm for estimating a regression function from random design is presented. It consists of repeatedly fitting a function from a fixed nonlinear function space to the residuals of the data by least squares, and defining the estimate as a linear combination of the resulting least squares estimates. Splitting of the sample is used to decide after how many iterations of smoothing of the residuals the algorithm terminates. The rate of convergence of the algorithm is analyzed in the case of an unbounded response variable. The method is used to fit a sum of maxima of minima of linear functions to a given data set, and is compared with other nonparametric regression estimates using simulated data.
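The core loop, fitting a function from a fixed class to the current residuals and accumulating a linear combination of the fits, can be sketched with regression stumps as the base class. This is a toy illustration only: the paper fits maxima of minima of linear functions, not stumps, and `fit_stump` assumes at least two distinct x values; the shrinkage and iteration count are arbitrary.

```python
def fit_stump(xs, ys):
    """Least-squares regression stump on 1-D input: pick the split with the
    lowest squared error; assumes at least two distinct x values."""
    best = None
    for s in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= s]
        right = [y for x, y in zip(xs, ys) if x > s]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - (ml if x <= s else mr)) ** 2 for x, y in zip(xs, ys))
        if best is None or err < best[0]:
            best = (err, s, ml, mr)
    _, s, ml, mr = best
    return lambda x: ml if x <= s else mr

def l2_boost(xs, ys, iterations=20, shrinkage=0.5):
    """L2-boosting: repeatedly least-squares-fit the base learner to the
    current residuals; the estimate is the linear combination of the fits."""
    fitted, residuals = [], list(ys)
    for _ in range(iterations):
        h = fit_stump(xs, residuals)
        fitted.append(h)
        residuals = [r - shrinkage * h(x) for x, r in zip(xs, residuals)]
    return lambda x: sum(shrinkage * h(x) for h in fitted)
```

In the paper the number of iterations is not fixed in advance but chosen by sample splitting, i.e. validating the boosted fit on held-out data.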
Educational big data : predictions, applications and challenges
- Bai, Xiaomei, Zhang, Fuli, Li, Jinzhou, Guo, Teng, Xia, Feng
- Authors: Bai, Xiaomei , Zhang, Fuli , Li, Jinzhou , Guo, Teng , Xia, Feng
- Date: 2021
- Type: Text , Journal article , Review
- Relation: Big Data Research Vol. 26, no. (2021), p.
- Full Text:
- Reviewed:
- Description: Educational big data is becoming a strategic educational asset, exceptionally significant in advancing educational reform. The term stems from the rapidly growing body of educational data, including students' inherent attributes, learning behavior, and psychological state. Educational big data has many applications in educational administration, teaching innovation, and research management; representative examples are student academic performance prediction, employment recommendation, and financial support for low-income students. Different empirical studies have shown that it is possible to predict student performance in courses during the next term. Predictive research for the higher education stage has become an attractive area of study since it allows us to predict student behavior. In this survey, we review predictive research, its applications, and its challenges. We first introduce the significance and background of educational big data. Second, we review research on students' academic performance prediction, including factors influencing academic performance, prediction models, and evaluation indices. Third, we introduce the applications of educational big data such as prediction, recommendation, and evaluation. Finally, we investigate challenging research issues in this area. This discussion aims to provide a comprehensive overview of educational big data. © 2021 Elsevier Inc. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Feng Xia” is provided in this record**
Local contrast as an effective means to robust clustering against varying densities
- Chen, Bo, Ting, Kaiming, Washio, Takashi, Zhu, Ye
- Authors: Chen, Bo , Ting, Kaiming , Washio, Takashi , Zhu, Ye
- Date: 2018
- Type: Text , Journal article
- Relation: Machine Learning Vol. 107, no. 8-10 (2018), p. 1621-1645
- Full Text:
- Reviewed:
- Description: Most density-based clustering methods have difficulty detecting clusters of hugely different densities in a dataset. A recent density-based clustering method, CFSFDP, appears to have mitigated the issue. However, by formalising the condition under which it fails, we reveal that CFSFDP still has the same issue. To address it, we propose a new measure called Local Contrast, as an alternative to density, to find cluster centers and detect clusters. We then apply Local Contrast to CFSFDP and create a new clustering method called LC-CFSFDP, which is robust in the presence of varying densities. Our empirical evaluation shows that LC-CFSFDP outperforms CFSFDP and three other state-of-the-art variants of CFSFDP. © 2018, The Author(s).
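The idea behind Local Contrast, scoring each point by how its density compares with that of its neighbours rather than by the density value itself, can be sketched as follows. This is a naive O(n²) illustration with hypothetical parameter choices and a simplified definition, not the paper's exact formulation.

```python
import math

def density(points, i, eps):
    """Naive density of point i: number of neighbours within radius eps."""
    xi, yi = points[i]
    return sum(1 for x, y in points
               if math.hypot(x - xi, y - yi) <= eps) - 1

def local_contrast(points, eps, k):
    """Local Contrast of point i: how many of its k nearest neighbours have
    lower density than i. A center of a sparse cluster can still score
    highly, since the comparison is relative, not absolute."""
    n = len(points)
    dens = [density(points, i, eps) for i in range(n)]
    lc = []
    for i, (xi, yi) in enumerate(points):
        nbrs = sorted((j for j in range(n) if j != i),
                      key=lambda j: math.hypot(points[j][0] - xi,
                                               points[j][1] - yi))[:k]
        lc.append(sum(1 for j in nbrs if dens[j] < dens[i]))
    return lc
```

Replacing raw density with such a relative score is what lets a CFSFDP-style center selection treat a dense and a sparse cluster on an equal footing.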
Assessing cohesion of the rocks proposing a new intelligent technique namely group method of data handling
- Chen, Wusi, Khandelwal, Manoj, Murlidhar, Bhatawdekar, Bui, Dieu, Tahir, Mahmood, Katebi, Javad
- Authors: Chen, Wusi , Khandelwal, Manoj , Murlidhar, Bhatawdekar , Bui, Dieu , Tahir, Mahmood , Katebi, Javad
- Date: 2020
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 36, no. 2 (2020), p. 783-793
- Full Text:
- Reviewed:
- Description: In this study, evaluation and prediction of rock cohesion is carried out using multiple regression as well as the group method of data handling (GMDH). It is a well-known fact that cohesion is the most crucial rock shear strength parameter and a key parameter for the stability evaluation of geotechnical structures such as rock slopes. To fulfill the aim of this study, a database of three model input parameters, i.e., P-wave velocity, uniaxial compressive strength and Brazilian tensile strength, and one model output, the cohesion of limestone samples, was prepared and utilized by GMDH. Different GMDH models with varying numbers of neurons and layers and varying selection pressures were tested and assessed. It was found that GMDH model number 4 (with 8 layers) shows the best performance among all tested models for the prediction and assessment of rock cohesion, with coefficient of determination (R2) values of 0.928 and 0.929 and root mean square error values of 0.3545 and 0.3154 for the training and testing datasets, respectively. Multiple regression analysis was also performed on the same database, and R2 values were obtained as 0.8173 and 0.8313 for the training and testing of the models, respectively. The GMDH technique developed in this study is introduced as a new model in the field of rock shear strength parameters. © 2019, Springer-Verlag London Ltd., part of Springer Nature.
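The two criteria used above to compare the GMDH and regression models, the coefficient of determination (R²) and root mean square error, can be computed as follows. This is a standard textbook sketch, not code from the paper.

```python
import math

def r2_rmse(actual, predicted):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot, and the
    root mean square error over the same pairs."""
    n = len(actual)
    mean = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    r2 = 1 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    return r2, rmse
```

A perfect fit gives R² of 1 and RMSE of 0, while predicting the mean everywhere gives R² of 0, which is why the reported jump from about 0.82 (regression) to about 0.93 (GMDH) is meaningful.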
An expert system methodology for SMEs and NPOs
- Authors: Dazeley, Richard
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 11th Australian Conference on Knowledge Management and Intelligent Decision Support, ACKMIDS 2008, Ballarat, Victoria : 8th-10th December 2008
- Full Text:
- Description: Traditionally, Expert Systems (ES) require a full analysis of the business problem by a Knowledge Engineer (KE) to develop a solution. This inherently makes ES technology very expensive and beyond the affordability of the majority of Small and Medium sized Enterprises (SMEs) and Non-Profit Organisations (NPOs). Therefore, SMEs and NPOs tend to only have access to off-the-shelf solutions to generic problems, which rarely meet the full extent of an organisation’s requirements. One existing methodological stream of research, Ripple-Down Rules (RDR), goes some of the way to being suitable for SMEs and NPOs as it removes the need for a knowledge engineer. This group of methodologies provides an environment where a company can develop large knowledge-based systems themselves, specifically tailored to the company’s individual situation. These methods, however, require constant supervision by the expert during development, which is still a significant burden on the organisation. This paper discusses an extension to an RDR method, known as Rated MCRDR (RM), and a feature called prudence analysis. This enhanced methodology for ES development is particularly well suited to the development of ES in restricted environments such as SMEs and NPOs.
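The RDR structure underlying the methods discussed can be sketched as a binary rule tree in which a satisfied rule's conclusion holds unless a deeper "except" rule also fires, which is what lets the expert patch mistakes incrementally without a knowledge engineer. This is a minimal single-classification illustration with invented example rules; Rated MCRDR's multiple-classification and rating machinery is not shown.

```python
class RDRNode:
    """Ripple-Down Rule: a condition, a conclusion, and 'except' (if_true)
    / 'else' (if_false) branches added as exceptions over time."""
    def __init__(self, cond, conclusion, if_true=None, if_false=None):
        self.cond, self.conclusion = cond, conclusion
        self.if_true, self.if_false = if_true, if_false

def classify(node, case, default="unknown"):
    """Walk the tree: the last satisfied rule's conclusion is returned,
    so a deeper 'except' rule overrides its parent."""
    conclusion = default
    while node is not None:
        if node.cond(case):
            conclusion = node.conclusion
            node = node.if_true
        else:
            node = node.if_false
    return conclusion
```

When the expert disagrees with an output, a new node is attached at the point where the walk ended, so existing behaviour is never re-engineered, only refined.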
- Description: 2003006507
Levels of explainable artificial intelligence for human-aligned conversational explanations
- Dazeley, Richard, Vamplew, Peter, Foale, Cameron, Young, Cameron, Aryal, Sunil, Cruz, Francisco
- Authors: Dazeley, Richard , Vamplew, Peter , Foale, Cameron , Young, Cameron , Aryal, Sunil , Cruz, Francisco
- Date: 2021
- Type: Text , Journal article
- Relation: Artificial Intelligence Vol. 299, no. (2021), p.
- Full Text:
- Reviewed:
- Description: Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investments by industry and governments, along with increased concern from the general public. People are affected by autonomous decisions every day and the public need to understand the decision-making process to accept the outcomes. However, the vast majority of the applications of XAI/IML are focused on providing low-level ‘narrow’ explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insights into an agent's: beliefs and motivations; hypotheses of other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or, processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making. This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, this paper will survey current approaches and discuss the integration of different technologies to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), and thereby move towards high-level ‘strong’ explanations. © 2021 Elsevier B.V.
A survey on context awareness in big data analytics for business applications
- Dinh, Loan, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Dinh, Loan , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: Knowledge and Information Systems Vol. 62, no. 9 (2020), p. 3387-3415
- Full Text:
- Reviewed:
- Description: The concept of context awareness has been in existence since the 1990s. Though initially applied exclusively in computer science, over time it has increasingly been adopted by many different application domains such as business, health and the military. Contexts change continuously for objective reasons, such as economic situations, political matters and social issues. The adoption of big data analytics by businesses is facilitating such change at an even faster rate and in more complicated ways. The potential benefits of embedding contextual information into an application are already evidenced by the improved outcomes of existing context-aware methods in those applications. Since big data is growing very rapidly, context awareness in big data analytics has become more important and timely because of its proven efficiency in big data understanding and preparation, contributing to extracting more accurate value from big data. Many surveys have been published on context-based methods such as context modelling and reasoning, workflow adaptations, computational intelligence techniques and mobile ubiquitous systems. However, to our knowledge, no survey of context-aware methods in big data analytics for business applications supported by enterprise-level software has been published to date. To bridge this research gap, in this paper we first present a definition of context, its modelling and evaluation techniques, and highlight the importance of contextual information for big data analytics. Second, works in three key business application areas that are context-aware and/or exploit big data analytics are thoroughly reviewed. Finally, the paper concludes by highlighting a number of contemporary research challenges, including issues concerning modelling, managing and applying business contexts to big data analytics. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
Intelligent energy prediction techniques for fog computing networks
- Farooq, Umar, Shabir, Muhammad, Javed, Muhammad, Imran, Muhammad
- Authors: Farooq, Umar , Shabir, Muhammad , Javed, Muhammad , Imran, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 111 (2021)
- Full Text:
- Reviewed:
- Description: Energy efficiency is a key concern for the future fog-enabled Internet of Things (IoT). Since fog nodes (FNs) are energy-constrained devices, task offloading techniques must consider the energy consumption of the FNs to maximize the performance of IoT applications. In this context, accurate energy prediction can enable the development of intelligent energy-aware task offloading techniques. In this paper, we present two energy prediction techniques: the first is based on the Recursive Least Square (RLS) filter, and the second uses an Artificial Neural Network (ANN). Both techniques use inputs such as the number of tasks and the size of the tasks to predict the energy consumption at different fog nodes. Simulation results show that both techniques have a root mean square error of less than 3%; however, the ANN-based technique shows up to 20% less root mean square error than the RLS-based technique. © 2021 Elsevier B.V.
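The RLS approach named in the abstract can be sketched as follows. This is a generic recursive least squares filter fed with the two inputs the abstract mentions (task count, task size); the synthetic data, coefficients and hyperparameters are illustrative assumptions, not the authors' model.

```python
import numpy as np

class RLSPredictor:
    """Generic Recursive Least Squares (RLS) filter: predicts a scalar
    (here, energy) from a feature vector and adapts its weights online."""

    def __init__(self, n_features, forgetting=0.99, delta=100.0):
        self.w = np.zeros(n_features)        # filter weights
        self.P = np.eye(n_features) * delta  # inverse correlation matrix
        self.lam = forgetting                # forgetting factor

    def update(self, x, y):
        """One RLS step: predict y from x, then correct the weights."""
        x = np.asarray(x, dtype=float)
        y_hat = self.w @ x                                    # prediction
        k = self.P @ x / (self.lam + x @ self.P @ x)          # gain vector
        self.w += k * (y - y_hat)                             # weight update
        self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        return y_hat

# Toy usage: assume energy grows linearly with task count and task size.
rng = np.random.default_rng(0)
rls = RLSPredictor(n_features=2)
for _ in range(500):
    x = rng.uniform(1, 10, size=2)                 # [num_tasks, task_size]
    y = 3.0 * x[0] + 0.5 * x[1] + rng.normal(0, 0.01)
    rls.update(x, y)
print(np.round(rls.w, 2))                          # recovers ~[3.0, 0.5]
```

With a forgetting factor below 1, the filter also tracks slowly drifting energy profiles, which is why RLS suits resource-constrained online prediction.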
Rhythmic and sustained oscillations in metabolism and gene expression of Cyanothece sp. ATCC 51142 under constant light
- Gaudana, Sandeep, Krishnakumar, S., Alagesan, Swathi, Digmurti, Madhuri, Viswanathan, Ganesh, Chetty, Madhu, Wangikar, Pramod
- Authors: Gaudana, Sandeep , Krishnakumar, S. , Alagesan, Swathi , Digmurti, Madhuri , Viswanathan, Ganesh , Chetty, Madhu , Wangikar, Pramod
- Date: 2013
- Type: Text , Journal article
- Relation: Frontiers in Microbiology Vol. 4, no. Article 374 (2013), p. 1-11
- Full Text:
- Reviewed:
- Description: Cyanobacteria, a group of photosynthetic prokaryotes, oscillate between daytime and night-time metabolisms, with concomitant oscillations in gene expression, in response to light/dark (LD) cycles. The oscillations in gene expression have been shown to persist in constant light (LL) with a free-running period of 24 h in the model cyanobacterium Synechococcus elongatus PCC 7942. However, equivalent oscillations in metabolism have not been reported under LL in this non-nitrogen-fixing cyanobacterium. Here we focus on Cyanothece sp. ATCC 51142, a unicellular, nitrogen-fixing cyanobacterium known to temporally separate the processes of oxygenic photosynthesis and oxygen-sensitive nitrogen fixation. In a recent report, the metabolism of Cyanothece 51142 was shown to oscillate between photosynthetic and respiratory phases under LL with free-running periods that are temperature dependent but significantly shorter than the circadian period. Further, the oscillations shift to a circadian pattern at moderate cell densities, concomitant with slower growth rates. Here we take this understanding forward and demonstrate that the ultradian rhythm under LL is sustained at much higher cell densities when cells are grown under turbulent regimes that simulate a flashing-light effect. Our results suggest that the ultradian rhythm in metabolism may be needed to support the higher carbon and nitrogen requirements of rapidly growing cells under LL. With a comprehensive real-time PCR-based gene expression analysis, we account for key regulatory interactions and demonstrate the interplay between clock genes and the genes of key metabolic pathways. Further, we observe that several genes that peak at dusk in Synechococcus peak at dawn in Cyanothece, and vice versa. The circadian rhythm of this organism appears to be more robust, with genes peaking in anticipation of the ensuing photosynthetic and respiratory metabolic phases.
Sustaining the future through virtual worlds
- Gregory, Sue, Gregory, Brent, Hillier, Mathew, Miller, Charlynn, Meredith, Grant
- Authors: Gregory, Sue , Gregory, Brent , Hillier, Mathew , Miller, Charlynn , Meredith, Grant
- Date: 2012
- Type: Text , Conference paper
- Relation: Future Challenges, Sustainable Futures p. 361-368
- Full Text:
- Reviewed:
- Description: Virtual worlds (VWs) continue to be used extensively in Australian and New Zealand higher education institutions, although the tendency towards making unrealistic claims of efficacy and popularity appears to be over. Some educators at higher education institutions continue to use VWs as they have in the past; others are exploring a range of different VWs or using them in new ways; whilst some are opting out altogether. This paper presents an overview of how 46 educators from some 26 institutions see VWs as an opportunity to sustain higher education. The positives and negatives of using VWs are discussed.
DFS based partial pathways in GA for protein structure prediction
- Hoque, Md Tamjidul, Chetty, Madhu, Lewis, Andrew, Sattar, Abdul
- Authors: Hoque, Md Tamjidul , Chetty, Madhu , Lewis, Andrew , Sattar, Abdul
- Date: 2008
- Type: Text , Conference paper
- Relation: Third IAPR International Conference, PRIB 2008
- Full Text:
- Reviewed:
- Description: Nondeterministic conformational search techniques, such as Genetic Algorithms (GAs), are promising for solving the protein structure prediction (PSP) problem. The crossover operator of a GA can underpin the formation of potential conformations by exchanging and sharing promising sub-conformations. However, because an optimal PSP conformation is typically compact, crossover can produce many invalid conformations (ones that are not self-avoiding walks). While a crossover-based converging conformation suffers from limited pathways, combining crossover with depth-first search (DFS) can partially reveal potential pathways. Compared with random-move-only conformation generation, DFS also generates random conformations increasingly quickly as the protein sequence length grows. Random conformations are frequently applied for maintaining diversity, as well as for initialization, in many GA variations.
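The DFS idea in the abstract can be illustrated with a minimal sketch: generating a random self-avoiding walk on a 2D square lattice (as in HP-style models) by depth-first search with backtracking. This is an illustration of the general technique, not the authors' exact operator.

```python
import random

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # 2D square lattice directions

def random_saw(n, rng=random):
    """Generate a random self-avoiding walk of n lattice points via DFS.
    A plain random walk often self-intersects and must restart; DFS instead
    backtracks out of dead ends, so long conformations are found far
    faster -- the speed-up the abstract refers to."""
    path = [(0, 0)]
    occupied = {(0, 0)}

    def extend():
        if len(path) == n:
            return True
        moves = MOVES[:]
        rng.shuffle(moves)
        for dx, dy in moves:                 # try neighbours in random order
            nxt = (path[-1][0] + dx, path[-1][1] + dy)
            if nxt not in occupied:          # keep the walk self-avoiding
                path.append(nxt)
                occupied.add(nxt)
                if extend():                 # descend (DFS)
                    return True
                path.pop()                   # dead end: backtrack
                occupied.remove(nxt)
        return False

    extend()
    return path

walk = random_saw(30)
print(len(walk), len(set(walk)))  # 30 30 -> no residue position repeats
```

Such walks can seed a GA population or repair invalid offspring produced by crossover.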
Clustered memetic algorithm for protein structure prediction
- Authors: Islam, M. D. , Chetty, Madhu
- Date: 2010
- Type: Text , Conference paper
- Relation: Evolutionary Computation (CEC), 2010 IEEE Congress
- Full Text:
- Reviewed:
Diagnostic with incomplete nominal/discrete data
- Jelinek, Herbert, Yatsko, Andrew, Stranieri, Andrew, Venkatraman, Sitalakshmi, Bagirov, Adil
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic/classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and is helpful in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist, or can be reformulated, for this representation, and some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match of the true data; rather, they are intended to cause the least hindrance to classification. The proposed techniques find their application particularly in medical diagnostics. Where clinical data represents a number of related conditions, taking the Cartesian product of the class values of the underlying sub-problems allows the selection of missing-value substitutes to be narrowed down. Real-world data examples, some publicly available, are enlisted for testing. The proposed and benchmark methods are compared by classifying the data before and after missing-value imputation, indicating a significant improvement.
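The idea of reusing a classifier as a completion method can be sketched minimally: fill each missing nominal value with the most frequent value observed for the same class label (a Naive-Bayes-like frequency model). The toy data below is invented for illustration and is not from the paper.

```python
from collections import Counter

MISSING = None  # sentinel for a missing nominal value

def impute_by_class_mode(rows, labels):
    """Fill missing nominal values with the per-class mode of that attribute,
    so the substitute hinders classification as little as possible."""
    n_attrs = len(rows[0])
    modes = {}
    # Mode of each attribute, computed per class over observed values only.
    for cls in set(labels):
        for a in range(n_attrs):
            vals = [r[a] for r, l in zip(rows, labels)
                    if l == cls and r[a] is not MISSING]
            if vals:
                modes[(cls, a)] = Counter(vals).most_common(1)[0][0]
    return [[modes.get((l, a), v) if v is MISSING else v
             for a, v in enumerate(r)]
            for r, l in zip(rows, labels)]

data = [["high", None, "yes"], ["high", "old", "yes"],
        ["low", "young", None], ["low", "young", "no"]]
labels = ["sick", "sick", "well", "well"]
print(impute_by_class_mode(data, labels))
```

A value imputed this way agrees with the majority of same-class records, which is exactly the "least hindrance for classification" criterion the abstract describes.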
Extraction and processing of real time strain of embedded FBG sensors using a fixed filter FBG circuit and an artificial neural network
- Kahandawa, Gayan, Epaarachchi, Jayantha, Wang, Hao, Canning, John, Lau, Alan
- Authors: Kahandawa, Gayan , Epaarachchi, Jayantha , Wang, Hao , Canning, John , Lau, Alan
- Date: 2013
- Type: Text , Journal article
- Relation: Measurement: Journal of the International Measurement Confederation Vol. 46, no. 10 (2013), p. 4045-4051
- Full Text:
- Reviewed:
- Description: Fibre Bragg Grating (FBG) sensors have been used in the development of structural health monitoring (SHM) and damage detection systems for advanced composite structures over several decades. Unfortunately, to date only a handful of appropriate configurations and algorithms suitable for use in SHM systems have been developed. This paper presents a novel configuration of FBG sensors to acquire strain readings, together with an integrated statistical approach to analyse the data in real time. The proposed configuration has proven its capability to overcome the practical constraints and engineering challenges associated with FBG-based SHM systems. A fixed filter decoding system and an integrated artificial neural network algorithm for extracting strain from embedded FBG sensors were proposed and experimentally validated. Furthermore, laboratory-level experimental data were used to verify the accuracy of the system, and the prediction error levels were found to be less than 0.3%. The developed SHM system using this technology has been submitted to the US patent office and will be available for use in aerospace applications in due course. © 2013 Elsevier Ltd. All rights reserved.
A comparison of bidding strategies for online auctions using fuzzy reasoning and negotiation decision functions
- Kaur, Preetinder, Goyal, Madhu, Lu, Jie
- Authors: Kaur, Preetinder , Goyal, Madhu , Lu, Jie
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Transactions on Fuzzy Systems Vol. 25, no. 2 (2017), p. 425-438
- Full Text:
- Reviewed:
- Description: Bidders often feel challenged when looking for the best bidding strategies to excel in the competitive environment of multiple, simultaneous online auctions for the same or similar items. Bidders face complicated decisions about which auction to participate in, whether to bid early or late, and how much to bid. In this paper, we present the design of bidding strategies that aim to forecast the bid amounts for buyers at a particular moment in time, based on their bidding behavior and their valuation of an auctioned item. The agent develops a comprehensive methodology for final price estimation, designing bidding strategies that address buyers' different bidding behaviors using two approaches: the Mamdani method with regression analysis, and negotiation decision functions. The experimental results show that agents who follow fuzzy reasoning with a regression approach outperform other existing agents in most settings in terms of their success rate and expected utility.
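The Mamdani-style fuzzy reasoning named in the abstract can be sketched with a two-rule toy system: bid a larger fraction of your valuation as the auction nears its end. The membership shapes, rule centres and variable ranges here are invented for illustration; the paper's rule base models buyer behaviour in far more detail.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_bid(time_left, valuation):
    """Tiny Mamdani-style rule base.
    time_left in [0, 1] is the fraction of the auction remaining;
    returns the suggested bid amount."""
    early = tri(time_left, 0.3, 1.0, 1.7)   # high when much time remains
    late = tri(time_left, -0.7, 0.0, 0.7)   # high near the deadline
    # Rule 1: IF late  THEN bid HIGH (centre 0.95 of valuation)
    # Rule 2: IF early THEN bid LOW  (centre 0.60 of valuation)
    num = late * 0.95 + early * 0.60        # weighted-centre defuzzification
    den = late + early
    return valuation * (num / den if den else 0.6)

print(round(mamdani_bid(0.05, 100.0), 1))   # near the deadline: 95.0
print(round(mamdani_bid(0.90, 100.0), 1))   # early in the auction: 60.0
```

Between the two extremes the output blends smoothly, which is the practical appeal of fuzzy reasoning over hard-threshold bidding rules.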
- Keogh, Kathleen, Sonenberg, Elizabeth
- Authors: Keogh, Kathleen , Sonenberg, Elizabeth
- Date: 2007
- Type: Text , Journal article
- Relation: Cognitive Systems Research Vol. 8, no. 4 (2007), p. 249-261
- Full Text:
- Reviewed:
- Description: We have analysed rich, dynamic data about the behaviour of anaesthetists during the management of a simulated critical incident in the operating theatre. We use a paper based analysis and a partial implementation to further the development of a computational cognitive model for disturbance management in anaesthesia. We suggest that our data analysis pattern may be used for the analysis of behavioural data describing cognitive and observable events in other complex dynamic domains. © 2007 Elsevier B.V. All rights reserved.
The evolution of Turing Award Collaboration Network : bibliometric-level and network-level metrics
- Kong, Xiangjie, Shi, Yajie, Wang, Wei, Ma, Kai, Wan, Liangtian, Xia, Feng
- Authors: Kong, Xiangjie , Shi, Yajie , Wang, Wei , Ma, Kai , Wan, Liangtian , Xia, Feng
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Computational Social Systems Vol. 6, no. 6 (2019), p. 1318-1328
- Full Text:
- Reviewed:
- Description: The year 2017 marked a milestone: the 50th anniversary of the Turing Award, the top award in the field of computer science. We study the long-term evolution of the Turing Award Collaboration Network, which can be considered a microcosm of the computer science field from 1974 to 2016. First, scholars tended to publish articles by themselves at the early stages, and began to focus on tight collaboration from the late 1980s. Second, compared with a random network of the same scale, the Turing Award Collaboration Network has small-world properties but is not a scale-free network. The reason may be that the number of collaborators per scholar is limited: scholars cannot connect to others freely (by preferential attachment) as in a scale-free network. Third, to measure how far a scholar is from the Turing Award, we propose a metric called the Turing Number (TN) and find that the TN decreases gradually over time. Meanwhile, we observe that scholars increasingly prefer to gather into groups to do research as computer science develops. This article presents a new way to explore the evolution of academic collaboration networks in the field of computer science by building and analyzing the Turing Award Collaboration Network over several decades. © 2014 IEEE.
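A metric like the Turing Number is, in graph terms, a shortest-path distance from the set of laureates, analogous to the Erdős number, and can be computed by breadth-first search. The tiny co-authorship graph below is invented for illustration.

```python
from collections import deque

def turing_numbers(coauthors, laureates):
    """Assign each reachable scholar a Turing Number (TN) by BFS:
    laureates get 0, their co-authors 1, and so on.
    `coauthors` maps a name to the set of that scholar's co-authors."""
    tn = {name: 0 for name in laureates}
    queue = deque(laureates)
    while queue:
        person = queue.popleft()
        for peer in coauthors.get(person, ()):
            if peer not in tn:            # first visit = shortest path in BFS
                tn[peer] = tn[person] + 1
                queue.append(peer)
    return tn

graph = {
    "laureate": {"alice"},
    "alice": {"laureate", "bob"},
    "bob": {"alice"},
}
print(turing_numbers(graph, ["laureate"]))
# {'laureate': 0, 'alice': 1, 'bob': 2}
```

Running this over each year's cumulative co-authorship graph would show how the average TN evolves, which is the kind of trend the article reports.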