A computable theory for learning Bayesian networks based on MAP-MDL principles
- Authors: Pan, Heping , McMichael, Daniel
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at Workshop on Learning Algorithms for Pattern Recognition in conjunction with the 18th Australian Joint Conference on Artificial Intelligence AI'05, Sydney : 5th - 9th December, 2005 p. 769-776
- Full Text:
- Reviewed:
- Description: E1
- Description: 2003001442
A new supervised term ranking method for text categorization
- Authors: Mammadov, Musa , Yearwood, John , Zhao, Lei
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 23rd Australasian Joint Conference on Artificial Intelligence, AI 2010 Vol. 6464 LNAI, p. 102-111
- Full Text:
- Reviewed:
- Description: In text categorization, different supervised term weighting methods have been applied to improve classification performance by weighting terms with respect to different categories, for example, Information Gain, the χ2 statistic, and Odds Ratio. The literature offers three term ranking methods for summarizing the term weights of different categories in multi-class text categorization: the Summation, Average, and Maximum methods. In this paper we present a new term ranking method for summarizing term weights, Maximum Gap. Using two weighting methods, information gain and the χ2 statistic, we set up controlled experiments for the different term ranking methods, with the Reuters-21578 text corpus as the dataset. Two popular classification algorithms, SVM and BoosTexter, are adopted to evaluate the performance of the term ranking methods. Experimental results show that the new term ranking method performs better. © 2010 Springer-Verlag.
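The four summarization schemes named in the abstract above can be sketched in a few lines. This is an illustrative reading, not the paper's exact formulation: in particular, Maximum Gap is assumed here to mean the gap between a term's two largest per-category weights, and the toy per-category χ2 weights are invented for the example.

```python
# Summarize a term's per-category weights into a single ranking score.
# "max_gap" (difference between the two largest weights) is an assumed
# reading of the paper's Maximum Gap method; the rest are standard.

def summarize(weights, method="max_gap"):
    """Collapse per-category term weights into one ranking score."""
    if method == "sum":
        return sum(weights)
    if method == "avg":
        return sum(weights) / len(weights)
    if method == "max":
        return max(weights)
    if method == "max_gap":
        top = sorted(weights, reverse=True)
        return top[0] - top[1] if len(top) > 1 else top[0]
    raise ValueError(method)

# Hypothetical chi-square weights over three categories for three terms.
# "crude" is strongly category-specific, "share" is strong but uniform,
# "the" is uninformative; Maximum Gap ranks the category-specific term first.
weights = {"crude": [8.1, 0.3, 0.2], "share": [5.0, 4.8, 4.9], "the": [0.1, 0.1, 0.1]}
ranked = sorted(weights, key=lambda t: summarize(weights[t]), reverse=True)
```

Note how "share" scores highly under Summation or Maximum but poorly under Maximum Gap, which is the distinction the method is designed to exploit.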
Applying reinforcement learning in playing Robosoccer using the AIBO
- Authors: Mukherjee, Subhasis
- Date: 2010
- Type: Text , Thesis , Masters
- Full Text:
- Description: "Robosoccer is a popular test bed for AI programs around the world in which AIBO entertainment robots take part in the middle-sized soccer event. These robots need a variety of skills to perform in a semi-real environment like this. The three key challenges are manoeuvrability, image recognition and decision-making skills. This research is focussed on the decision-making skills ... The work focuses on whether reinforcement learning, as a form of semi-supervised learning, can effectively contribute to the goalkeeper's decision making when a shot is taken."
- Description: Master of Computing (by research)
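The goalkeeper decision problem described in this thesis record can be illustrated with a minimal tabular Q-learning sketch. The state set (coarse ball positions), action set (keeper moves) and reward scheme below are assumptions for illustration only, not the thesis's actual setup.

```python
import random

# Minimal tabular Q-learning: states are coarse ball positions, actions are
# goalkeeper moves, reward is +1 for a save and -1 for a goal. Each episode
# is a single shot, so the one-step update has no bootstrapped next state.
random.seed(0)
STATES = ["left", "centre", "right"]
ACTIONS = ["dive_left", "stay", "dive_right"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
ALPHA, EPS = 0.5, 0.2  # learning rate, epsilon-greedy exploration rate

def reward(state, action):
    # The shot is saved when the keeper's move matches the ball's side.
    match = {"left": "dive_left", "centre": "stay", "right": "dive_right"}
    return 1.0 if match[state] == action else -1.0

for _ in range(2000):  # one shot per episode
    s = random.choice(STATES)
    if random.random() < EPS:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    Q[(s, a)] += ALPHA * (reward(s, a) - Q[(s, a)])

# Greedy policy after training: dive toward the ball's side.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

After training, the greedy policy picks the matching dive for each ball position, which is the behaviour the learning is meant to converge to.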
Efficient piecewise linear classifiers and applications
- Authors: Webb, Dean
- Date: 2011
- Type: Text , Thesis , PhD
- Full Text:
- Description: Supervised learning has become an essential part of data mining for industry, military, science and academia. Classification, a type of supervised learning, allows a machine to learn from data and then predict certain behaviours, variables or outcomes. Classification can be used to solve many problems, including the detection of malignant cancers and potentially bad creditors, and even enabling autonomy in robots. The ability to collect and store large amounts of data has increased significantly over the past few decades; however, the ability of classification techniques to deal with large scale data has not been matched. Many data transformation and reduction schemes have been tried with mixed success. This problem is further exacerbated when dealing with real time classification in embedded systems, where the classifier must work with only limited processing, memory and power resources. Piecewise linear boundaries are known to provide efficient real time classifiers: they have low memory requirements, require little processing effort, are parameterless and classify in real time. Piecewise linear functions are used to approximate non-linear decision boundaries between pattern classes. Finding these piecewise linear boundaries is a difficult optimization problem that can require a long training time. Multiple optimization approaches have been used for real time classification, but can lead to suboptimal piecewise linear boundaries. This thesis develops three real time piecewise linear classifiers that deal with large scale data. Each classifier uses a single optimization algorithm in conjunction with an incremental approach that reduces the number of points as the decision boundaries are built. Two of the classifiers further reduce complexity by augmenting the incremental approach with additional schemes. One scheme uses hyperboxes to identify points inside the so-called “indeterminate” regions. The other uses a polyhedral conic set to identify data points lying on or close to the boundary. All other points are excluded from the process of building the decision boundaries. The three classifiers are applied to real time data classification problems and the results of numerical experiments on real world data sets are reported. These results demonstrate that the new classifiers require a reasonable training time and their test set accuracy is consistently good on most data sets compared with current state of the art classifiers.
- Description: Doctor of Philosophy
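The two ideas in this thesis record — classification by a piecewise linear boundary, and filtering points with a hyperbox before refining that boundary — can be sketched as follows. The specific boundary pieces and box are illustrative assumptions, not the thesis's learned models.

```python
# Classify by the sign of a max of linear functions (one common piecewise
# linear boundary representation), and filter points with an axis-aligned
# hyperbox in the spirit of the thesis's point-reduction schemes.

def plc_classify(x, pieces):
    """pieces: list of (w, b); the boundary is max_i(w_i . x + b_i) = 0."""
    value = max(sum(wi * xi for wi, xi in zip(w, x)) + b for w, b in pieces)
    return 1 if value > 0 else -1

def in_hyperbox(x, lo, hi):
    """True if x lies inside the box [lo, hi]: a candidate boundary region."""
    return all(l <= xi <= h for xi, l, h in zip(x, lo, hi))

# Two linear pieces: class +1 iff x > 1 or y > 1 (an L-shaped boundary).
pieces = [((1.0, 0.0), -1.0), ((0.0, 1.0), -1.0)]
points = [(0.5, 0.5), (2.0, 0.3), (0.2, 1.5), (0.9, 1.1)]
labels = [plc_classify(p, pieces) for p in points]

# Keep only points near the corner where the two pieces meet; everything
# else is excluded from further boundary refinement.
near = [p for p in points if in_hyperbox(p, (0.5, 0.5), (1.5, 1.5))]
```

Evaluating the boundary is just one pass of dot products and a max, which is why such classifiers suit the limited processing and memory budgets the abstract describes.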
A new hybrid method combining genetic algorithm and coordinate search method
- Authors: Long, Qiang , Huang, Junjian
- Date: 2012
- Type: Text , Conference proceedings
- Full Text:
- Description: This paper proposes a new hybrid method combining the genetic algorithm (GA) and the coordinate search method (CSM). The genetic algorithm is good at global exploration but poor in accuracy and local search, whereas the coordinate search method is good at local exploitation and its accuracy is reliable when searching in a local area. We therefore combine the two methods to design a hybrid method called genetic algorithm with coordinate search (GACS). Experimental tests show that the method is good at both global search and local accuracy. © 2012 IEEE.
- Description: 2003010808
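The GACS structure described above — global exploration by a GA, then local refinement by coordinate search — can be sketched on a toy objective. The GA operators (mutation-only, truncation selection) and the test function are illustrative assumptions, not the paper's exact scheme.

```python
import random

# GA for global exploration, then coordinate search for local accuracy,
# on a shifted sphere function with its minimum at (1, -2).
random.seed(1)

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def ga(pop_size=30, gens=40):
    """Crude GA: keep the best half, perturb them to make children."""
    pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]
        children = [[p[i] + random.gauss(0, 0.3) for i in range(2)]
                    for p in parents]  # mutation-only, for brevity
        pop = parents + children
    return min(pop, key=f)

def coordinate_search(x, step=0.5, tol=1e-6):
    """Axis-by-axis pattern search; halve the step when no move improves."""
    x = list(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step /= 2.0
    return x

best = coordinate_search(ga())  # the GACS pipeline: GA result seeds CSM
```

The division of labour matches the abstract: the GA only needs to land in the right basin, and the coordinate search supplies the final accuracy.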
Structure learning of Bayesian Networks using global optimization with applications in data classification
- Authors: Taheri, Sona , Mammadov, Musa
- Date: 2014
- Type: Text , Journal article
- Relation: Optimization Letters Vol. 9, no. 5 (2014), p. 931-948
- Full Text:
- Reviewed:
- Description: Bayesian Networks are increasingly popular methods of modeling uncertainty in artificial intelligence and machine learning. A Bayesian Network consists of a directed acyclic graph in which each node represents a variable and each arc represents a probabilistic dependency between two variables. Constructing a Bayesian Network from data is a learning process that consists of two steps: learning the structure and learning the parameters. Learning a network structure from data is the most difficult task in this process. This paper presents a new algorithm for constructing an optimal structure for Bayesian Networks based on optimization. The algorithm has two major parts. First, we define an optimization model to find better network graphs. Then, we apply an optimization approach for removing possible cycles from the directed graphs obtained in the first part, which is the first of its kind in the literature. The main advantage of the proposed method is that the maximal number of parents for variables is not fixed a priori but is determined during the optimization procedure. The method also considers all networks, including cyclic ones, and then chooses the best structure by applying a global optimization method. To show the efficiency of the algorithm, several closely related algorithms, including the unrestricted dependency Bayesian Network algorithm, as well as the benchmark algorithms SVM and C4.5, are employed for comparison. We apply these algorithms to data classification; data sets are taken from the UCI machine learning repository and LIBSVM. © 2014, Springer-Verlag Berlin Heidelberg.
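The cycle-removal step in the abstract above can be illustrated with a small sketch: detect a cycle in a candidate directed graph and greedily drop its weakest arc until the graph is acyclic. The arc scores and the greedy rule are assumptions for illustration; the paper formulates this as an optimization problem rather than a greedy procedure.

```python
# Turn a candidate directed graph into a DAG by repeatedly finding a cycle
# and discarding its lowest-scoring arc. find_cycle is an iterative DFS
# that reports the arcs of the first back-edge cycle it meets.

def find_cycle(nodes, arcs):
    """Return a list of arcs forming a cycle, or None if the graph is acyclic."""
    graph = {n: [v for (u, v) in arcs if u == n] for n in nodes}
    colour = {n: "white" for n in nodes}
    parent = {}
    for start in nodes:
        if colour[start] != "white":
            continue
        stack = [(start, iter(graph[start]))]
        colour[start] = "grey"
        while stack:
            u, it = stack[-1]
            for v in it:
                if colour[v] == "white":
                    parent[v] = u
                    colour[v] = "grey"
                    stack.append((v, iter(graph[v])))
                    break
                if colour[v] == "grey":  # back edge: walk parents to rebuild cycle
                    cycle, w = [(u, v)], u
                    while w != v:
                        cycle.append((parent[w], w))
                        w = parent[w]
                    return cycle
            else:  # all children done: retire the node
                colour[u] = "black"
                stack.pop()
    return None

def remove_cycles(nodes, arcs, score):
    arcs = set(arcs)
    while (cycle := find_cycle(nodes, arcs)) is not None:
        arcs.discard(min(cycle, key=score))  # drop the weakest arc in the cycle
    return arcs

nodes = ["A", "B", "C"]
arcs = [("A", "B"), ("B", "C"), ("C", "A")]  # a 3-cycle
scores = {("A", "B"): 3, ("B", "C"): 2, ("C", "A"): 1}  # hypothetical arc scores
dag = remove_cycles(nodes, arcs, score=lambda e: scores[e])
```

Here the weakest arc ("C", "A") is removed, leaving the chain A → B → C as the acyclic structure.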
An application of high-dimensional statistics to predictive modeling of grade variability
- Authors: Hinz, Juri , Grigoryev, Igor , Novikov, Alexander
- Date: 2020
- Type: Text , Journal article
- Relation: Geosciences (Switzerland) Vol. 10, no. 4 (2020), p.
- Full Text:
- Reviewed:
- Description: The economic viability of a mining project depends on its efficient exploration, which requires a prediction of worthwhile ore in a mine deposit. In this work, we apply the so-called LASSO methodology to estimate mineral concentration within unexplored areas. Our methodology outperforms traditional techniques not only in terms of logical consistency but potentially also in cost reduction. The approach is illustrated by a full source code listing and a detailed discussion of its advantages and limitations. © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
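The LASSO estimation named above can be sketched with cyclic coordinate descent and soft-thresholding. The data below are invented (a grade-like response driven by one informative feature plus one noise feature), and the penalty value is an illustrative choice, not the paper's fitted model.

```python
# LASSO by cyclic coordinate descent: each coefficient is updated against
# the partial residual and shrunk by the soft-threshold operator, which
# zeroes out uninformative features.

def soft_threshold(z, t):
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def lasso(X, y, lam, iters=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Residual with feature j's contribution excluded.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / norm
    return beta

# y is roughly 2 * feature 0; feature 1 is noise the L1 penalty should drop.
X = [[1.0, 0.2], [2.0, -0.1], [3.0, 0.3], [4.0, -0.2], [5.0, 0.1]]
y = [2.0, 4.1, 5.9, 8.2, 9.9]
beta = lasso(X, y, lam=0.5)
```

The noise coefficient lands exactly at zero — the selection behaviour that makes LASSO attractive for picking informative predictors of grade.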
An adaptive and flexible brain energized full body exoskeleton with IoT edge for assisting the paralyzed patients
- Authors: Jacob, Sunil , Alagirisamy, Mukil , Menon, Varun , Kumar, B. Manoj , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 100721-100731
- Full Text:
- Reviewed:
- Description: The paralyzed population is increasing worldwide due to stroke, spinal cord injury, post-polio, and other related diseases. Different assistive technologies are used to improve the physical and mental health of the affected patients. Exoskeletons have emerged as one of the most promising technologies for providing movement and rehabilitation for the paralyzed, but they are limited by the constraints of weight, flexibility, and adaptability. To resolve these issues, we propose an adaptive and flexible Brain Energized Full Body Exoskeleton (BFBE) for assisting paralyzed people. This paper describes the design, control, and testing of the BFBE with 15 degrees of freedom (DoF) for assisting users in their daily activities. Flexibility is incorporated into the system through a modular design approach. The brain signals captured by Electroencephalogram (EEG) sensors are used to control the movements of the BFBE. Processing happens at the edge, reducing delay in decision making, and the system is further integrated with an IoT module that sends an alert message to multiple caregivers in case of an emergency. Potential energy harvesting is used to solve the power issues of the exoskeleton. Stability in the gait cycle is ensured by using adaptive sensory feedback. System validation was done using six natural movements on ten different paralyzed persons. The system recognizes human intentions with an accuracy of 85%. The results show that the BFBE can be an efficient method for providing assistance and rehabilitation for paralyzed patients. © 2013 IEEE. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Venki Balasubramanian” is provided in this record**
AI and IoT-Enabled smart exoskeleton system for rehabilitation of paralyzed people in connected communities
- Authors: Jacob, Sunil , Alagirisamy, Mukil , Xi, Chen , Balasubramanian, Venki , Srinivasan, Ram
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 80340-80350
- Full Text:
- Reviewed:
- Description: In recent years, spinal cord injuries, stroke and other nervous impairments have led to an increase in the number of paralyzed patients worldwide. Rehabilitation that can aid and enhance the lives of such patients is the need of the hour, and exoskeletons have become one of the popular means of rehabilitation. Existing exoskeletons use techniques that impose limitations on adaptability, instant response and continuous control; most are also expensive, bulky, and require a high level of training. To overcome these limitations, this paper introduces an Artificial Intelligence (AI) powered smart and lightweight Exoskeleton System (AI-IoT-SES) which receives data from various sensors, classifies the data intelligently and generates the desired commands via the Internet of Things (IoT) for rendering rehabilitation and support, with the help of caretakers, for paralyzed patients in smart and connected communities. In the proposed system, the signals collected from the exoskeleton sensors are processed by an AI-assisted navigation module, which helps the caretakers in guiding, communicating with and controlling the movements of the exoskeleton integrated with the patient. The navigation module uses AI- and IoT-enabled Simultaneous Localization and Mapping (SLAM). The casualties of a paralyzed person are reduced by commissioning the IoT platform to exchange data from the intelligent sensors with the remote location of the caretaker, who monitors the real-time movement and navigation of the exoskeleton. The automated exoskeleton detects and takes decisions on navigation, thereby improving the life conditions of such patients. Experimental results simulated using MATLAB show that the proposed system is an ideal method for rendering rehabilitation and support for paralyzed patients in smart communities. © 2013 IEEE.
**Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Venki Balasubramanian” is provided in this record**
Timeless principles of taxpayer protection: how they adapt to digital disruption
- Authors: Bentley, Duncan
- Date: 2019
- Type: Text , Journal article
- Relation: eJournal of Tax Research Vol. 16, no. 3 (2019), p. 679-713
- Full Text:
- Reviewed:
- Description: Digital transformation will pose growing challenges to tax revenues and systems of taxation that were designed for another century. The tax rules may hasten slowly, but the record of response to the challenges of electronic commerce, and of base erosion and profit shifting, shows that tax administration is more adaptable. This article identifies the detailed nature of technological changes in electronics and systems; big data, automation and artificial intelligence; and security, including blockchain; as those changes affect tax administration. It highlights the critical taxpayer rights issues and applies accepted taxpayer rights frameworks. The article concludes that taxpayer rights principles are both highly adaptable to a digital world, and provide useful guidance to where urgent action and further research are required. © 2019 UNSW Business School™.
Flip-OFDM for optical wireless communications
- Authors: Fernando, Nirmal , Hong, Yi , Viterbo, Emanuele
- Date: 2011
- Type: Text , Conference paper
- Relation: 2011 IEEE Information Theory Workshop, ITW 2011; Paraty; Brazil; 16th October- 20th October 2011; p. 5-9
- Full Text:
- Reviewed:
- Description: We consider two unipolar OFDM techniques for optical wireless communications: asymmetrically clipped optical OFDM (ACO-OFDM) and Flip-OFDM. Both techniques can be used to compensate for multipath distortion effects in optical wireless channels. However, while ACO-OFDM has been widely studied in the literature, the performance of Flip-OFDM has never been investigated. In this paper, we conduct a performance analysis of Flip-OFDM and propose an additional modification to the original scheme in order to compare the performance of both techniques. Finally, it is shown by simulation that both techniques have the same performance but different hardware complexities. In particular, for slow fading channels, Flip-OFDM offers a 50% saving in hardware complexity over ACO-OFDM at the receiver. © 2011 IEEE.
- Description: 2011 IEEE Information Theory Workshop, ITW 2011
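The core Flip-OFDM operation discussed in this record can be shown numerically: a real bipolar OFDM frame is split into its positive part and the flipped (negated) negative part, sent as two nonnegative subframes, and the receiver subtracts the second from the first to recover the bipolar frame. The FFT/IFFT stages are omitted and the frame below is an arbitrary real-valued signal used only for illustration.

```python
# Flip-OFDM framing: two unipolar subframes per bipolar frame, so an
# intensity-modulated optical source (which cannot emit "negative" light)
# can carry the signal.

def flip_ofdm_tx(frame):
    pos = [x if x > 0 else 0.0 for x in frame]   # subframe 1: positive part
    neg = [-x if x < 0 else 0.0 for x in frame]  # subframe 2: flipped negative part
    return pos, neg  # both nonnegative, hence transmittable

def flip_ofdm_rx(pos, neg):
    # Bipolar reconstruction: subtract the flipped subframe.
    return [p - n for p, n in zip(pos, neg)]

frame = [0.7, -1.2, 0.0, 2.5, -0.4]
pos, neg = flip_ofdm_tx(frame)
recovered = flip_ofdm_rx(pos, neg)
```

Each bipolar sample occupies exactly one of the two subframes, which is where the scheme's rate/complexity trade-off against ACO-OFDM comes from.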
The spectrum of big data analytics
- Authors: Sun, Zhaohao , Huo, Yanxia
- Date: 2021
- Type: Text , Journal article
- Relation: Journal of Computer Information Systems Vol. 61, no. 2 (2021), p. 154-162
- Full Text:
- Reviewed:
- Description: Big data analytics is playing a pivotal role in big data, artificial intelligence, management, governance, and society, driven by the dramatic development of big data, analytics, and artificial intelligence. However, what the spectrum of big data analytics is and how to develop it remain fundamental issues in the academic community. This article addresses these issues by presenting a big-data-derived small data approach. It then uses the proposed approach to analyze the top 150 profiles of Google Scholar that include big data analytics as a research field and proposes a spectrum of big data analytics. The spectrum mainly includes data mining, machine learning, data science and systems, artificial intelligence, distributed computing and systems, and cloud computing, taking into account degree of importance. The proposed approach and findings will be useful to other researchers and practitioners of big data analytics, machine learning, artificial intelligence, and data science. © 2019 International Association for Computer Information Systems.
Development of pedotransfer functions by machine learning for prediction of soil electrical conductivity and organic carbon content
- Benke, Kurt, Norng, Sorn, Robinson, Nathan, Chia, K., Rees, David, Hopley, J.
- Authors: Benke, Kurt , Norng, Sorn , Robinson, Nathan , Chia, K. , Rees, David , Hopley, J.
- Date: 2020
- Type: Text , Journal article
- Relation: Geoderma Vol. 366 (2020)
- Full Text:
- Reviewed:
- Description: The pedotransfer function is a mathematical model used to convert direct soil measurements into known and unknown soil properties. It provides information for modelling and simulation in soil research, hydrology, environmental science and climate change impacts, including investigating the carbon cycle and the exchange of carbon between soils and the atmosphere to support carbon farming. In particular, the pedotransfer function can provide input parameters for landscape design, soil quality assessment and economic optimisation. The objective of the study was to investigate the feasibility of using a generalised pedotransfer function derived with a machine learning method to predict soil electrical conductivity (EC) and soil organic carbon content (OC) for different regional locations in the state of Victoria, Australia. This strategy supports a unified approach to the interpolation and population of a single regional soils database, in contrast to a range of pedotransfer functions derived from local databases with measurement sets that may have limited transferability. The pedotransfer function generation was based on a machine learning algorithm incorporating the Generalized Linear Mixed Model with interactions and nested terms, with Residual Maximum Likelihood estimation, and a predictor-frequency ranking system with step-wise reduction of predictors to evaluate the predictive errors in reduced models. The source of the data was the Victorian Soil Information System (VSIS), which is a database administered for soil information and mapping purposes. The database contains soil measurements and information from locations across Victoria and is a repository of historical data, including monitoring studies. In total, data from 93 projects were available for inputs to modelling and analysis, with 5158 samples used to derive predictors for EC and 1954 samples used to derive predictors for OC. 
Over 500 models were tested by systematically reducing the number of predictors from the full model. Five-fold cross-validation was used to estimate the model mean-squared prediction error (MSPE) and mean absolute percentage error (MAPE). The results were statistically significant, with only a gradual reduction in error across the top-ranked 50 models. The prediction errors (MSPE and MAPE) of the top-ranked model were 0.686 and 0.635 for EC, and 0.413 and 0.474 for OC, respectively. The four most frequently occurring predictors for both EC and OC across the full set of models were soil depth, pH, particle size distribution and geomorphological mapping unit. The possible advantages and disadvantages of this approach are discussed with respect to other machine learning approaches. © 2020 Elsevier B.V.
Algorithm development for the non-destructive testing of structural damage
- Noori Hoshyar, Azadeh, Rashidi, Maria, Liyanapathirana, Ranjith, Samali, Bijan
- Authors: Noori Hoshyar, Azadeh , Rashidi, Maria , Liyanapathirana, Ranjith , Samali, Bijan
- Date: 2019
- Type: Text , Journal article
- Relation: Applied sciences Vol. 9, no. 14 (2019), p. 2810
- Full Text:
- Reviewed:
- Description: Monitoring structures to identify the types of damage that occur under loading is essential in practical applications of civil infrastructure. In this paper, we detect and visualize damage based on several non-destructive testing (NDT) methods. A machine learning (ML) approach based on the Support Vector Machine (SVM) method is developed to prevent misinterpretation of the events occurring in the material. The objective is to identify cracks in their early stages, to reduce the risk of structural failure. Theoretical and experimental analyses are derived by computing performance indicators on smart aggregate (SA)-based sensor data for concrete and reinforced-concrete (RC) beams. The validity of the proposed indices was assessed through a comparative analysis with traditional SVM. The developed ML algorithms are shown to recognize cracks with higher accuracy than the traditional SVM. Additionally, we propose algorithms for microwave or millimeter-wave imaging of steel plates, composite materials, and metal plates, to identify and visualize cracks. The proposed algorithm for steel plates is based on the gradient magnitude in four directions of an image, followed by an edge detection technique. Three algorithms were proposed for each of the composite materials and metal plates, based on 2D fast Fourier transform (FFT) and hybrid fuzzy c-means techniques, respectively. The proposed algorithms were able to recognize and visualize the cracking incurred in the structure more efficiently than the traditional techniques. The reported results are expected to be beneficial for NDT-based applications, particularly in civil engineering.
Precision medicine : an optimal approach to patient care in renal cell carcinoma
- Sharma, Revati, Kannourakis, George, Prithviraj, Prashanth, Ahmed, Nuzhat
- Authors: Sharma, Revati , Kannourakis, George , Prithviraj, Prashanth , Ahmed, Nuzhat
- Date: 2022
- Type: Text , Journal article , Review
- Relation: Frontiers in Medicine Vol. 9 (2022)
- Full Text:
- Reviewed:
- Description: Renal cell cancer (RCC) is a heterogeneous tumor that shows both intra- and inter-tumor heterogeneity. Heterogeneity is displayed not only between patients but also among RCC cells within the same tumor, which makes treatment difficult because heterogeneous RCC tumor cells respond to varying degrees even to targeted treatment. In that context, precision medicine (PM), in terms of individualized treatment catered to a specific patient or group of patients, can shift the treatment paradigm in the clinical management of RCC. Recent progress on the biochemical, molecular, and histological characteristics of RCC has thrown light on many deregulated pathways involved in the pathogenesis of RCC. As PM-based therapies are rapidly evolving and a few are already in current clinical practice in oncology, one can expect that PM will expand toward the robust treatment of patients with RCC. This article provides a comprehensive background on recent strategies and breakthroughs of PM in oncology and an overview of the potential applicability of PM in RCC. The article also highlights the drawbacks of PM and provides a holistic approach that goes beyond the involvement of clinicians and encompasses appropriate legislative and administrative care imparted by the healthcare system and insurance providers. It is anticipated that combined efforts from all sectors involved will make PM accessible to RCC and other patients with cancer, making a tremendous positive leap in individualized treatment strategies. This will subsequently enhance the quality of life of patients. Copyright © 2022 Sharma, Kannourakis, Prithviraj and Ahmed.
Prediction of blast-induced ground vibration at a limestone quarry : an artificial intelligence approach
- Arthur, Clement, Bhatawdekar, Ramesh, Mohamad, Edy, Sabri, Mohanad, Bohra, Manish, Khandelwal, Manoj, Kwon, Sangki
- Authors: Arthur, Clement , Bhatawdekar, Ramesh , Mohamad, Edy , Sabri, Mohanad , Bohra, Manish , Khandelwal, Manoj , Kwon, Sangki
- Date: 2022
- Type: Text , Journal article
- Relation: Applied Sciences (Switzerland) Vol. 12, no. 18 (2022)
- Full Text:
- Reviewed:
- Description: Ground vibration is one of the most unfavourable environmental effects of blasting activities and can cause serious damage to neighboring homes and structures. As a result, effective forecasting of its severity is critical to controlling and reducing its recurrence. Several conventional vibration predictor equations have been proposed by different researchers, but most are based on only two parameters, i.e., the explosive charge used per delay and the distance from the blast face to the monitoring point. It is well known that blasting results are influenced by a number of blast design parameters, such as burden, spacing, and powder factor, but these are not considered in any of the available conventional predictors, which consequently show high error in predicting blast vibrations. Nowadays, artificial intelligence is widely used in blast engineering. Thus, three artificial intelligence approaches, namely Gaussian process regression (GPR), extreme learning machine (ELM) and backpropagation neural network (BPNN), were used in this study to estimate ground vibration caused by blasting at the Shree Cement Ras Limestone Mine in India. To achieve that aim, 101 blasting datasets with powder factor, average depth, distance, spacing, burden, charge weight, and stemming length as input parameters were collected from the mine site. For comparison purposes, a simple multivariate regression analysis (MVRA) model, as well as a nonparametric regression-based technique known as multivariate adaptive regression splines (MARS), was also constructed using the same datasets. This study serves as a foundational comparison of GPR, BPNN, ELM, MARS and MVRA to ascertain their respective predictive performances.
Eighty-one (81) datasets, representing 80% of the total blasting datasets, were used to construct and train the various predictive models, while 20 data samples (20%) were utilized to evaluate their predictive capabilities. Using the testing datasets, major performance indicators, namely mean squared error (MSE), variance accounted for (VAF), correlation coefficient (R) and coefficient of determination (R2), were compared as statistical evaluators of model performance. This study revealed that the GPR model exhibited superior predictive capability in comparison to the MARS, BPNN, ELM and MVRA models. The GPR model showed the highest VAF, R and R2 values of 99.1728%, 0.9985 and 0.9971, respectively, and the lowest MSE of 0.0903. As a result, the blast engineer can employ GPR as an effective and appropriate method for forecasting blast-induced ground vibration. © 2022 by the authors.
COVID-19 datasets : a brief overview
- Sun, Ke, Li, Wuyang, Saikrishna, Vidya, Chadhar, Mehmood, Xia, Feng
- Authors: Sun, Ke , Li, Wuyang , Saikrishna, Vidya , Chadhar, Mehmood , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: Computer Science and Information Systems Vol. 19, no. 3 (2022), p. 1115-1132
- Full Text:
- Reviewed:
- Description: The outbreak of the COVID-19 pandemic has affected lives and socio-economic development around the world. Its impact has motivated researchers from different domains to find effective solutions to diagnose, prevent, and forecast the pandemic and relieve its adverse effects. Numerous COVID-19 datasets have been built from these studies and are available to the public. These datasets can be used for disease diagnosis and case prediction, speeding up the solution of problems caused by the pandemic. To help researchers understand the various COVID-19 datasets, we examine and provide an overview of them. We organise the majority of these datasets into three categories based on their applications, i.e., time-series, knowledge-base, and media-based datasets. Organising COVID-19 datasets into appropriate categories can help researchers focus on methodology rather than the datasets. In addition, applications and COVID-19 datasets suffer from a series of problems, such as privacy and quality. We discuss these issues as well as the potential of COVID-19 datasets. © 2022, ComSIS Consortium. All rights reserved.
Efficient future waste management : a learning-based approach with deep neural networks for smart system (LADS)
- Chauhan, Ritu, Shighra, Sahil, Madkhali, Hatim, Nguyen, Linh, Prasad, Mukesh
- Authors: Chauhan, Ritu , Shighra, Sahil , Madkhali, Hatim , Nguyen, Linh , Prasad, Mukesh
- Date: 2023
- Type: Text , Journal article
- Relation: Applied Sciences (Switzerland) Vol. 13, no. 7 (2023)
- Full Text:
- Reviewed:
- Description: Waste segregation, management, transportation, and disposal must be carefully managed to reduce risks to patients, the public, and the environment's health and safety. The previous method of monitoring trash in strategically placed garbage bins is time-consuming and inefficient, wastes human effort and money, and is incompatible with smart city needs. The goal is therefore to reduce individual decision-making and increase the productivity of the waste categorization process. Using a convolutional neural network (CNN), the study sought to create an image classifier that recognizes items and classifies trash material. This paper provides an overview of trash monitoring methods, garbage disposal strategies, and the technology used in establishing a waste management system. Finally, an efficient system and waste disposal approach are provided that may be employed in the future to improve performance and cost effectiveness. One of the most significant barriers to efficient waste management can now be overcome with the aid of a deep learning technique. The proposed method outperformed the alternative AlexNet, VGG16, and ResNet34 methods. © 2023 by the authors.
A feature agnostic approach for glaucoma detection in OCT volumes
- Maetschke, Stefan, Antony, Bhavna, Ishikawa, Hiroshi, Wollstein, Gadi, Schuman, Joel, Garnavi, Rahil
- Authors: Maetschke, Stefan , Antony, Bhavna , Ishikawa, Hiroshi , Wollstein, Gadi , Schuman, Joel , Garnavi, Rahil
- Date: 2019
- Type: Text , Journal article
- Relation: PLoS One Vol. 14, no. 7 (2019), p. e0219126
- Full Text:
- Reviewed:
- Description: Optical coherence tomography (OCT)-based measurements of retinal layer thickness, such as the retinal nerve fibre layer (RNFL) and the ganglion cell with inner plexiform layer (GCIPL), are commonly employed for the diagnosis and monitoring of glaucoma. Previously, machine learning techniques have relied on segmentation-based imaging features such as the peripapillary RNFL thickness and the cup-to-disc ratio. Here, we propose a deep learning technique that classifies eyes as healthy or glaucomatous directly from raw, unsegmented OCT volumes of the optic nerve head (ONH) using a 3D convolutional neural network (CNN). We compared the accuracy of this technique with various feature-based machine learning algorithms and demonstrated the superiority of the proposed deep learning based method. Logistic regression was found to be the best-performing classical machine learning technique, with an AUC of 0.89. In direct comparison, the deep learning approach achieved a substantially higher AUC of 0.94, with the additional advantage of providing insight into which regions of an OCT volume are important for glaucoma detection. Computing Class Activation Maps (CAM), we found that the CNN identified the neuroretinal rim and optic disc cupping, as well as the lamina cribrosa (LC) and its surrounding areas, as the regions significantly associated with the glaucoma classification. These regions anatomically correspond to the well-established and commonly used clinical markers for glaucoma diagnosis, such as increased cup volume, cup diameter, and neuroretinal rim thinning at the superior and inferior segments.
Malignant and non-malignant oral lesions classification and diagnosis with deep neural networks
- Liyanage, Viduni, Tao, Mengqiu, Park, Joon, Wang, Kate, Azimi, Somayyeh
- Authors: Liyanage, Viduni , Tao, Mengqiu , Park, Joon , Wang, Kate , Azimi, Somayyeh
- Date: 2023
- Type: Text , Journal article
- Relation: Journal of Dentistry Vol. 137 (2023)
- Full Text:
- Reviewed:
- Description: Objectives: Given the increasing incidence of oral cancer, it is essential to provide high-risk communities, especially in remote regions, with an affordable, user-friendly tool for visual lesion diagnosis. This proof-of-concept study explored the utility and feasibility of a smartphone application that can photograph and diagnose oral lesions. Methods: The images of oral lesions with confirmed diagnoses were sourced from oral and maxillofacial textbooks. In total, 342 images were extracted, encompassing lesions from various regions of the oral cavity such as the gingiva, palate, and labial mucosa. The lesions were segregated into three categories: Class 1 represented non-neoplastic lesions, Class 2 included benign neoplasms, and Class 3 contained premalignant/malignant lesions. The images were analysed using MobileNetV3 and EfficientNetV2 models, with the process producing an accuracy curve, confusion matrix, and receiver operating characteristic (ROC) curve. Results: The EfficientNetV2 model showed a steep increase in validation accuracy early in the iterations, plateauing at a score of 0.71. According to the confusion matrix, this model's testing accuracy for diagnosing non-neoplastic and premalignant/malignant lesions was 64% and 80% respectively. Conversely, the MobileNetV3 model exhibited a more gradual increase, reaching a plateau at a validation accuracy of 0.70. The MobileNetV3 model's testing accuracy for diagnosing non-neoplastic and premalignant/malignant lesions, according to the confusion matrix, was 64% and 82% respectively. Conclusions: Our proof-of-concept study effectively demonstrated the potential accuracy of AI software in distinguishing malignant lesions. This could play a vital role in remote screenings for populations with limited access to dental practitioners. 
However, the discrepancies between the classification of images and the results for "non-malignant lesions" call for further refinement of the models and the classification system used. Clinical significance: The findings of this study indicate that AI software has the potential to aid in the identification or screening of malignant oral lesions. Further improvements are required to enhance accuracy in classifying non-malignant lesions. © 2023 The Author(s)
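The per-class testing accuracies quoted in the abstract (e.g. 64% for non-neoplastic and 80% for premalignant/malignant lesions under EfficientNetV2) are the kind of figure read off a row-normalised confusion matrix. A minimal sketch of that computation, using an entirely hypothetical 3-class matrix (the paper's actual counts are not reproduced here):

```python
# Confusion matrix: rows = true class, columns = predicted class.
# Class order: [non-neoplastic, benign neoplasm, premalignant/malignant].
# The counts below are invented for illustration only.
confusion = [
    [16, 5, 4],   # 25 true non-neoplastic images
    [3, 7, 2],    # 12 true benign neoplasms
    [2, 2, 16],   # 20 true premalignant/malignant images
]

def per_class_accuracy(matrix):
    """Per-class recall: the diagonal count divided by its row total."""
    return [row[i] / sum(row) for i, row in enumerate(matrix)]

accuracies = per_class_accuracy(confusion)
print([round(a, 2) for a in accuracies])
```

With these made-up counts the first and third classes come out at 0.64 and 0.80, matching the shape of the results reported above; the real matrices in the study would of course differ.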