Adaptation of a real-time deep learning approach with an analog fault detection technique for reliability forecasting of capacitor banks used in mobile vehicles
- Rezaei, Mohammad, Fathollahi, Arman, Rezaei, Sajad, Hu, Jiefeng, Gheisarnejad, Meysam, Teimouri, Ali, Rituraj, Rituraj, Mosavi, Amir, Khooban, Mohammad-Hassan
- Authors: Rezaei, Mohammad , Fathollahi, Arman , Rezaei, Sajad , Hu, Jiefeng , Gheisarnejad, Meysam , Teimouri, Ali , Rituraj, Rituraj , Mosavi, Amir , Khooban, Mohammad-Hassan
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10 (2022), p. 132271-132287
- Full Text:
- Reviewed:
- Description: The DC-link capacitor is an essential power-electronics element that sources or sinks the respective currents. The reliability of DC-link capacitor banks (CBs) faces many challenges arising from their use in electric vehicles: heavy shocks may damage the internal capacitors without shutting down the CB. The fundamental obstacles to CB development are the neglect of capacitor degradation in reliability assessment, the impact of unforeseen sudden internal capacitor faults on forecasts of CB lifetime, and the consequences of those faults for CB degradation. Sudden faults change the CB capacitance, which in turn changes its reliability. To estimate reliability more accurately, the fault type must be detected so that the correct post-fault capacitance can be predicted. To address these practical problems, a new CB model and a reliability-assessment formula covering all fault types are first presented; a new analog fault-detection method is then introduced; and finally an online-learning long short-term memory (LSTM) network is combined with the fault-detection method, adapting the LSTM to sudden internal CB faults so that CB degradation is predicted correctly. To confirm correct LSTM operation, the degradation of four capacitors is recorded experimentally over 2000 hours, and the offline fault-free degradation values predicted by the LSTM are compared with the actual data. The experimental findings validate the applicability of the proposed method. The codes and data are provided. © 2013 IEEE.
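As a hedged illustration of the online-learning element described above (the paper ships its own code and data; everything here, including tensor shapes, the window length, and the fault-detector interface, is assumed): an LSTM is updated sample by sample on the measured capacitance, and a fault flag from the analog detector resets the input window to the post-fault capacitance so subsequent forecasts follow the new degradation trajectory.

```python
# Hypothetical sketch: online LSTM forecasting of capacitor-bank (CB)
# capacitance degradation, with a hook for the analog fault detector.
import torch
import torch.nn as nn

class DegradationLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # next-step capacitance estimate

model = DegradationLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
window = []                            # sliding window of recent samples

def online_step(c_measured, fault_capacitance=None):
    """One online-learning update; fault_capacitance is the post-fault
    value supplied by the analog fault detector (assumed interface)."""
    global window
    if fault_capacitance is not None:  # sudden fault: restart from post-fault value
        window = [fault_capacitance]
        return None
    window.append(c_measured)
    if len(window) <= 8:               # wait until one full window is available
        return None
    x = torch.tensor(window[-9:-1], dtype=torch.float32).view(1, 8, 1)
    y = torch.tensor([[window[-1]]], dtype=torch.float32)
    pred = model(x)
    loss = loss_fn(pred, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return pred.item()                 # degradation forecast for this step
```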
A critical review of intrusion detection systems in the internet of things : techniques, deployment strategy, validation strategy, attacks, public datasets and challenges
- Khraisat, Ansam, Alazab, Ammar
- Authors: Khraisat, Ansam , Alazab, Ammar
- Date: 2021
- Type: Text , Journal article
- Relation: Cybersecurity Vol. 4, no. 1 (2021)
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) has been rapidly evolving to make a greater impact on everything from everyday life to large industrial systems. Unfortunately, this has attracted the attention of cybercriminals, who have made IoT a target of malicious activities and opened the door to attacks on end nodes. To this end, numerous IoT intrusion detection systems (IDS) have been proposed in the literature to tackle attacks on the IoT ecosystem; these can be broadly classified by detection technique, validation strategy, and deployment strategy. This survey presents a comprehensive review of contemporary IoT IDS and an overview of the techniques, deployment strategies, validation strategies, and datasets commonly applied to build IDS. We also review how existing IoT IDS detect intrusive attacks and secure communications on the IoT. The paper further presents a classification of IoT attacks and discusses future research challenges in countering such attacks to make IoT more secure. These contributions help IoT security researchers by uniting, contrasting, and compiling scattered research efforts. Consequently, we provide a unique IoT IDS taxonomy, which sheds light on IoT IDS techniques, their advantages and disadvantages, IoT attacks that exploit IoT communication systems, and the corresponding advanced IDS and detection capabilities for detecting IoT attacks. © 2021, The Author(s).
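To make the taxonomy's axes concrete, here is a minimal, hypothetical sketch of one detection technique the survey classifies, an anomaly-based IDS: a model of benign behaviour is fitted, and deviations from it are flagged. The flow features below are invented stand-ins, not drawn from the paper.

```python
# Illustrative anomaly-based IDS (not from the survey): fit on benign
# IoT flow statistics, flag anything the model considers an outlier.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic benign traffic: [bytes/s, error rate, connections/s]
benign = rng.normal(loc=[500, 0.2, 10], scale=[50, 0.05, 2], size=(1000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(benign)

flow = np.array([[5000, 0.9, 120]])    # a suspiciously heavy flow
print("alert" if model.predict(flow)[0] == -1 else "benign")
```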
Rapid health data repository allocation using predictive machine learning
- Uddin, Ashraf, Stranieri, Andrew, Gondal, Iqbal, Balasubramanian, Venki
- Authors: Uddin, Ashraf , Stranieri, Andrew , Gondal, Iqbal , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: Health Informatics Journal Vol. 26, no. 4 (2020), p. 3009-3036
- Full Text:
- Reviewed:
- Description: Health-related data is stored in a number of repositories that are managed and controlled by different entities. For instance, Electronic Health Records are usually administered by governments, Electronic Medical Records are typically controlled by health care providers, and Personal Health Records are managed directly by patients. Recently, Blockchain-based health record systems, largely regulated by technology, have emerged as another type of repository. Repositories for storing health data differ from one another in cost, level of security, and quality of performance. Not only have the types of repository increased in recent years; the quantum of health data to be stored has grown as well. For instance, the advent of wearable sensors that capture physiological signs has resulted in exponential growth in digital health data. The increase in repository types and data volume has driven a need for intelligent processes that select appropriate repositories as data is collected. However, the storage allocation decision is complex and nuanced, and the challenges are exacerbated when health data are continuously streamed, as is the case with wearable sensors. Although patients are not always solely responsible for determining which repository should be used, they typically have some input into this decision, and they can be expected to have idiosyncratic preferences regarding storage depending on their unique contexts. In this paper, we propose a predictive model for the storage of health data that can meet patient needs and make storage decisions rapidly, in real time, even with data streaming from wearable sensors. The model is built with a machine learning classifier that learns the mapping between characteristics of health data and features of storage repositories from a training set generated synthetically from correlations evident in small samples of experts. Results from the evaluation demonstrate the viability of the machine learning technique used. © The Author(s) 2020.
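A minimal sketch of the central idea, assuming invented feature names and an invented synthetic-label rule (the paper derives its training set from expert-elicited correlations): a classifier learns the mapping from data characteristics to a repository and can then allocate incoming records in real time.

```python
# Hypothetical sketch: classify a health record's characteristics into
# one of four repository types. Features and the label rule are
# illustrative stand-ins, not the paper's actual scheme.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

REPOS = ["EHR", "EMR", "PHR", "Blockchain"]
rng = np.random.default_rng(1)

# Synthetic training set: [sensitivity, access_frequency, stream_rate]
X = rng.uniform(0, 1, size=(2000, 3))
y = np.select(
    [X[:, 0] > 0.8, X[:, 2] > 0.7, X[:, 1] > 0.5],
    [3, 2, 1],                        # blockchain / PHR / EMR
    default=0,                        # EHR otherwise
)
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

record = [[0.9, 0.1, 0.05]]           # highly sensitive, rarely accessed
print(REPOS[clf.predict(record)[0]])  # allocation decision
```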
Comparative analysis of machine and deep learning models for soil properties prediction from hyperspectral visual band
- Datta, Dristi, Paul, Manoranjan, Murshed, Manzur, Teng, Shyh, Schmidtke, Leigh
- Authors: Datta, Dristi , Paul, Manoranjan , Murshed, Manzur , Teng, Shyh , Schmidtke, Leigh
- Date: 2023
- Type: Text , Journal article
- Relation: Environments Vol. 10, no. 5 (2023), p. 77
- Full Text:
- Reviewed:
- Description: Estimating various properties of soil, including moisture, carbon, and nitrogen, is crucial for studying their correlation with plant health and food production. However, conventional methods such as oven-drying and chemical analysis are laborious, expensive, and feasible only for a limited land area. With the advent of remote sensing technologies like multi/hyperspectral imaging, it is now possible to predict soil properties non-invasively and cost-effectively over a large expanse of bare land. Recent research shows that these soil contents can be predicted from a wide range of hyperspectral data using suitable prediction algorithms; however, such hyperspectral sensors are expensive and not widely available. This paper therefore investigates different machine and deep learning techniques for predicting soil nutrient properties using only red (R), green (G), and blue (B) band data, with the aim of proposing a machine/deep learning model that can serve as a rapid soil test. Another objective is to observe and compare prediction accuracy in three cases: (i) hyperspectral bands, (ii) the full spectrum of the visual band, and (iii) the three RGB channels, and to offer users guidance on which spectral information to use when predicting these soil properties. The outcome of this research supports the development of an easy-to-use mobile application for quick soil testing. The research also explores learning-based algorithms with significant feature combinations and compares their performance in predicting soil properties from visual-band data, including the impact of dimensionality reduction (i.e., principal component analysis) and feature transformations (i.e., empirical mode decomposition). The results show that the proposed model can comparably predict soil contents from the three-channel RGB data.
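As a hedged sketch of case (iii), assuming synthetic data and a toy ground-truth relation: per-sample mean RGB features pass through PCA into a regressor, mirroring in spirit the dimensionality-reduction-plus-learner pipelines the paper compares.

```python
# Illustrative pipeline: PCA over mean RGB features, then a random
# forest regressor, scored by cross-validated R^2. Data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.uniform(0, 255, size=(300, 3))                       # mean R, G, B
y = 0.1 * X[:, 0] - 0.05 * X[:, 2] + rng.normal(0, 2, 300)   # toy "moisture"

model = make_pipeline(PCA(n_components=2),
                      RandomForestRegressor(n_estimators=200, random_state=2))
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```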
A smart healthcare framework for detection and monitoring of COVID-19 using IoT and cloud computing
- Nasser, Nidal, Emad-ul-Haq, Qazi, Imran, Muhammad, Ali, Asmaa, Razzak, Imran, Al-Helali, Abdulaziz
- Authors: Nasser, Nidal , Emad-ul-Haq, Qazi , Imran, Muhammad , Ali, Asmaa , Razzak, Imran , Al-Helali, Abdulaziz
- Date: 2023
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 35, no. 19 (2023), p. 13775-13789
- Full Text:
- Reviewed:
- Description: Coronavirus (COVID-19) is a highly contagious infection that has drawn the world’s attention. Modeling such diseases can be extremely valuable in predicting their effects. Although classic statistical modeling may provide adequate models, it may fail to capture the data’s intricacy. An automatic COVID-19 detection system based on computed tomography (CT) scan or X-ray images is effective, but a robust system design is challenging. In this study, we propose an intelligent healthcare system that integrates IoT-cloud technologies. This architecture uses smart connectivity sensors and deep learning (DL) for intelligent decision-making from the perspective of the smart city. The intelligent system tracks the status of patients in real time and delivers reliable, timely, and high-quality healthcare facilities at a low cost. COVID-19 detection experiments are performed using DL to test the viability of the proposed system. We use a sensor for recording, transferring, and tracking healthcare data. CT scan images from patients are sent to the cloud by IoT sensors, where the cognitive module is stored. The system determines the patient’s status by examining the CT scan images, and the DL cognitive module makes the real-time decision on the possible course of action. When information is conveyed to the cognitive module, we use a state-of-the-art DL classification algorithm, ResNet50, to detect and classify whether patients are normal or infected by COVID-19. We validate the proposed system’s robustness and effectiveness using two benchmark publicly available datasets (Covid-Chestxray dataset and Chex-Pert dataset). First, a dataset of 6000 images is prepared from these two datasets. The proposed system was trained on 80% of the images and tested on the remaining 20%, with performance evaluated by tenfold cross-validation. The results indicate that the proposed system achieves an accuracy of 98.6%, a sensitivity of 97.3%, a specificity of 98.2%, and an F1-score of 97.87%, outperforming existing state-of-the-art systems. The proposed system will be helpful in medical diagnosis research and healthcare systems, and will support medical experts in COVID-19 screening by providing a valuable second opinion. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
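A sketch of the classification stage only, under assumed hyperparameters and stand-in tensors (the paper's training setup may differ): a pretrained ResNet50 backbone is frozen and its final layer replaced for the binary COVID-19/normal decision.

```python
# Hedged sketch: ResNet50 transfer learning for a two-class decision.
# The random tensors stand in for preprocessed CT-scan batches.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():           # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # COVID-19 vs. normal head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)        # stand-in for a batch of CT slices
y = torch.randint(0, 2, (8,))          # stand-in labels
loss = loss_fn(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```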
- Rashid, Md Mamunur, Kamruzzaman, Joarder, Mehedi Hassan, Mohammad, Imam, Tasadduq, Wibowo, Santoso, Gordon, Steven, Fortino, Giancarlo
- Authors: Rashid, Md Mamunur , Kamruzzaman, Joarder , Mehedi Hassan, Mohammad , Imam, Tasadduq , Wibowo, Santoso , Gordon, Steven , Fortino, Giancarlo
- Date: 2022
- Type: Text , Journal article
- Relation: Computers and Security Vol. 120 (2022)
- Full Text: false
- Reviewed:
- Description: Intrusion Detection Systems (IDS) based on deep learning models can identify and mitigate cyberattacks in IoT applications in a resilient and systematic manner. The models that support an IDS's decisions can, however, be vulnerable to a class of cyberattack known as adversarial attacks, in which attackers introduce small perturbations into attack samples to trick a trained model into misclassifying them as benign. Such attacks can cause substantial damage to IoT-based smart city deployments in terms of device malfunction, data leakage, operational outage and financial loss. To our knowledge, the impact of, and defence against, adversarial attacks on IDS models for smart city applications had not previously been investigated. To address this research gap, we explore the effect of adversarial attacks on deep learning and shallow machine learning models using a recent IoT dataset, and propose an adversarial-retraining method that significantly improves IDS performance under adversarial attack. Simulation results demonstrate that the presence of adversarial samples degrades detection accuracy by more than 70%, while our proposed model delivers detection accuracy above 99% against all types of attacks, including adversarial ones. This makes an IDS robust in protecting IoT-based smart city services. © 2022 Elsevier Ltd
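A hedged sketch of the general recipe, with FGSM chosen here purely for illustration (the paper's exact attack and defence settings may differ): adversarial variants of attack samples are crafted against the classifier, then folded back into training with their true labels.

```python
# Illustrative adversarial retraining: FGSM perturbations against a toy
# IDS classifier, then training on the original + adversarial samples.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps=0.1):
    """Fast Gradient Sign Method perturbation of input samples."""
    x = x.clone().requires_grad_(True)
    loss_fn(net(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

x = torch.randn(64, 10)                # stand-in IoT flow features
y = torch.randint(0, 2, (64,))         # stand-in benign/attack labels
x_adv = fgsm(x, y)                     # adversarial variants

# Retrain on the augmented set, keeping the true labels
x_aug, y_aug = torch.cat([x, x_adv]), torch.cat([y, y])
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss_fn(net(x_aug), y_aug).backward()
    opt.step()
```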
Applications of machine learning and deep learning in antenna design, optimization, and selection : a review
- Sarker, Nayan, Podder, Prajoy, Mondal, M., Shafin, Sakib, Kamruzzaman, Joarder
- Authors: Sarker, Nayan , Podder, Prajoy , Mondal, M. , Shafin, Sakib , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 11 (2023), p. 103890-103915
- Full Text:
- Reviewed:
- Description: This review provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and the high-frequency structure simulator (HFSS) for ML- and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods are discussed, including parallel optimization, single- and multi-objective optimization, variable-fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization. The review also covers AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, data generation with computational electromagnetics software is described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
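To illustrate one family the review covers, surrogate-based optimization, here is a minimal sketch in which a Gaussian-process surrogate replaces expensive EM simulations; the objective below is a toy stand-in for a CST/HFSS run, and the lower-confidence-bound rule is just one common acquisition choice.

```python
# Illustrative surrogate-based optimization of a single antenna
# parameter. em_simulation() is a toy stand-in for an EM solver call.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def em_simulation(length_mm):          # toy |S11| (dB) vs. patch length
    return (length_mm - 28.5) ** 2 / 40 - 20

X = np.linspace(20, 40, 6).reshape(-1, 1)        # a few seed "simulations"
y = np.array([em_simulation(l) for l in X.ravel()])

for _ in range(10):                    # surrogate-guided refinement loop
    gp = GaussianProcessRegressor().fit(X, y)
    cand = np.linspace(20, 40, 400).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    nxt = cand[np.argmin(mu - 1.96 * sd)]        # lower-confidence-bound pick
    X = np.vstack([X, [nxt]])
    y = np.append(y, em_simulation(nxt[0]))      # one more solver call

print("best length ~ %.2f mm" % X[np.argmin(y), 0])
```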
Speech based detection of Alzheimer’s disease : a survey of AI techniques, datasets and challenges
- Ding, Kewen, Chetty, Madhu, Noori Hoshyar, Azadeh, Bhattacharya, Tanusri, Klein, Britt
- Authors: Ding, Kewen , Chetty, Madhu , Noori Hoshyar, Azadeh , Bhattacharya, Tanusri , Klein, Britt
- Date: 2024
- Type: Text , Journal article
- Relation: Artificial Intelligence Review Vol. 57, no. 12 (2024)
- Full Text:
- Reviewed:
- Description: Alzheimer’s disease (AD) is a growing global concern, exacerbated by an aging population and the high costs associated with traditional detection methods. Recent research has identified speech data as valuable clinical information for AD detection, given its association with the progressive degeneration of brain cells and the subsequent impacts on memory, cognition, and language abilities. The ongoing demographic shift toward an aging global population underscores the critical need for affordable and easily available methods for early AD detection and intervention. To address this challenge, substantial research has recently focused on speech data, aiming to develop efficient and affordable diagnostic tools that align with the demands of an aging society. This paper presents an in-depth review of studies from 2018–2023 utilizing speech for AD detection. Following the PRISMA protocol and a two-stage selection process, we identified 85 publications for analysis. In contrast to previous literature reviews, this paper places a strong emphasis on a rigorous comparative analysis of Artificial Intelligence (AI)-based techniques, categorizing them meticulously by underlying algorithm, and performs an exhaustive evaluation of papers that use the common benchmark datasets ADReSS and ADReSSo, thereby overcoming the limitations posed by the absence of standardized tasks and commonly accepted benchmarks for comparing different studies. The analysis reveals the dominance of deep learning models, particularly those leveraging pre-trained models like BERT, in AD detection; the integration of acoustic and linguistic features often achieves accuracies above 85%. Despite these advancements, challenges persist in data scarcity, standardization, privacy, and model interpretability. Future directions include improving multilingual recognition, exploring emerging multimodal approaches, and enhancing ASR systems for AD patients. By identifying these key challenges and suggesting future research directions, our review serves as a valuable resource for advancing AD detection techniques and their practical implementation. © The Author(s) 2024.
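A hedged sketch of the acoustic-plus-linguistic fusion recipe the review highlights, with deliberately simplified features and synthetic data (real pipelines use eGeMAPS sets, BERT embeddings, and the ADReSS corpora): MFCC statistics are concatenated with simple lexical measures from the transcript and fed to a linear classifier.

```python
# Illustrative feature fusion for speech-based AD detection; audio and
# labels below are synthetic stand-ins for ADReSS-style recordings.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def features(audio, sr, transcript):
    # Acoustic part: mean MFCCs over the recording
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).mean(axis=1)
    # Linguistic part: word count and type-token ratio (lexical richness)
    words = transcript.split()
    ling = [len(words), len(set(words)) / max(len(words), 1)]
    return np.concatenate([mfcc, ling])

rng = np.random.default_rng(3)
X = np.stack([features(rng.normal(size=16000), 16000,
                       "the boy is on the stool")
              for _ in range(20)])
y = rng.integers(0, 2, 20)             # AD vs. control labels (synthetic)
clf = LogisticRegression(max_iter=1000).fit(X, y)
```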
Coral reef surveillance with machine learning : a review of datasets, techniques, and challenges
- Chowdhury, Abdullahi, Jahan, Musfera, Kaisar, Shahriar, Khoda, Mahbub, Rajin, S., Naha, Ranesh
- Authors: Chowdhury, Abdullahi , Jahan, Musfera , Kaisar, Shahriar , Khoda, Mahbub , Rajin, S. , Naha, Ranesh
- Date: 2024
- Type: Text , Journal article , Review
- Relation: Electronics (Switzerland) Vol. 13, no. 24 (2024)
- Full Text:
- Reviewed:
- Description: Climate change poses a significant threat to our planet, particularly affecting intricate marine ecosystems like coral reefs. These ecosystems are crucial for biodiversity and serve as indicators of the overall health of our oceans. To better understand and predict these changes, this paper discusses a multidisciplinary technical approach incorporating machine learning, artificial intelligence (AI), geographic information systems (GIS), and remote sensing techniques. We focus primarily on the changes that occur in coral reefs over time, taking into account biological components, geographical considerations, and challenges stemming from climate change. We investigate the application of GIS technology in coral reef studies, analyze publicly available datasets from various organisations such as the National Oceanic and Atmospheric Administration (NOAA), the Monterey Bay Aquarium Research Institute, and the Hawaii Undersea Research Laboratory, and present the use of machine and deep learning models in coral reef surveillance. This article examines the application of GIS in coral reef studies across various contexts, identifying key research gaps, particularly the lack of a comprehensive catalogue of publicly available datasets. Additionally, it reviews the existing literature on machine and deep learning techniques for coral reef surveillance, critically evaluating their contributions and limitations. The insights provided in this work aim to guide future research, fostering advancements in coral reef monitoring and conservation. © 2024 by the authors.
Malignant and non-malignant oral lesions classification and diagnosis with deep neural networks
- Liyanage, Viduni, Tao, Mengqiu, Park, Joon, Wang, Kate, Azimi, Somayyeh
- Authors: Liyanage, Viduni , Tao, Mengqiu , Park, Joon , Wang, Kate , Azimi, Somayyeh
- Date: 2023
- Type: Text , Journal article
- Relation: Journal of Dentistry Vol. 137 (2023)
- Full Text:
- Reviewed:
- Description: Objectives: Given the increasing incidence of oral cancer, it is essential to provide high-risk communities, especially in remote regions, with an affordable, user-friendly tool for visual lesion diagnosis. This proof-of-concept study explored the utility and feasibility of a smartphone application that can photograph and diagnose oral lesions. Methods: The images of oral lesions with confirmed diagnoses were sourced from oral and maxillofacial textbooks. In total, 342 images were extracted, encompassing lesions from various regions of the oral cavity such as the gingiva, palate, and labial mucosa. The lesions were segregated into three categories: Class 1 represented non-neoplastic lesions, Class 2 included benign neoplasms, and Class 3 contained premalignant/malignant lesions. The images were analysed using MobileNetV3 and EfficientNetV2 models, with the process producing an accuracy curve, confusion matrix, and receiver operating characteristic (ROC) curve. Results: The EfficientNetV2 model showed a steep increase in validation accuracy early in the iterations, plateauing at a score of 0.71; according to the confusion matrix, its testing accuracy for diagnosing non-neoplastic and premalignant/malignant lesions was 64% and 80% respectively. Conversely, the MobileNetV3 model exhibited a more gradual increase, reaching a plateau at a validation accuracy of 0.70; its testing accuracy for diagnosing non-neoplastic and premalignant/malignant lesions was 64% and 82% respectively. Conclusions: Our proof-of-concept study demonstrated the potential accuracy of AI software in distinguishing malignant lesions, which could play a vital role in remote screening for populations with limited access to dental practitioners. However, the discrepancies between the classification of images and the results for "non-malignant lesions" call for further refinement of the models and of the classification system used. Clinical significance: The findings of this study indicate that AI software has the potential to aid in the identification or screening of malignant oral lesions. Further improvements are required to enhance accuracy in classifying non-malignant lesions. © 2023 The Author(s)
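A sketch of the three-class setup described, with training details assumed (the study's actual preprocessing and schedules are not reproduced here): a pretrained MobileNetV3 is fine-tuned to map lesion photographs to the non-neoplastic, benign-neoplasm, and premalignant/malignant classes.

```python
# Hedged sketch: MobileNetV3 fine-tuning for the three lesion classes.
# Random tensors stand in for preprocessed lesion photographs.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v3_large(
    weights=models.MobileNet_V3_Large_Weights.IMAGENET1K_V2)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 3)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)        # stand-in lesion photos
y = torch.tensor([0, 1, 2, 0])         # Class 1 / Class 2 / Class 3 labels
loss = loss_fn(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```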