Rapid health data repository allocation using predictive machine learning
- Uddin, Ashraf, Stranieri, Andrew, Gondal, Iqbal, Balasubramanian, Venki
- Authors: Uddin, Ashraf , Stranieri, Andrew , Gondal, Iqbal , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: Health Informatics Journal Vol. 26, no. 4 (2020), p. 3009-3036
- Full Text:
- Reviewed:
- Description: Health-related data is stored in a number of repositories that are managed and controlled by different entities. For instance, Electronic Health Records are usually administered by governments. Electronic Medical Records are typically controlled by health care providers, whereas Personal Health Records are managed directly by patients. Recently, Blockchain-based health record systems largely regulated by technology have emerged as another type of repository. Repositories for storing health data differ from one another based on cost, level of security and quality of performance. Not only have the types of repositories increased in recent years, but the quantum of health data to be stored has also increased. For instance, the advent of wearable sensors that capture physiological signs has resulted in an exponential growth in digital health data. The increase in the types of repositories and the amount of data has driven a need for intelligent processes to select appropriate repositories as data is collected. However, the storage allocation decision is complex and nuanced. The challenges are exacerbated when health data are continuously streamed, as is the case with wearable sensors. Although patients are not always solely responsible for determining which repository should be used, they typically have some input into this decision. Patients can be expected to have idiosyncratic preferences regarding storage decisions depending on their unique contexts. In this paper, we propose a predictive model for the storage of health data that can meet patient needs and make storage decisions rapidly, in real-time, even with data streaming from wearable sensors. The model is built with a machine learning classifier that learns the mapping between characteristics of health data and features of storage repositories from a training set generated synthetically from correlations evident in small samples collected from experts. Results from the evaluation demonstrate the viability of the machine learning technique used. © The Author(s) 2020.
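The abstract does not name the classifier; as a minimal sketch of the general idea, the snippet below trains a random-forest classifier to map synthetically generated health-data characteristics to repository labels. The feature names, labelling rule and repository classes are illustrative assumptions, not the paper's schema.

```python
# Minimal sketch: map health-data characteristics to a storage repository.
# Feature names, labels and the labelling rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Synthetic training set: [sensitivity, stream_rate_hz, record_size_kb, retention_years]
X = np.column_stack([
    rng.integers(0, 5, n),       # sensitivity level
    rng.uniform(0, 100, n),      # streaming rate
    rng.uniform(1, 1024, n),     # record size
    rng.integers(1, 30, n),      # retention requirement
])
# Toy labelling rule standing in for expert-derived correlations.
y = np.where(X[:, 0] >= 3, "EHR",
             np.where(X[:, 1] > 50, "PHR_cloud", "blockchain"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```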
Deep matrix factorization for trust-aware recommendation in social networks
- Wan, Liangtian, Xia, Feng, Kong, Xiangjie, Hsu, Ching-Hsien, Huang, Runhe, Ma, Jianhua
- Authors: Wan, Liangtian , Xia, Feng , Kong, Xiangjie , Hsu, Ching-Hsien , Huang, Runhe , Ma, Jianhua
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Network Science and Engineering Vol. 8, no. 1 (2021), p. 511-528
- Full Text:
- Reviewed:
- Description: Recent years have witnessed remarkable information overload in online social networks, and social network-based approaches for recommender systems have been widely studied. The trust information among users in social networks is an important factor for improving recommendation performance. Many successful recommendation tasks are treated as matrix factorization problems. However, the prediction performance of matrix factorization-based methods largely depends on the initialization of the user and item matrices. To address this challenge, we develop a novel trust-aware approach based on deep learning to alleviate the initialization dependence. First, we propose two deep matrix factorization (DMF) techniques, i.e., linear DMF and non-linear DMF, to extract features from the user-item rating matrix and improve the initialization accuracy. The trust relationship is integrated into the DMF model according to the preference similarity and the derivations of users on items. Second, we exploit a deep marginalized Denoising Autoencoder (Deep-MDAE) to extract the latent representation in the hidden layer from the trust relationship matrix, approximating the user factor matrix factorized from the user-item rating matrix. Community regularization is integrated into the joint optimization function to take neighbours' effects into consideration. The results of DMF are applied to initialize the updating variables of Deep-MDAE in order to further improve the recommendation performance. Finally, we validate that the proposed approach outperforms state-of-the-art baselines for recommendation, especially for cold-start users. © 2013 IEEE.
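As a rough illustration of the factorization-based initialization idea, the sketch below fits a small non-linear deep matrix factorization on a toy rating matrix; the layer sizes, latent dimension and loss masking are assumptions, not the paper's configuration.

```python
# Minimal sketch of non-linear deep matrix factorization on a toy rating
# matrix; layer sizes and loss masking are assumptions, not the paper's setup.
import torch

torch.manual_seed(0)
R = torch.tensor([[5., 3., 0., 1.],
                  [4., 0., 0., 1.],
                  [1., 1., 0., 5.],
                  [0., 1., 5., 4.]])
mask = (R > 0).float()               # only observed ratings contribute to loss
k = 2                                # latent dimension

# Non-linear "deep" factor encoders for user rows and item columns.
user_enc = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, k))
item_enc = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, k))
opt = torch.optim.Adam(list(user_enc.parameters()) + list(item_enc.parameters()), lr=0.01)

for step in range(2000):
    U = user_enc(R)                  # user factors from rating rows
    V = item_enc(R.t())              # item factors from rating columns
    pred = U @ V.t()
    loss = ((mask * (pred - R)) ** 2).sum() / mask.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# U could then serve to initialize the user factor matrix that the trust
# autoencoder (Deep-MDAE in the paper) is trained to approximate.
print(pred.detach().round())
```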
A new data driven long-term solar yield analysis model of photovoltaic power plants
- Ray, Biplob, Shah, Rakibuzzaman, Islam, Md Rabiul, Islam, Syed
- Authors: Ray, Biplob , Shah, Rakibuzzaman , Islam, Md Rabiul , Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 136223-136233
- Full Text:
- Reviewed:
- Description: Historical data offers a wealth of knowledge to its users. However, it is often so restrictively large that the information cannot be fully extracted, synthesized, and analyzed efficiently for an application such as the forecasting of variable generator outputs. Moreover, the accuracy of the prediction method is vital. Therefore, a trade-off between accuracy and efficacy is required for a data-driven energy forecasting method. It has been identified that a hybrid approach may outperform individual techniques in minimizing the error, while being more challenging to synthesize. A hybrid deep learning-based method is proposed for the output prediction of solar photovoltaic systems (i.e. the proposed PV system) in Australia to obtain the trade-off between accuracy and efficacy. A historical dataset from 1990-2013 for Australian locations (e.g. North Queensland) is used to train the model. The model is developed using a combination of multivariate long short-term memory (LSTM) and convolutional neural network (CNN) models. The proposed hybrid deep learning model (LSTM-CNN) is compared with existing neural network ensemble (NNE), random forest, statistical analysis, and artificial neural network (ANN) based techniques to assess the performance. The proposed model could be useful for generation planning and reserve estimation in power systems with high penetration of solar photovoltaics (PVs) or other renewable energy sources (RESs). © 2013 IEEE.
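A minimal sketch of a hybrid CNN-LSTM regressor of the kind the abstract describes, assuming hourly multivariate weather windows as input; the window length, feature count and layer sizes are placeholders.

```python
# Minimal sketch of a hybrid CNN-LSTM regressor for multivariate solar-yield
# sequences; window length, feature count and layer sizes are assumptions.
import numpy as np
import tensorflow as tf

T, F = 24, 5                                      # 24 hourly steps, 5 weather features
X = np.random.rand(256, T, F).astype("float32")   # stand-in for historical data
y = np.random.rand(256, 1).astype("float32")      # stand-in for PV output

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu", input_shape=(T, F)),
    tf.keras.layers.MaxPooling1D(2),              # local feature extraction
    tf.keras.layers.LSTM(32),                     # temporal dependencies
    tf.keras.layers.Dense(1),                     # predicted yield
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```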
AI and IoT-Enabled smart exoskeleton system for rehabilitation of paralyzed people in connected communities
- Jacob, Sunil, Alagirisamy, Mukil, Xi, Chen, Balasubramanian, Venki, Srinivasan, Ram
- Authors: Jacob, Sunil , Alagirisamy, Mukil , Xi, Chen , Balasubramanian, Venki , Srinivasan, Ram
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 80340-80350
- Full Text:
- Reviewed:
- Description: In recent years, rising numbers of spinal cord injuries, strokes and other nervous impairments have led to an increase in the number of paralyzed patients worldwide. Rehabilitation that can aid and enhance the lives of such patients is the need of the hour. Exoskeletons have emerged as one of the popular means of rehabilitation. Existing exoskeletons use techniques that impose limitations on adaptability, instant response and continuous control. Most are also expensive and bulky, and require a high level of training. To overcome these limitations, this paper introduces an Artificial Intelligence (AI) powered smart and lightweight Exoskeleton System (AI-IoT-SES) which receives data from various sensors, classifies them intelligently and generates the desired commands via the Internet of Things (IoT) for rendering rehabilitation and support, with the help of caretakers, for paralyzed patients in smart and connected communities. In the proposed system, the signals collected from the exoskeleton sensors are processed by an AI-assisted navigation module, which helps caretakers guide, communicate with and control the movements of the exoskeleton integrated with the patient. The navigation module uses AI- and IoT-enabled Simultaneous Localization and Mapping (SLAM). The casualties of a paralyzed person are reduced by commissioning the IoT platform to exchange data from the intelligent sensors with the remote location of the caretaker, monitoring the real-time movement and navigation of the exoskeleton. The automated exoskeleton detects and takes decisions on navigation, thereby improving the life conditions of such patients. Experimental results simulated using MATLAB show that the proposed system is the ideal method for rendering rehabilitation and support for paralyzed patients in smart communities. © 2013 IEEE. **Please note that there are multiple authors for this article therefore only the names of the first 5 including Federation University Australia affiliate “Venki Balasubramanian” are provided in this record**
A critical review of intrusion detection systems in the internet of things : techniques, deployment strategy, validation strategy, attacks, public datasets and challenges
- Khraisat, Ansam, Alazab, Ammar
- Authors: Khraisat, Ansam , Alazab, Ammar
- Date: 2021
- Type: Text , Journal article
- Relation: Cybersecurity Vol. 4, no. 1 (2021), p.
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) has been rapidly evolving, making a greater impact on everyday life and large industrial systems. Unfortunately, this has attracted the attention of cybercriminals who have made IoT a target of malicious activities, opening the door to possible attacks on end nodes. To this end, numerous IoT intrusion detection systems (IDS) have been proposed in the literature to tackle attacks on the IoT ecosystem; these can be broadly classified based on detection technique, validation strategy, and deployment strategy. This survey paper presents a comprehensive review of contemporary IoT IDS and an overview of the techniques, deployment strategies, validation strategies and datasets that are commonly applied for building IDS. We also review how existing IoT IDS detect intrusive attacks and secure communications on the IoT. The paper also presents a classification of IoT attacks and discusses future research challenges to counter such attacks and make IoT more secure. These contributions help IoT security researchers by uniting, contrasting, and compiling scattered research efforts. Consequently, we provide a unique IoT IDS taxonomy, which sheds light on IoT IDS techniques, their advantages and disadvantages, IoT attacks that exploit IoT communication systems, and corresponding advanced IDS and detection capabilities to detect IoT attacks. © 2021, The Author(s).
Tracing the Pace of COVID-19 research : topic modeling and evolution
- Liu, Jiaying, Nie, Hansong, Li, Shihao, Ren, Jing, Xia, Feng
- Authors: Liu, Jiaying , Nie, Hansong , Li, Shihao , Ren, Jing , Xia, Feng
- Date: 2021
- Type: Text , Journal article
- Relation: Big Data Research Vol. 25, no. (2021), p.
- Full Text:
- Reviewed:
- Description: COVID-19 has been spreading rapidly around the world. With the growing attention on the deadly pandemic, discussions and research on COVID-19 are rapidly increasing to exchange the latest findings in the hope of accelerating the pace of finding a cure. As a branch of information technology, artificial intelligence (AI) has greatly expedited the development of human society. In this paper, we investigate and visualize the ongoing advancements of early scientific research on COVID-19 from the perspective of AI. By adopting the Latent Dirichlet Allocation (LDA) model, this paper allocates the research articles into 50 key research topics pertinent to COVID-19 according to their abstracts. We present an overview of early studies of the COVID-19 crisis at different scales, including referencing/citation behavior, topic variation and their inner interactions. We also identify innovative papers that are regarded as cornerstones in the development of COVID-19 research. The results unveil the focus of scientific research, thereby giving deep insights into how the academic society contributes to combating the COVID-19 pandemic. © 2021 Elsevier Inc. **Please note that there are multiple authors for this article therefore only the names of the first 5 including Federation University Australia affiliates “Jing Ren and Feng Xia” are provided in this record**
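The LDA step described in the abstract can be reproduced in outline with scikit-learn; the tiny corpus and two-topic setting below are placeholders for the paper's corpus of COVID-19 abstracts and its 50 topics.

```python
# Minimal sketch of LDA topic allocation over paper abstracts; the corpus and
# topic count here are placeholders (the paper uses 50 topics).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "vaccine trial immune response antibodies",
    "lockdown mobility transmission model",
    "deep learning chest ct diagnosis",
    "antibodies immune vaccine efficacy",
]
X = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)       # each row: topic mixture for one abstract
print(doc_topics.argmax(axis=1))    # hard topic assignment per abstract
```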
MODEL : motif-based deep feature learning for link prediction
- Wang, Lei, Ren, Jing, Xu, Bo, Li, Jianxin, Luo, Wei, Xia, Feng
- Authors: Wang, Lei , Ren, Jing , Xu, Bo , Li, Jianxin , Luo, Wei , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Computational Social Systems Vol. 7, no. 2 (2020), p. 503-516
- Full Text:
- Reviewed:
- Description: Link prediction plays an important role in network analysis and applications. Recently, approaches for link prediction have evolved from traditional similarity-based algorithms into embedding-based algorithms. However, most existing approaches fail to exploit the fact that real-world networks are different from random networks. In particular, real-world networks are known to contain motifs, natural network building blocks reflecting the underlying network-generating processes. In this article, we propose a novel embedding algorithm that incorporates network motifs to capture higher order structures in the network. To evaluate its effectiveness for link prediction, experiments were conducted on three types of networks: social networks, biological networks, and academic networks. The results demonstrate that our algorithm outperforms both the traditional similarity-based algorithms (by 20%) and the state-of-the-art embedding-based algorithms (by 19%). © 2014 IEEE.
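As a loose illustration of motif-aware embedding for link prediction (not the paper's MODEL algorithm), the sketch below boosts triangle-rich edges before computing spectral node embeddings and scoring candidate links by dot product.

```python
# Minimal sketch of embedding-based link scoring with a motif-aware weight:
# spectral embeddings stand in for the paper's motif-based deep features.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)

# Triangle-motif weighting: edges in many triangles get boosted, a crude
# stand-in for higher-order structure (an assumption, not the paper's method).
tri = A @ A * A                      # tri[i, j] = triangles through edge (i, j)
Aw = A + 0.5 * tri

# Low-dimensional embedding from the weighted adjacency matrix.
vals, vecs = np.linalg.eigh(Aw)
Z = vecs[:, -8:]                     # top-8 eigenvectors as node embeddings

def link_score(u, v):
    """Higher dot product => more likely link."""
    return float(Z[u] @ Z[v])

print(link_score(0, 33), link_score(0, 9))
```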
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Noreen, Neelum, Palaniappan, Sellappan, Qayyum, Abdul, Ahmad, Iftikhar, Imran, Muhammad, Shoaib, Muhammad
- Authors: Noreen, Neelum , Palaniappan, Sellappan , Qayyum, Abdul , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumor is a deadly disease and its classification is a challenging task for radiologists because of the heterogeneous nature of the tumor cells. Recently, computer-aided diagnosis-based systems have shown promise, as an assistive technology, for diagnosing brain tumors through magnetic resonance imaging (MRI). In recent applications of pre-trained models, features are normally extracted from the bottom layers, which differ between natural and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumor. Two pre-trained deep learning models, i.e. Inception-v3 and DenseNet201, underpin this method. With the help of these two models, two different scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model and concatenated for brain tumor classification. Then, these features were passed to a softmax classifier to classify the brain tumor. Second, the pre-trained DenseNet201 model was used to extract features from various DenseNet blocks. Then, these features were concatenated and passed to a softmax classifier to classify the brain tumor. Both scenarios were evaluated with the help of a publicly available three-class brain tumor dataset. The proposed method produced testing accuracies of 99.34% and 99.51% with Inception-v3 and DenseNet201, respectively, and achieved the highest performance in the detection of brain tumor. As the results indicate, the proposed method based on feature concatenation using pre-trained models outperformed existing state-of-the-art deep learning and machine learning based methods for brain tumor classification. © 2013 IEEE.
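A minimal sketch of the multi-level feature-concatenation scenario with a pre-trained Inception-v3 in Keras; the particular 'mixed' layers tapped here are assumptions, not necessarily those used in the paper.

```python
# Minimal sketch of multi-level feature concatenation from a pre-trained
# Inception-v3; the chosen 'mixed' layers and head are assumptions.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False

# Pool features from several Inception modules and concatenate them.
taps = [base.get_layer(n).output for n in ("mixed5", "mixed8", "mixed10")]
pooled = [tf.keras.layers.GlobalAveragePooling2D()(t) for t in taps]
features = tf.keras.layers.Concatenate()(pooled)
out = tf.keras.layers.Dense(3, activation="softmax")(features)  # 3 tumor classes

model = tf.keras.Model(base.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```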
Network representation learning: From traditional feature learning to deep learning
- Sun, Ke, Wang, Lei, Xu, Bo, Zhao, Wenhong, Teng, Shyh, Xia, Feng
- Authors: Sun, Ke , Wang, Lei , Xu, Bo , Zhao, Wenhong , Teng, Shyh , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 205600-205617
- Full Text:
- Reviewed:
- Description: Network representation learning (NRL) is an effective graph analytics technique that helps users deeply understand the hidden characteristics of graph data. It has been successfully applied in many real-world tasks related to network science, such as social network data processing, biological information processing, and recommender systems. Deep learning is a powerful tool for learning data features. However, it is non-trivial to generalize deep learning to graph-structured data since it differs from regular data such as pictures, which have spatial information, and sounds, which have temporal information. Recently, researchers have proposed many deep learning-based methods in the area of NRL. In this survey, we trace classical NRL from traditional feature learning methods to deep learning-based models, analyze the relationships between them, and summarize the latest progress. Finally, we discuss open issues in NRL and point out future directions in this field. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
- Ali, Farman, El-Sappagh, Shaker, Islam, S., Kwak, Daehan, Ali, Amjad, Imran, Muhammad, Kwak, Kyung-Sup
- Authors: Ali, Farman , El-Sappagh, Shaker , Islam, S. , Kwak, Daehan , Ali, Amjad , Imran, Muhammad , Kwak, Kyung-Sup
- Date: 2020
- Type: Text , Journal article
- Relation: Information Fusion Vol. 63, no. (2020), p. 208-222
- Full Text: false
- Reviewed:
- Description: The accurate prediction of heart disease is essential to efficiently treating cardiac patients before a heart attack occurs. This goal can be achieved using an optimal machine learning model with rich healthcare data on heart diseases. Various systems based on machine learning have been presented recently to predict and diagnose heart disease. However, these systems cannot handle high-dimensional datasets due to the lack of a smart framework that can use different sources of data for heart disease prediction. In addition, the existing systems utilize conventional techniques to select features from a dataset and compute a general weight for them based on their significance. These methods have also failed to enhance the performance of heart disease diagnosis. In this paper, a smart healthcare system is proposed for heart disease prediction using ensemble deep learning and feature fusion approaches. First, the feature fusion method combines the extracted features from both sensor data and electronic medical records to generate valuable healthcare data. Second, the information gain technique eliminates irrelevant and redundant features, and selects the important ones, which decreases the computational burden and enhances the system performance. In addition, the conditional probability approach computes a specific feature weight for each class, which further improves system performance. Finally, the ensemble deep learning model is trained for heart disease prediction. The proposed system is evaluated with heart disease data and compared with traditional classifiers based on feature fusion, feature selection, and weighting techniques. The proposed system obtains accuracy of 98.5%, which is higher than existing systems. This result shows that our system is more effective for the prediction of heart disease, in comparison to other state-of-the-art methods. © 2020
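The information-gain selection step can be sketched with scikit-learn's mutual-information scorer feeding a soft-voting ensemble; the synthetic data and ensemble members below are placeholders for the paper's fused sensor and EMR features and its ensemble deep learning model.

```python
# Minimal sketch of the feature-selection step: information gain (mutual
# information) ranks fused features before an ensemble classifier is trained.
# The synthetic data is a placeholder for fused sensor + EMR features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)
# Keep the 10 features with the highest information gain.
X_sel = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)

ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
], voting="soft").fit(X_sel, y)
print("training accuracy:", ensemble.score(X_sel, y))
```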
Assessing trust level of a driverless car using deep learning
- Karmakar, Gour, Chowdhury, Abdullahi, Das, Rajkumar, Kamruzzaman, Joarder, Islam, Syed
- Authors: Karmakar, Gour , Chowdhury, Abdullahi , Das, Rajkumar , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 7 (2021), p. 4457-4466
- Full Text: false
- Reviewed:
- Description: The increasing adoption of driverless cars is already providing a shift away from traditional transportation systems to automated ones in many industrial and commercial applications. Recent research has shown that driverless vehicles will considerably reduce traffic congestion, accidents and carbon emissions, and enhance the accessibility of driving to a wider cross-section of people and lifestyle choices. However, at present, people's main concerns are about privacy and security. Since traditional protocol-layer security mechanisms are not so effective for a distributed system, trust value-based security mechanisms, a type of pervasive security, are emerging as popular and promising techniques. A few statistical, non-learning-based models for measuring the trust level of a driverless car are available in the current literature. These are not very effective because they cannot capture the extremely distributed, dynamic, and complex nature of traffic systems. To bridge this research gap, in this paper, for the first time, we propose two deep learning-based models that measure the trustworthiness of a driverless car and its major On-Board Unit (OBU) components. The second model also determines which of its OBU components were breached during the driving operation. Results produced using real and simulated traffic data demonstrate that our proposed DNN-based deep learning models outperform other machine learning models in assessing the trustworthiness of an individual car as well as its OBU components. The average precision of detection accuracies for the car, LiDAR, camera, and radar are 0.99, 0.96, 0.81, and 0.83, respectively, which indicates the potential real-life application of our models in assessing the trust level of a driverless car. © 2000-2011 IEEE.
Deep learning and big data technologies for IoT security
- Amanullah, Mohamed, Habeeb, Riyaz, Nasaruddin, Fariza, Gani, Abdullah, Ahmed, Ejaz, Nainar, Abdul, Akim, Nazihah, Imran, Muhammad
- Authors: Amanullah, Mohamed , Habeeb, Riyaz , Nasaruddin, Fariza , Gani, Abdullah , Ahmed, Ejaz , Nainar, Abdul , Akim, Nazihah , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article , Review
- Relation: Computer Communications Vol. 151, no. (2020), p. 495-517
- Full Text: false
- Reviewed:
- Description: Technology has become inevitable in human life, especially the growth of the Internet of Things (IoT), which enables communication and interaction with various devices. However, IoT has been proven to be vulnerable to security breaches. Therefore, it is necessary to develop foolproof solutions by creating new technologies or combining existing technologies to address the security issues. Deep learning, a branch of machine learning, has shown promising results in previous studies for the detection of security breaches. Additionally, IoT devices generate data of large volume, variety, and veracity. Thus, when big data technologies are incorporated, higher performance and better data handling can be achieved. Hence, we have conducted a comprehensive survey on state-of-the-art deep learning, IoT security, and big data technologies. Further, a comparative analysis and the relationship among deep learning, IoT security, and big data technologies are discussed, and we derive a thematic taxonomy from the comparative analysis of technical studies of the three aforementioned domains. Finally, we identify and discuss the challenges in incorporating deep learning for IoT security using big data technologies and provide directions to future researchers on IoT security aspects. © 2020 Elsevier B.V.
- Chiang, Christina, Wells, Paul, Fieger, Peter, Sharma, Divesh
- Authors: Chiang, Christina , Wells, Paul , Fieger, Peter , Sharma, Divesh
- Date: 2021
- Type: Text , Journal article
- Relation: Accounting and Finance Vol. 61, no. 1 (2021), p. 913-936
- Full Text: false
- Reviewed:
- Description: Arguably, the audit course is one of the most challenging as it links prior accounting knowledge with new audit knowledge that students are generally not exposed to. A mini-audit group project was implemented at a New Zealand university, and a learning approach and learning experience survey instrument was administered. Responses from 98 students suggest that they perceived the learning experience positively and were encouraged to adopt a deep approach to learning. The findings have implications for accounting educators in the design and development of learning and assessment strategies in an audit course. © 2020 Accounting and Finance Association of Australia and New Zealand
A prioritized objective actor-critic method for deep reinforcement learning
- Nguyen, Ngoc, Nguyen, Thanh, Vamplew, Peter, Dazeley, Richard, Nahavandi, Saeid
- Authors: Nguyen, Ngoc , Nguyen, Thanh , Vamplew, Peter , Dazeley, Richard , Nahavandi, Saeid
- Date: 2021
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 33, no. 16 (2021), p. 10335-10349
- Full Text: false
- Reviewed:
- Description: An increasing number of complex problems have naturally posed significant challenges in decision-making theory and reinforcement learning practices. These problems often involve multiple conflicting reward signals that inherently cause agents’ poor exploration in seeking a specific goal. In extreme cases, the agent gets stuck in a sub-optimal solution and starts behaving harmfully. To overcome such obstacles, we introduce two actor-critic deep reinforcement learning methods, namely Multi-Critic Single Policy (MCSP) and Single Critic Multi-Policy (SCMP), which can adjust agent behaviors to efficiently achieve a designated goal by adopting a weighted-sum scalarization of different objective functions. In particular, MCSP creates a human-centric policy that corresponds to a predefined priority weight of different objectives. Whereas, SCMP is capable of generating a mixed policy based on a set of priority weights, i.e., the generated policy uses the knowledge of different policies (each policy corresponds to a priority weight) to dynamically prioritize objectives in real time. We examine our methods by using the Asynchronous Advantage Actor-Critic (A3C) algorithm to utilize the multithreading mechanism for dynamically balancing training intensity of different policies into a single network. Finally, simulation results show that MCSP and SCMP significantly outperform A3C with respect to the mean of total rewards in two complex problems: Food Collector and Seaquest. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd. part of Springer Nature.
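The weighted-sum scalarization at the core of MCSP and SCMP reduces a vector of conflicting rewards to a single signal, r = Σᵢ wᵢ rᵢ. A minimal sketch with illustrative rewards and weights:

```python
# Minimal sketch of weighted-sum scalarization of conflicting reward signals,
# the core of MCSP/SCMP; the rewards and weights here are illustrative.
import numpy as np

def scalarize(rewards, weights):
    """Collapse a reward vector into one signal: r = sum_i w_i * r_i."""
    return float(np.dot(rewards, weights))

# Two objectives, e.g. task progress vs. safety penalty.
rewards = np.array([1.0, -0.4])

# MCSP-style: one fixed priority weighting defines the policy's objective.
print(scalarize(rewards, np.array([0.8, 0.2])))

# SCMP-style: a set of weightings, each corresponding to one trained policy,
# lets the agent re-prioritize objectives at run time.
for w in [np.array([0.9, 0.1]), np.array([0.5, 0.5]), np.array([0.1, 0.9])]:
    print(w, "->", scalarize(rewards, w))
```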
A framework for cardiac arrhythmia detection from IoT-based ECGs
- He, Jinyuan, Rong, Jia, Sun, Le, Wang, Hua, Zhang, Yanchun, Ma, Jiangang
- Authors: He, Jinyuan , Rong, Jia , Sun, Le , Wang, Hua , Zhang, Yanchun , Ma, Jiangang
- Date: 2020
- Type: Text , Journal article
- Relation: World Wide Web Vol. 23, no. 5 (2020), p. 2835-2850
- Full Text:
- Reviewed:
- Description: Cardiac arrhythmia has been identified as a type of cardiovascular disease (CVD) that causes approximately 12% of all deaths globally. The development of the Internet-of-Things has spawned novel ways for heart monitoring but also presented new challenges for manual arrhythmia detection. An automated method is in high demand to provide support for physicians. Current attempts at automatic arrhythmia detection can roughly be divided into feature-engineering-based and deep-learning-based methods. Most of the feature-engineering-based methods suffer from adopting a single classifier and using fixed features for classifying all five types of heartbeats, which introduces difficulties in the identification of problematic heartbeats and limits the overall classification performance. The deep-learning-based methods are usually not evaluated in a realistic manner and report overoptimistic results which may hide potential limitations of the models. Moreover, the lack of consideration of frequency patterns and heart rhythms can also limit model performance. To fill these gaps, we propose a framework for arrhythmia detection from IoT-based ECGs. The framework consists of two modules: a data cleaning module and a heartbeat classification module. Specifically, we propose two solutions for the heartbeat classification task, namely Dynamic Heartbeat Classification with Adjusted Features (DHCAF) and Multi-channel Heartbeat Convolution Neural Network (MCHCNN). DHCAF is a feature-engineering-based approach in which we introduce a dynamic ensemble selection (DES) technique and develop a result regulator to improve classification performance. MCHCNN is a deep-learning-based solution that performs multi-channel convolutions to capture both temporal and frequency patterns from heartbeats to assist the classification. We evaluate the proposed framework with DHCAF and with MCHCNN on the well-known MIT-BIH-AR database. The results reported in this paper prove the effectiveness of our framework. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
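A minimal sketch of MCHCNN's multi-channel idea: parallel 1D convolution branches with different kernel sizes over a single heartbeat, standing in for temporal and frequency-oriented patterns. All sizes below are assumptions.

```python
# Minimal sketch of a multi-channel heartbeat CNN: parallel convolution
# branches with different receptive fields; all sizes are assumptions.
import tensorflow as tf

inp = tf.keras.Input(shape=(250, 1))          # one beat, ~250 samples (assumed)
branches = []
for k in (3, 7, 15):                          # small to large receptive fields
    x = tf.keras.layers.Conv1D(16, k, padding="same", activation="relu")(inp)
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    branches.append(x)
merged = tf.keras.layers.Concatenate()(branches)
out = tf.keras.layers.Dense(5, activation="softmax")(merged)  # 5 beat classes

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```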
Automatic driver distraction detection using deep convolutional neural networks
- Hossain, Md Uzzol, Rahman, Md Ataur, Islam, Md Manowarul, Akhter, Arnisha, Uddin, Md Ashraf, Paul, Bikash
- Authors: Hossain, Md Uzzol , Rahman, Md Ataur , Islam, Md Manowarul , Akhter, Arnisha , Uddin, Md Ashraf , Paul, Bikash
- Date: 2022
- Type: Text , Journal article
- Relation: Intelligent Systems with Applications Vol. 14, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Recently, the number of road accidents has increased worldwide due to driver distraction. These crashes often lead to injuries, loss of property, and even deaths. Therefore, it is essential to monitor and analyze the driver's behavior during driving to detect distraction and reduce the number of road accidents. To detect various kinds of behavior, such as using a cell phone, talking to others, eating, sleeping or a lack of concentration during driving, machine learning/deep learning can play a significant role. However, this process may need high computational capacity to train the model on a huge training dataset. In this paper, we develop a CNN-based method to detect distracted drivers and identify the cause of distraction, such as talking, sleeping or eating, by means of face and hand localization. Four architectures, namely CNN, VGG-16, ResNet50 and MobileNetV2, have been adopted for transfer learning. To verify its effectiveness, the proposed model is trained with thousands of images from a publicly available dataset containing ten different postures or conditions of a distracted driver, and the results are analyzed using various performance metrics. The performance results show that the pre-trained MobileNetV2 model has the best classification efficiency. © 2022 The Author(s)
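The transfer-learning setup the paper reports as best can be sketched with a frozen MobileNetV2 backbone and a new ten-way classification head; the image size and head layers below are assumptions.

```python
# Minimal sketch of MobileNetV2 transfer learning for ten driver postures;
# image size and head layers are assumptions.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False                         # keep pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 driver postures
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```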
- Rashid, Md Mamunur, Kamruzzaman, Joarder, Mehedi Hassan, Mohammad, Imam, Tasadduq, Wibowo, Santoso, Gordon, Steven, Fortino, Giancarlo
- Authors: Rashid, Md Mamunur , Kamruzzaman, Joarder , Mehedi Hassan, Mohammad , Imam, Tasadduq , Wibowo, Santoso , Gordon, Steven , Fortino, Giancarlo
- Date: 2022
- Type: Text , Journal article
- Relation: Computers and Security Vol. 120, no. (2022), p.
- Full Text: false
- Reviewed:
- Description: Intrusion Detection Systems (IDS) based on deep learning models can identify and mitigate cyberattacks in IoT applications in a resilient and systematic manner. These models, which support the IDS's decisions, could be vulnerable to a cyberattack known as an adversarial attack. In this type of attack, attackers create adversarial samples by introducing small perturbations to attack samples to trick a trained model into misclassifying them as benign applications. These attacks can cause substantial damage to IoT-based smart city models in terms of device malfunction, data leakage, operational outage and financial loss. To our knowledge, the impact of and defence against adversarial attacks on IDS models in relation to smart city applications have not been investigated yet. To address this research gap, in this work, we explore the effect of adversarial attacks on deep learning and shallow machine learning models by using a recent IoT dataset and propose a method using adversarial retraining that can significantly improve IDS performance when confronting adversarial attacks. Simulation results demonstrate that the presence of adversarial samples deteriorates detection accuracy significantly, by above 70%, while our proposed model can deliver detection accuracy above 99% against all types of attacks including adversarial attacks. This makes an IDS robust in protecting IoT-based smart city services. © 2022 Elsevier Ltd
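Adversarial retraining of the kind the abstract proposes can be sketched by crafting FGSM perturbations against a trained model and retraining on the augmented set; the toy model, features and epsilon below are placeholders, not the paper's IDS configuration.

```python
# Minimal sketch of adversarial retraining: craft FGSM perturbations against
# a trained model, then retrain on the augmented set. The model and epsilon
# are placeholders, not the paper's IDS configuration.
import tensorflow as tf

def fgsm(model, x, y, eps=0.05):
    """Fast Gradient Sign Method: perturb inputs along the loss gradient."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
    return x + eps * tf.sign(tape.gradient(loss, x))

x_train = tf.random.uniform((512, 20))         # stand-in traffic features
y_train = tf.cast(tf.random.uniform((512,)) > 0.5, tf.int32)

model = tf.keras.Sequential([tf.keras.layers.Dense(32, activation="relu"),
                             tf.keras.layers.Dense(2, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=2, verbose=0)

# Retrain on originals plus adversarial variants of the training samples.
x_adv = fgsm(model, x_train, y_train)
model.fit(tf.concat([x_train, x_adv], 0),
          tf.concat([y_train, y_train], 0), epochs=2, verbose=0)
```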
- Ali, Sajid, El-Sappagh, Shaker, Ali, Farman, Imran, Muhammad, Abuhmed, Tamer
- Authors: Ali, Sajid , El-Sappagh, Shaker , Ali, Farman , Imran, Muhammad , Abuhmed, Tamer
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Journal of Biomedical and Health Informatics Vol. 26, no. 12 (2022), p. 5793-5804
- Full Text: false
- Reviewed:
- Description: In a hospital, accurate and rapid prediction of Length of Stay (LOS) is essential since it is one of the key measures in treating patients with severe diseases. When predictions of patient mortality and readmission are combined, these models gain a new level of significance. LOS and readmission rates are therefore among the most expensive components of patient care. Several studies have assessed hospital readmission as a single-task problem. The performance, robustness, and stability of a model increase when many correlated tasks are optimized together. This study develops a multimodal multitask Long Short-Term Memory (LSTM) Deep Learning (DL) model that can predict both LOS and readmission for patients using multi-sensory data from 47 patients. Continuous sensory data is divided into eight sections, each of which is recorded for an hour. The time steps are constructed using a dual 10-second window-based technique, resulting in six steps per hour. Thirty statistical features are computed by transforming the sensory input into the resulting vector. The proposed multitask model predicts 30-day readmission as a binary classification problem and LOS as a regression task by constructing discrete time-step data based on the length of physical activity during a hospital stay. The proposed model is compared to a random forest for the single-task problems (classification or regression), because typical machine learning algorithms are unable to handle the multitask challenge. In addition, sensory data are combined with other cost-effective modalities, such as demographics, laboratory tests, and comorbidities, to construct reliable models for personalized, cost-effective, and medically acceptable prediction. With a high accuracy of 94.84%, the proposed multitask multimodal DL model classifies the patient's readmission status and determines the patient's LOS in hospital with a minimal Mean Square Error (MSE) of 0.025 and Root Mean Square Error (RMSE) of 0.077, which is promising, effective, and trustworthy. © 2013 IEEE.
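The shared-trunk multitask design can be sketched in Keras with one LSTM trunk and two heads, a sigmoid for 30-day readmission and a linear output for LOS; the step count and feature size follow the abstract (eight hours of six 10-second-window steps, 30 statistical features), while the layer sizes are assumptions.

```python
# Minimal sketch of the multitask idea: a shared LSTM trunk with a binary
# readmission head and an LOS regression head; layer sizes are assumptions.
import tensorflow as tf

inp = tf.keras.Input(shape=(48, 30))           # 8 hours x 6 steps, 30 features
h = tf.keras.layers.LSTM(64)(inp)
readmit = tf.keras.layers.Dense(1, activation="sigmoid", name="readmission")(h)
los = tf.keras.layers.Dense(1, name="los")(h)

model = tf.keras.Model(inp, [readmit, los])
model.compile(optimizer="adam",
              loss={"readmission": "binary_crossentropy", "los": "mse"},
              loss_weights={"readmission": 1.0, "los": 1.0})
```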
Efficient anomaly recognition using surveillance videos
- Saleem, Gulshan, Bajwa, Usama, Raza, Rana, Alqahtani, Fayez, Tolba, Amr, Xia, Feng
- Authors: Saleem, Gulshan , Bajwa, Usama , Raza, Rana , Alqahtani, Fayez , Tolba, Amr , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: PeerJ Computer Science Vol. 8, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Smart surveillance is a difficult task that is gaining popularity due to its direct link to human safety. Today, many indoor and outdoor surveillance systems are in use at public places and in smart cities. Because these systems are expensive to deploy, they are out of reach for the vast majority of the public and private sectors. Due to the lack of a precise definition of an anomaly, automated surveillance is a challenging task, especially when large amounts of data, such as 24/7 CCTV footage, must be processed. When such systems are deployed in real-time environments, the high computational resource requirements of automated surveillance become a major bottleneck. A further challenge is to recognize anomalies accurately, since achieving high accuracy while reducing computational cost is harder still. To address these challenges, this research develops a system that is both efficient and cost-effective. Although 3D convolutional neural networks have proven accurate, they are prohibitively expensive for practical use, particularly in real-time surveillance. In this article, we present two contributions: a resource-efficient framework for anomaly recognition problems, and two-class and multi-class anomaly recognition on spatially augmented surveillance videos. This research aims to address the problem of computational overhead while maintaining recognition accuracy. The proposed Temporal based Anomaly Recognizer (TAR) framework combines a partial shift strategy with a 2D convolutional architecture-based model, namely MobileNetV2. Extensive experiments were carried out to evaluate the model's performance on the UCF Crime dataset with MobileNetV2 as the baseline architecture; it achieved an accuracy of 88%, a 2.47% improvement over the available state of the art. The proposed framework achieves 52.7% accuracy for multi-class anomaly recognition on the UCF Crime2Local dataset. The proposed model has been tested in real-time camera stream settings and can handle six streams simultaneously without additional resources. © Copyright 2022 Saleem et al.
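The partial shift strategy the abstract refers to can be illustrated with a TSM-style channel shift over 2D CNN feature maps. The sketch below assumes a fold ratio of 1/8 and a (batch x segments, channels, H, W) layout; neither is taken from the paper.

```python
# Sketch of a partial temporal shift over 2D CNN features: a small slice
# of channels is shifted one step backward in time, another forward,
# letting a 2D backbone exchange information across frames at ~zero cost.
import torch

def temporal_shift(x, n_segments, fold_div=8):
    # x: (batch * n_segments, channels, H, W) feature map from a 2D CNN.
    nt, c, h, w = x.shape
    x = x.view(nt // n_segments, n_segments, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift backward in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift forward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # leave the rest untouched
    return out.view(nt, c, h, w)

# Toy example: 2 clips x 6 frames, 32-channel 14x14 feature maps.
feats = torch.randn(2 * 6, 32, 14, 14)
shifted = temporal_shift(feats, n_segments=6)
print(shifted.shape)  # torch.Size([12, 32, 14, 14])
```

In a full model such a shift would sit before selected MobileNetV2 blocks, which is how a 2D backbone can approximate 3D temporal modelling without 3D convolution cost.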
Adaptation of a real-time deep learning approach with an analog fault detection technique for reliability forecasting of capacitor banks used in mobile vehicles
- Rezaei, Mohammad, Fathollahi, Arman, Rezaei, Sajad, Hu, Jiefeng, Gheisarnejad, Meysam, Teimouri, Ali, Rituraj, Rituraj, Mosavi, Amir, Khooban, Mohammad-Hassan
- Authors: Rezaei, Mohammad , Fathollahi, Arman , Rezaei, Sajad , Hu, Jiefeng , Gheisarnejad, Meysam , Teimouri, Ali , Rituraj, Rituraj , Mosavi, Amir , Khooban, Mohammad-Hassan
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 132271-132287
- Full Text:
- Reviewed:
- Description: The DC-link capacitor is an essential electronic element that sources or sinks the respective currents. The reliability of DC-link capacitor banks (CBs) faces many challenges due to their use in electric vehicles, where heavy shocks may damage the internal capacitors without shutting down the CB. The fundamental obstacles to CB development are the failure to consider capacitor degradation in reliability assessment, the impact of unforeseen sudden internal capacitor faults on forecasting CB lifetime, and the consequences of those faults for CB degradation. Sudden faults change the CB capacitance, which in turn changes its reliability. To estimate reliability more accurately, the type of fault must be detected so that the correct post-fault capacitance can be predicted. To address these practical problems, a new CB model and a reliability assessment formula covering all fault types are first presented; a new analog fault-detection method is then introduced; and finally an online-learning long short-term memory (LSTM) network is combined with the fault-detection method, adapting the LSTM to sudden internal CB faults so that it correctly predicts CB degradation. To confirm correct LSTM operation, the degradation of four capacitors was recorded over 2000 hours, and the offline fault-free degradation values predicted by the LSTM were compared with the actual data. The experimental findings validate the applicability of the proposed method. The codes and data are provided. © 2013 IEEE.
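To make the online-learning element concrete, here is an assumed PyTorch sketch of an LSTM updated one gradient step at a time as capacitance readings stream in, with a hypothetical fault rule standing in for the paper's analog fault-detection method; the window length, threshold and synthetic data are all illustrative.

```python
# Illustrative online LSTM degradation forecaster: one gradient step per
# incoming capacitance sample; a hypothetical fault rule (sudden capacitance
# drop) suspends the update so the post-fault value re-seeds the window.
import torch
import torch.nn as nn

class DegradationLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        _, (h, _) = self.lstm(x)
        return self.head(h.squeeze(0)).squeeze(-1)

model = DegradationLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

window = 10
# Synthetic stand-in for streamed capacitance readings (slow degradation).
stream = [1.0 - 0.0005 * t for t in range(200)]

for t in range(window, len(stream) - 1):
    # Hypothetical fault rule: a sudden capacitance jump between samples.
    if abs(stream[t] - stream[t - 1]) > 0.05:
        continue                           # skip the update across the fault
    x = torch.tensor(stream[t - window:t]).view(1, window, 1).float()
    y = torch.tensor([stream[t]]).float()
    loss = loss_fn(model(x), y)            # one online gradient step per sample
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point is that the forecaster keeps learning in service, so a detected fault only has to supply the corrected post-fault capacitance rather than trigger a full retrain.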