COVID-19 datasets : a brief overview
- Authors: Sun, Ke , Li, Wuyang , Saikrishna, Vidya , Chadhar, Mehmood , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: Computer Science and Information Systems Vol. 19, no. 3 (2022), p. 1115-1132
- Full Text:
- Reviewed:
- Description: The outbreak of the COVID-19 pandemic has affected lives and socio-economic development around the world. The impact of the pandemic has motivated researchers from different domains to find effective solutions to diagnose, prevent, and estimate the pandemic and relieve its adverse effects. Numerous COVID-19 datasets have been built from these studies and are available to the public. These datasets can be used for disease diagnosis and case prediction, speeding up the solution of problems caused by the pandemic. To meet researchers' need to understand the various COVID-19 datasets, we examine them and provide an overview. We organise the majority of these datasets into three categories based on their applications, i.e., time-series, knowledge-base, and media-based datasets. Organising COVID-19 datasets into appropriate categories can help researchers keep their focus on methodology rather than on the datasets. In addition, applications and COVID-19 datasets suffer from a series of problems, such as privacy and quality. We discuss these issues as well as the potential of COVID-19 datasets. © 2022, ComSIS Consortium. All rights reserved.
Implementation of evidence-based weekend service recommendations for allied health managers : a cluster randomised controlled trial protocol
- Authors: Sarkies, Mitchell , White, Jennifer , Morris, Meg , Taylor, Nicholas , Martin, Jennifer
- Date: 2018
- Type: Text , Journal article
- Relation: Implementation Science Vol. 13, no. 1 (2018), p.
- Full Text:
- Reviewed:
- Description: Background: It is widely acknowledged that health policy and practice do not always reflect current research evidence. Whether knowledge transfer from research to practice is more successful when specific implementation approaches are used remains unclear. A model to assist engagement of allied health managers and clinicians with research implementation could involve disseminating evidence-based policy recommendations, along with the use of knowledge brokers. We developed such a model to aid decision-making for the provision of weekend allied health services. This protocol outlines the design and methods for a multi-centre cluster randomised controlled trial to evaluate the success of research implementation strategies in promoting evidence-informed weekend allied health resource allocation decisions, particularly among hospital managers. Methods: This multi-centre study will be a three-group parallel cluster randomised controlled trial. Allied health managers from Australian and New Zealand hospitals will be randomised to receive (1) an evidence-based policy recommendation document to guide weekend allied health resource allocation decisions, (2) the same policy recommendation document with support from a knowledge broker to help implement weekend allied health policy recommendations, or (3) usual practice (control group). The primary outcome will be alignment of weekend allied health service provision with policy recommendations, measured as the number of allied health service events (occasions of service) occurring on weekends as a proportion of total allied health service events for the relevant hospital wards at baseline and 12-month follow-up. Discussion: Evidence-based policy recommendation documents communicate key research findings in an accessible format. This comparatively low-cost research implementation strategy could be combined with the use of a knowledge broker who works collaboratively with decision-makers to promote knowledge transfer. The results will assist managers to make evidence-based resource allocation decisions. More generally, the findings will inform the development of an allied health model for translating research into practice. © 2018 The Author(s). **Please note that there are multiple authors for this article; therefore only the names of the first 5, including Federation University Australia affiliate “Jennifer Martin”, are provided in this record**
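The primary outcome described above is a simple proportion. As a minimal sketch of how it might be computed (the event counts below are hypothetical, not trial data):

```python
def weekend_service_proportion(weekend_events, total_events):
    """Proportion of allied health service events occurring on weekends."""
    if total_events <= 0:
        raise ValueError("total_events must be positive")
    return weekend_events / total_events

# Hypothetical ward-level counts at baseline and 12-month follow-up
baseline = weekend_service_proportion(120, 1000)
follow_up = weekend_service_proportion(210, 1050)
```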
Blending big data analytics : review on challenges and a recent study
- Authors: Amalina, Fairuz , Targio Hashem, Ibrahim , Azizul, Zati , Fong, Ang , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 3629-3645
- Full Text:
- Reviewed:
- Description: With the collection of massive amounts of data every day, big data analytics has emerged as an important trend for many organizations. These collected data can contain important information that may be key to solving wide-ranging problems, such as cyber security, marketing, healthcare, and fraud. To analyze their large volumes of data for business analyses and decisions, large companies, such as Facebook and Google, adopt analytics. Such analyses and decisions impact existing and future technology. In this paper, we explore how big data analytics is utilized as a technique for solving problems of complex and unstructured data using technologies such as Hadoop, Spark, and MapReduce. We also discuss the data challenges introduced by big data according to the literature, including its six V's. Moreover, we investigate case studies covering various big data analytics techniques, namely text, voice, video, and network analytics. We conclude that big data analytics can bring positive changes to many fields, such as education, the military, healthcare, politics, business, agriculture, banking, and marketing, in the future. © 2013 IEEE.
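The MapReduce model named in the abstract above can be illustrated without Hadoop. A toy word count in plain Python — the canonical MapReduce example, not code from the review itself:

```python
from collections import defaultdict

def map_phase(documents):
    # Emit (word, 1) pairs, as a Hadoop mapper would
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle_and_reduce(pairs):
    # Group intermediate pairs by key, then sum the counts per word
    grouped = defaultdict(list)
    for word, count in pairs:
        grouped[word].append(count)
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data analytics", "big data challenges"]
counts = shuffle_and_reduce(map_phase(docs))
```

In a real cluster the map and reduce phases run in parallel across nodes; the sequential version here only shows the data flow.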
- Authors: Li, Xiaomin , Wan, Jiafu , Dai, Hong-Ning , Imran, Muhammad , Xia, Min , Celesti, Antonio
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 15, no. 7 (2019), p. 4225-4234
- Full Text: false
- Reviewed:
- Description: At present, the smart manufacturing computing framework faces many challenges, such as the lack of an effective framework for fusing historical computing heritage and of a resource scheduling strategy that guarantees the low-latency requirement. In this paper, we propose a hybrid computing framework and design an intelligent resource scheduling strategy to fulfill the real-time requirement in smart manufacturing with edge computing support. First, a four-layer computing system in a smart manufacturing environment is provided to support artificial intelligence task operation from the network perspective. Then, a two-phase algorithm for scheduling the computing resources in the edge layer is designed based on greedy and threshold strategies with latency constraints. Finally, a prototype platform was developed. We conducted experiments on the prototype to evaluate the performance of the proposed framework in comparison with traditionally used methods. The proposed strategies demonstrate excellent real-time, satisfaction degree (SD), and energy consumption performance for computing services in smart manufacturing with edge computing. © 2005-2012 IEEE.
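The paper's two-phase edge scheduling algorithm is not reproduced in this record; as a rough sketch of the greedy-plus-threshold idea under invented task costs and node names:

```python
def schedule_tasks(tasks, nodes, latency_threshold):
    """Greedily assign each task (largest first) to the least-loaded edge
    node; if the node's projected load would exceed the latency threshold,
    fall back to the cloud. A hypothetical two-phase rule, not the paper's."""
    loads = {n: 0.0 for n in nodes}
    assignment = {}
    for name, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        node = min(loads, key=loads.get)              # greedy phase
        if loads[node] + cost <= latency_threshold:   # threshold phase
            loads[node] += cost
            assignment[name] = node
        else:
            assignment[name] = "cloud"
    return assignment

tasks = {"t1": 3.0, "t2": 2.0, "t3": 5.0}
plan = schedule_tasks(tasks, ["edge1", "edge2"], latency_threshold=5.0)
```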
DC fault identification in multiterminal HVDC systems based on reactor voltage gradient
- Authors: Hassan, Mehedi , Hossain, M. , Shah, Rakibuzzaman
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 115855-115867
- Full Text:
- Reviewed:
- Description: With the increasing number of renewable generation sources, the prospect of long-distance bulk power transmission impels the expansion of point-to-point High Voltage Direct Current (HVDC) grids into emerging Multi-Terminal high-voltage Direct Current (MTDC) grids. DC grid protection with faster selectivity enhances the operational continuity of the MTDC grid. Based on the reactor voltage gradient (RVG), this paper proposes a fast and reliable fault identification technique with precise discrimination of internal and external DC faults. Considering the voltage developed across the modular multilevel converter (MMC) reactor and the DC terminal reactor, the RVG is formulated to characterise internal and external DC faults. With a window of four RVG samples, a fault is detected and discriminated by the proposed main protection scheme within a period of five sampling intervals. Depending on the reactor current increment, a backup protection scheme is also proposed to enhance protection reliability. The performance of the proposed scheme is validated on a four-terminal MTDC grid. The results under meaningful fault events show that the proposed scheme is capable of identifying a DC fault within a millisecond. Moreover, the evaluation of protection sensitivity and robustness reveals that the proposed scheme is highly selective across a wide range of fault resistances and locations, higher sampling frequencies, and irrelevant transient events. Furthermore, the comparison results show that the proposed RVG method improves the discrimination performance of the protection scheme and thereby proves to be a better choice for future DC fault identification.
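The reactor voltage gradient at the heart of the scheme is essentially a discrete derivative of the reactor voltage. A schematic sketch of the four-sample window check — the threshold, sample values, and decision rule here are invented for illustration, not taken from the paper:

```python
def reactor_voltage_gradient(samples, dt):
    # Discrete gradient between consecutive reactor-voltage samples
    return [(samples[i + 1] - samples[i]) / dt for i in range(len(samples) - 1)]

def internal_fault_suspected(rvg_window, threshold):
    """Flag an internal DC fault if all four RVG samples in the window
    exceed a (hypothetical) discrimination threshold."""
    return len(rvg_window) == 4 and all(abs(g) > threshold for g in rvg_window)

voltages = [0.0, 0.5, 1.2, 2.1, 3.3]   # kV, invented sample values
rvg = reactor_voltage_gradient(voltages, dt=0.0001)
flag = internal_fault_suspected(rvg, threshold=1000.0)
```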
Performance analysis of priority-based IEEE 802.15.6 protocol in saturated traffic conditions
- Authors: Ullah, Sana , Tovar, Eduardo , Kim, Ki , Kim, Kyong , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 66198-66209
- Full Text:
- Reviewed:
- Description: Recent advancements in the Internet of Medical Things have enabled the deployment of miniaturized, intelligent, and low-power medical devices in, on, or around the human body for unobtrusive and remote health monitoring. The IEEE 802.15.6 standard facilitates such monitoring by enabling low-power and reliable wireless communication between the medical devices. The IEEE 802.15.6 standard employs a carrier sense multiple access with collision avoidance protocol for resource allocation. It utilizes a priority-based backoff procedure that adjusts the contention window bounds of devices according to user requirements. As the performance of this protocol is considerably affected when the number of devices increases, we propose an accurate analytical model to estimate the saturation throughput, mean energy consumption, and mean delay as functions of the number of devices. We assume an error-prone channel with saturated traffic conditions. We determine the optimal performance bounds for a fixed number of devices in different priority classes with different values of bit error ratio. We conclude that high-priority devices obtain quick and reliable access to the error-prone channel compared to low-priority devices. The proposed model is validated through extensive simulations. The performance bounds obtained in our analysis can be used to understand the tradeoffs between different priority levels and network performance. © 2018 IEEE.
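The priority mechanism described above works by giving higher user priorities smaller contention windows, so they statistically back off for less time. A simplified sketch — the contention window values below are illustrative and should be checked against the tables in the IEEE 802.15.6 standard itself:

```python
import random

# Illustrative (CWmin, CWmax) bounds per user priority; UP7 is the highest.
# The authoritative values are defined in the IEEE 802.15.6 tables.
CW_BOUNDS = {0: (16, 64), 3: (8, 32), 7: (1, 8)}

def draw_backoff(priority, rng):
    """Draw a backoff counter uniformly from [1, CWmin] for the class."""
    cw_min, _ = CW_BOUNDS[priority]
    return rng.randint(1, cw_min)

# Higher priority -> smaller window -> quicker access on average
rng = random.Random(42)
samples_high = [draw_backoff(7, rng) for _ in range(100)]
samples_low = [draw_backoff(0, rng) for _ in range(100)]
```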
A blockchain-based solution for enhancing security and privacy in smart factory
- Authors: Wan, Jiafu , Li, Jiapeng , Imran, Muhammad , Li, Di
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 15, no. 6 (2019), p. 3652-3660
- Full Text: false
- Reviewed:
- Description: Through the Industrial Internet of Things (IIoT), the smart factory has entered a booming period. However, as the number of nodes and the network size become larger, the traditional IIoT architecture can no longer provide effective support for such an enormous system. Therefore, we introduce the Blockchain architecture, an emerging scheme for constructing distributed networks, to reshape the traditional IIoT architecture. First, the major problems of the traditional IIoT architecture are analyzed, and the existing improvements are summarized. Second, we introduce a security and privacy model to help design the Blockchain-based architecture. On this basis, we decompose and reorganize the original IIoT architecture to form a new multicenter, partially decentralized architecture. Then, we introduce some relevant security technologies to improve and optimize the new architecture. After that, we design the data interaction process and the algorithms of the architecture. Finally, we use an automatic production platform to discuss the specific implementation. The experimental results show that the proposed architecture provides better security and privacy protection than the traditional architecture. Thus, the proposed architecture represents a significant improvement over the original, providing a new direction for IIoT development. © 2005-2012 IEEE.
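The core Blockchain property such an architecture relies on — tamper-evident linking of records — can be shown in a few lines. This is a generic hash-chain toy, not the paper's implementation:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Link a record to its predecessor via a SHA-256 hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def chain_is_valid(chain):
    # Recompute every hash and check each block points at the previous one
    for i, block in enumerate(chain):
        payload = json.dumps({"data": block["data"], "prev": block["prev"]},
                             sort_keys=True)
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"device": "robot-1", "reading": 4.2}, prev_hash="0")
chain = [genesis,
         make_block({"device": "robot-2", "reading": 1.7}, genesis["hash"])]
```

Altering any recorded reading invalidates every hash from that block onward, which is what makes the ledger tamper-evident.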
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Authors: Noreen, Neelum , Palaniappan, Sellappan , Qayyum, Abdul , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumor is a deadly disease, and its classification is a challenging task for radiologists because of the heterogeneous nature of tumor cells. Recently, computer-aided diagnosis-based systems have shown promise, as an assistive technology, in diagnosing brain tumors through magnetic resonance imaging (MRI). In recent applications of pre-trained models, features are normally extracted from the bottom layers, which differ between natural images and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumors. Two pre-trained deep learning models, Inception-v3 and DenseNet201, form the basis of this method. With their help, two scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model, concatenated, and passed to a softmax classifier to classify the brain tumor. Second, the pre-trained DenseNet201 model was used to extract features from various DenseNet blocks, which were likewise concatenated and passed to a softmax classifier. Both scenarios were evaluated on a publicly available three-class brain tumor dataset. The proposed method produced testing accuracies of 99.34% and 99.51% with Inception-v3 and DenseNet201, respectively, achieving the highest performance in the detection of brain tumors. As the results indicate, the proposed feature-concatenation method based on pre-trained models outperformed existing state-of-the-art deep learning and machine learning methods for brain tumor classification. © 2013 IEEE.
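The concatenation-then-softmax pattern described above can be sketched with NumPy stand-ins for the pooled Inception-v3 / DenseNet201 features; the feature dimensions, weights, and sample counts below are invented, and a real pipeline would obtain the feature vectors from the pre-trained networks per MRI image:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
# Stand-ins for features pooled from different Inception modules /
# DenseNet blocks (4 hypothetical MRI samples)
feats_a = rng.normal(size=(4, 128))
feats_b = rng.normal(size=(4, 256))
combined = np.concatenate([feats_a, feats_b], axis=1)  # multi-level fusion

W = rng.normal(size=(combined.shape[1], 3))  # 3 tumor classes
probs = softmax(combined @ W)                # class probabilities per sample
```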
Molecular docking interaction of Mycobacterium tuberculosis LipB enzyme with isoniazid, pyrazinamide and a structurally altered drug 2,6-dimethoxyisonicotinohydrazide
- Authors: Namasivayam, Muthuraman
- Date: 2015
- Type: Text , Journal article
- Relation: Computational biology and bioinformatics (Print) Vol. 3, no. 4 (2015), p. 45
- Full Text:
- Reviewed:
- Description: Tuberculosis is an infectious airborne disease caused by a bacterial infection that affects the lungs and other parts of the body. Vaccination against tuberculosis is available but has proved unsuccessful against emerging multi-drug-resistant and extensively drug-resistant bacterial strains. This in turn raises the pressure to speed up research on developing new and more efficient anti-tuberculosis drugs. Lipoate biosynthesis protein B (LipB) is found to play a vital role in the lipoylation process in Mycobacterium tuberculosis, making it a very promising drug target. Existing first-line drugs such as Isoniazid, Pyrazinamide and Rifampicin show only limited binding affinity with this target protein. Therefore, new or modified drugs with a better docking approach that exhibit a closer and stronger binding affinity are essential. This study opens up a novel approach towards anti-tuberculosis agents by identifying drugs that share similar structures with some of the best available first-line drugs and also possess better binding affinity. In this article, a computational method is presented by which pristine as well as certain first-line and structurally modified drugs were docked with the LipB protein target; the structurally modified 2,6-dimethoxyisonicotinohydrazide showed superior target docking.
Providing consistent state to distributed storage system
- Authors: Talluri, Lakshmi , Thirumalaisamy, Ragunathan , Kota, Ramgopal , Sadi, Ram , Kc, Ujjwal , Naha, Ranesh , Mahanti, Aniket
- Date: 2021
- Type: Text , Journal article
- Relation: Computers Vol. 10, no. 2 (2021), p. 23
- Full Text: false
- Reviewed:
- Description: In cloud storage systems, users must be able to shut down the application when not in use and restart it from the last consistent state when required. BlobSeer is a data storage application, specially designed for distributed systems, that was built as an alternative to the popular open-source storage system, the Hadoop Distributed File System (HDFS). In a cloud model, all the components need to stop and restart from a consistent state when the user requires it. One of the limitations of the BlobSeer DFS is the possibility of data loss when the system restarts. As such, it is important to provide a consistent start and stop state to BlobSeer components when used in a cloud environment to prevent any data loss. In this paper, we investigate the possibility of BlobSeer providing a consistent-state distributed data storage system through the integration of checkpoint-restart functionality. To demonstrate the availability of a consistent state, we set up a cluster with multiple machines and deploy BlobSeer entities with checkpointing functionality on various machines. We adopt uncoordinated checkpoint algorithms for their benefits over the alternatives while integrating the functionality into various BlobSeer components such as the Version Manager (VM) and the Data Provider. The experimental results show that, with the integration of checkpointing functionality, a consistent state can be ensured for a distributed storage system even when the system restarts, preventing possible data loss after the system has encountered various errors and failures.
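The uncoordinated checkpointing idea described above means each component saves its own state independently and restores the last saved state on restart. A generic sketch of that pattern — the class and file layout below are invented for illustration, not BlobSeer's actual API:

```python
import json
import os
import tempfile

class CheckpointedComponent:
    """Each component (e.g. a Version Manager) checkpoints its own state
    independently of the others -- the uncoordinated scheme."""

    def __init__(self, name, directory):
        self.path = os.path.join(directory, name + ".ckpt")
        self.state = {}

    def checkpoint(self):
        # Write to a temp file then rename, so a crash mid-write
        # cannot leave a corrupted checkpoint behind
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(self.state, f)
        os.replace(tmp, self.path)

    def restore(self):
        # On restart, reload the last consistent state if one exists
        if os.path.exists(self.path):
            with open(self.path) as f:
                self.state = json.load(f)
        return self.state

with tempfile.TemporaryDirectory() as d:
    vm = CheckpointedComponent("version_manager", d)
    vm.state = {"latest_version": 7}
    vm.checkpoint()
    # Simulate a restart: a fresh instance recovers the saved state
    restarted = CheckpointedComponent("version_manager", d)
    recovered = restarted.restore()
```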
Sequence-to-sequence learning-based conversion of pseudo-code to source code using neural translation approach
- Authors: Acharjee, Uzzal , Arefin, Minhazul , Hossen, Kazi , Uddin, Mohammed , Uddin, Md Ashraf , Islam, Linta
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 26730-26742
- Full Text:
- Reviewed:
- Description: Pseudo-code refers to an informal means of representing algorithms that does not require the exact syntax of a computer programming language. Pseudo-code helps developers and researchers represent their algorithms in human-readable language. Generally, researchers can convert pseudo-code into computer source code using different conversion techniques. The efficiency of such conversion methods is measured based on the converted algorithm's correctness. Researchers have already explored diverse technologies to devise conversion methods with higher accuracy. This paper proposes a novel pseudo-code conversion learning method that includes natural language processing-based text preprocessing and a sequence-to-sequence deep learning-based model trained on the SPoC dataset. We conducted an extensive experiment on our designed algorithm using bilingual evaluation understudy (BLEU) scoring and compared our results with state-of-the-art techniques. Result analysis shows that our approach is more accurate and efficient than other existing conversion methods in terms of several performance metrics. Furthermore, the proposed method outperforms the existing approaches because it utilizes two Long Short-Term Memory (LSTM) networks, which might increase the accuracy. © 2013 IEEE.
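As a rough illustration of the evaluation step mentioned above, the simplest member of the BLEU (bilingual evaluation understudy) family, clipped unigram precision with a brevity penalty, can be sketched as follows. This is illustrative Python; the paper's actual scoring pipeline may differ.

```python
import math
from collections import Counter

def unigram_bleu(candidate, reference):
    """Clipped unigram precision with brevity penalty: the simplest
    BLEU-style score for comparing a generated token sequence (e.g.
    source code) against a reference sequence."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clip each candidate token's count by its count in the reference,
    # so repeating a correct token cannot inflate the score.
    overlap = sum(min(c, ref_counts[tok]) for tok, c in cand_counts.items())
    precision = overlap / len(cand)
    # Penalise candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

ref = "for i in range ( n ) : total += i"
hyp = "for i in range ( n ) : total += i"
print(unigram_bleu(hyp, ref))  # 1.0
```

Full BLEU additionally averages clipped n-gram precisions up to n=4; the clipping and brevity-penalty mechanics are the same.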
Adherence to antiplatelet therapy after coronary intervention among patients with myocardial infarction attending Vietnam National Heart Institute
- Luu, Ngoc, Dinh, Anh, Nguyen, Thi, Nguyen, Huy
- Authors: Luu, Ngoc , Dinh, Anh , Nguyen, Thi , Nguyen, Huy
- Date: 2019
- Type: Text , Journal article
- Relation: BioMed Research International Vol. 2019, no. (2019), p.
- Full Text:
- Reviewed:
- Description: Adherence to antiplatelet therapy is critical to successful treatment of cardiovascular conditions. However, little is known about this issue in resource-constrained settings such as Vietnam. The objective of this study was to examine adherence to antiplatelet therapy among patients receiving acute myocardial infarction interventions and its associated factors. In a cross-sectional survey design, 175 adult patients diagnosed with acute myocardial infarction who were revisiting the Vietnam National Heart Institute were approached for data collection from October 2014 to June 2015. Adherence to antiplatelet therapy was assessed by asking patients whether they took their antiplatelet medication regularly as prescribed (not missing any dose at the specified time) for any type of antiplatelet (aspirin, clopidogrel, ticlopidine) during the last month before they returned for re-examination. The results indicated that adherence to antiplatelet therapy was quite high at 1 month and declined by 6 months, 12 months, and beyond (less than 1 month: 90.29%; 1 to 6 months: 88.0%; 6 to 12 months: 75.43%; after 12 months: only 46.29% of patients). Multivariable logistic regression was used to detect factors associated with adherence to antiplatelet therapy. It showed that patients with an average monthly income of $300 or more (OR=2.92, 95% CI=1.24-6.89), a distance to the hospital of less than 50 km (OR=2.48, 95% CI=1.12-5.52), taking medicine under doctor's instructions (OR=3.65, 95% CI=1.13-11.70), and timely re-examination (OR=3.99, 95% CI=1.08-14.73) were more likely to follow the therapy. In general, the study suggests that to increase the likelihood of adherence to antiplatelet therapy it is important to establish a continuous care system after discharge from hospital. © 2019 Ngoc Minh Luu et al.
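The odds ratios reported in this abstract come from exponentiating logistic-regression coefficients. A minimal sketch of that conversion follows; the coefficient and standard-error values are hypothetical round numbers chosen for illustration, not the study's fitted model.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval:
    OR = exp(beta), CI = exp(beta +/- z * se)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical fitted values for one predictor (e.g. income >= $300/month).
or_, lo, hi = odds_ratio_ci(beta=1.072, se=0.437)
print(f"OR={or_:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")  # OR=2.92, 95% CI=(1.24, 6.88)
```

A CI whose lower bound exceeds 1.0, as here, is what makes a predictor read as significantly associated with adherence.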
Big networks : a survey
- Bedru, Hayat, Yu, Shuo, Xiao, Xinru, Zhang, Da, Xia, Feng
- Authors: Bedru, Hayat , Yu, Shuo , Xiao, Xinru , Zhang, Da , Xia, Feng
- Date: 2020
- Type: Text , Journal article , Review
- Relation: Computer Science Review Vol. 37, no. (2020), p.
- Full Text:
- Reviewed:
- Description: A network is a typical expressive form for representing complex systems in terms of vertices and links, in which the pattern of interactions amongst the components of the network is intricate. A network can be static, not changing over time, or dynamic, evolving through time. The complexity of network analysis changes under the new circumstances of explosively increasing network size. In this paper, we introduce a new network science concept called a big network. A big network is generally large-scale with a complicated and higher-order inner structure. This paper proposes a guideline framework that gives an insight into the major topics in the area of network science from the viewpoint of a big network. We first introduce the structural characteristics of big networks at three levels: micro-level, meso-level, and macro-level. We then discuss some state-of-the-art advanced topics of big network analysis. Big network models and related approaches, including ranking methods, partition approaches, and network embedding algorithms, are systematically introduced. Some typical applications in big networks are then reviewed, such as community detection, link prediction, and recommendation. Moreover, we pinpoint some critical open issues that need to be investigated further. © 2020 Elsevier Inc.
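As a small example of the micro/meso-level structural characteristics such a survey covers, the local clustering coefficient measures how densely a vertex's neighbours are interconnected. The toy graph below is illustrative Python only, not tied to any dataset in the paper.

```python
def local_clustering(adj, v):
    """Local clustering coefficient of vertex v:
    C(v) = 2 * links_among_neighbours / (k * (k - 1)),
    where k is the degree of v. adj maps vertex -> set of neighbours."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0  # undefined for degree < 2; report 0 by convention
    # Count edges among the neighbours of v (each pair once).
    links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
    return 2 * links / (k * (k - 1))

# Toy undirected graph: 0-1, 0-2, 0-3, 1-2 (one triangle, one pendant).
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(round(local_clustering(adj, 0), 3))  # 0.333
```

Averaging this quantity over all vertices gives one of the standard meso-level summaries used to characterise large real-world networks.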
Emergency message dissemination schemes based on congestion avoidance in VANET and vehicular FoG computing
- Ullah, Ata, Yaqoob, Shumayla, Imran, Muhammad, Ning, Huansheng
- Authors: Ullah, Ata , Yaqoob, Shumayla , Imran, Muhammad , Ning, Huansheng
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 1570-1585
- Full Text:
- Reviewed:
- Description: With the rapid growth in connected vehicles, the FoG-assisted vehicular ad hoc network (VANET) is an emerging and novel field of research. For information sharing, a number of messages are exchanged in various applications, including traffic monitoring and area-specific live weather and social-aspect monitoring. This is quite challenging where vehicles' speed, direction, and the density of neighbors on the move are not consistent. In this scenario, congestion avoidance is also quite challenging in order to avoid communication loss during busy hours or in emergency cases. This paper presents emergency message dissemination schemes based on congestion avoidance scenarios in VANET and vehicular FoG computing. In a similar vein, a FoG-assisted VANET architecture is explored that can efficiently manage message congestion scenarios. We present a taxonomy of schemes that address message congestion avoidance. Next, we include a comparison of congestion avoidance schemes to highlight their strengths and weaknesses. We also identify that FoG servers help to reduce accessibility delays and congestion compared with directly approaching the cloud for all requests in linkage with big data repositories. For the dependable applicability of FoG in VANET, we identify a number of open research challenges. © 2013 IEEE.
Network embedding : taxonomies, frameworks and applications
- Hou, Mingliang, Ren, Jing, Zhang, Da, Kong, Xiangjie, Zhang, Dongyu, Xia, Feng
- Authors: Hou, Mingliang , Ren, Jing , Zhang, Da , Kong, Xiangjie , Zhang, Dongyu , Xia, Feng
- Date: 2020
- Type: Text , Journal article , Review
- Relation: Computer Science Review Vol. 38, no. (2020), p.
- Full Text:
- Reviewed:
- Description: Networks are a general language for describing complex systems of interacting entities. In the real world, a network always contains massive numbers of nodes and edges, as well as additional complex information, which leads to high complexity in computing and analysis tasks. Network embedding aims at transforming a network into a low-dimensional vector space, which benefits downstream network analysis tasks. In this survey, we provide a systematic overview of network embedding techniques for addressing challenges appearing in networks. We first introduce concepts and challenges in network embedding. Afterwards, we categorize network embedding methods into three categories: static homogeneous network embedding methods, static heterogeneous network embedding methods, and dynamic network embedding methods. Next, we summarize the datasets and evaluation tasks commonly used in network embedding. Finally, we discuss several future directions in this field. © 2020 Elsevier Inc.
Reconfigurable smart factory for drug packing in healthcare industry 4.0
- Wan, Jiafu, Tang, Shenglong, Li, Di, Imran, Muhammad, Zhang, Chunhua
- Authors: Wan, Jiafu , Tang, Shenglong , Li, Di , Imran, Muhammad , Zhang, Chunhua
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 15, no. 1 (2019), p. 507-516
- Full Text: false
- Reviewed:
- Description: Industry 4.0, which exploits cyber-physical systems and represents the digital transformation of manufacturing, is deeply affecting healthcare as well as other traditional production sectors. To accommodate the increasing demand for agility, flexibility, and low cost in the healthcare sector, a data-driven reconfigurable production mode of the Smart Factory for pharmaceutical manufacturing is proposed in this paper. The architecture of the Smart Factory consists of three primary layers, namely the perception layer, the deployment layer, and the executing layer. A Manufacturing Semantics Ontology-based knowledge base is introduced in the perception layer, which is responsible for plan scheduling of pharmaceutical production. The reconfigurable plans are generated from the production demand for drugs as well as the information statement of low-level machine resources. To support functionality reconfiguration and low-level control, the IEC 61499 standard is also introduced for functionality modeling and machine control. We verify the proposed method with an experiment on demand-based drug packing production, which reflects the feasibility and adequate flexibility of the proposed method. © 2005-2012 IEEE. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Muhammad Imran" is provided in this record**
Extending the technology acceptance model for use of e-learning systems by digital learners
- Hanif, Aamer, Jamal, Faheem, Imran, Muhammad
- Authors: Hanif, Aamer , Jamal, Faheem , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 73395-73404
- Full Text:
- Reviewed:
- Description: Technology-based learning systems enable enhanced student learning in higher-education institutions. This paper evaluates the factors affecting behavioral intention of students toward using e-learning systems in universities to augment classroom learning. Based on the technology acceptance model, this paper proposes six external factors that influence the behavioral intention of students toward use of e-learning. A quantitative approach involving structural equation modeling is adopted, and research data collected from 437 undergraduate students enrolled in three academic programs is used for analysis. Results indicate that subjective norm, perception of external control, system accessibility, enjoyment, and result demonstrability have a significant positive influence on perceived usefulness and on perceived ease of use of the e-learning system. This paper also examines the relevance of some previously used external variables, e.g., self-efficacy, experience, and computer anxiety, for present-world students who have been brought up as digital learners and have higher levels of computer literacy and experience. © 2018 IEEE.
BCT-CS : blockchain technology applications for cyber defense and cybersecurity : a survey and solutions
- Kshetri, Naresh, Bhushal, Chandra, Pandey, Purnendu, Vasudha,
- Authors: Kshetri, Naresh , Bhushal, Chandra , Pandey, Purnendu , Vasudha,
- Date: 2022
- Type: Text , Journal article
- Relation: International Journal of Advanced Computer Science and Applications Vol. 13, no. 11 (2022), p. 364-370
- Full Text:
- Reviewed:
- Description: Blockchain technology has emerged as a ground-breaking technology with possible solutions for applications from securing smart cities to e-voting systems. Although it started as the basis of a digital currency or cryptocurrency, bitcoin, there is no doubt that blockchain is influencing, and will further influence, business and society in the near future. We present a comprehensive survey of how blockchain technology is applied to provide security over the web and to counter ongoing threats as well as increasing cybercrimes and cyber-attacks. During the review, we also investigate how blockchain can affect cyber data and information over the web. Our contributions include the following: (i) summarizing the blockchain architecture and models for cybersecurity; (ii) classifying and discussing recent and relevant works on cyber countermeasures using blockchain; (iii) analyzing the main challenges and obstacles of blockchain technology in response to cyber defense and cybersecurity; and (iv) recommendations for improvement and future research on the integration of blockchain with cyber defense. © 2022, International Journal of Advanced Computer Science and Applications. All Rights Reserved.
CenGCN : centralized convolutional networks with vertex imbalance for scale-free graphs
- Xia, Feng, Wang, Lei, Tang, Tao, Chen, Xin, Kong, Xiangjie, Oatley, Giles, King, Irwin
- Authors: Xia, Feng , Wang, Lei , Tang, Tao , Chen, Xin , Kong, Xiangjie , Oatley, Giles , King, Irwin
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Transactions on Knowledge and Data Engineering Vol. 35, no. 5 (2023), p. 4555-4569
- Full Text:
- Reviewed:
- Description: Graph Convolutional Networks (GCNs) have achieved impressive performance in a wide variety of areas, attracting considerable attention. The core step of GCNs is the information-passing framework that considers all information from neighbors to the central vertex to be equally important. Such equal importance, however, is inadequate for scale-free networks, where hub vertices propagate more dominant information due to vertex imbalance. In this paper, we propose a novel centrality-based framework named CenGCN to address the inequality of information. This framework first quantifies the similarity between hub vertices and their neighbors by label propagation with hub vertices. Based on this similarity and centrality indices, the framework transforms the graph by increasing or decreasing the weights of edges connecting hub vertices and adding self-connections to vertices. In each non-output layer of the GCN, this framework uses a hub attention mechanism to assign new weights to connected non-hub vertices based on their common information with hub vertices. We present two variants CenGCN_D and CenGCN_E, based on degree centrality and eigenvector centrality, respectively. We also conduct comprehensive experiments, including vertex classification, link prediction, vertex clustering, and network visualization. The results demonstrate that the two variants significantly outperform state-of-the-art baselines. © 1989-2012 IEEE.
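As a minimal illustration of one of the centrality indices named above, degree centrality on a toy graph can be computed as follows. This is illustrative Python only; CenGCN's actual hub-selection and reweighting pipeline is more involved.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalised degree centrality C(v) = deg(v) / (n - 1): the simpler
    of the two indices (degree, eigenvector) used to identify hub
    vertices in a scale-free graph. Toy undirected graph only."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

# Star-like graph: vertex 0 is connected to everything, i.e. the hub.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)]
cent = degree_centrality(edges)
hub = max(cent, key=cent.get)
print(hub, cent[hub])  # 0 1.0
```

In a scale-free graph the centrality distribution is heavy-tailed, which is precisely why treating hub and non-hub neighbours as equally important during message passing is inadequate.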
Bio-inspired network security for 5G-enabled IoT applications
- Saleem, Kashif, Alabduljabbar, Ghadah, Alrowais, Nouf, Al-Muhtadi, Jalal, Imran, Muhammad, Rodrigues, Joel
- Authors: Saleem, Kashif , Alabduljabbar, Ghadah , Alrowais, Nouf , Al-Muhtadi, Jalal , Imran, Muhammad , Rodrigues, Joel
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE access Vol. 8, no. (2020), p. 1-1
- Full Text:
- Reviewed:
- Description: Every IPv6-enabled device connected and communicating over the Internet forms part of the Internet of Things (IoT) that is prevalent in society and used in daily life. This IoT platform will quickly grow to be populated with billions or more objects as every electrical appliance, car, and even item of furniture becomes smart and connected. The 5th generation (5G) and beyond networks will further boost these IoT systems. The massive utilization of these systems at rates above gigabits per second generates numerous issues. Owing to the huge complexity of large-scale IoT deployment, data privacy and security are the most prominent challenges, especially for critical applications such as Industry 4.0, e-healthcare, and the military. Threat agents persistently strive to find new vulnerabilities and exploit them. Therefore, it is essential to include promising security measures that support the running systems without harming or collapsing them. Nature-inspired algorithms have the capability to provide autonomous and sustainable defense and healing mechanisms. This paper first surveys 5G network layer security for IoT applications and lists the network layer security vulnerabilities and requirements in wireless sensor networks, IoT, and 5G-enabled IoT. Second, a detailed literature review is conducted of current network layer security methods and bio-inspired techniques for IoT applications exchanging data packets over 5G. Finally, the bio-inspired algorithms are analyzed in the context of providing a secure network layer for IoT applications connected over 5G and beyond networks.