A blockchain-based deep-learning-driven architecture for quality routing in wireless sensor networks
- Authors: Khan, Zahoor , Amjad, Sana , Ahmed, Farwa , Almasoud, Abdullah , Imran, Muhammad , Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 31036-31051
- Full Text:
- Reviewed:
- Description: Over the past few years, great importance has been given to wireless sensor networks (WSNs) as they play a significant role in facilitating the world with daily life services like healthcare, military and social products. However, the heterogeneous nature of WSNs makes them prone to various attacks, which result in low throughput, high network delay and high energy consumption. In WSNs, routing is performed using protocols like low-energy adaptive clustering hierarchy (LEACH) and heterogeneous gateway-based energy-aware multi-hop routing (HMGEAR). In such protocols, some nodes in the network may perform malicious activities. Therefore, four deep learning (DL) techniques and a real-time message content validation (RMCV) scheme based on blockchain are used in the proposed network for the detection of malicious nodes (MNs). Moreover, to analyse the routing data in the WSN, DL models are trained on a state-of-the-art dataset generated from LEACH, known as WSN-DS 2016. The WSN contains three types of nodes: sensor nodes, cluster heads (CHs) and the base station (BS). The CHs, after aggregating the data received from the sensor nodes, send it towards the BS. Furthermore, to overcome the single-point-of-failure issue, a decentralized blockchain is deployed on the CHs and the BS. Additionally, MNs are removed from the network using RMCV and DL techniques. Moreover, legitimate nodes (LNs) are registered in the blockchain network using the proof-of-authority consensus protocol, which outperforms proof-of-work in terms of computational cost. Later, routing is performed between the LNs using different routing protocols and the results are compared with the original LEACH and HMGEAR protocols. The results show that the accuracy of GRU is 97%, LSTM is 96%, CNN is 92% and ANN is 90%. Throughput, delay and the death of the first node are computed for LEACH, LEACH with DL, LEACH with RMCV, HMGEAR, HMGEAR with DL and HMGEAR with RMCV. Moreover, Oyente is used to perform a formal security analysis of the designed smart contract. The analysis shows that the blockchain network is resilient against vulnerabilities. © 2013 IEEE.
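As a rough illustration of the registration-and-revocation flow this abstract describes (an authority registers legitimate nodes, detected malicious nodes are revoked, and message content is fingerprinted for later validation), here is a minimal Python sketch. The class and function names are hypothetical, and the logic is far simpler than the paper's blockchain and DL pipeline:

```python
import hashlib

class NodeRegistry:
    """Toy proof-of-authority-style registry: only a designated authority
    (e.g. the BS) can register nodes, and nodes flagged as malicious by the
    DL/RMCV stage are revoked. Illustrative sketch, not the paper's design."""
    def __init__(self, authority):
        self.authority = authority
        self.registered = set()

    def register(self, signer, node_id):
        # Only the authority may add legitimate nodes (LNs).
        if signer != self.authority:
            raise PermissionError("only the authority can register nodes")
        self.registered.add(node_id)

    def revoke(self, node_id):
        # Malicious nodes (MNs) are removed from the network.
        self.registered.discard(node_id)

    def is_legitimate(self, node_id):
        return node_id in self.registered

def message_hash(payload: bytes) -> str:
    # RMCV-style content fingerprint that could be stored on-chain.
    return hashlib.sha256(payload).hexdigest()
```

A cluster head would call `message_hash` on each aggregated payload and check `is_legitimate` before forwarding data towards the BS.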
An agriprecision decision support system for weed management in pastures
- Authors: Chegini, Hossein , Naha, Ranesh , Mahanti, Aniket , Gong, Mingwei , Passi, Kalpdrum
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 92660-92675
- Full Text:
- Reviewed:
- Description: Pastures are a vital source of dairy products and cattle nutrition and, as such, play a significant role in New Zealand's agricultural economy. However, weeds can be a major problem for pastures, making it a challenge for dairy farmers to monitor and control them. Currently, most weed management tasks are done manually, and farmers lack persistent technology for weed control. This motivated us to design, implement, and evaluate a Decision Support System (DSS) to detect weeds in pastures and provide decisions for their cleanup. Our proposed system uses two primary inputs: weeds and bare patches. We created a synthetic dataset to train a weed detection model and designed a fuzzy inference system to assess a pasture. We also used a neuro-fuzzy system in our DSS to evaluate our fuzzy model and tune its parameters for better functioning and accuracy. Our work aims to assist dairy farmers in better weed monitoring, as well as to provide 2D maps of weed density and yield score, which can be of significant value when no digital and meaningful images of pastures exist. The system can also support farmers in scheduling, recommending prohibitive tasks, and storing historical data for pasture analysis in collaboration with stakeholders. © 2013 IEEE.
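The fuzzy inference step this abstract mentions can be illustrated with a minimal two-input sketch. The membership breakpoints, rule set, and Sugeno-style defuzzification below are invented for illustration; they are not the paper's neuro-fuzzy-tuned parameters:

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pasture_score(weed_density, bare_patch_ratio):
    """Toy fuzzy assessment: two inputs in [0, 1] -> yield score in [0, 1].
    Rule 1: low weeds AND low bare patches -> good pasture (score 1).
    Rule 2: high weeds OR high bare patches -> poor pasture (score 0)."""
    low_weed = tri(weed_density, -0.5, 0.0, 0.5)
    high_weed = tri(weed_density, 0.5, 1.0, 1.5)
    low_bare = tri(bare_patch_ratio, -0.5, 0.0, 0.5)
    high_bare = tri(bare_patch_ratio, 0.5, 1.0, 1.5)
    good = min(low_weed, low_bare)
    poor = max(high_weed, high_bare)
    # Weighted-average defuzzification over the two rule outputs.
    total = good + poor
    return 1.0 if total == 0 else (good * 1.0 + poor * 0.0) / total
```

A neuro-fuzzy system, as in the paper, would tune the membership corners from data rather than fixing them by hand.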
Critical data detection for dynamically adjustable product quality in IIoT-enabled manufacturing
- Authors: Sen, Sachin , Karmakar, Gour , Pang, Shaoning
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 49464-49480
- Full Text:
- Reviewed:
- Description: IIoT technologies, owing to the widespread use of sensors, generate massive data that are key to providing innovative and efficient industrial management, operation and product quality control processes. The significance of these data has prompted relevant research communities and application developers to explore how to harness their value in secure manufacturing. Critical data analysis, the identification of critical factors to improve the manufacturing process, and critical data associated with product quality have been investigated in the current literature. However, current works on product quality control are mainly based on static data analysis: data may change, but there is no way to adjust them dynamically. Thus, they are not applicable to product quality control where adjustment is instantly required, yet many manufacturing systems exist, such as beverage and food production, where ingredients must be adjusted instantaneously to maintain product quality. To address this research gap, we introduce a method that identifies critical data based on their ranking by exploiting three criticality assessment criteria that capture the instantaneous product quality change during manufacturing: (1) correlation, (2) percentage quality change and (3) sensitivity. The product quality is estimated using polynomial regression (POLY), SVM and DNN. The proposed method is validated using wine manufacturing data. It accurately identifies critical data, with SVM producing the lowest average production quality prediction error (10.40%) compared with POLY (11%) and DNN (14.40%). © 2013 IEEE.
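Of the three criticality criteria listed in this abstract, the correlation criterion can be computed directly from logged data. A minimal sketch follows; the variable names are illustrative, and the paper's percentage-quality-change and sensitivity criteria additionally require the trained regression model, so they are omitted here:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_by_correlation(data, quality):
    """Rank candidate process variables by |Pearson correlation| with the
    measured product quality series (first of the three criteria).
    `data` maps variable name -> list of readings."""
    return sorted(data,
                  key=lambda name: abs(pearson(data[name], quality)),
                  reverse=True)
```

In the paper's full method, this ranking would be fused with the model-based percentage-quality-change and sensitivity scores before selecting data for dynamic adjustment.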
Device agent assisted blockchain leveraged framework for Internet of Things
- Authors: Nasrullah, Tarique , Islam, Md Manowarul , Uddin, Md Ashraf , Khan, Md Anisauzzaman , Layek, Md Abu , Stranieri, Andrew , Huh, Eui-Nam
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 1254-1268
- Full Text:
- Reviewed:
- Description: Blockchain (BC) is a burgeoning technology that has emerged as a promising solution to peer-to-peer communication security and privacy challenges. As a revolutionary technology, blockchain has drawn the attention of academics and researchers, and cryptocurrencies have already utilized it effectively. Many researchers have sought to apply the technique in other sectors, including the Internet of Things (IoT). To store and manage IoT data, we present in this paper a lightweight BC-based architecture with a consensus protocol based on a modified Raft algorithm. We designed a Device Agent that executes a novel registration procedure to connect IoT devices to the blockchain. We implemented the framework on Docker using the Go programming language, simulated it in a Linux environment hosted in the cloud, and conducted a detailed performance analysis using a variety of measures. The results demonstrate that our suggested solution is suitable for managing IoT data with increased security and privacy. In terms of throughput and block generation time, the results indicate that our solution might be 40% to 45% faster than the existing blockchain. © 2013 IEEE.
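The Device-Agent-plus-consensus idea in this abstract can be caricatured in a few lines: a gatekeeper that only forwards readings from registered devices, and a ledger that commits an entry once a majority of peers acknowledge it (the quorum rule Raft uses). All names and the acknowledgement mechanics below are illustrative assumptions, not the paper's Go implementation:

```python
class MajorityLedger:
    """Raft-flavoured toy ledger: an entry commits once a strict majority
    of peers acknowledges it."""
    def __init__(self, n_peers):
        self.n_peers = n_peers
        self.log = []

    def propose(self, entry, acks=None):
        # By default assume all peers ack; pass `acks` to model failures.
        acks = self.n_peers if acks is None else acks
        if acks * 2 > self.n_peers:  # majority quorum, as in Raft
            self.log.append(entry)
            return True
        return False

class DeviceAgent:
    """Sketch of a Device Agent: IoT devices must register before their
    readings are accepted onto the ledger."""
    def __init__(self, ledger):
        self.ledger = ledger
        self.known = set()

    def register(self, device_id):
        self.known.add(device_id)

    def submit(self, device_id, reading):
        if device_id not in self.known:
            raise ValueError("unregistered device")
        return self.ledger.propose({"device": device_id, "reading": reading})
```

Real Raft additionally elects a leader and replicates a persistent log; this sketch keeps only the quorum-commit rule.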
Domestic load management with coordinated photovoltaics, battery storage and electric vehicle operation
- Authors: Das, Narottam , Haque, Akramul , Zaman, Hasneen , Morsalin, Sayidul , Islam, Syed
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 12075-12087
- Full Text:
- Reviewed:
- Description: Coordinated power demand management at the residential or domestic level allows energy participants to efficiently manage load profiles, increase energy efficiency and reduce operational cost. In this paper, a hierarchical coordination framework to optimally manage domestic load using photovoltaic (PV) units, battery energy storage systems (BESs) and electric vehicles (EVs) is presented. The bidirectional power flow of an EV with vehicle-to-grid (V2G) operation manages the real-time domestic load profile, and its controller takes appropriate coordinated action when necessary. The proposed system has been applied to a real power distribution network and tested with real load patterns and load dynamics, including various test scenarios and prosumers' preferences, e.g., with or without EVs, the number of EV owners, the number of households, and prosumers' daily activities. The result is a combined hybrid system for hierarchical coordination consisting of PV units, BES systems and EVs. System performance was analyzed with different commercial EV types under charging/discharging constraints, and the results show that the domestic load demand on the distribution grid during the peak period is reduced significantly. Finally, the proposed system's performance was compared with prediction-based test techniques and the financial benefits were estimated. © 2013 IEEE.
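The hierarchical order implied by this abstract (serve load from PV first, then the battery, then V2G from the EV, with the grid as last resort) can be sketched as a simple merit-order dispatch. The function name, units, and the fixed priority order are illustrative assumptions, not the paper's optimisation:

```python
def dispatch(load_kw, pv_kw, bes_avail_kw, ev_avail_kw, grid_cap_kw):
    """Toy hierarchical dispatch for one time step: PV first, then BES,
    then V2G discharge, and only the remainder is drawn from the grid."""
    net = max(load_kw - pv_kw, 0.0)          # residual after PV
    bes = min(net, bes_avail_kw)             # battery covers what it can
    net -= bes
    ev = min(net, ev_avail_kw)               # V2G covers the next slice
    net -= ev
    return {"bes_kw": bes, "v2g_kw": ev, "grid_kw": net,
            "within_cap": net <= grid_cap_kw}
```

A real controller would also track state of charge, EV availability windows, and tariff signals over a horizon rather than dispatching one step greedily.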
Malicious node detection using machine learning and distributed data storage using blockchain in WSNs
- Authors: Nouman, Muhammad , Qasim, Umar , Nasir, Hina , Almasoud, Abdullah , Imran, Muhammad , Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 6106-6121
- Full Text:
- Reviewed:
- Description: In the proposed work, blockchain is implemented on the Base Stations (BSs) and Cluster Heads (CHs) to register the nodes using their credentials and to tackle various security issues. Moreover, a Machine Learning (ML) classifier, termed Histogram Gradient Boost (HGB), is employed on the BSs to classify the nodes as malicious or legitimate. If a node is found to be malicious, its registration is revoked from the network; if a node is found to be legitimate, its data is stored in the InterPlanetary File System (IPFS). IPFS stores the data in the form of chunks and generates a hash for the data, which is then stored in the blockchain. In addition, Verifiable Byzantine Fault Tolerance (VBFT) is used instead of Proof of Work (PoW) to perform consensus and validate transactions. Extensive simulations are performed using the Wireless Sensor Network (WSN) dataset, referred to as WSN-DS. The proposed model is evaluated on both the original dataset and a balanced dataset. Furthermore, HGB is compared with other existing classifiers, Adaptive Boost (AdaBoost), Gradient Boost (GB), Linear Discriminant Analysis (LDA), Extreme Gradient Boost (XGB) and Ridge, using different performance metrics like accuracy, precision, recall, micro-F1 score and macro-F1 score. The performance evaluation shows that HGB outperforms GB, AdaBoost, LDA, XGB and Ridge by 2-4%, 8-10%, 12-14%, 3-5% and 14-16%, respectively. Moreover, the results with the balanced dataset are better than those with the original dataset, and VBFT performs 20-30% better than PoW. Overall, the proposed model performs efficiently in terms of malicious node detection and secure data storage. © 2013 IEEE.
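The micro- and macro-F1 metrics named in this abstract are easy to compute from scratch; a small dependency-free sketch follows (the HGB classifier itself is not reproduced here):

```python
def f1_scores(y_true, y_pred, labels):
    """Per-class F1 plus micro- and macro-averaged F1.
    Micro-F1 pools TP/FP/FN over all classes; macro-F1 averages the
    per-class F1 values with equal weight per class."""
    per_class, tp_all, fp_all, fn_all = {}, 0, 0, 0
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
        per_class[c] = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)
    macro = sum(per_class.values()) / len(labels)
    return per_class, micro, macro
```

On an imbalanced dataset such as WSN-DS, macro-F1 penalises poor performance on rare (attack) classes much more than micro-F1 does, which is why the abstract reports both.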
Mechanistic modelling of bubble growth in sodium pool boiling
- Authors: Iyer, Siddharth , Kumar, Apurv , Coventry, Joe , Lipiński, Wojciech
- Date: 2023
- Type: Text , Journal article
- Relation: Applied Mathematical Modelling Vol. 117, no. (2023), p. 336-358
- Full Text: false
- Reviewed:
- Description: This work presents a mechanistic model to simulate the growth of a sodium bubble from nucleation to departure in sodium pool boiling. A previously developed and validated heat transfer sub-model is coupled to a force balance sub-model to predict the growth rate and departure radius of a sodium bubble. The model accounts for the change in the contact angle of a bubble as it grows, and the shrinkage of the bubble base prior to departure. The developed model is used to quantify and analyse the heat transfer from different regions, i.e. the microlayer, the macrolayer, the thermal boundary layer and the bulk liquid surrounding the bubble. In addition, bubble growth rate and departure radius are calculated for different values of wall superheat, rate of change of contact angle and bulk liquid temperature. It is found that the departure radius of a sodium bubble is on the order of a few centimetres and the wall superheat has a significant influence on the shape of a sodium bubble at departure. © 2022 Elsevier Inc.
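The abstract does not reproduce the model's equations. Bubble-departure force-balance models of this kind typically declare departure when the detaching forces can no longer be balanced by the attaching ones, along the lines of the textbook-style form below (an illustrative sketch, not the paper's exact formulation):

```latex
% Generic bubble-departure force balance at the moment of departure:
F_{\text{buoyancy}} + F_{\text{contact pressure}}
  \;=\;
F_{\text{surface tension}} + F_{\text{drag}} + F_{\text{growth}}
```

The paper's contribution lies in coupling such a force balance to a validated heat-transfer sub-model and in letting the contact angle and bubble-base radius evolve during growth.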
Multi-slope path loss model-based performance assessment of heterogeneous cellular network in 5G
- Authors: Dahri, Safia , Shaikh, Muhammad , Alhussein, Musaed , Soomro, Muhammad , Aurangzeb, Khursheed , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 30473-30485
- Full Text:
- Reviewed:
- Description: The coverage and capacity required for fifth generation (5G) networks and beyond can be achieved using heterogeneous wireless networks. This work deploys a finite number of user equipment (UEs) while taking into account the three-dimensional (3D) distance between UEs and base stations (BSs), multi-slope line-of-sight (LOS) and non-line-of-sight (NLOS) propagation, idle mode capability (IMC), and third generation partnership project (3GPP) path loss (PL) models. We examine the relationship between the height and gain of the macro (MBS) and pico (PBS) base station antennas and the ratio of the density of MBSs to PBSs, denoted β. Recent research demonstrates that the antenna height of PBSs should be kept to a minimum to obtain the best coverage and capacity in a 5G wireless network, whereas area spectral efficiency (ASE) crashes once β crosses a specific value. We aim to address these issues and increase the performance of the 5G network by installing directional antennas at MBSs and omnidirectional antennas at PBSs while retaining traditional antenna heights. We used the multi-tier 3GPP PL model to account for real-world scenarios and calculated SINR using average power. This study demonstrates that, when the multi-slope 3GPP PL model is used and directional antennas are installed at MBSs, coverage can be improved by 10% and ASE by 2.5 times over the previous analysis. Similarly, the issue of an ASE crash beyond a base station density of 1000 is resolved in this study. © 2013 IEEE.
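The multi-slope path-loss idea in this abstract can be illustrated with a dual-slope model: a free-space exponent up to a breakpoint distance, and a steeper slope beyond it. The breakpoint, exponents, and carrier frequency below are illustrative values, not the 3GPP model parameters used in the paper:

```python
import math

def multi_slope_pl_db(d_m, breakpoints=(1.0, 100.0), exps=(2.0, 4.0),
                      f_hz=2.4e9):
    """Dual-slope path loss in dB at distance d_m (metres).
    `breakpoints` = (reference distance d0, breakpoint distance d_bp);
    `exps` = (exponent inside the breakpoint, exponent beyond it)."""
    c = 3e8
    d0, d_bp = breakpoints
    # Reference loss at d0 from the Friis free-space formula.
    pl = 20 * math.log10(4 * math.pi * d0 * f_hz / c)
    if d_m <= d_bp:
        pl += 10 * exps[0] * math.log10(max(d_m, d0) / d0)
    else:
        pl += 10 * exps[0] * math.log10(d_bp / d0)
        pl += 10 * exps[1] * math.log10(d_m / d_bp)
    return pl
```

With these illustrative exponents, loss grows by 20 dB per decade of distance inside the breakpoint and 40 dB per decade beyond it, which is the slope change that drives the coverage/ASE behaviour the abstract analyses.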
Sequence-to-sequence learning-based conversion of pseudo-code to source code using neural translation approach
- Authors: Acharjee, Uzzal , Arefin, Minhazul , Hossen, Kazi , Uddin, Mohammed , Uddin, Md Ashraf , Islam, Linta
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 26730-26742
- Full Text:
- Reviewed:
- Description: Pseudo-code refers to an informal means of representing algorithms that does not require the exact syntax of a computer programming language. It helps developers and researchers represent their algorithms in human-readable language. Generally, researchers convert pseudo-code into computer source code using different conversion techniques, whose efficiency is measured by the converted algorithm's correctness. Researchers have already explored diverse technologies to devise conversion methods with higher accuracy. This paper proposes a novel pseudo-code conversion learning method that includes natural language processing-based text preprocessing and a sequence-to-sequence deep learning model trained on the SPoC dataset. We conducted an extensive experiment on our designed algorithm using bilingual evaluation understudy (BLEU) scoring and compared our results with state-of-the-art techniques. Result analysis shows that our approach is more accurate and efficient than other existing conversion methods in terms of several performance metrics. Furthermore, the proposed method outperforms existing approaches because it utilizes two Long Short-Term Memory (LSTM) networks, which might increase the accuracy. © 2013 IEEE.
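The NLP preprocessing stage this abstract mentions can be sketched concretely: tokenise each pseudo-code line, build a vocabulary with the usual seq2seq special tokens, and map lines to id sequences wrapped in start/end markers. The token pattern and special-token set are common conventions assumed here, not details taken from the paper:

```python
import re

def tokenize(pseudo_line):
    """Lowercase a pseudo-code line and split it into identifier,
    number, and single-symbol tokens."""
    return re.findall(r"[a-z_]\w*|\d+|[^\s\w]", pseudo_line.lower())

def build_vocab(lines, specials=("<pad>", "<sos>", "<eos>", "<unk>")):
    """Assign an integer id to each special token, then to each new token
    in corpus order."""
    vocab = {tok: i for i, tok in enumerate(specials)}
    for line in lines:
        for tok in tokenize(line):
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(line, vocab):
    """Map a line to ids, wrapped in <sos>/<eos> as seq2seq encoder input;
    unseen tokens fall back to <unk>."""
    unk = vocab["<unk>"]
    return ([vocab["<sos>"]]
            + [vocab.get(t, unk) for t in tokenize(line)]
            + [vocab["<eos>"]])
```

In the full pipeline, these id sequences would feed the encoder LSTM, with the decoder LSTM emitting target source-code tokens until it produces `<eos>`.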
- Authors: Li, Zilin , Hu, Jiefeng , Chan, Ka Wing
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Industry Applications Vol. 57, no. 6 (2021), p. 6362-6374
- Full Text: false
- Reviewed:
- Description: Unlike a synchronous generator, which can withstand a large overcurrent, an inverter-based distributed generation (DG) unit has low thermal inertia, and its inverter is likely to be damaged by overcurrents during grid faults. In this article, a new strategy, namely positive- and negative-sequence limiting with stability-enhanced P-f droop control (PNSL-SEPFC), is proposed to limit the output currents and active power of droop-controlled inverters in islanded microgrids. This strategy is easy to implement in the inverter controller and does not require any fault detection. Inverter stability is analyzed mathematically, which gives guidelines for designing the parameters of the PNSL-SEPFC strategy. PSCAD/EMTDC simulation based on a four-DG microgrid shows that the proposed PNSL-SEPFC can limit inverter output currents and powers with better performance under both symmetrical and asymmetrical faults. Furthermore, hardware experiments demonstrate that the proposed PNSL-SEPFC can ensure that the inverters ride through grid faults safely and stably. (A video of experimental waveforms is attached.) © 1972-2012 IEEE.
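The core idea of current limiting in such controllers can be illustrated with a toy reference-saturation sketch. This is a generic stand-in, not the PNSL-SEPFC algorithm itself, which limits positive- and negative-sequence components separately:

```python
import math

def limit_current_reference(i_d, i_q, i_max):
    """Clamp the magnitude of a d-q current reference to i_max, preserving its angle.

    Illustrative only: during a fault the unclamped reference can greatly exceed
    the inverter's thermal rating, so the vector is scaled back onto the limit circle.
    """
    mag = math.hypot(i_d, i_q)
    if mag <= i_max or mag == 0.0:
        return i_d, i_q          # within rating: pass through unchanged
    scale = i_max / mag
    return i_d * scale, i_q * scale
```

For example, a fault-driven reference of (3.0, 4.0) p.u. with a 2.5 p.u. limit is scaled to (1.5, 2.0), keeping the same phase angle.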
DC fault identification in multiterminal HVDC systems based on reactor voltage gradient
- Hassan, Mehedi, Hossain, M., Shah, Rakibuzzaman
- Authors: Hassan, Mehedi , Hossain, M. , Shah, Rakibuzzaman
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 115855-115867
- Full Text:
- Reviewed:
- Description: With the increasing amount of renewable generation, the prospect of long-distance bulk power transmission impels the expansion of point-to-point high-voltage direct current (HVDC) grids into emerging multi-terminal high-voltage direct current (MTDC) grids. DC grid protection with faster selectivity enhances the operational continuity of the MTDC grid. Based on the reactor voltage gradient (RVG), this paper proposes a fast and reliable fault identification technique with precise discrimination of internal and external DC faults. Considering the voltage developed across the modular multilevel converter (MMC) reactor and the DC terminal reactor, the RVG is formulated to characterise internal and external DC faults. With a window of four RVG samples, a fault is detected and discriminated by the proposed main protection scheme within a period of five sampling intervals. Depending on the reactor current increment, a backup protection scheme is also proposed to enhance protection reliability. The performance of the proposed scheme is validated in a four-terminal MTDC grid. The results under meaningful fault events show that the proposed scheme is capable of identifying a DC fault within a millisecond. Moreover, the evaluation of protection sensitivity and robustness reveals that the proposed scheme is highly selective across a wide range of fault resistances and locations, higher sampling frequencies, and irrelevant transient events. Furthermore, the comparison results show that the proposed RVG method improves the discrimination performance of the protection scheme and thereby proves to be a better choice for future DC fault identification.
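The four-sample RVG window described in this abstract can be sketched as follows. This toy version uses a single threshold on a backward-difference gradient; the paper's actual scheme discriminates internal from external faults using both MMC and terminal reactor voltages:

```python
def reactor_voltage_gradient(samples, dt):
    """Backward-difference gradient of a sequence of reactor voltage samples."""
    return [(samples[k] - samples[k - 1]) / dt for k in range(1, len(samples))]

def detect_fault(samples, dt, threshold, window=4):
    """Flag a fault once `window` consecutive RVG samples all exceed `threshold`.

    Returns the index in `samples` at which detection fires, or None if the
    record never shows a sustained gradient (normal operation or a transient).
    """
    run = 0
    for k, g in enumerate(reactor_voltage_gradient(samples, dt)):
        run = run + 1 if abs(g) > threshold else 0
        if run >= window:
            return k + 1
    return None
```

A flat pre-fault record returns None, while a sustained voltage ramp across the reactor triggers detection four gradient samples after the ramp begins.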
Providing consistent state to distributed storage system
- Talluri, Laskhmi, Thirumalaisamy, Ragunathan, Kota, Ramgopal, Sadi, Ram, Kc, Ujjwal, Naha, Ranesh, Mahanti, Aniket
- Authors: Talluri, Laskhmi , Thirumalaisamy, Ragunathan , Kota, Ramgopal , Sadi, Ram , Kc, Ujjwal , Naha, Ranesh , Mahanti, Aniket
- Date: 2021
- Type: Text , Journal article
- Relation: Computers Vol. 10, no. 2 (2021), p. 23
- Full Text: false
- Reviewed:
- Description: In cloud storage systems, users must be able to shut down the application when not in use and restart it from the last consistent state when required. BlobSeer is a data storage application, specially designed for distributed systems, that was built as an alternative to the popular open-source storage system, the Hadoop Distributed File System (HDFS). In a cloud model, all the components need to stop and restart from a consistent state when the user requires it. One of the limitations of the BlobSeer DFS is the possibility of data loss when the system restarts. As such, it is important to provide a consistent start and stop state to BlobSeer components when used in a cloud environment to prevent any data loss. In this paper, we investigate the possibility of BlobSeer providing a consistent-state distributed data storage system through the integration of checkpointing-restart functionality. To demonstrate the availability of a consistent state, we set up a cluster with multiple machines and deploy BlobSeer entities with checkpointing functionality on various machines. We adopt uncoordinated checkpoint algorithms for their benefits over the alternatives while integrating the functionality into various BlobSeer components, such as the Version Manager (VM) and the Data Provider. The experimental results show that with the integration of the checkpointing functionality, a consistent state can be ensured for a distributed storage system even when the system restarts, preventing possible data loss after the system has encountered various errors and failures.
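The uncoordinated checkpointing idea above can be sketched with a toy component that saves and restores its own state with no coordination messages. The component name and JSON state format are illustrative, not BlobSeer's actual Version Manager interface:

```python
import json
import os
import tempfile

class CheckpointedComponent:
    """Toy stand-in for a component such as a Version Manager that checkpoints
    independently: each component saves on its own schedule, so no global
    coordination protocol is required (the uncoordinated-checkpoint approach)."""

    def __init__(self, path):
        self.path = path
        self.state = {"version": 0}

    def update(self):
        self.state["version"] += 1

    def checkpoint(self):
        # Write to a temp file, then rename atomically, so a crash mid-write
        # can never leave a corrupt (inconsistent) checkpoint on disk.
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(self.state, f)
        os.replace(tmp, self.path)

    def restart(self):
        # Recover the last checkpointed (consistent) state after a shutdown.
        with open(self.path) as f:
            self.state = json.load(f)
```

Updates made after the last checkpoint are lost on restart, which is exactly the trade-off the consistent-state guarantee is about: the system resumes from the last *consistent* state rather than a possibly corrupt newer one.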
6G wireless systems : a vision, architectural elements, and future directions
- Khan, Latif, Yaqoob, Ibrar, Imran, Muhammad, Han, Zhu, Hong, Choong
- Authors: Khan, Latif , Yaqoob, Ibrar , Imran, Muhammad , Han, Zhu , Hong, Choong
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 147029-147044
- Full Text:
- Reviewed:
- Description: Internet of everything (IoE)-based smart services are expected to gain immense popularity in the future, which raises the need for next-generation wireless networks. Although fifth-generation (5G) networks can support various IoE services, they might not be able to completely fulfill the requirements of novel applications. Sixth-generation (6G) wireless systems are envisioned to overcome 5G network limitations. In this article, we explore recent advances made toward enabling 6G systems. We devise a taxonomy based on key enabling technologies, use cases, emerging machine learning schemes, communication technologies, networking technologies, and computing technologies. Furthermore, we identify and discuss open research challenges, such as artificial-intelligence-based adaptive transceivers, intelligent wireless energy harvesting, decentralized and secure business models, intelligent cell-less architecture, and distributed security models. We propose practical guidelines including deep Q-learning and federated learning-based transceivers, blockchain-based secure business models, homomorphic encryption, and distributed-ledger-based authentication schemes to cope with these challenges. Finally, we outline and recommend several future directions. © 2013 IEEE.
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Noreen, Neelum, Palaniappan, Sellappan, Qayyum, Abdul, Ahmad, Iftikhar, Imran, Muhammad, Shoaib, Muhammad
- Authors: Noreen, Neelum , Palaniappan, Sellappan , Qayyum, Abdul , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumor is a deadly disease, and its classification is a challenging task for radiologists because of the heterogeneous nature of the tumor cells. Recently, computer-aided diagnosis-based systems have shown promise, as an assistive technology, for diagnosing brain tumors through magnetic resonance imaging (MRI). In recent applications of pre-trained models, features are normally extracted from bottom layers, which differ between natural images and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumor. Two pre-trained deep learning models, Inception-v3 and DenseNet201, make this model valid. With the help of these two models, two different scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model and concatenated for brain tumor classification. Then, these features were passed to a softmax classifier to classify the brain tumor. Second, the pre-trained DenseNet201 was used to extract features from various DenseNet blocks. Then, these features were concatenated and passed to a softmax classifier to classify the brain tumor. Both scenarios were evaluated with the help of a publicly available three-class brain tumor dataset. The proposed method produced 99.34% and 99.51% testing accuracies with Inception-v3 and DenseNet201, respectively, on testing samples and achieved the highest performance in the detection of brain tumor. As the results indicate, the proposed method based on feature concatenation using pre-trained models outperformed existing state-of-the-art deep learning and machine learning based methods for brain tumor classification. © 2013 IEEE.
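The concatenate-then-softmax pipeline described in this abstract reduces to a simple pattern: flatten features from several network stages into one vector, score each class linearly, and normalize with softmax. The tiny feature vectors and weights below are placeholders for the real Inception-v3/DenseNet201 features:

```python
import math

def concat_features(feature_maps):
    """Concatenate feature vectors pooled from several network stages."""
    out = []
    for fm in feature_maps:
        out.extend(fm)
    return out

def softmax(logits):
    """Numerically stable softmax over per-class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(feature_maps, weights, biases):
    """Score each class as w.x + b over the concatenated features, then softmax."""
    x = concat_features(feature_maps)
    logits = [sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
              for w, b in zip(weights, biases)]
    return softmax(logits)
```

The output is a probability distribution over the tumor classes; the predicted class is its argmax.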
Attacks on self-driving cars and their countermeasures : a survey
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Jolfaei, Alireza, Das, Rajkumar
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Jolfaei, Alireza , Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITS or connected vehicles. Both forms use data communications between Vehicle-To-Vehicle (V2V), Vehicle-To-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the prospect of self-driving cars being compromised is perceived as a serious threat. Therefore, analysis of the threats and attacks on self-driving cars and ITSs, and of the corresponding countermeasures to reduce those threats and attacks, is needed. For this reason, some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to the best of our knowledge, they have not covered real attacks that have already happened to self-driving cars. To bridge this research gap, in this paper we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks and their impacts on those cars, along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by the manufacturers and governments. This survey includes recent works on how a self-driving car can ensure resilient operation even under an ongoing cyber-attack. We also provide further research directions to improve the security issues associated with self-driving cars. © 2013 IEEE.
Bio-inspired network security for 5G-enabled IoT applications
- Saleem, Kashif, Alabduljabbar, Ghadah, Alrowais, Nouf, Al-Muhtadi, Jalal, Imran, Muhammad, Rodrigues, Joel
- Authors: Saleem, Kashif , Alabduljabbar, Ghadah , Alrowais, Nouf , Al-Muhtadi, Jalal , Imran, Muhammad , Rodrigues, Joel
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 1-1
- Full Text:
- Reviewed:
- Description: Every IPv6-enabled device connected and communicating over the Internet forms the Internet of things (IoT) that is prevalent in society and is used in daily life. This IoT platform will quickly grow to be populated with billions or more objects by making every electrical appliance, car, and even items of furniture smart and connected. The 5th generation (5G) and beyond networks will further boost these IoT systems. The massive utilization of these systems over gigabits per second generates numerous issues. Owing to the huge complexity in large-scale deployment of IoT, data privacy and security are the most prominent challenges, especially for critical applications such as Industry 4.0, e-healthcare, and military. Threat agents persistently strive to find new vulnerabilities and exploit them. Therefore, including promising security measures to support the running systems, not to harm or collapse them, is essential. Nature-inspired algorithms have the capability to provide autonomous and sustainable defense and healing mechanisms. This paper first surveys the 5G network layer security for IoT applications and lists the network layer security vulnerabilities and requirements in wireless sensor networks, IoT, and 5G-enabled IoT. Second, a detailed literature review is conducted with the current network layer security methods and the bio-inspired techniques for IoT applications exchanging data packets over 5G. Finally, the bio-inspired algorithms are analyzed in the context of providing a secure network layer for IoT applications connected over 5G and beyond networks.
Blending big data analytics : review on challenges and a recent study
- Amalina, Fairuz, Targio Hashem, Ibrahim, Azizul, Zati, Fong, Ang, Imran, Muhammad
- Authors: Amalina, Fairuz , Targio Hashem, Ibrahim , Azizul, Zati , Fong, Ang , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 3629-3645
- Full Text:
- Reviewed:
- Description: With the collection of massive amounts of data every day, big data analytics has emerged as an important trend for many organizations. These collected data can contain important information that may be key to solving wide-ranging problems, such as cyber security, marketing, healthcare, and fraud. To analyze their large volumes of data for business analyses and decisions, large companies, such as Facebook and Google, adopt analytics. Such analyses and decisions impact existing and future technology. In this paper, we explore how big data analytics is utilized as a technique for solving problems of complex and unstructured data using such technologies as Hadoop, Spark, and MapReduce. We also discuss the data challenges introduced by big data according to the literature, including its six V's. Moreover, we investigate case studies of big data analytics on various techniques of such analytics, namely, text, voice, video, and network analytics. We conclude that big data analytics can bring positive changes in many fields, such as education, military, healthcare, politics, business, agriculture, banking, and marketing, in the future. © 2013 IEEE.
Exploring the Dynamic Voltage Signature of Renewable Rich Weak Power System
- Alzahrani, S., Shah, Rakibuzzaman, Mithulananthan, N.
- Authors: Alzahrani, S. , Shah, Rakibuzzaman , Mithulananthan, N.
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 216529-216542
- Full Text:
- Reviewed:
- Description: Large-scale renewable energy-based power plants are becoming technically and economically attractive for the generation mix around the world. Nevertheless, network operation has changed significantly due to the rapid integration of renewable energy on the supply side. The integration of more renewable resources, especially inverter-based generation, deteriorates power system resilience to disturbances and substantially affects stable operation. Dynamic voltage stability has become one of the major concerns for transmission system operators (TSOs) due to the limited capabilities of inverter-based resources (IBRs). A heavily loaded and stressed renewable-rich grid is susceptible to fault-induced delayed voltage recovery. Hence, it is crucial to examine the system response upon disturbances, to understand the voltage signature, and to determine the optimal location and sizing of grid-connected IBRs. Moreover, investigation of the IBRs' fault contribution mechanism is essential in adopting additional grid support devices, control coordination, and the selection of appropriate corrective control schemes. This article utilizes a comprehensive assessment framework to assess power systems' dynamic voltage signature with large-scale PV under different realistic operating conditions. Several indices quantifying load bus voltage recovery have been used to explore the system's steady-state and transient response and voltage trajectories. The recovery indices help extricate the signature and influence of IBRs. The proposed framework's applicability is demonstrated on the New England IEEE 39-bus test system using the DIgSILENT platform. © 2013 IEEE.
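One of the simplest voltage-recovery indices of the kind this abstract mentions is the time taken, after fault clearing, for a load bus voltage to return above a threshold (commonly 0.9 p.u.). The function below is an illustrative single index, not the paper's full set of indices; the 0.9 p.u. default is an assumption:

```python
def recovery_time(times, voltages, t_clear, v_threshold=0.9):
    """Time after fault clearing (t_clear) for the bus voltage to recover
    above v_threshold, in per-unit, given sampled (time, voltage) records.

    Returns None when the threshold is never reached within the record,
    i.e. a fault-induced delayed voltage recovery event.
    """
    for t, v in zip(times, voltages):
        if t >= t_clear and v >= v_threshold:
            return t - t_clear
    return None
```

A slow post-fault trajectory gives a large index (or None), flagging buses where grid-support devices or corrective controls may be needed.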
MESH : a flexible manifold-embedded semantic hashing for cross-modal retrieval
- Zhong, Fangming, Wang, Guangze, Chen, Zhikui, Xia, Feng
- Authors: Zhong, Fangming , Wang, Guangze , Chen, Zhikui , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 147569-147579
- Full Text:
- Reviewed:
- Description: Hashing-based methods for cross-modal retrieval have been widely explored in recent years. However, most of them mainly focus on the preservation of neighborhood relationships and label consistency, while ignoring the proximity of neighbors and the proximity of classes, which degrades the discrimination of hash codes. Moreover, most of them learn hash codes and hashing functions simultaneously, which limits the flexibility of the algorithms. To address these issues, in this article we propose a two-step cross-modal retrieval method named Manifold-Embedded Semantic Hashing (MESH). It exploits Local Linear Embedding to model neighborhood proximity and uses class semantic embeddings to account for the proximity of classes. By doing so, MESH can not only extract the manifold structure in different modalities, but also embed the class semantic information into hash codes to further improve the discrimination of the learned hash codes. Moreover, the two-step scheme makes MESH flexible to various hashing functions. Extensive experimental results on three datasets show that MESH is superior to 10 state-of-the-art cross-modal hashing methods. Moreover, MESH also demonstrates superiority on deep features compared with the deep cross-modal hashing method. © 2013 IEEE.
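The two-step structure this abstract describes (learn real-valued embeddings first, binarize and retrieve second) can be illustrated with the simplest possible second step: sign binarization plus Hamming-distance ranking. The embeddings below are assumed to come from the first (manifold-embedded) learning step; the sign rule is a generic choice, not MESH's specific hashing function:

```python
def sign_hash(embedding):
    """Binarize a learned real-valued embedding into a hash code by sign
    (1 for components >= 0, else 0) - a common second step in two-step hashing."""
    return tuple(1 if x >= 0 else 0 for x in embedding)

def hamming(a, b):
    """Number of differing bits between two hash codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, database_codes, k=3):
    """Rank database items (e.g. images for a text query) by Hamming distance
    to the query code and return the indices of the top-k nearest."""
    ranked = sorted(range(len(database_codes)),
                    key=lambda i: hamming(query_code, database_codes[i]))
    return ranked[:k]
```

Because Hamming distance is computed with bitwise comparisons, retrieval over binary codes stays fast even for large databases, which is the practical appeal of hashing-based cross-modal retrieval.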
Model compression for IoT applications in industry 4.0 via multiscale knowledge transfer
- Fu, Shipeng, Li, Zhen, Liu, Kai, Din, Sadia, Imran, Muhammad, Yang, Xiaomin
- Authors: Fu, Shipeng , Li, Zhen , Liu, Kai , Din, Sadia , Imran, Muhammad , Yang, Xiaomin
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 16, no. 9 (2020), p. 6013-6022
- Full Text: false
- Reviewed:
- Description: Recently, Industry 4.0 has attracted much attention. It is closely related to the Internet of Things (IoT). On the other hand, convolutional neural networks (CNNs) have shown promising performance in many foundational services of IoT applications. For IoT applications with high-speed data streams and the requirement of time-sensitive actions, fast processing is demanded on small-scale platforms or even on the IoT devices themselves. Therefore, it is inappropriate to employ cumbersome CNNs in IoT applications, making the study of model compression necessary. In knowledge transfer, it is common to employ a deep, well-trained network, called the teacher, to guide a shallow, untrained network, called the student, toward better performance. Previous works have made many attempts to transfer single-scale knowledge from teacher to student, leading to degraded generalization ability. In this article, we introduce multiscale representations to knowledge transfer, which facilitates the generalization ability of the student. We divide the student and teacher into several stages. The student learns from multiscale knowledge provided by the teacher at the end of each stage. Extensive experiments demonstrate the effectiveness of our proposed method both on image classification and on single-image super-resolution. The huge performance gap between student and teacher is significantly narrowed by our proposed method, making the student suitable for IoT applications. © 2005-2012 IEEE.
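The stage-wise transfer described in this abstract amounts to adding a feature-matching loss at the end of each stage. The sketch below uses a plain per-stage mean-squared error over one feature vector per stage; this is an illustrative simplification, since the paper's method transfers multiscale representations within each stage:

```python
def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def multiscale_transfer_loss(student_stages, teacher_stages):
    """Sum of per-stage feature-matching losses: the student is supervised by
    the teacher's representation at the end of every stage, not only at the
    final output, which is what narrows the student-teacher gap."""
    assert len(student_stages) == len(teacher_stages)
    return sum(mse(s, t) for s, t in zip(student_stages, teacher_stages))
```

During training this term is added to the student's ordinary task loss; minimizing it pulls each student stage toward the corresponding teacher stage.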