Sequence-to-sequence learning-based conversion of pseudo-code to source code using neural translation approach
- Authors: Acharjee, Uzzal , Arefin, Minhazul , Hossen, Kazi , Uddin, Mohammed , Uddin, Md Ashraf , Islam, Linta
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 26730-26742
- Full Text:
- Reviewed:
- Description: Pseudo-code refers to an informal means of representing algorithms that does not require the exact syntax of a computer programming language. Pseudo-code helps developers and researchers represent their algorithms using human-readable language. Generally, researchers can convert pseudo-code into computer source code using different conversion techniques. The efficiency of such conversion methods is measured based on the converted algorithm's correctness. Researchers have already explored diverse technologies to devise conversion methods with higher accuracy. This paper proposes a novel pseudo-code conversion learning method that includes natural language processing-based text preprocessing and a sequence-to-sequence deep learning-based model trained on the SPoC dataset. We conducted an extensive experiment on our designed algorithm using bilingual evaluation understudy (BLEU) scoring and compared our results with state-of-the-art techniques. Result analysis shows that our approach is more accurate and efficient than other existing conversion methods in terms of several performance metrics. Furthermore, the proposed method outperforms the existing approaches because our method utilizes two Long Short-Term Memory (LSTM) networks, which might increase the accuracy. © 2013 IEEE.
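The abstract evaluates generated code with BLEU scoring. As an illustrative sketch (not the authors' implementation), a simplified single-reference BLEU over tokenized code lines might look like:

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of modified n-gram precisions
    times a brevity penalty (single reference, no smoothing)."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each n-gram's count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = exp(sum(log(p) for p in precisions) / max_n)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

A perfect conversion scores 1.0; real toolkits (e.g., NLTK) add smoothing and 3/4-gram terms omitted here for brevity.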
Formal modeling and verification of a blockchain-based crowdsourcing consensus protocol
- Authors: Afzaal, Hamra , Imran, Muhammad , Janjua, Muhammad , Gochhayat, Sarada
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 8163-8183
- Full Text:
- Reviewed:
- Description: Crowdsourcing is an effective technique that allows humans to solve complex problems that are hard to accomplish with automated tools. Significant challenges in crowdsourcing systems include avoiding security attacks, effective trust management, and ensuring the system's correctness. Blockchain is a promising technology that can be efficiently exploited to address security and trust issues. The consensus protocol is a core component of a blockchain network through which all the blockchain peers reach agreement about the state of the distributed ledger. Therefore, its security, trustworthiness, and correctness are of vital importance. This work proposes a Secure and Trustworthy Blockchain-based Crowdsourcing (STBC) consensus protocol to address these challenges. Model checking, an effective and automatic technique based on formal methods, is utilized to ensure the correctness of the STBC consensus protocol. The proposed consensus protocol's formal specification is described using Communicating Sequential Programs (CSP#). Safety, fault tolerance, leader trust, and validators' trust are important properties for a consensus protocol; these are formally specified in Linear Temporal Logic (LTL) to prevent several security attacks, such as blockchain forks, selfish mining, and invalid block insertion. The Process Analysis Toolkit (PAT) is used for the formal verification of the proposed consensus protocol. © 2022 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
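The abstract's verification rests on exhaustive state exploration; PAT does this at scale over CSP# models, but the core idea can be sketched as a breadth-first reachability check of a safety invariant. The quorum-voting model below is a hypothetical toy, not the STBC protocol:

```python
from collections import deque

def check_safety(initial, transitions, invariant):
    """Explicit-state safety checking: breadth-first search over all
    reachable states; return a counterexample trace if the invariant
    is ever violated, else None."""
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if not invariant(state):
            return trace  # counterexample found
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None  # property holds on every reachable state

# Toy model: 3 validators each vote for block a or b; a block commits
# only with a quorum of 2 votes, so conflicting commits (a fork) are
# unreachable. State = (votes_a, votes_b, committed_a, committed_b).
def transitions(state):
    va, vb, ca, cb = state
    succ = []
    if va + vb < 3:                      # an unvoted validator votes
        succ += [(va + 1, vb, ca, cb), (va, vb + 1, ca, cb)]
    if va >= 2 and not ca:               # quorum reached for block a
        succ.append((va, vb, True, cb))
    if vb >= 2 and not cb:               # quorum reached for block b
        succ.append((va, vb, ca, True))
    return succ

no_fork = lambda s: not (s[2] and s[3])  # never commit both blocks
```

`check_safety((0, 0, False, False), transitions, no_fork)` returns `None` here, since two quorums of 2 cannot coexist among 3 votes; LTL checkers like PAT generalize this search to temporal properties.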
A novel collaborative IoD-assisted VANET approach for coverage area maximization
- Authors: Ahmed, Gamil , Sheltami, Tarek , Mahmoud, Ashraf , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 61211-61223
- Full Text:
- Reviewed:
- Description: The Internet of Drones (IoD) is an efficient technique that can be integrated with vehicular ad-hoc networks (VANETs) to provide terrestrial communications, with drones acting as aerial relays when terrestrial infrastructure is unreliable or unavailable. To fully exploit the drones' flexibility and superiority, we propose a novel dynamic IoD collaborative communication approach for urban VANETs. Unlike most existing approaches, the IoD nodes are dynamically deployed based on the current locations of ground vehicles to effectively mitigate the isolated vehicles that inevitably arise in conventional VANETs. To coordinate the IoD efficiently, we model IoD deployment as a coverage optimization problem based on the locations of vehicles. The goal is to obtain an efficient IoD deployment that maximizes the number of covered vehicles, i.e., minimizes the number of isolated vehicles in the target area. More importantly, the proposed approach provides sufficient interconnections between IoD nodes. To do so, an improved version of a succinct population-based meta-heuristic, namely Improved Particle Swarm Optimization (IPSO), inspired by the food-searching behavior of bird flocks and fish schools, is implemented for the IoD-assisted VANET (IoDAV). Moreover, coverage, received signal quality, and IoD connectivity are jointly optimized by IPSO's objective function for optimal IoD deployment. We carry out an extensive experiment based on the received signal at moving vehicles to examine the performance of the proposed IoDAV. We compare the results with a baseline VANET with no IoD (NIoD) and a fixed IoD-assisted scheme (FIoD). The comparisons are based on the coverage percentage of the ground vehicles and the quality of the received signal. The simulation results demonstrate that the proposed IoDAV approach finds the optimal IoD positions over time based on the vehicles' movements and achieves better coverage and better received signal quality than the NIoD and FIoD schemes. © 2013 IEEE.
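A minimal particle swarm optimizer for a coverage objective of this kind might be sketched as follows; the communication radius, inertia/attraction weights, and the warm start at vehicle positions are illustrative assumptions, not the paper's IPSO:

```python
import random

def coverage(pos, vehicles, radius=2.0):
    """Number of vehicles within communication radius of a drone at pos."""
    return sum((pos[0] - x) ** 2 + (pos[1] - y) ** 2 <= radius ** 2
               for x, y in vehicles)

def pso_place_drone(vehicles, n_particles=30, iters=60, bounds=10.0, seed=1):
    """Basic PSO maximizing vehicle coverage by one drone position."""
    rng = random.Random(seed)
    # Warm start: seed part of the swarm at vehicle locations,
    # the rest uniformly over the target area.
    pts = [list(v) for v in vehicles[:n_particles]]
    pts += [[rng.uniform(0, bounds), rng.uniform(0, bounds)]
            for _ in range(n_particles - len(pts))]
    vel = [[0.0, 0.0] for _ in pts]
    pbest = [p[:] for p in pts]
    pbest_f = [coverage(p, vehicles) for p in pts]
    g = max(range(len(pts)), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i, p in enumerate(pts):
            for d in range(2):
                # Standard PSO velocity update: inertia + cognitive + social.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            f = coverage(p, vehicles)
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = p[:], f
                if f > gbest_f:
                    gbest, gbest_f = p[:], f
    return tuple(gbest), gbest_f
```

The paper's objective additionally rewards received signal quality and inter-IoD connectivity; those terms would be added to `coverage` to form a weighted fitness function.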
A fault-tolerant cascaded switched-capacitor multilevel inverter for domestic applications in smart grids
- Authors: Akbari, Ehsan , Teimouri, Ali , Saki, Mojtaba , Rezaei, Mohammad , Hu, Jiefeng , Band, Shahab , Pai, Hao-Ting , Mosavi, Amir
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 110590-110602
- Full Text:
- Reviewed:
- Description: Cascaded multilevel inverters (MLIs) generate an output voltage using series-connected power modules that employ standard configurations of low-voltage components. Each module may employ one or more switched capacitors to double or quadruple its input voltage. The higher number of switched capacitors and semiconductor switches in MLIs compared to conventional two-level inverters has led to concerns about overall system reliability. A fault-tolerant design can mitigate this reliability issue. If one part of the system fails, the MLI can continue its planned operation at a reduced level rather than the entire system failing, which makes the fault tolerance of the MLI particularly important. In this paper, a novel fault location technique is presented that leads to a significant reduction in fault location detection time based on the reliability priority of the components of the proposed fault-tolerant switched capacitor cascaded MLI (CSCMLI). The main contribution of this paper is to reduce the number of MLI switches under fault conditions while operating at lower levels. The fault-tolerant inverter requires fewer switches at higher reliability, and the comparison with similar MLIs shows a faster dynamic response of fault detection and reduced fault location detection time. The experimental results confirm the effectiveness of the presented methods applied in the CSCMLI. Also, all experimental data including processor code, schematic, PCB, and video of CSCMLI operation are attached. © 2013 IEEE.
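The reliability-priority idea, testing the most failure-prone switches first to cut expected fault-location time, can be sketched generically; the `p_fault` values and single-fault assumption below are hypothetical, not taken from the CSCMLI design:

```python
def fault_scan_order(components):
    """Order component checks by descending fault probability
    (reliability priority), so likely culprits are tested first."""
    return sorted(components, key=lambda c: c["p_fault"], reverse=True)

def expected_checks(order):
    """Expected number of checks until the faulty component is found,
    assuming exactly one fault, with per-component probabilities
    normalised over the scanned set."""
    total = sum(c["p_fault"] for c in order)
    return sum((i + 1) * c["p_fault"] / total
               for i, c in enumerate(order))
```

Scanning in reliability-priority order always yields an expected check count no worse than any other fixed order, which is the intuition behind the reduced detection time the abstract reports.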
A critical analysis of mobility management related issues of wireless sensor networks in cyber physical systems
- Authors: Al-Muhtadi, Jalal , Qiang, Ma , Zeb, Khan , Chaudhry, Junaid , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 16363-16376
- Full Text:
- Reviewed:
- Description: Mobility management has been a long-standing issue in mobile wireless sensor networks, and its implications are especially immense in the context of cyber-physical systems. This paper presents a critical analysis of current approaches to mobility management by evaluating them against a set of criteria that are essentially inherent characteristics of such systems, on which these approaches are expected to provide acceptable performance. We summarize these characteristics using a quadruple set of metrics. Additionally, using this set we classify the various approaches to mobility management discussed in this paper. Finally, the paper concludes by reviewing the main findings and providing suggestions to guide future research efforts in the area. **Please note that there are multiple authors for this article; therefore only the names of the first 5, including Federation University Australia affiliate “Muhammad Imran”, are provided in this record**
Energy efficiency perspectives of femtocells in internet of things : recent advances and challenges
- Authors: Al-Turjman, Fadi , Imran, Muhammad , Bakhsh, Sheikh
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 26808-26818
- Full Text:
- Reviewed:
- Description: Energy efficiency is a growing concern in every aspect of technology. Apart from maintaining profitability, energy efficiency means a decrease in overall environmental effects, which is a serious concern in today's world. Using femtocells in the Internet of Things (IoT) can boost energy efficiency. To illustrate, femtocells can be used in smart homes, a subpart of the smart grid, as a communication mechanism to manage energy efficiency. Moreover, femtocells can provide communication in many IoT applications. However, it is important to evaluate the energy efficiency of femtocells. This paper investigates recent advances and challenges in the energy efficiency of femtocells in IoT. First, we introduce the idea of femtocells in the context of IoT and their role in IoT applications. Next, we describe prominent performance metrics in order to understand how energy efficiency is evaluated. Then, we elucidate how energy can be modeled in terms of femtocells and provide some models from the literature. Since femtocells are used in heterogeneous networks to manage energy efficiency, we also describe some energy-efficiency schemes for deployment. The factors that affect the energy usage of a femtocell base station are discussed, and then the power consumption of user equipment under femtocell coverage is considered. Finally, we highlight prominent open research issues and challenges. © 2013 IEEE.
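Energy-efficiency metrics for cellular nodes commonly reduce to useful bits delivered per joule; a toy version with an assumed transmit-plus-circuit power model (not a model from this survey) is:

```python
def femtocell_energy(p_tx_w, p_circuit_w, duration_s):
    """Toy power model: transmit power plus static circuit power,
    integrated over the active period (result in joules)."""
    return (p_tx_w + p_circuit_w) * duration_s

def energy_efficiency(bits_delivered, energy_joules):
    """Common energy-efficiency metric: useful bits per joule."""
    return bits_delivered / energy_joules
```

Real femtocell models add load-dependent terms (backhaul, sleep modes), but the bits-per-joule ratio is the yardstick most deployment schemes optimize.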
Deep learning-based approach for detecting trajectory modifications of cassini-huygens spacecraft
- Authors: Aldabbas, Ashraf , Gal, Zoltan , Ghori, Khawaja , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 39111-39125
- Full Text:
- Reviewed:
- Description: The Cassini spacecraft required trajectory modifications during the last 14 years of its interplanetary research mission. At the scale of 1.3 hours of signal propagation time and a 1.4-billion-kilometer Earth-Cassini channel, detecting complex events in the orbit modifications requires special investigation and analysis of the collected big data. The technologies for space exploration warrant a high standard of nuanced and detailed research. The Cassini mission accumulated huge volumes of science records, and the resulting curiosity derives mainly from a need to use machine learning to analyze deep space missions. For energy-saving reasons, communication between the Earth and Cassini was executed in non-periodic mode. This paper provides a deep learning approach for detecting Cassini spacecraft trajectory modifications in post-processing mode. The proposed model utilizes the ability of Long Short-Term Memory (LSTM) neural networks to extract useful data and learn the inner patterns of time series, along with the strength of LSTM layers in capturing long- and short-term dependencies. Our study used statistical rates, the Matthews correlation coefficient, and the F1 score to evaluate our models. We carried out multiple tests and evaluated the proposed approach against several advanced models. The preparatory analysis showed that exploiting the LSTM layer provides a notable boost in detection performance. The proposed model detected 232 trajectory modifications with 99.98% accuracy over the last 13.35 years of the Cassini spacecraft's life. © 2013 IEEE.
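The paper's detector is an LSTM; as a stand-in sketch, any one-step forecaster whose residual spikes at a manoeuvre supports the same thresholding logic. Here a moving-average forecast plays that (much weaker) role, and the window size and threshold factor are arbitrary assumptions:

```python
def detect_events(series, window=5, k=4.0):
    """Flag indices where the value jumps more than k standard
    deviations away from the recent moving-average forecast.
    An LSTM one-step predictor would replace the moving average
    in the paper's setting; the residual test is the same idea."""
    events = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean = sum(hist) / window
        var = sum((x - mean) ** 2 for x in hist) / window
        std = var ** 0.5
        # Guard against zero variance on flat history.
        if abs(series[i] - mean) > k * max(std, 1e-9):
            events.append(i)
    return events
```

On a flat telemetry channel with one step change, only the first post-manoeuvre sample exceeds the residual threshold, so the event index is recovered exactly.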
An automatic digital audio authentication/forensics system
- Authors: Ali, Zulfiqar , Imran, Muhammad , Alsulaiman, Mansour
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 2994-3007
- Full Text:
- Reviewed:
- Description: With the continuous rise in ingenious forgery, a wide range of digital audio authentication applications are emerging as a preventive and detective control in real-world circumstances, such as forged evidence, breach of copyright protection, and unauthorized data access. To investigate and verify, this paper presents a novel automatic authentication system that differentiates between forged and original audio. The design philosophy of the proposed system is primarily based on three psychoacoustic principles of hearing, which are implemented to simulate the human sound perception system. Moreover, the proposed system is able to distinguish between audio from different environments recorded with the same microphone. To authenticate the audio and classify the environment, the features computed from the psychoacoustic principles of hearing are fed to a Gaussian mixture model to make automatic decisions. It is worth mentioning that the proposed system authenticates an unknown speaker irrespective of the audio content, i.e., independent of narrator and text. To evaluate the performance of the proposed system, audio recordings in multiple environments are forged in such a way that a human cannot recognize them. Subjective evaluation by three human evaluators was performed to verify the quality of the generated forged audio. The proposed system provides a classification accuracy of 99.2% ± 2.6. Furthermore, the obtained accuracy for the other scenarios, such as text-dependent and text-independent audio authentication, is 100% with the proposed system. © 2017 IEEE.
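The decision stage described, scoring acoustic features against per-class Gaussian mixture models, can be sketched in degenerate form with a single component per class; the feature values and model parameters below are invented for illustration:

```python
from math import exp, log, pi, sqrt

def gaussian_pdf(x, mean, var):
    """Density of a 1-D Gaussian at x."""
    return exp(-(x - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def classify(features, model_a, model_b):
    """Pick the model with higher total log-likelihood over the
    feature frames. A GMM with one mixture component per class
    reduces to exactly this comparison; real systems sum over
    weighted components instead."""
    ll_a = sum(log(gaussian_pdf(x, *model_a)) for x in features)
    ll_b = sum(log(gaussian_pdf(x, *model_b)) for x in features)
    return "a" if ll_a >= ll_b else "b"
```

With multi-component mixtures (e.g., `sklearn.mixture.GaussianMixture`), each frame's likelihood becomes a weighted sum of component densities, but the maximum-likelihood decision rule is unchanged.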
A robust consistency model of crowd workers in text labeling tasks
- Authors: Alqershi, Fattoh , Al-Qurishi, Muhammad , Aksoy, Mehmet , Alrubaian, Majed , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 168381-168393
- Full Text:
- Reviewed:
- Description: Crowdsourcing is a popular human-based model for acquiring labeled data. Despite its ability to generate huge amounts of labeled data at moderate cost, it is susceptible to low-quality labels, which can arise through unintentional or intentional errors by the crowd workers. Consistency is an important attribute of reliability: a practical metric that evaluates a crowd worker's reliability based on their ability to conform to themselves by yielding the same output when repeatedly given a particular input. Consistency has not yet been sufficiently explored in the literature. In this work, we propose a novel consistency model based on the pairwise comparisons method and apply it to unpaid workers. We measure the workers' consistency on tasks of labeling political text-based claims and study the effects of different duplicate task characteristics on their consistency. Our results show that the proposed model outperforms the current state-of-the-art models in terms of accuracy. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
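A consistency score over duplicated tasks, in the spirit of (but not identical to) the paper's pairwise-comparison model, might be computed as the agreement rate across duplicate pairs:

```python
def consistency(responses, duplicate_pairs):
    """Fraction of duplicated task pairs on which a worker gave the
    same label. responses maps task id -> label; duplicate_pairs lists
    (task_id, duplicate_task_id) tuples. 1.0 means the worker is
    perfectly self-consistent."""
    agree = sum(responses[i] == responses[j] for i, j in duplicate_pairs)
    return agree / len(duplicate_pairs)
```

Quality-control pipelines typically compare this score against a threshold to down-weight or filter workers whose self-agreement is near chance.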
Modeling cyclic crack propagation in concrete using the scaled boundary finite element method coupled with the cumulative damage-plasticity constitutive law
- Alrayes, Omar, Könke, Carsten, Ooi, Ean Tat, Hamdia, Khader
- Authors: Alrayes, Omar , Könke, Carsten , Ooi, Ean Tat , Hamdia, Khader
- Date: 2023
- Type: Text , Journal article
- Relation: Materials Vol. 16, no. 2 (2023), p.
- Full Text:
- Reviewed:
- Description: Many concrete structures, such as bridges and wind turbine towers, fail mostly due to fatigue rupture and bending, where cracks initiate and propagate under cyclic loading. Modeling the fracture process zone (FPZ) is essential to understanding the cracking behavior of heterogeneous, quasi-brittle materials such as concrete under monotonic and cyclic actions. This paper presents a numerical modeling approach for simulating crack growth using the scaled boundary finite element method (SBFEM). The cohesive traction law is explored to model the stress field under monotonic and cyclic loading conditions. In doing so, a new constitutive law is applied within the cohesive response. The cyclic damage accumulation during loading and unloading is formulated within the thermodynamic framework of the constitutive concrete model. We consider two common problems of three-point bending of a single-edge-notched concrete beam subjected to different loading conditions to validate the developed method. The simulation results show good agreement with experimental measurements from the literature. The presented analysis can provide further understanding of crack growth and damage accumulation within the cohesive response, and the SBFEM makes it possible to identify the fracture behavior of cyclic crack propagation in concrete members. © 2023 by the authors.
Exploring the Dynamic Voltage Signature of Renewable Rich Weak Power System
- Alzahrani, S., Shah, Rakibuzzaman, Mithulananthan, N.
- Authors: Alzahrani, S. , Shah, Rakibuzzaman , Mithulananthan, N.
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 216529-216542
- Full Text:
- Reviewed:
- Description: Large-scale renewable energy-based power plants are becoming technically and economically attractive for the generation mix around the world. Nevertheless, network operation has changed significantly due to the rapid integration of renewable energy on the supply side. The integration of more renewable resources, especially inverter-based generation, deteriorates power system resilience to disturbances and substantially affects stable operation. Dynamic voltage stability has become one of the major concerns for transmission system operators (TSOs) due to the limited capabilities of inverter-based resources (IBRs). A heavily loaded and stressed renewable-rich grid is susceptible to fault-induced delayed voltage recovery. Hence, it is crucial to examine the system response upon disturbances and to understand the voltage signature in order to determine the optimal location and sizing of grid-connected IBRs. Moreover, investigating the fault contribution mechanism of IBRs is essential in adopting additional grid support devices, coordinating control, and selecting appropriate corrective control schemes. This article utilizes a comprehensive framework to assess power systems' dynamic voltage signature with large-scale PV under different realistic operating conditions. Several indices quantifying load bus voltage recovery have been used to explore the system's steady-state and transient responses and voltage trajectories. The recovery indices help extricate the signature and influence of IBRs. The proposed framework's applicability is demonstrated on the New England IEEE 39-bus test system using the DIgSILENT platform. © 2013 IEEE.
Blending big data analytics : review on challenges and a recent study
- Amalina, Fairuz, Targio Hashem, Ibrahim, Azizul, Zati, Fong, Ang, Imran, Muhammad
- Authors: Amalina, Fairuz , Targio Hashem, Ibrahim , Azizul, Zati , Fong, Ang , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 3629-3645
- Full Text:
- Reviewed:
- Description: With the collection of massive amounts of data every day, big data analytics has emerged as an important trend for many organizations. These collected data can contain important information that may be key to solving wide-ranging problems, such as cyber security, marketing, healthcare, and fraud. To analyze their large volumes of data for business analyses and decisions, large companies, such as Facebook and Google, adopt analytics. Such analyses and decisions impact existing and future technology. In this paper, we explore how big data analytics is utilized as a technique for solving problems of complex and unstructured data using such technologies as Hadoop, Spark, and MapReduce. We also discuss the data challenges introduced by big data according to the literature, including its six V's. Moreover, we investigate case studies of big data analytics on various techniques of such analytics, namely, text, voice, video, and network analytics. We conclude that big data analytics can bring positive changes in many fields, such as education, military, healthcare, politics, business, agriculture, banking, and marketing, in the future. © 2013 IEEE.
Investigating smart home security : is blockchain the answer?
- Arif, Samrah, Khan, M. Arif, Rehman, Sabih, Kabir, Muhammad, Imran, Muhammad
- Authors: Arif, Samrah , Khan, M. Arif , Rehman, Sabih , Kabir, Muhammad , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 117802-117816
- Full Text:
- Reviewed:
- Description: Smart home automation is increasingly gaining popularity among current applications of the Internet of Things (IoT) due to the convenience and facilities it provides to homeowners. Sensors are embedded within home appliances via wireless connectivity so that homeowners can access and operate these devices remotely. With the exponential increase of smart home IoT devices in the marketplace, such as door locks, light bulbs, and power switches, numerous security concerns are arising due to the limited storage and processing power of such devices, making them vulnerable to several attacks. For this reason, security in the deployment of these devices has gained popularity among researchers as a critical research area. Moreover, the adoption of traditional security schemes has failed to address the unique security concerns associated with these devices. Blockchain, a decentralised database based on cryptographic techniques, is gaining enormous attention as a means of assuring the security of IoT systems. The blockchain framework within an IoT system is a fascinating substitute for the traditional centralised models, which have significant shortcomings in meeting the security demands of smart homes. In this article, we examine the security of smart homes by investigating the adoption of blockchain and exploring some of the currently proposed smart home architectures that use blockchain technology. To present our findings, we describe a simple secure smart home framework based on a refined version of blockchain called Consortium blockchain, and we highlight the limitations and opportunities of adopting such an architecture. We further evaluate our model and conclude with results obtained from an experimental testbed built with a few household IoT devices commonly available in the marketplace. © 2013 IEEE.
DQN approach for adaptive self-healing of VNFs in cloud-native network
- Arulappan, Arunkumar, Mahanti, Aniket, Passi, Kalpdrum, Srinivasan, Thiruvenkadam, Naha, Ranesh, Raja, Gunasekaran
- Authors: Arulappan, Arunkumar , Mahanti, Aniket , Passi, Kalpdrum , Srinivasan, Thiruvenkadam , Naha, Ranesh , Raja, Gunasekaran
- Date: 2024
- Type: Text , Journal article
- Relation: IEEE Access Vol. 12, no. (2024), p. 34489-34504
- Full Text:
- Reviewed:
- Description: The transformation from physical network functions to Virtual Network Functions (VNFs) requires a fundamental design change in how applications and services are tested and assured in a hybrid virtual network. Once the VNFs are onboarded in a cloud network infrastructure, operators need to test them automatically in real time at the time of instantiation. This paper explicitly analyses the problem of adaptive self-healing of a Virtual Machine (VM) allocated by the VNF using a Deep Reinforcement Learning (DRL) approach. The DRL-based big data collection and analytics engine performs aggregation to probe and analyze data for troubleshooting and performance management, and helps determine corrective (self-healing) actions such as scaling or migrating VNFs. Hence, we propose a Deep Q-Learning (DQL)-based Deep Q-Network (DQN) mechanism for self-healing VNFs in the virtualized infrastructure manager. Virtual network probes of closed-loop orchestration automate the VNF and provide analytics for real-time, policy-driven orchestration in an open networking automation platform through the stochastic gradient descent method for VNF service assurance and network reliability. The proposed DQN/DDQN mechanism optimizes the price and lowers the cost of resource usage by 18% without disrupting the Quality of Service (QoS) provided by the VNF. The resulting adaptive self-healing of the VNFs enhances computational performance by 27% compared to other state-of-the-art algorithms. © 2013 IEEE.
Water quality management using hybrid machine learning and data mining algorithms : an indexing approach
- Aslam, Bilal, Maqsoom, Ahsen, Cheema, Ali, Ullah, Fahim, Alharbi, Abdullah, Imran, Muhammad
- Authors: Aslam, Bilal , Maqsoom, Ahsen , Cheema, Ali , Ullah, Fahim , Alharbi, Abdullah , Imran, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 119692-119705
- Full Text:
- Reviewed:
- Description: One of the key functions of global water resource management authorities is river water quality (WQ) assessment. A water quality index (WQI) is developed for water assessments considering numerous quality-related variables. WQI assessments typically take a long time and are prone to errors during sub-indices generation. This can be tackled through the latest machine learning (ML) techniques renowned for superior accuracy. In this study, water samples were taken from the wells in the study area (North Pakistan) to develop WQI prediction models. Four standalone algorithms, i.e., random trees (RT), random forest (RF), M5P, and reduced error pruning tree (REPT), were used in this study. In addition, 12 hybrid data-mining algorithms (a combination of standalone, bagging (BA), cross-validation parameter selection (CVPS), and randomizable filtered classification (RFC)) were also used. Using the 10-fold cross-validation technique, the data were separated into two groups (70:30) for algorithm creation. Ten random input permutations were created using Pearson correlation coefficients to identify the best possible combination of datasets for improving the algorithm prediction. The variables with very low correlations performed poorly, whereas hybrid algorithms increased the prediction capability of numerous standalone algorithms. Hybrid RT-Artificial Neural Network (RT-ANN) with RMSE = 2.319, MAE = 2.248, NSE = 0.945, and PBIAS = -0.64 outperformed all other algorithms. Most algorithms overestimated WQI values except for BA-RF, RF, BA-REPT, REPT, RFC-M5P, RFC-REPT, and ANN-Adaptive Network-Based Fuzzy Inference System (ANFIS). © 2013 IEEE.
UDTN-RS : a new underwater delay tolerant network routing protocol for coastal patrol and surveillance
- Azad, Saiful, Neffati, Ahmed, Mahmud, Mufti, Kaiser, M., Ahmed, Muhammad, Kamruzzaman, Joarder
- Authors: Azad, Saiful , Neffati, Ahmed , Mahmud, Mufti , Kaiser, M. , Ahmed, Muhammad , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 142780-142793
- Full Text:
- Reviewed:
- Description: The Coastal Patrol and Surveillance Application (CPSA) is developed and deployed to detect, track and monitor water vessel traffic using automated devices. The latest advancements in marine technologies, including Automatic Underwater Vehicles, have encouraged the development of this type of application. To facilitate their operations, installation of a Coastal Patrol and Surveillance Network (CPSN) is mandatory. One of the primary design objectives of this network is to deliver an adequate amount of data within an effective time frame. This is particularly essential for detecting an intruder's vessel and notifying its presence through the adverse underwater communication channels. Additionally, intermittent connectivity of the nodes remains another important obstacle to overcome to allow the smooth functioning of the CPSA. Taking these objectives and obstacles into account, this work proposes a new protocol, Underwater Delay Tolerant Protocol with RS (UDTN-RS), by ensembling a forward error correction technique (namely Reed-Solomon, or RS, codes) into the Underwater Delay Tolerant Network with probabilistic spraying technique (UDTN-Prob) routing protocol. In addition, the existing binary packet spraying technique in UDTN-Prob is enhanced to support encoded packet exchange between the contacting nodes. A comprehensive simulation has been performed employing the DEsign, Simulate, Emulate and Realize Test-beds (DESERT) underwater simulator along with the World Ocean Simulation System (WOSS) package to obtain a more realistic account of acoustic propagation when assessing the effectiveness of the proposed protocol. Three scenarios are considered during the simulation campaign: varying data transmission rate, varying area size, and a scenario focusing on estimating the overhead ratio. For the first two scenarios, three metrics are taken into account: normalised packet delivery ratio, delay, and normalised throughput.
The acquired results for these scenarios and metrics are compared with those of its predecessor, UDTN-Prob. The results suggest that the proposed UDTN-RS protocol is a suitable alternative to existing protocols such as UDTN-Prob, Epidemic, and others for sparse networks like the CPSN. © 2013 IEEE.
Cloudlet computing : recent advances, taxonomy, and challenges
- Babar, Mohammad, Khan, Muhammad, Ali, Farman, Imran, Muhammad, Shoaib, Muhammad
- Authors: Babar, Mohammad , Khan, Muhammad , Ali, Farman , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 29609-29622
- Full Text:
- Reviewed:
- Description: A cloudlet is an emerging computing paradigm that is designed to meet the requirements and expectations of the Internet of things (IoT) and tackle the conventional limitations of a cloud (e.g., high latency). The idea is to bring computing resources (i.e., storage and processing) to the edge of a network. This article presents a taxonomy of cloudlet applications, outlines cloudlet utilities, and describes recent advances, challenges, and future research directions. Based on the literature, a unique taxonomy of cloudlet applications is designed. Moreover, a cloudlet computation offloading application for augmenting resource-constrained IoT devices, handling compute-intensive tasks, and minimizing the energy consumption of related devices is explored. This study also highlights the viability of cloudlets to support smart systems and applications, such as augmented reality, virtual reality, and applications that require high-quality service. Finally, the role of cloudlets in emergency situations, hostile conditions, and in the technological integration of future applications and services is elaborated in detail. © 2013 IEEE.
Blasting pattern optimization using gene expression programming and grasshopper optimization algorithm to minimise blast-induced ground vibrations
- Bayat, Parichehra, Monjezi, Mejrdamesj, Mehrdanesh, Amirhosseina, Khandelwal, Manoj
- Authors: Bayat, Parichehra , Monjezi, Mejrdamesj , Mehrdanesh, Amirhosseina , Khandelwal, Manoj
- Date: 2022
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 38, no. 4 (2022), p. 3341-3350
- Full Text:
- Reviewed:
- Description: Blast-induced ground vibration is considered one of the most hazardous phenomena of mine blasting, which can cause casualties and severe damage to adjacent properties. Measuring peak particle velocity (PPV) is helpful for knowing the actual vibration level, but predicting blast vibration prior to the blast is a tedious job due to the involvement of blast design, explosive and rock parameters. Nowadays, the efficient application of intelligent systems has been proven in different branches of science and technology. In this paper, a gene expression programming (GEP) model was developed to predict PPV using various blasting patterns as model inputs, and it showed a high level of accuracy. Also, to optimize the blast pattern for minimum ground vibration during the blasting operation, the developed functional GEP model was taken as the objective function for the grasshopper optimization algorithm (GOA). The GOA model was constructed using a trial-and-error mechanism to find the best possible pertinent GOA parameters. Finally, it was observed that utilizing the GOA technique, PPV can be reduced by 67% with optimized blast parameters, including a burden of 3.21 m, spacing of 3.75 m, and charge per delay of 225 kg. A sensitivity analysis was also performed to understand the influence of each input parameter on the blast vibrations. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd. part of Springer Nature.
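The abstract above describes fitting a predictive model and then handing it to the grasshopper optimization algorithm as the objective to minimise. A minimal sketch of that pipeline follows; the PPV formula, bounds, and GOA parameters here are illustrative assumptions, not values from the paper, and the stand-in `predicted_ppv` merely plays the role of the fitted GEP model:

```python
import numpy as np

# Illustrative stand-in for the fitted GEP model: any function mapping
# (burden, spacing, charge per delay) -> predicted PPV works here.
def predicted_ppv(x):
    burden, spacing, charge = x
    return 0.8 * charge**0.6 / (burden * spacing)  # hypothetical form

def s(r, f=0.5, l=1.5):
    """Grasshopper social-interaction strength at distance r."""
    return f * np.exp(-r / l) - np.exp(-r)

def goa_minimise(obj, lb, ub, n_agents=30, iters=200, seed=0):
    """Simplified grasshopper optimization: agents move under pairwise
    social forces plus attraction to the best-known solution, while the
    comfort-zone coefficient c shrinks linearly."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    pos = rng.uniform(lb, ub, size=(n_agents, len(lb)))
    fitness = np.apply_along_axis(obj, 1, pos)
    best = pos[fitness.argmin()].copy()
    best_val = fitness.min()
    c_max, c_min = 1.0, 1e-5
    for t in range(iters):
        c = c_max - t * (c_max - c_min) / iters  # shrinking comfort zone
        new_pos = np.empty_like(pos)
        for i in range(n_agents):
            social = np.zeros(len(lb))
            for j in range(n_agents):
                if i == j:
                    continue
                d = pos[j] - pos[i]
                r = np.linalg.norm(d) + 1e-12
                # pairwise interaction, scaled per dimension
                social += c * (ub - lb) / 2.0 * s(r) * d / r
            new_pos[i] = np.clip(c * social + best, lb, ub)
        pos = new_pos
        fitness = np.apply_along_axis(obj, 1, pos)
        if fitness.min() < best_val:
            best_val = fitness.min()
            best = pos[fitness.argmin()].copy()
    return best, best_val

# search over burden (m), spacing (m), charge per delay (kg)
best_x, best_ppv = goa_minimise(predicted_ppv,
                                lb=[2.0, 3.0, 150.0],
                                ub=[4.0, 5.0, 400.0])
```

In the paper's setting, `predicted_ppv` would be replaced by the functional GEP expression, and the reported optimum (burden 3.21 m, spacing 3.75 m, charge 225 kg) emerges from that specific objective rather than this sketch.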
Stability evaluation of dump slope using artificial neural network and multiple regression
- Bharati, Ashutosh, Ray, Arunava, Khandelwal, Manoj, Rai, Rajesh, Jaiswal, Ashok
- Authors: Bharati, Ashutosh , Ray, Arunava , Khandelwal, Manoj , Rai, Rajesh , Jaiswal, Ashok
- Date: 2022
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 38, no. (2022), p. 1835-1843
- Full Text:
- Reviewed:
- Description: The present paper focuses on designing an artificial neural network (ANN) model and a multiple regression analysis (MRA) model to predict the factor of safety of dragline dump slopes. The dataset for both models was drawn from numerical simulations of 216 dragline dump slope models, carried out with the finite element method. Each finite element model combined three geometrical parameters of the dump slope: coal-rib height (Crh), dragline dump slope height (Sh), and dragline dump slope angle (Sa). The predictions of the MRA and ANN models were compared with the results of the numerical simulations. To assess the validity of both models, several performance indicators were calculated: variance account for (VAF), determination coefficient (R2), root mean square error (RMSE), and residual error. On these indicators, the ANN model showed higher prediction accuracy than the MRA model. The study reveals that the ANN model developed in this research could be handy for designing dragline dump slopes at the preliminary stage. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd. part of Springer Nature.
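The comparison described above, fitting both a multiple regression and a small neural network to the same simulated slope data and scoring them with R2, RMSE, and VAF, can be sketched as follows. The synthetic factor-of-safety relation, parameter ranges, and network size are illustrative assumptions standing in for the paper's 216 FEM cases:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the FEM dataset: coal-rib height (Crh),
# slope height (Sh), slope angle (Sa) -> factor of safety (fos).
n = 216
Crh = rng.uniform(10, 30, n)
Sh = rng.uniform(40, 120, n)
Sa = rng.uniform(25, 40, n)
fos = 2.5 - 0.01 * Sh - 0.02 * Sa + 0.005 * Crh + rng.normal(0, 0.05, n)

X = np.column_stack([Crh, Sh, Sa])
Xn = (X - X.mean(0)) / X.std(0)  # normalised inputs for the network

# --- multiple regression (ordinary least squares) ---
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, fos, rcond=None)
pred_mra = A @ coef

# --- tiny one-hidden-layer ANN trained by batch gradient descent ---
h = 8
W1 = rng.normal(0, 0.5, (3, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)
y = fos[:, None]
lr = 0.05
for _ in range(3000):
    z = np.tanh(Xn @ W1 + b1)          # hidden layer
    out = z @ W2 + b2                  # linear output
    err = out - y
    gW2 = z.T @ err / n; gb2 = err.mean(0)
    dz = (err @ W2.T) * (1 - z**2)     # backprop through tanh
    gW1 = Xn.T @ dz / n; gb1 = dz.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
pred_ann = (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()

def indicators(y_true, y_pred):
    """The paper's performance indicators: R2, RMSE, and VAF (%)."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    vaf = 100 * (1 - np.var(y_true - y_pred) / np.var(y_true))
    return r2, rmse, vaf

print("MRA:", indicators(fos, pred_mra))
print("ANN:", indicators(fos, pred_ann))
```

Because this synthetic relation is linear, the regression fits it well; on the paper's FEM data, which is presumably nonlinear in the three geometry parameters, the ANN's advantage reported above would show up in exactly these indicators.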
An agriprecision decision support system for weed management in pastures
- Chegini, Hossein, Naha, Ranesh, Mahanti, Aniket, Gong, Mingwei, Passi, Kalpdrum
- Authors: Chegini, Hossein , Naha, Ranesh , Mahanti, Aniket , Gong, Mingwei , Passi, Kalpdrum
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 92660-92675
- Full Text:
- Reviewed:
- Description: Pastures are a vital source of dairy products and cattle nutrition, and as such, play a significant role in New Zealand's agricultural economy. However, weeds can be a major problem for pastures, making it a challenge for dairy farmers to monitor and control them. Currently, most weed-management tasks are done manually, and farmers lack persistent technology for weed control. This motivated us to design, implement, and evaluate a Decision Support System (DSS) that detects weeds in pastures and provides decisions for weed cleanup. Our proposed system uses two primary inputs: weeds and bare patches. We created a synthetic dataset to train a weed detection model and designed a fuzzy inference system to assess a pasture. We also used a neuro-fuzzy system in our DSS to evaluate our fuzzy model and tune its parameters for better functioning and accuracy. Our work aims to assist dairy farmers in better weed monitoring, and to provide 2D maps of weed density and yield score, which can be of significant value when no digital and meaningful images of pastures exist. The system can also support farmers in scheduling, recommending prohibitive tasks, and storing historical data for pasture analysis in collaboration with stakeholders. © 2013 IEEE.
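The fuzzy inference step described above, mapping weed density and bare-patch inputs to a pasture assessment, can be illustrated with a minimal rule base. The membership breakpoints, rule set, and output scores below are illustrative guesses, not parameters from the paper (whose neuro-fuzzy stage would tune such values from data):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pasture_score(weed_density, bare_ratio):
    """Fuzzy inference reduced to a weighted average of rule consequents.

    Inputs are in [0, 1]; output is a pasture health score in [0, 100].
    """
    low_w  = tri(weed_density, -0.4, 0.0, 0.4)
    med_w  = tri(weed_density,  0.2, 0.5, 0.8)
    high_w = tri(weed_density,  0.6, 1.0, 1.4)
    low_b  = tri(bare_ratio, -0.4, 0.0, 0.4)
    high_b = tri(bare_ratio,  0.3, 1.0, 1.7)

    # rule strength = min of antecedent memberships; consequent = crisp score
    rules = [
        (min(low_w,  low_b), 90.0),   # few weeds, few bare patches -> healthy
        (min(med_w,  low_b), 60.0),
        (min(high_w, low_b), 30.0),
        (min(low_w,  high_b), 50.0),
        (min(med_w,  high_b), 30.0),
        (min(high_w, high_b), 10.0),  # weedy and patchy -> poor
    ]
    total = sum(w for w, _ in rules)
    if total == 0:
        return 50.0  # no rule fired: neutral default
    return sum(w * score for w, score in rules) / total

print(pasture_score(0.1, 0.05))  # mostly clean pasture -> high score
print(pasture_score(0.9, 0.6))   # heavily infested, patchy -> low score
```

In the full DSS, the two inputs would come from the weed-detection model and the bare-patch analysis, and the defuzzified score would feed the 2D health maps and cleanup recommendations.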