Edge computing for Internet of Everything : a survey
- Kong, Xiangjie, Wu, Yuhan, Wang, Hui, Xia, Feng
- Authors: Kong, Xiangjie , Wu, Yuhan , Wang, Hui , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Internet of Things Journal Vol. 9, no. 23 (2022), p. 23472-23485
- Full Text:
- Reviewed:
- Description: In this era of the Internet of Everything (IoE), edge computing has emerged as the critical enabling technology to solve a series of issues caused by an increasing number of interconnected devices and large-scale data transmission. However, the deficiencies of the edge computing paradigm are gradually being magnified in the context of IoE, especially in terms of service migration, security and privacy preservation, and the deployment of edge nodes. These issues cannot be well addressed by conventional approaches. Thanks to the rapid development of upcoming technologies, such as artificial intelligence (AI), blockchain, and microservices, novel and more effective solutions have emerged and been applied to solve existing challenges. In addition, edge computing can be deeply integrated with technologies in other domains (e.g., AI, blockchain, 6G, and digital twin) through interdisciplinary intersection and practice, releasing the potential for mutual benefit. These promising integrations need to be further explored and researched. Edge computing also provides strong support in application scenarios, such as remote working, new physical retail industries, and digital advertising, which has greatly changed the way we live, work, and study. In this article, we present an up-to-date survey of edge computing research. In addition to introducing the definition, model, and characteristics of edge computing, we discuss a set of key issues in edge computing and novel solutions supported by emerging technologies in the IoE era. Furthermore, we explore potential and promising trends from the perspective of technology integration. Finally, new application scenarios and the final form of edge computing are discussed. © 2014 IEEE.
Emerging point of care devices and artificial intelligence : prospects and challenges for public health
- Stranieri, Andrew, Venkatraman, Sitalakshmi, Minicz, John, Zarnegar, Armita, Firmin, Sally, Balasubramanian, Venki, Jelinek, Herbert
- Authors: Stranieri, Andrew , Venkatraman, Sitalakshmi , Minicz, John , Zarnegar, Armita , Firmin, Sally , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2022
- Type: Text , Journal article
- Relation: Smart Health Vol. 24, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Risk assessments for numerous conditions can now be performed cost-effectively and accurately using emerging point of care devices coupled with machine learning algorithms. In this article, the case is advanced that point of care testing in combination with risk assessments generated with artificial intelligence algorithms, applied to the universal screening of the general public for multiple conditions at one session, represents a new kind of inexpensive screening that can lead to the early detection of disease and other public health benefits. A case study of a diabetes screening clinic in a rural area of Australia is presented to illustrate its benefits. Universal, poly-aetiological screening is shown to meet the ten World Health Organisation criteria for screening programmes. © Elsevier Inc.
Energy harvesting in underwater acoustic wireless sensor networks : design, taxonomy, applications, challenges and future directions
- Khan, Anwar, Imran, Muhammad, Alharbi, Abdullah, Mohamed, Ehab, Fouda, Mostafa
- Authors: Khan, Anwar , Imran, Muhammad , Alharbi, Abdullah , Mohamed, Ehab , Fouda, Mostafa
- Date: 2022
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 10, no. (2022), p. 134606-134622
- Full Text:
- Reviewed:
- Description: In underwater acoustic wireless sensor networks (UAWSNs), energy harvesting either enhances the lifetime of a network by increasing the battery power of sensor nodes or ensures battery-less operation of nodes. This, in effect, results in sustainable and reliable operation of the network deployed for various underwater applications. This work provides a survey of energy harvesting techniques for UAWSNs. Our work differs from existing work on underwater energy harvesting in that it includes state-of-the-art techniques designed in the last decade. It analyzes every harvesting scheme in terms of its main idea, merits, demerits, and the extent of the harvested power (energy). The description of the merits supports the selection of the scheme best suited to a given underwater application, while the demerits of the addressed schemes provide insight into their future enhancement and improvement. Moreover, the harvesting techniques are classified into various categories depending upon the energy harvesting mechanism involved and compared based on the maximum and minimum amount of harvested power, which helps in selecting the suitable category in view of the power budget of an underwater network before deployment. The challenges in energy harvesting and in UAWSNs are described to provide insight into them and to guide further enhancement of the harvested power. Finally, research directions are specified for future investigation. © 2013 IEEE.
False data detection in a clustered smart grid using unscented Kalman filter
- Rashed, Muhammad, Kamruzzaman, Joarder, Gondal, Iqbal, Islam, Syed
- Authors: Rashed, Muhammad , Kamruzzaman, Joarder , Gondal, Iqbal , Islam, Syed
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 78548-78556
- Full Text:
- Reviewed:
- Description: Smart grid accessibility over the Internet of Things (IoT) is becoming attractive to electrical grid operators as it brings considerable operational and cost efficiencies. However, this in turn creates significant cyber security challenges, such as the fortification of state estimation data (e.g., state variables) against false data injection attacks (FDIAs). In this paper, a clustered partitioning state estimation (CPSE) technique is proposed to detect FDIAs by using static state estimation, namely, the weighted least squares (WLS) method, in conjunction with dynamic state estimation using the minimum variance unscented Kalman filter (MV-UKF), which improves the accuracy of state estimation. The estimates acquired from the MV-UKF do not deviate as the WLS estimates do, since they are based purely on the previous iteration saved in the transition matrix. The deviation between the corresponding WLS and MV-UKF estimates is utilised to partition the smart grid into smaller sub-systems to detect an FDIA and then identify its location. To validate the proposed detection technique, FDIAs are injected into the IEEE 14-bus, IEEE 30-bus, IEEE 118-bus, and IEEE 300-bus distribution feeders using the MATPOWER simulation platform. Our results clearly demonstrate that the proposed technique can locate the attack area efficiently compared to other techniques such as the chi-square test. © 2013 IEEE.
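As a rough illustration of the deviation-based detection idea described above, the sketch below flags buses whose static (WLS) and dynamic (MV-UKF) estimates diverge beyond a threshold. The arrays, threshold, and bus values are hypothetical; this is not the authors' CPSE implementation.

```python
import numpy as np

def flag_fdia(x_wls, x_ukf, threshold):
    """Flag buses whose WLS and UKF state estimates diverge.

    x_wls, x_ukf : per-bus state estimates (e.g., voltage angles)
    threshold    : deviation above which a bus is treated as suspect
    Returns indices of suspect buses (candidate FDIA locations).
    """
    deviation = np.abs(np.asarray(x_wls) - np.asarray(x_ukf))
    return np.where(deviation > threshold)[0]

# Hypothetical example: injected false data perturbs the WLS estimate at bus 2.
x_wls = [1.02, 0.98, 1.35, 1.01]
x_ukf = [1.01, 0.99, 1.00, 1.00]
print(flag_fdia(x_wls, x_ukf, threshold=0.1))  # -> [2]
```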
Filter feature selection based boolean modelling for genetic network inference
- Gamage, Hasini, Chetty, Madhu, Shatte, Adrian, Hallinan, Jennifer
- Authors: Gamage, Hasini , Chetty, Madhu , Shatte, Adrian , Hallinan, Jennifer
- Date: 2022
- Type: Text , Journal article
- Relation: BioSystems Vol. 221, no. (2022), p.
- Full Text:
- Reviewed:
- Description: The reconstruction of Gene Regulatory Networks (GRNs) from time series gene expression data is highly relevant for the discovery of complex biological interactions and dynamics. Various computational strategies have been developed for this task, but most approaches have low computational efficiency and are not able to cope with high-dimensional, low sample-number gene expression data. In this paper, we introduce a novel combined filter feature selection approach for efficient and accurate inference of GRNs. A Boolean framework for network modelling is used to demonstrate the efficacy of the proposed approach. Using discretized microarray expression data, the genes most relevant to each target gene are first filtered using ReliefF, an instance-based feature ranking method applied here for the first time to GRN inference. Then, further gene selection from the filtered gene list is performed using a mutual information-based min-redundancy max-relevance criterion, eliminating irrelevant genes. This combined method is executed on resampled datasets to finalize the optimal set of regulatory genes. Building upon our previous research, a Pearson correlation coefficient-based Boolean modelling approach is utilized for the efficient identification of the optimal regulatory rules associated with the selected regulatory genes. The proposed approach was evaluated using gene expression datasets from small-scale and medium-scale real gene networks; it was observed to be more effective than Linear Discriminant Analysis and the individual feature selection methods, obtained improved Structural Accuracy with a higher number of true positives than other state-of-the-art methods, and outperformed these methods with respect to Dynamic Accuracy and efficiency. © 2022 Elsevier B.V.
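The two-stage filtering described above (a relevance ranking followed by min-redundancy max-relevance selection) can be sketched roughly as below. The greedy mRMR step on discretized data is a generic illustration that assumes a pre-filtered candidate list (e.g., from ReliefF); it is not the paper's exact pipeline.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr_select(X, y, candidates, k):
    """Greedy min-redundancy max-relevance selection on discretized data.

    X          : (n_samples, n_genes) discretized expression matrix
    y          : target gene's discretized expression
    candidates : gene indices pre-filtered by a relevance ranking
    k          : number of putative regulators to keep
    """
    selected = []
    while candidates and len(selected) < k:
        def score(j):
            relevance = mutual_info_score(X[:, j], y)
            redundancy = np.mean([mutual_info_score(X[:, j], X[:, s])
                                  for s in selected]) if selected else 0.0
            return relevance - redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates = [c for c in candidates if c != best]
    return selected

# Example with hypothetical discretized data:
# X = np.random.randint(0, 3, size=(50, 20)); y = X[:, 0]
# print(mrmr_select(X, y, candidates=list(range(1, 20)), k=3))
```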
Formal modeling and verification of a blockchain-based crowdsourcing consensus protocol
- Afzaal, Hamra, Imran, Muhammad, Janjua, Muhammad, Gochhayat, Sarada
- Authors: Afzaal, Hamra , Imran, Muhammad , Janjua, Muhammad , Gochhayat, Sarada
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 8163-8183
- Full Text:
- Reviewed:
- Description: Crowdsourcing is an effective technique that allows humans to solve complex problems that are hard to accomplish with automated tools. Some significant challenges in crowdsourcing systems include avoiding security attacks, effective trust management, and ensuring the system's correctness. Blockchain is a promising technology that can be efficiently exploited to address security and trust issues. The consensus protocol is a core component of a blockchain network through which all the blockchain peers achieve an agreement about the state of the distributed ledger; therefore, its security, trustworthiness, and correctness are of vital importance. This work proposes a Secure and Trustworthy Blockchain-based Crowdsourcing (STBC) consensus protocol to address these challenges. Model checking, an effective and automatic technique based on formal methods, is utilized to ensure the correctness of the STBC consensus protocol. The proposed consensus protocol's formal specification is described using Communicating Sequential Programs (CSP#). Safety, fault tolerance, leader trust, and validators' trust are important properties for a consensus protocol; these are formally specified through Linear Temporal Logic (LTL) to prevent several security attacks, such as blockchain fork, selfish mining, and invalid block insertion. The Process Analysis Toolkit (PAT) is utilized for the formal verification of the proposed consensus protocol. © 2022 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
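For readers unfamiliar with LTL, properties of the kind listed above (e.g., absence of a blockchain fork, eventual validation of proposed blocks) are commonly written with the "always" and "eventually" operators. The propositions below are illustrative placeholders, not the paper's actual specification:

\[
\Box\,\neg\,\mathit{fork}, \qquad \Box\big(\mathit{blockProposed} \rightarrow \Diamond\,\mathit{blockValidated}\big)
\]

where \(\Box\) requires the property to hold in every reachable state (a safety condition) and \(\Diamond\) requires it to hold in some future state (a liveness condition).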
Joint resource allocation to minimize execution time of federated learning in cell-free massive MIMO
- Vu, Tung, Ngo, Duy, Ngo, Hien, Dao, Minh, Tran, Nguyen, Middleton, Richard
- Authors: Vu, Tung , Ngo, Duy , Ngo, Hien , Dao, Minh , Tran, Nguyen , Middleton, Richard
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Internet of Things Journal Vol. 9, no. 21 (2022), p. 21736-21750
- Full Text:
- Reviewed:
- Description: Due to its communication efficiency and privacy-preserving capability, federated learning (FL) has emerged as a promising framework for machine learning in 5G-and-beyond wireless networks. Of great interest is the design and optimization of new wireless network structures that support the stable and fast operation of FL. Cell-free massive multiple-input-multiple-output (CFmMIMO) turns out to be a suitable candidate, which allows each communication round in the iterative FL process to be stably executed within a large-scale coherence time. Aiming to reduce the total execution time of the FL process in CFmMIMO, this article proposes choosing only a subset of available users to participate in FL. An optimal selection of users with favorable link conditions would minimize the execution time of each communication round while limiting the total number of communication rounds required. Toward this end, we formulate a joint optimization problem of user selection, transmit power, and processing frequency, subject to a predefined minimum number of participating users to guarantee the quality of learning. We then develop a new algorithm that is proven to converge to the neighborhood of the stationary points of the formulated problem. Numerical results confirm that our proposed approach significantly reduces the FL total execution time over baseline schemes. The time reduction is more pronounced when the density of access point deployments is moderately low. © 2014 IEEE.
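A simplified per-round time model helps convey why user selection matters: with synchronous aggregation, a round is as slow as the slowest participating user, so excluding users with poor links can shorten each round at the risk of needing more rounds. The sketch below uses hypothetical fields and is not the paper's optimization formulation.

```python
def round_time(users, bandwidth_hz):
    """Crude per-round FL time: the slowest selected user dominates.

    users: list of dicts with model_bits, cpu_cycles, cpu_hz, spectral_eff (bits/s/Hz).
    """
    per_user = []
    for u in users:
        t_comp = u["cpu_cycles"] / u["cpu_hz"]                       # local training time
        t_up = u["model_bits"] / (bandwidth_hz * u["spectral_eff"])  # uplink of model update
        per_user.append(t_comp + t_up)
    return max(per_user)  # synchronous aggregation waits for the slowest user
```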
Mathematical modeling and parametric study of the limaçon rotary compressor
- Lu, Kui, Sultan, Ibrahim, Phung, Truong
- Authors: Lu, Kui , Sultan, Ibrahim , Phung, Truong
- Date: 2022
- Type: Text , Journal article
- Relation: International Journal of Refrigeration Vol. 134, no. (2022), p. 219-231
- Full Text:
- Reviewed:
- Description: In this paper, a class of rotary positive displacement compressors known as the limaçon compressor is introduced. The main feature of such a compressor is that the profiles of its housing and rotor, and the motion of its rotor, are developed from a mathematical curve called the limaçon of Pascal. A mathematical model of the limaçon compressor, which incorporates the mass flow of the working fluid, the leakage loss, the dynamic response of the discharge valve, and the thermodynamic behaviour, is formulated, and a simulation of this model has been performed to study the operational characteristics of the limaçon compressor. A parametric analysis is also conducted to investigate the effects of various parameters on compressor performance. Based on the results, it is found that machine performance deteriorates as the operating speed increases, despite an initial rise in volumetric efficiency. Additionally, the isentropic efficiency appears insensitive to changes in the pressure ratio, whereas a negative effect on the volumetric efficiency is noticed when the pressure ratio is increased. The effect of the valve diameter on the over-compression loss has also been studied, and the result indicates that a smaller valve diameter leads to a higher level of fluid over-compression. © 2021 Elsevier Ltd and IIR
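For reference, the limaçon of Pascal underlying the housing and rotor profiles can be written in its textbook polar form (the paper's specific parametrisation may differ):

\[
r(\theta) = b + a\cos\theta
\]

where the ratio of \(a\) to \(b\) determines whether the curve is convex, dimpled, or has an inner loop.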
Multi-mode damping control approach for the optimal resilience of renewable-rich power systems
- Setiadi, Herlambang, Mithulananthan, Nadarajah, Shah, Rakibuzzaman, Islam, Md Rabiul, Fekih, Afer, Krismanto, Awan, Abdillah, Muhammad
- Authors: Setiadi, Herlambang , Mithulananthan, Nadarajah , Shah, Rakibuzzaman , Islam, Md Rabiul , Fekih, Afer , Krismanto, Awan , Abdillah, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: Energies Vol. 15, no. 9 (2022), p.
- Full Text:
- Reviewed:
- Description: The integration of power-electronics-based power plants is developing significantly due to the proliferation of renewable energy sources. Although this type of power plant can positively affect society by providing clean and sustainable energy, it also brings adverse effects, especially regarding the stability of the power system. The lack of inertia and different dynamic characteristics are the main issues associated with power-electronics-based power plants that could affect the oscillatory behaviour of the power system. Hence, it is important to design a comprehensive damping controller to damp oscillations caused by the integration of power-electronics-based power plants. This paper proposes a damping method for enhancing the oscillatory stability performance of power systems with high penetration of renewable energy systems. A resilient wide-area multimodal controller is proposed and used in conjunction with a battery energy storage system (BESS) to enhance the damping of critical modes. The proposed control also addresses resiliency issues associated with control signals and controllers. The optimal tuning of the control parameters for this controller is challenging; hence, the firefly algorithm was adopted as the optimisation method to design the wide-area multimodal controllers for BESS, wind, and photovoltaic (PV) systems. The performance of the proposed approach was assessed using a modified version of the Java Indonesian power system under various operating conditions. Both eigenvalue analysis and time-domain simulations are considered in the analysis. A comparison with other well-known metaheuristic methods was also carried out to show the proposed method's efficacy. The obtained results confirmed the superior performance of the proposed approach in enhancing the small-signal stability of renewable-rich power systems. They also revealed that the proposed multimodal controller could enhance the penetration of renewable energy sources in the Java power system by up to 50%. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
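For context, the firefly algorithm used to tune the controller parameters moves each candidate solution toward brighter (better) ones according to the standard update rule (textbook form; the paper's specific settings may differ):

\[
x_i \leftarrow x_i + \beta_0\, e^{-\gamma r_{ij}^{2}}\,(x_j - x_i) + \alpha\, \epsilon_i
\]

where \(r_{ij}\) is the distance between fireflies \(i\) and \(j\), \(\beta_0\) is the attractiveness at zero distance, \(\gamma\) is the light absorption coefficient, and \(\alpha\,\epsilon_i\) is a random perturbation.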
Multimodal educational data fusion for students' mental health detection
- Guo, Teng, Zhao, Wenhong, Alrashoud, Mubarak, Tolba, Amr, Firmin, Sally, Xia, Feng
- Authors: Guo, Teng , Zhao, Wenhong , Alrashoud, Mubarak , Tolba, Amr , Firmin, Sally , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 70370-70382
- Full Text:
- Reviewed:
- Description: Mental health issues can lead to serious consequences like depression, self-mutilation, and worse, especially for university students, who may not yet be fully mature physically and mentally. Not all students with poor mental health are aware of their situation and actively seek help, so proactive detection of mental health problems is a critical step in addressing this issue. However, accurate detection is hard to achieve due to the inherent complexity and heterogeneity of the unstructured multi-modal data generated by campus life. Against this background, we propose a framework for detecting students' mental health, named CASTLE (educational data fusion for mental health detection). Three parts are involved in this framework. First, we utilize representation learning to fuse data on social life, academic performance, and physical appearance. An algorithm named MOON (multi-view social network embedding) is proposed to represent students' social life in a comprehensive way by effectively fusing students' heterogeneous social relations. Second, the synthetic minority oversampling technique (SMOTE) is applied to the label imbalance issue. Finally, a deep neural network (DNN) model is utilized for the final detection. Extensive results demonstrate the promising performance of the proposed methods in comparison to an extensive range of state-of-the-art baselines. © 2013 IEEE.
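The oversample-then-classify step can be illustrated with a minimal sketch using SMOTE and a small neural network. The fused CASTLE features, MOON embeddings, and the actual DNN architecture are not reproduced here, and the split and hyperparameters are assumptions.

```python
from imblearn.over_sampling import SMOTE
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def train_detector(X, y):
    """Balance the minority label with SMOTE, then fit a small neural classifier."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample minority class
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X_bal, y_bal)
    return clf, clf.score(X_te, y_te)  # held-out accuracy
```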
Multistep predictions for adaptive sampling in mobile robotic sensor networks using proximal ADMM
- Le, Viet-Anh, Nguyen, Linh, Nghiem, Truong
- Authors: Le, Viet-Anh , Nguyen, Linh , Nghiem, Truong
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 64850-64861
- Full Text:
- Reviewed:
- Description: This paper presents a novel approach, using multi-step predictions, to the adaptive sampling problem for efficient monitoring of environmental spatial phenomena in a mobile sensor network. We employ a Gaussian process to represent the spatial field of interest, which is then used to predict the field at unmeasured locations. The adaptive sampling problem aims to drive the mobile sensors to optimally navigate the environment while the sensors adaptively take measurements of the spatial phenomena at each sampling step. To this end, an optimal sampling criterion based on conditional entropy is proposed, which minimizes the prediction uncertainty of the Gaussian process model. By predicting the measurements the mobile sensors potentially take in a finite horizon of multiple future sampling steps and exploiting the chain rule of the conditional entropy, a multi-step-ahead adaptive sampling optimization problem is formulated. Its objective is to find the optimal sampling paths for the mobile sensors in multiple sampling steps ahead. Robot-robot and robot-obstacle collision avoidance is formulated as mixed-integer constraints. Compared with the single-step-ahead approach typically adopted in the literature, our approach provides better navigation, deployment, and data collection with more informative sensor readings. However, the resulting mixed-integer nonlinear program is highly complex and intractable. We propose to employ the proximal alternating direction method of multipliers to efficiently solve this problem. More importantly, the solution obtained by the proposed algorithm is theoretically guaranteed to converge to a stationary value. The effectiveness of our proposed approach was extensively validated by simulation using a real-world dataset, which showed highly promising results. © 2013 IEEE.
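The entropy-based criterion can be illustrated by computing the differential entropy of the Gaussian process predictive distribution over candidate locations. The kernel choice and jitter below are assumptions, and the multi-step path optimization and collision constraints are not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def posterior_entropy(gp, candidate_locations):
    """Differential entropy of the GP predictive distribution at candidate points.

    Lower entropy after conditioning on a candidate path means less prediction
    uncertainty, which is the quantity the sampling criterion seeks to minimize.
    """
    _, cov = gp.predict(candidate_locations, return_cov=True)
    k = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov + 1e-9 * np.eye(k))  # jitter for numerical stability
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

# Usage sketch (X_measured, y_measured are prior field measurements):
# gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_measured, y_measured)
# h = posterior_entropy(gp, candidate_grid)
```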
Sequence-to-sequence learning-based conversion of pseudo-code to source code using neural translation approach
- Acharjee, Uzzal, Arefin, Minhazul, Hossen, Kazi, Uddin, Mohammed, Uddin, Md Ashraf, Islam, Linta
- Authors: Acharjee, Uzzal , Arefin, Minhazul , Hossen, Kazi , Uddin, Mohammed , Uddin, Md Ashraf , Islam, Linta
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 26730-26742
- Full Text:
- Reviewed:
- Description: Pseudo-code refers to an informal means of representing algorithms that does not require the exact syntax of a computer programming language. Pseudo-code helps developers and researchers represent their algorithms using human-readable language. Generally, researchers can convert pseudo-code into computer source code using different conversion techniques, and the efficiency of such conversion methods is measured based on the converted algorithm's correctness. Researchers have already explored diverse technologies to devise conversion methods with higher accuracy. This paper proposes a novel pseudo-code conversion learning method that includes natural language processing-based text preprocessing and a sequence-to-sequence deep learning-based model trained with the SPoC dataset. We conducted an extensive experiment on our designed algorithm using bilingual evaluation understudy (BLEU) scoring and compared our results with state-of-the-art techniques. Result analysis shows that our approach is more accurate and efficient than other existing conversion methods in terms of several performance metrics. Furthermore, the proposed method outperforms the existing approaches because it utilizes two long short-term memory (LSTM) networks, which might increase the accuracy. © 2013 IEEE.
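A minimal encoder-decoder built from two LSTMs illustrates the sequence-to-sequence idea. The vocabulary sizes and dimensions are assumed, and the absence of attention, tokenisation, and a training loop makes this a simplification rather than the paper's actual model.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal LSTM encoder-decoder: pseudo-code tokens in, source-code tokens out."""
    def __init__(self, src_vocab, tgt_vocab, emb=128, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)  # reads pseudo-code tokens
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)  # emits code tokens
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))           # context as initial decoder state
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)                                  # per-position token logits

# Shape check with random token ids (hypothetical vocabularies):
model = Seq2Seq(src_vocab=5000, tgt_vocab=5000)
logits = model(torch.randint(0, 5000, (2, 20)), torch.randint(0, 5000, (2, 30)))
```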
Stability evaluation of dump slope using artificial neural network and multiple regression
- Bharati, Ashutosh, Ray, Arunava, Khandelwal, Manoj, Rai, Rajesha, Jaiswal, Ashok
- Authors: Bharati, Ashutosh , Ray, Arunava , Khandelwal, Manoj , Rai, Rajesha , Jaiswal, Ashok
- Date: 2022
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 38, no. (2022), p. 1835-1843
- Full Text:
- Reviewed:
- Description: The present paper focuses on designing an artificial neural network (ANN) model and a multiple regression analysis (MRA) model that could be used to predict the factor of safety of dragline dump slopes. To implement these two models, the dataset was drawn from the numerical simulation results of dragline dump slopes, wherein 216 dragline dump slope models were simulated using a numerical modeling technique based on the finite element method. The finite element model incorporated a combination of three geometrical parameters, namely, coal-rib height (Crh), dragline dump slope height (Sh), and dragline dump slope angle (Sa). The predicted results derived from the MRA and ANN models were compared with the results obtained from the numerical simulation of the dump slope models. Moreover, to compare the validity of both models, various performance indicators, such as variance accounted for (VAF), the coefficient of determination (R2), root mean square error (RMSE), and residual error, were calculated. Based on these performance indicators, the ANN model showed higher prediction accuracy than the MRA model. The study reveals that the ANN model developed in this research could be handy in designing dragline dump slopes at the preliminary stage. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd. part of Springer Nature.
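The performance indicators mentioned above are straightforward to compute; for example, VAF and RMSE can be sketched as follows (the factor-of-safety values in the example are hypothetical).

```python
import numpy as np

def vaf(y_true, y_pred):
    """Variance accounted for (%), one of the indicators used to compare models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return (1.0 - np.var(y_true - y_pred) / np.var(y_true)) * 100.0

def rmse(y_true, y_pred):
    """Root mean square error between simulated and predicted factors of safety."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical factors of safety: numerical simulation vs. surrogate prediction.
fos_sim, fos_pred = [1.2, 1.5, 1.1, 1.8], [1.25, 1.45, 1.15, 1.75]
print(vaf(fos_sim, fos_pred), rmse(fos_sim, fos_pred))
```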
Water quality management using hybrid machine learning and data mining algorithms : an indexing approach
- Aslam, Bilal, Maqsoom, Ahsen, Cheema, Ali, Ullah, Fahim, Alharbi, Abdullah, Imran, Muhammad
- Authors: Aslam, Bilal , Maqsoom, Ahsen , Cheema, Ali , Ullah, Fahim , Alharbi, Abdullah , Imran, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 119692-119705
- Full Text:
- Reviewed:
- Description: One of the key functions of global water resource management authorities is river water quality (WQ) assessment. A water quality index (WQI) is developed for water assessments considering numerous quality-related variables. WQI assessments typically take a long time and are prone to errors during sub-indices generation. This can be tackled through the latest machine learning (ML) techniques, renowned for superior accuracy. In this study, water samples were taken from wells in the study area (North Pakistan) to develop WQI prediction models. Four standalone algorithms, i.e., random trees (RT), random forest (RF), M5P, and reduced error pruning tree (REPT), were used in this study. In addition, 12 hybrid data-mining algorithms (combinations of the standalone algorithms with bagging (BA), cross-validation parameter selection (CVPS), and randomizable filtered classification (RFC)) were also used. Using the 10-fold cross-validation technique, the data were separated into two groups (70:30) for algorithm creation. Ten random input permutations were created using Pearson correlation coefficients to identify the best possible combination of datasets for improving the algorithm prediction. The variables with very low correlations performed poorly, whereas hybrid algorithms increased the prediction capability of numerous standalone algorithms. The hybrid RT-Artificial Neural Network (RT-ANN), with RMSE = 2.319, MAE = 2.248, NSE = 0.945, and PBIAS = -0.64, outperformed all other algorithms. Most algorithms overestimated WQI values, except for BA-RF, RF, BA-REPT, REPT, RFC-M5P, RFC-REPT, and the ANN-Adaptive Network-Based Fuzzy Inference System (ANFIS). © 2013 IEEE.
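For reference, NSE and PBIAS can be computed as below. The sign convention shown (negative PBIAS for overestimation) is consistent with how the results above are reported, though other conventions exist.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: negative values indicate overestimation under this convention."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```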
A federated learning-based license plate recognition scheme for 5G-enabled Internet of vehicles
- Kong, Xiangjie, Wang, Kailai, Hou, Mingliang, Hao, Xinyu, Shen, Guojiang, Chen, Xin, Xia, Feng
- Authors: Kong, Xiangjie , Wang, Kailai , Hou, Mingliang , Hao, Xinyu , Shen, Guojiang , Chen, Xin , Xia, Feng
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 17, no. 12 (Dec 2021), p. 8523-8530
- Full Text:
- Reviewed:
- Description: The license plate is an essential characteristic for identifying vehicles in traffic management, and thus license plate recognition is important for the Internet of Vehicles. Since 5G coverage has become widespread, mobile devices are utilized to assist traffic management, which is a significant part of Industry 4.0. However, there have always been privacy risks due to the centralized training of models. Also, the trained model cannot be directly deployed on a mobile device due to its large number of parameters. In this article, we propose a federated learning-based license plate recognition framework (FedLPR) to solve these problems. We design detection and recognition models suitable for deployment on mobile devices. In terms of user privacy, individuals' data are harnessed on their own mobile devices instead of the server to train models based on federated learning. Extensive experiments demonstrate that FedLPR has high accuracy and acceptable communication cost while preserving user privacy.
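The privacy argument rests on aggregating model updates rather than raw data; a generic FedAvg-style weighted average (not the exact FedLPR procedure) looks roughly like this.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation).

    client_weights : list of per-device parameter lists (one array per layer)
    client_sizes   : number of local samples each device trained on
    Only model updates leave the device; raw images (e.g., plate photos) stay local.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [sum(np.asarray(w[k]) * (n / total)
                for w, n in zip(client_weights, client_sizes))
            for k in range(n_layers)]
```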
A novel collaborative IoD-assisted VANET approach for coverage area maximization
- Ahmed, Gamil, Sheltami, Tarek, Mahmoud, Ashraf, Imran, Muhammad, Shoaib, Muhammad
- Authors: Ahmed, Gamil , Sheltami, Tarek , Mahmoud, Ashraf , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 61211-61223
- Full Text:
- Reviewed:
- Description: The Internet of Drones (IoD) is an efficient technique that can be integrated with vehicular ad-hoc networks (VANETs) to provide terrestrial communications by acting as an aerial relay when terrestrial infrastructure is unreliable or unavailable. To fully exploit the drones' flexibility and superiority, we propose a novel dynamic IoD collaborative communication approach for urban VANETs. Unlike most existing approaches, the IoD nodes are dynamically deployed based on the current locations of ground vehicles to effectively mitigate the isolated cars that are inevitable in conventional VANETs. For efficient coordination, we model the IoD to optimize coverage based on the location of vehicles. The goal is to obtain an efficient IoD deployment that maximizes the number of covered vehicles, i.e., minimizes the number of isolated vehicles in the target area. More importantly, the proposed approach provides sufficient interconnections between IoD nodes. To do so, an improved version of a succinct population-based metaheuristic, namely Improved Particle Swarm Optimization (IPSO), inspired by the food-searching behavior of bird flocks and fish schools, is implemented for the IoD-assisted VANET (IoDAV). Moreover, coverage, received signal quality, and IoD connectivity are captured simultaneously by IPSO's objective function for optimal IoD deployment. We carry out an extensive experiment based on the received signal at floating vehicles to examine the performance of the proposed IoDAV. We compare the results with the baseline VANET with no IoD (NIoD) and with fixed IoD assistance (FIoD). The comparisons are based on the coverage percentage of the ground vehicles and the quality of the received signal. The simulation results demonstrate that the proposed IoDAV approach finds the optimal IoD positions over time based on vehicle movements and achieves better coverage and better received signal quality than the NIoD and FIoD schemes. © 2013 IEEE.
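The coverage objective and the canonical particle swarm update can be sketched as below. The paper's IPSO adds further improvements as well as connectivity and signal-quality terms, so treat this as a generic illustration with assumed parameter values.

```python
import numpy as np

def covered_count(drone_xy, vehicle_xy, radius):
    """Number of vehicles within range of at least one drone (quantity to maximise)."""
    d = np.linalg.norm(vehicle_xy[:, None, :] - drone_xy[None, :, :], axis=-1)
    return int(np.sum(d.min(axis=1) <= radius))

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO velocity/position update for one particle (a drone placement vector)."""
    r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```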
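A minimal sketch of coverage-maximising particle swarm optimisation in the spirit of the deployment problem described above, using the standard PSO update rather than the paper's IPSO variant; vehicle positions, coverage radius, and swarm parameters are illustrative assumptions:

```python
# Standard PSO placing aerial relays to maximise the number of ground vehicles
# within a coverage radius. Not the paper's IPSO variant; all parameters below
# are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
vehicles = rng.uniform(0, 1000, size=(200, 2))   # ground vehicle positions (m)
N_DRONES, RADIUS = 4, 150.0                      # relays and coverage radius (m)

def covered(position_vector):
    """Number of vehicles within RADIUS of at least one drone."""
    drones = position_vector.reshape(N_DRONES, 2)
    d = np.linalg.norm(vehicles[:, None, :] - drones[None, :, :], axis=2)
    return int((d.min(axis=1) <= RADIUS).sum())

SWARM, DIM = 30, 2 * N_DRONES
pos = rng.uniform(0, 1000, size=(SWARM, DIM))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([covered(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random((2, SWARM, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1000)
    vals = np.array([covered(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(f"best deployment covers {covered(gbest)} of {len(vehicles)} vehicles")
```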
Changes in anthocyanin and antioxidant contents during maturation of Australian highbush blueberry (Vaccinium corymbosum L.) Cultivars †
- Johnson, Joel, Steicke, Michelle, Mani, Janice, Rao, Shiwangni, Anderson, Scott, Wakeling, Lara, Naiker, Mani
- Authors: Johnson, Joel , Steicke, Michelle , Mani, Janice , Rao, Shiwangni , Anderson, Scott , Wakeling, Lara , Naiker, Mani
- Date: 2021
- Type: Text , Journal article
- Relation: Engineering Proceedings Vol. 11, no. 1 (2021), p.
- Full Text:
- Reviewed:
- Description: The Australian blueberry industry is worth over $300 million, but there is limited information on the factors influencing the chemical composition of the fruit, particularly ripeness and harvest stage. This pilot study investigated changes in total monomeric anthocyanin content (TMAC; measured using the pH-differential method) and total antioxidant capacity (TAC; measured with the cupric reducing antioxidant capacity assay) of four Australian highbush blueberry cultivars (Denise, Blue Rose, Brigitta and Bluecrop) at four time points and three maturity stages (unripe, moderately ripe and fully ripe). The TAC of most cultivars decreased by 8–18% during ripening, although that of the Blue Rose cultivar increased markedly. However, the TAC of ripe fruit from this cultivar also fluctuated markedly throughout the harvest season (between 1168–2171 mg Trolox equivalents 100 g
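For reference, the pH-differential method mentioned above reduces to a standard formula expressed in cyanidin-3-glucoside equivalents; the sketch below applies it to hypothetical absorbance readings, since the paper's raw measurements are not reproduced here:

```python
# Standard pH-differential calculation for total monomeric anthocyanin content
# (TMAC), as cyanidin-3-glucoside equivalents. The absorbance readings and
# dilution factor below are hypothetical examples, not the study's data.

MW = 449.2       # g/mol, cyanidin-3-glucoside
EPSILON = 26900  # L mol^-1 cm^-1, molar absorptivity
PATH = 1.0       # cm, cuvette path length

def tmac_mg_per_l(a520_ph1, a700_ph1, a520_ph45, a700_ph45, dilution_factor):
    """TMAC (mg/L) from absorbance at 520 and 700 nm in pH 1.0 and pH 4.5 buffers."""
    a = (a520_ph1 - a700_ph1) - (a520_ph45 - a700_ph45)
    return a * MW * dilution_factor * 1000 / (EPSILON * PATH)

# Hypothetical readings for a diluted blueberry extract:
print(round(tmac_mg_per_l(0.820, 0.015, 0.210, 0.012, 10), 1), "mg/L")
```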
Cloudlet computing : recent advances, taxonomy, and challenges
- Babar, Mohammad, Khan, Muhammad, Ali, Farman, Imran, Muhammad, Shoaib, Muhammad
- Authors: Babar, Mohammad , Khan, Muhammad , Ali, Farman , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 29609-29622
- Full Text:
- Reviewed:
- Description: A cloudlet is an emerging computing paradigm that is designed to meet the requirements and expectations of the Internet of things (IoT) and tackle the conventional limitations of a cloud (e.g., high latency). The idea is to bring computing resources (i.e., storage and processing) to the edge of a network. This article presents a taxonomy of cloudlet applications, outlines cloudlet utilities, and describes recent advances, challenges, and future research directions. Based on the literature, a unique taxonomy of cloudlet applications is designed. Moreover, a cloudlet computation offloading application for augmenting resource-constrained IoT devices, handling compute-intensive tasks, and minimizing the energy consumption of related devices is explored. This study also highlights the viability of cloudlets to support smart systems and applications, such as augmented reality, virtual reality, and applications that require high-quality service. Finally, the role of cloudlets in emergency situations, hostile conditions, and in the technological integration of future applications and services is elaborated in detail. © 2013 IEEE.
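A minimal sketch of the offloading decision that motivates cloudlets: compare the estimated local completion time with the time to upload the input and execute on a nearby cloudlet. The cost model and parameter values are illustrative assumptions, not drawn from the surveyed works:

```python
# Toy cloudlet offloading decision based on estimated completion time.
# Device/cloudlet speeds, uplink rate, and RTT are hypothetical parameters.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required
    input_bytes: float   # data to upload if offloaded

def local_time(task, device_hz=1e9):
    return task.cycles / device_hz

def offload_time(task, cloudlet_hz=8e9, uplink_bps=50e6, rtt_s=0.01):
    return rtt_s + task.input_bytes * 8 / uplink_bps + task.cycles / cloudlet_hz

def decide(task):
    """Offload when the cloudlet finishes the task sooner than the device."""
    t_local, t_edge = local_time(task), offload_time(task)
    return ("offload" if t_edge < t_local else "local", t_local, t_edge)

# A compute-heavy task with a small input favours offloading:
print(decide(Task(cycles=5e9, input_bytes=200e3)))
# A light task with a large input stays local:
print(decide(Task(cycles=2e8, input_bytes=20e6)))
```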
DC fault identification in multiterminal HVDC systems based on reactor voltage gradient
- Hassan, Mehedi, Hossain, M., Shah, Rakibuzzaman
- Authors: Hassan, Mehedi , Hossain, M. , Shah, Rakibuzzaman
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 115855-115867
- Full Text:
- Reviewed:
- Description: With the increasing amount of renewable generation, the prospect of long-distance bulk power transmission impels the expansion of point-to-point High Voltage Direct Current (HVDC) grids into emerging Multi-terminal HVDC (MTDC) grids. DC grid protection with faster selectivity enhances the operational continuity of the MTDC grid. Based on the reactor voltage gradient (RVG), this paper proposes a fast and reliable fault identification technique with precise discrimination of internal and external DC faults. Considering the voltage developed across the modular multilevel converter (MMC) reactor and the DC terminal reactor, the RVG is formulated to characterise internal and external DC faults. With a window of four RVG samples, the proposed main protection scheme detects and discriminates the fault within five sampling intervals. Depending on the reactor current increment, a backup protection scheme is also proposed to enhance protection reliability. The performance of the proposed scheme is validated on a four-terminal MTDC grid. The results under representative fault events show that the proposed scheme can identify a DC fault within a millisecond. Moreover, the evaluation of protection sensitivity and robustness reveals that the scheme remains highly selective across a wide range of fault resistances and locations, higher sampling frequencies, and irrelevant transient events. Furthermore, the comparison results show that the proposed RVG method improves the discrimination performance of the protection scheme and is thereby a better choice for future DC fault identification.
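A minimal sketch of a gradient-window detector in the spirit of the RVG idea: flag a fault when four consecutive voltage-gradient samples exceed a threshold. The trace, threshold, and sampling rate below are synthetic illustrations, not the paper's tuned protection settings:

```python
# Gradient-window fault flagging on a synthetic reactor voltage trace.
# Sampling rate, threshold, and the trace itself are assumed for illustration.
import numpy as np

FS = 20_000          # sampling frequency (Hz), assumed
THRESHOLD = 50.0     # gradient threshold in kV/ms, assumed
WINDOW = 4           # consecutive samples required, as in the abstract

def detect_fault(voltage_kv):
    """Return the first index where WINDOW consecutive gradient samples exceed
    THRESHOLD, or None if no fault is flagged."""
    dt_ms = 1000.0 / FS
    rvg = np.abs(np.diff(voltage_kv)) / dt_ms          # kV per millisecond
    above = rvg > THRESHOLD
    for i in range(len(above) - WINDOW + 1):
        if above[i:i + WINDOW].all():
            return i
    return None

# Synthetic trace: steady 320 kV, then a steep collapse emulating an internal fault.
t = np.arange(0, 0.01, 1 / FS)
v = np.full_like(t, 320.0)
v[100:] -= 40.0 * (np.arange(len(t) - 100) + 1)        # sharp voltage drop
idx = detect_fault(v)
print("fault flagged at", None if idx is None else f"{idx / FS * 1000:.2f} ms")
```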
Deep learning-based approach for detecting trajectory modifications of Cassini-Huygens spacecraft
- Aldabbas, Ashraf, Gal, Zoltan, Ghori, Khawaja, Imran, Muhammad, Shoaib, Muhammad
- Authors: Aldabbas, Ashraf , Gal, Zoltan , Ghori, Khawaja , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 39111-39125
- Full Text:
- Reviewed:
- Description: The Cassini spacecraft required numerous trajectory modifications during the final 14 years of its interplanetary mission. At the scale of a 1.3-hour signal propagation time and a 1.4-billion-kilometer Earth-Cassini channel, detecting complex events in the orbit modifications requires special investigation and analysis of the collected big data. Space exploration technologies warrant a high standard of nuanced and detailed research. The Cassini mission accumulated huge volumes of science records, motivating the use of machine learning to analyze deep-space missions. For energy-saving reasons, communication between Earth and Cassini was executed in a non-periodic mode. This paper provides a deep learning approach for detecting Cassini spacecraft trajectory modifications in post-processing mode. The proposed model exploits the ability of Long Short-Term Memory (LSTM) neural networks to extract useful features and learn the inner patterns of time-series data, along with the strength of LSTM layers in capturing both long- and short-term dependencies. We used statistical rates, the Matthews correlation coefficient, and the F1 score to evaluate our models. We carried out multiple tests and evaluated the proposed approach against several advanced models. Preliminary analysis showed that the LSTM layer provides a notable boost in detection performance. The proposed model detected 232 trajectory modifications with 99.98% accuracy over the last 13.35 years of the Cassini spacecraft's life. © 2013 IEEE.
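A minimal sketch of an LSTM classifier for flagging events in telemetry windows, broadly in the spirit of the approach above; the architecture, window length, and synthetic data are hypothetical and do not reproduce the authors' features or hyperparameters:

```python
# LSTM binary classifier on synthetic telemetry windows: "event" windows contain
# a step change, "no-event" windows are pure noise. Architecture and data are
# illustrative assumptions, not the paper's model.
import numpy as np
from tensorflow.keras import layers, models

WINDOW, FEATURES = 32, 3
rng = np.random.default_rng(2)

X = rng.normal(size=(1000, WINDOW, FEATURES)).astype("float32")
y = rng.integers(0, 2, size=1000).astype("float32")
X[y == 1, WINDOW // 2:, 0] += 3.0              # inject a step into positive windows

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.LSTM(64),                            # summarises the time window
    layers.Dense(1, activation="sigmoid"),      # event / no-event probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print("event probability:", float(model.predict(X[:1], verbose=0)[0, 0]))
```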