A novel dataset for baby broccoli identification by using YOLOv8 model
- Mohamed, Rizan, Appuhamillage, Gayan, Kamruzzaman, Joarder, Nguyen, Linh
- Authors: Mohamed, Rizan , Appuhamillage, Gayan , Kamruzzaman, Joarder , Nguyen, Linh
- Date: 2024
- Type: Text , Conference paper
- Relation: 33rd International Symposium on Industrial Electronics, ISIE 2024, Ulsan, South Korea, 18-21 June 2024, IEEE International Symposium on Industrial Electronics, 2024 33rd International Symposium on Industrial Electronics (ISIE) Proceedings
- Full Text: false
- Reviewed:
- Description: As the global population increases, the need for agricultural automation becomes crucial for a stable food supply. An autonomous mechanism for picking baby broccoli could economically benefit farmers and society. To date, efforts to develop an automated baby broccoli harvesting solution have been absent. A key step in automation, particularly for the automated recognition of baby broccoli heads through computer vision, is the creation of a rich dataset that sufficiently captures the characteristics of baby broccoli heads, as such a dataset has not yet been collected or published. This paper marks the first step towards automating baby broccoli harvesting by creating a novel dataset and testing its accuracy for precise identification. We gathered data using custom software and a RealSense D435 depth camera, known for its depth and stereo vision capabilities. The model trained on our dataset achieved initial mAP values ranging from 0.869 to 0.942, showing that the dataset fits the purpose of detecting baby broccoli heads. Further results show that when the model is 97.7% confident or more, its predictions are 100% precise. © 2024 IEEE.
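As a companion to the abstract above, the following minimal sketch shows how a YOLOv8 detector could be fine-tuned and run on such a dataset with the public `ultralytics` package; the dataset config and image names are hypothetical placeholders, and this is not the authors' code.

```python
# Minimal sketch (not the paper's implementation): fine-tune a pretrained YOLOv8
# model on a custom baby-broccoli dataset and run inference on one image.
# "broccoli.yaml" and "field_01.jpg" are hypothetical placeholder file names.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                         # start from a pretrained checkpoint
model.train(data="broccoli.yaml", epochs=100, imgsz=640)

results = model("field_01.jpg", conf=0.25)         # detect heads in one frame
for box in results[0].boxes:
    print(box.xyxy, float(box.conf))               # bounding box and confidence score
```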
A temporal deep q learning for optimal load balancing in software-defined networks
- Sharma, Aakanksha, Balasubramanian, Venki, Kamruzzaman, Joarder
- Authors: Sharma, Aakanksha , Balasubramanian, Venki , Kamruzzaman, Joarder
- Date: 2024
- Type: Text , Journal article
- Relation: Sensors Vol. 24, no. 4 (2024), p.
- Full Text:
- Reviewed:
- Description: With the rapid advancement of the Internet of Things (IoT), there is a global surge in network traffic. Software-Defined Networks (SDNs) provide a holistic network perspective, facilitating software-based traffic analysis, and are more suitable to handle dynamic loads than a traditional network. The standard SDN architecture control plane has been designed for a single controller or multiple distributed controllers; however, a logically centralized single controller faces severe bottleneck issues. Most proposed solutions in the literature are based on the static deployment of multiple controllers without the consideration of flow fluctuations and traffic bursts, which ultimately leads to a lack of load balancing among controllers in real time, resulting in increased network latency. Moreover, some methods addressing dynamic controller mapping in multi-controller SDNs consider load fluctuation and latency but face controller placement problems. Earlier, we proposed priority scheduling and congestion control algorithm (eSDN) and dynamic mapping of controllers for dynamic SDN (dSDN) to address this issue. However, the future growth of IoT is unpredictable and potentially exponential; to accommodate this futuristic trend, we need an intelligent solution to handle the complexity of growing heterogeneous devices and minimize network latency. Therefore, this paper continues our previous research and proposes temporal deep Q learning in the dSDN controller. A Temporal Deep Q learning Network (tDQN) serves as a self-learning reinforcement-based model. The agent in the tDQN learns to improve decision-making for switch-controller mapping through a reward–punish scheme, maximizing the goal of reducing network latency during the iterative learning process. Our approach—tDQN—effectively addresses dynamic flow mapping and latency optimization without increasing the number of optimally placed controllers. A multi-objective optimization problem for flow fluctuation is formulated to divert the traffic to the best-suited controller dynamically. Extensive simulation results with varied network scenarios and traffic show that the tDQN outperforms traditional networks, eSDNs, and dSDNs in terms of throughput, delay, jitter, packet delivery ratio, and packet loss. © 2024 by the authors.
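To illustrate the reward-punish idea behind switch-controller mapping, the sketch below uses a deliberately simplified tabular Q-learning loop (the paper uses a deep Q-network); the latency model, state/action spaces, and hyperparameters are invented for illustration only.

```python
# Illustrative simplification of the tDQN idea: learn a switch -> controller
# mapping where actions that yield low control-plane latency are rewarded.
import numpy as np

n_switches, n_controllers = 4, 3
Q = np.zeros((n_switches, n_controllers))
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def observed_latency(switch, controller):
    # stand-in for a measured latency (ms); lower is better
    base = np.array([[5, 9, 14], [7, 6, 12], [11, 8, 6], [9, 13, 7]])
    return base[switch, controller] + rng.normal(0, 1)

for episode in range(2000):
    s = rng.integers(n_switches)                                    # pick a switch (state)
    a = rng.integers(n_controllers) if rng.random() < eps else int(Q[s].argmax())
    reward = -observed_latency(s, a)                                # punish high latency
    Q[s, a] += alpha * (reward + gamma * Q[s].max() - Q[s, a])      # Q-learning update

print("learned switch -> controller mapping:", Q.argmax(axis=1))
```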
Device identification method for internet of things based on spatial-temporal feature residuals
- Dong, Shi, Shu, Longui, Xia, Qinyu, Kamruzzaman, Joarder, Xia, Yuanjun, Peng, Tao
- Authors: Dong, Shi , Shu, Longui , Xia, Qinyu , Kamruzzaman, Joarder , Xia, Yuanjun , Peng, Tao
- Date: 2024
- Type: Text , Journal article
- Relation: IEEE Transactions on Services Computing Vol. 17, no. 6 (2024), p. 3400-3416
- Full Text: false
- Reviewed:
- Description: In recent years, the Internet of Things (IoT) has penetrated all aspects of our lives through smart cities, health, industries and others related to people's livelihood. With the increasing number of IoT devices, more and more personal information is exposed in the network space, which inevitably brings network security problems. Due to the diversity and heterogeneity of IoT devices, identification of such devices in complex IoT environments remains a major challenge. Existing deep learning-based device identification methods identify IoT devices by automatically extracting device traffic features, but they usually consider only single-modal features of device traffic, which cannot fully characterize the communication traffic and thus degrades identification results. Therefore, we propose an identification method, termed DMRMTT, that employs a Deep convolutional maxout network and MTT model (Multiple Time-series Transformers) to automatically extract the spatial and temporal features of IoT communication session fingerprints and fuse them further using a residual structure, which makes up for the limitations of existing methods for studying device traffic. This method improves the characterization of device traffic behaviour and achieves more accurate identification of IoT devices. Its efficacy is experimentally validated using two publicly available datasets and compared with existing methods. Results show that our method outperforms other methods in widely used performance metrics and achieves 99.82% identification accuracy, demonstrating its superiority and usefulness in IoT device identification. © 2008-2012 IEEE.
Enhancing telemarketing success using ensemble-based online machine learning
- Kaisar, Shahriar, Rashid, Md Mamunur, Chowdhury, Abdullahi, Shafin, Sakib, Kamruzzaman, Joarder, Diro, Abebe
- Authors: Kaisar, Shahriar , Rashid, Md Mamunur , Chowdhury, Abdullahi , Shafin, Sakib , Kamruzzaman, Joarder , Diro, Abebe
- Date: 2024
- Type: Text , Journal article
- Relation: Big Data Mining and Analytics Vol. 7, no. 2 (2024), p. 294-314
- Full Text:
- Reviewed:
- Description: Telemarketing is a well-established marketing approach to offering products and services to prospective customers. The effectiveness of such an approach, however, is highly dependent on the selection of the appropriate consumer base, as reaching uninterested customers will induce annoyance and consume costly enterprise resources in vain while missing interested ones. The introduction of business intelligence and machine learning models can positively influence the decision-making process by predicting the potential customer base, and the existing literature in this direction shows promising results. However, the selection of influential features and the construction of effective learning models for improved performance remain a challenge. Furthermore, from the modelling perspective, the class-imbalanced nature of the training data, where samples with unsuccessful outcomes highly outnumber successful ones, further compounds the problem by creating biased and inaccurate models. Additionally, customer preferences are likely to change over time for various reasons, and/or a fresh group of customers may be targeted for a new product or service, necessitating model retraining, which is not addressed in existing works. A major challenge in model retraining is maintaining a balance between stability (retaining older knowledge) and plasticity (being receptive to new information). To address the above issues, this paper proposes an ensemble machine learning model with feature selection and oversampling techniques to identify potential customers more accurately. A novel online learning method is proposed for model retraining when new samples are available over time. This newly introduced method equips the proposed approach to deal with dynamic data, leading to improved readiness of the proposed model for practical adoption, and is a highly useful addition to the literature. Extensive experiments with real-world data show that the proposed approach achieves excellent results in all cases (e.g., 98.6% accuracy in classifying customers) and outperforms recent competing models in the literature by a considerable margin of 3% on a widely used dataset. © 2018 Tsinghua University Press.
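A hedged sketch of the kind of pipeline the abstract describes follows: SMOTE oversampling of the minority class, a soft-voting ensemble, and a naive refresh step when a new batch of labelled samples arrives. The dataset, estimators, and refresh policy are assumptions, not the paper's exact method.

```python
# Sketch only: rebalance an imbalanced dataset, train a voting ensemble, then
# refresh the model with newly arrived labelled samples.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, weights=[0.9, 0.1], random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)        # oversample minority class

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100)),
                ("dt", DecisionTreeClassifier(max_depth=5))],
    voting="soft",
)
ensemble.fit(X_bal, y_bal)

# naive "online" refresh: append newly arrived samples, rebalance, retrain
X_new, y_new = make_classification(n_samples=300, weights=[0.85, 0.15], random_state=1)
X_bal, y_bal = SMOTE(random_state=1).fit_resample(
    np.vstack([X_bal, X_new]), np.concatenate([y_bal, y_new]))
ensemble.fit(X_bal, y_bal)
```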
Large language models and sentiment analysis in financial markets : a review, datasets, and case study
- Liu, Chenghao, Arulappan, Arunkumar, Naha, Ranesh, Mahanti, Aniket, Kamruzzaman, Joarder, Ra, In-Ho
- Authors: Liu, Chenghao , Arulappan, Arunkumar , Naha, Ranesh , Mahanti, Aniket , Kamruzzaman, Joarder , Ra, In-Ho
- Date: 2024
- Type: Text , Journal article
- Relation: IEEE Access Vol. 12, no. (2024), p. 134041-134061
- Full Text:
- Reviewed:
- Description: This paper comprehensively examines Large Language Models (LLMs) in sentiment analysis, specifically focusing on financial markets and exploring the correlation between news sentiment and Bitcoin prices. We systematically categorize various LLMs used in financial sentiment analysis, highlighting their unique applications and features. We also investigate the methodologies for effective data collection and categorization, underscoring the need for diverse and comprehensive datasets. Our research features a case study investigating the correlation between news sentiment and Bitcoin prices, utilizing advanced sentiment analysis and financial analysis methods to demonstrate the practical application of LLMs. The findings reveal a modest but discernible correlation between news sentiment and Bitcoin price fluctuations, with historical news patterns showing a more substantial impact on Bitcoin's longer-term price than immediate news events. This highlights LLMs' potential in market trend prediction and informed investment decision-making. © 2013 IEEE.
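The correlation part of the case study could be reproduced along the following lines; the daily sentiment series and Bitcoin prices below are synthetic stand-ins for LLM-derived news sentiment and market data, used only to keep the example self-contained.

```python
# Sketch of a sentiment-vs-price correlation analysis on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("2024-01-01", periods=120, freq="D")
sentiment = pd.Series(rng.normal(0, 1, len(dates)), index=dates)     # daily LLM sentiment score
btc_close = pd.Series(
    40000 * np.exp(np.cumsum(0.002 * sentiment + rng.normal(0, 0.01, len(dates)))),
    index=dates)                                                     # synthetic BTC closing price

btc_return = btc_close.pct_change()
same_day = sentiment.corr(btc_return)                                # immediate news effect
historical = sentiment.rolling(7).mean().corr(btc_return)            # slower, historical effect
print(f"same-day corr: {same_day:.3f}, 7-day-average-sentiment corr: {historical:.3f}")
```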
Task offloading strategies for mobile edge computing : a survey
- Dong, Shi, Tang, Junxiao, Abbas, Khushnood, Hou, Ruizhe, Kamruzzaman, Joarder, Rutkowski, Leszek, Buyya, Rajkumar
- Authors: Dong, Shi , Tang, Junxiao , Abbas, Khushnood , Hou, Ruizhe , Kamruzzaman, Joarder , Rutkowski, Leszek , Buyya, Rajkumar
- Date: 2024
- Type: Text , Journal article , Review
- Relation: Computer Networks Vol. 254, no. (2024), p.
- Full Text: false
- Reviewed:
- Description: With the wide adoption of 5G technology and the rapid development of 6G technology, a variety of new applications have emerged. A multitude of compute-intensive and time-sensitive applications deployed on terminal equipment have placed increased demands on Internet delay and bandwidth. Mobile Edge Computing (MEC) can effectively mitigate the issues of long transmission times, high energy consumption, and data insecurity. Task offloading, as a key technology within MEC, has become a prominent research focus in this field. This paper presents a comprehensive review of the current research progress in MEC task offloading. Firstly, it introduces the fundamental concepts, application scenarios, and related technologies of MEC. Secondly, it categorizes offloading decisions into five aspects: reducing delay, minimizing energy consumption, balancing energy consumption and delay, enabling high-computing offloading, and addressing different application scenarios. It then critically analyzes and compares existing research efforts in these areas. © 2024 Elsevier B.V.
Weighted rank difference ensemble : a new form of ensemble feature selection method for medical datasets
- Begum, Arju, Mondal, M. Rubaiyat, Podder, Prajoy, Kamruzzaman, Joarder
- Authors: Begum, Arju , Mondal, M. Rubaiyat , Podder, Prajoy , Kamruzzaman, Joarder
- Date: 2024
- Type: Text , Journal article
- Relation: BioMedInformatics Vol. 4, no. 1 (2024), p. 477-488
- Full Text:
- Reviewed:
- Description: Background: Feature selection (FS), a crucial preprocessing step in machine learning, greatly reduces the dimension of data and improves model performance. This paper focuses on selecting features for medical data classification. Methods: In this work, a new form of ensemble FS method called weighted rank difference ensemble (WRD-Ensemble) has been put forth. It combines three FS methods to produce a stable and diverse subset of features. The three base FS approaches are Pearson’s correlation coefficient (PCC), reliefF, and gain ratio (GR). These three FS approaches produce three distinct lists of features, and then they order each feature by importance or weight. The final subset of features in this study is chosen using the average weight of each feature and the rank difference of a feature across three ranked lists. Using the average weight and rank difference of each feature, unstable and less significant features are eliminated from the feature space. The WRD-Ensemble method is applied to three medical datasets: chronic kidney disease (CKD), lung cancer, and heart disease. These data samples are classified using logistic regression (LR). Results: The experimental results show that compared to the base FS methods and other ensemble FS methods, the proposed WRD-Ensemble method leads to obtaining the highest accuracy value of 98.97% for CKD, 93.24% for lung cancer, and 83.84% for heart disease. Conclusion: The results indicate that the proposed WRD-Ensemble method can potentially improve the accuracy of disease diagnosis models, contributing to advances in clinical decision-making. © 2024 by the authors.
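A self-contained sketch of the rank-difference idea is given below; mutual information and the ANOVA F-score stand in for reliefF and gain ratio so the example runs on a public dataset, and the filtering thresholds are assumed rather than taken from the paper.

```python
# Sketch: rank features with three scorers, keep features that are both highly
# weighted on average and stable (small rank difference) across the ranked lists,
# then classify with logistic regression.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

scores = np.vstack([
    np.abs(np.corrcoef(X.T, y)[-1, :-1]),         # Pearson correlation with the label
    mutual_info_classif(X, y, random_state=0),    # stand-in for reliefF
    f_classif(X, y)[0],                           # stand-in for gain ratio
])
scores = scores / scores.max(axis=1, keepdims=True)        # normalise each list
ranks = (-scores).argsort(axis=1).argsort(axis=1)           # per-list rank (0 = best)

avg_weight = scores.mean(axis=0)
rank_diff = ranks.max(axis=0) - ranks.min(axis=0)            # instability across lists
keep = (avg_weight > np.median(avg_weight)) & (rank_diff < X.shape[1] // 2)

acc = cross_val_score(LogisticRegression(max_iter=2000), X[:, keep], y, cv=5).mean()
print(f"{keep.sum()} features kept, CV accuracy: {acc:.3f}")
```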
Wireless underground sensor communication using acoustic technology
- Al Moshi, Md Adnan, Hardie, Marcus, Choudhury, Tanveer, Kamruzzaman, Joarder
- Authors: Al Moshi, Md Adnan , Hardie, Marcus , Choudhury, Tanveer , Kamruzzaman, Joarder
- Date: 2024
- Type: Text , Journal article
- Relation: Sensors Vol. 24, no. 10 (2024), p.
- Full Text:
- Reviewed:
- Description: The rapid advancement toward smart cities has accelerated the adoption of various Internet of Things (IoT) devices for underground applications, including agriculture, which aims to enhance sustainability by reducing the use of vital resources such as water and maximizing production. On-farm IoT devices with above-ground wireless nodes are vulnerable to damage and data loss due to heavy machinery movement, animal grazing, and pests. To mitigate these risks, wireless Underground Sensor Networks (WUSNs) are proposed, where devices are buried underground. However, implementing WUSNs faces challenges due to soil heterogeneity and the need for low-power, small-size, and long-range communication technology. While existing radio frequency (RF)-based solutions are impeded by substantial signal attenuation and low coverage, acoustic wave-based WUSNs have the potential to overcome these impediments. This paper is the first attempt to review acoustic propagation models to discern a suitable model for the advancement of acoustic WUSNs tailored to the agricultural context. Our findings indicate the Kelvin–Voigt model as a suitable framework for estimating signal attenuation, which has been verified through alignment with documented outcomes from experimental studies conducted in agricultural settings. By leveraging data from various soil types, this research underscores the feasibility of acoustic signal-based WUSNs. © 2024 by the authors.
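For a rough sense of the attenuation estimate discussed above, the sketch below applies a commonly cited low-frequency Kelvin–Voigt approximation for the attenuation coefficient; the soil parameter values and the formula's applicability are assumptions for illustration only, not values from the paper.

```python
# Illustrative only: amplitude loss of an underground acoustic signal using the
# low-frequency Kelvin-Voigt approximation alpha = eta * omega^2 / (2 * rho * c^3).
import numpy as np

rho = 1600.0      # soil bulk density (kg/m^3), assumed
c = 250.0         # acoustic wave speed in soil (m/s), assumed
eta = 2.0         # Kelvin-Voigt viscosity term (Pa*s), assumed
f = 1000.0        # carrier frequency (Hz)

omega = 2 * np.pi * f
alpha = eta * omega ** 2 / (2 * rho * c ** 3)     # attenuation coefficient (Np/m)

for d in (1.0, 5.0, 10.0):                        # sensor-to-receiver distance (m)
    loss_db = 20 * np.log10(np.exp(-alpha * d))
    print(f"{d:>4.1f} m: {loss_db:6.2f} dB")
```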
Work-in-progress paper : synergizing YOLOv8 and PCA for size estimation of baby broccoli
- Mohamed, Rizan, Appuhamillage, Gayan, Kamruzzaman, Joarder, Nguyen, Linh
- Authors: Mohamed, Rizan , Appuhamillage, Gayan , Kamruzzaman, Joarder , Nguyen, Linh
- Date: 2024
- Type: Text , Conference paper
- Relation: 33rd International Symposium on Industrial Electronics, ISIE 2024, Ulsan, South Korea, 18-21 June 2024, IEEE International Symposium on Industrial Electronics, 2024 33rd International Symposium on Industrial Electronics (ISIE) Proceedings
- Full Text: false
- Reviewed:
- Description: This research aims to advance agricultural automation by developing a machine learning-based method for accurately measuring harvest-ready baby broccoli through estimating the size of individual heads. Unlike traditional broccoli, baby broccoli's varying head depth complicates size estimation, making direct pixel-size conversion ineffective. To overcome this, we utilized depth-sensing cameras to capture precise dimensions. Our initial efforts involved curating a unique dataset and applying the YOLO computer vision algorithm for segmenting baby broccoli heads. We then calculated the size of each identified head using Principal Component Analysis (PCA). Given that baby broccoli tends to grow in tight clusters, our method includes tracking individual heads within a frame and associating each with its specific size information to ensure accurate record keeping. By incorporating stereo vision and depth data from the RealSense D435 camera along with instance segmentation and PCA, our initial results achieved size estimates with an error rate below 10%. The paper further recommends enhancing accuracy through a hybrid approach that combines deep learning, neural networks, and PCA. © 2024 IEEE.
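The PCA step can be illustrated as follows on a synthetic segmentation mask; the mask and the pixel-to-millimetre scale (which in practice would come from the depth data and camera intrinsics) are placeholders, not the paper's implementation.

```python
# Sketch: estimate the extent of a segmented head along its principal axes.
import numpy as np
from sklearn.decomposition import PCA

yy, xx = np.mgrid[0:200, 0:200]
mask = ((xx - 100) / 60) ** 2 + ((yy - 100) / 35) ** 2 <= 1.0   # fake elliptical head mask
pts = np.column_stack(np.nonzero(mask)).astype(float)           # (row, col) mask pixels

pca = PCA(n_components=2).fit(pts)
proj = pca.transform(pts)
extent_px = proj.max(axis=0) - proj.min(axis=0)    # extent along major / minor axes (pixels)

mm_per_px = 0.8                                    # placeholder scale from depth + intrinsics
major_mm, minor_mm = extent_px * mm_per_px
print(f"estimated head size: {major_mm:.1f} mm x {minor_mm:.1f} mm")
```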
A novel dynamic software-defined networking approach to neutralize traffic burst
- Sharma, Aakanksha, Balasubramanian, Venki, Kamruzzaman, Joarder
- Authors: Sharma, Aakanksha , Balasubramanian, Venki , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: Computers Vol. 12, no. 7 (2023), p.
- Full Text:
- Reviewed:
- Description: Software-defined networking (SDN) provides a holistic view of the network. It is highly suitable for handling dynamic loads in traditional networks with minimal updates to the network infrastructure. However, the standard SDN architecture control plane has been designed for a single controller or multiple distributed SDN controllers, and a logically centralized single controller faces severe bottleneck issues. Our initial research created a reference model for the traditional network, using the standard SDN (referred to as SDN hereafter) in a network simulator called NetSim. Based on the network traffic, the reference models consisted of light, modest and heavy networks depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm is proposed in the standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than the standard SDN. However, the enhancement was suitable only for the small-scale network because, in a large-scale network, the eSDN does not support dynamic SDN controller mapping. Often, the same SDN controller gets overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment without considering flow fluctuations and traffic bursts, which leads to a lack of load balancing among the SDN controllers in real time, eventually increasing the network latency. Therefore, to maintain the Quality of Service (QoS) in the network, it becomes imperative for the static SDN controller to neutralise on-the-fly traffic bursts. Thus, our novel dynamic controller mapping algorithm with multiple-controller placement in the SDN is critical to solving the identified issues. In the dSDN, SDN controllers are mapped dynamically with load fluctuations. If any SDN controller reaches its maximum threshold, the rest of the traffic will be diverted to another controller, significantly reducing delay and enhancing the overall performance. Our technique considers the latency and load fluctuation in the network and manages the situations where static mapping is ineffective in dealing with dynamic flow variation. © 2023 by the authors.
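A deliberately simplified sketch of the dynamic-mapping idea follows: when a controller's load would exceed its threshold, the incoming flow is diverted to the controller with the most spare capacity. The controller capacities and the flow trace are invented values, not the dSDN algorithm itself.

```python
# Sketch: threshold-based diversion of flows among SDN controllers.
controllers = {"C1": {"load": 0, "max": 100},
               "C2": {"load": 0, "max": 100},
               "C3": {"load": 0, "max": 100}}
mapping = {}

def assign(flow_id, preferred, demand):
    target = preferred
    if controllers[target]["load"] + demand > controllers[target]["max"]:
        # divert to the currently least-loaded controller
        target = min(controllers, key=lambda c: controllers[c]["load"])
    controllers[target]["load"] += demand
    mapping[flow_id] = target

for i, demand in enumerate([30, 40, 35, 20, 25]):   # bursty flow arrivals
    assign(f"flow-{i}", preferred="C1", demand=demand)

print(mapping)
print({c: v["load"] for c, v in controllers.items()})
```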
An evidence theoretic approach for traffic signal intrusion detection
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Das, Rajkumar, Newaz, Shah
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Das, Rajkumar , Newaz, Shah
- Date: 2023
- Type: Text , Journal article
- Relation: Sensors Vol. 23, no. 10 (2023), p. 4646
- Full Text:
- Reviewed:
- Description: The increasing attacks on traffic signals worldwide indicate the importance of intrusion detection. The existing traffic signal Intrusion Detection Systems (IDSs) that rely on inputs from connected vehicles and image analysis techniques can only detect intrusions created by spoofed vehicles. However, these approaches fail to detect intrusion from attacks on in-road sensors, traffic controllers, and signals. In this paper, we proposed an IDS based on detecting anomalies associated with flow rate, phase time, and vehicle speed, which is a significant extension of our previous work using additional traffic parameters and statistical tools. We theoretically modelled our system using the Dempster-Shafer decision theory, considering the instantaneous observations of traffic parameters and their relevant historical normal traffic data. We also used Shannon's entropy to determine the uncertainty associated with the observations. To validate our work, we developed a simulation model based on the traffic simulator called SUMO using many real scenarios and the data recorded by the Victorian Transportation Authority, Australia. The scenarios for abnormal traffic conditions were generated considering attacks such as jamming, Sybil, and false data injection attacks. The results show that the overall detection accuracy of our proposed system is 79.3% with fewer false alarms.
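The two tools named above can be illustrated with a small, generic example: Dempster's rule of combination over the frame {normal, attack}, followed by Shannon entropy on the fused masses as an uncertainty measure. The mass values are invented and do not reflect the paper's model.

```python
# Sketch: fuse two pieces of evidence with Dempster's rule and measure uncertainty.
import math

def combine(m1, m2):
    """Dempster's rule for mass functions keyed by frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1 - conflict) for k, v in combined.items()}

N, A = frozenset({"normal"}), frozenset({"attack"})
flow_rate_evidence = {N: 0.2, A: 0.6, N | A: 0.2}    # e.g., from a flow-rate anomaly score
phase_time_evidence = {N: 0.3, A: 0.5, N | A: 0.2}   # e.g., from a phase-time anomaly score

fused = combine(flow_rate_evidence, phase_time_evidence)
entropy = -sum(p * math.log2(p) for p in fused.values() if p > 0)
print(fused, f"entropy = {entropy:.3f} bits")
```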
Applications of machine learning and deep learning in antenna design, optimization, and selection : a review
- Sarker, Nayan, Podder, Prajoy, Mondal, M., Shafin, Sakib, Kamruzzaman, Joarder
- Authors: Sarker, Nayan , Podder, Prajoy , Mondal, M. , Shafin, Sakib , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 11, no. (2023), p. 103890-103915
- Full Text:
- Reviewed:
- Description: This review paper provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and high-frequency structure simulator (HFSS) for ML and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods including parallel optimization, single and multi-objective optimization, variable fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization are discussed. The review also covers the AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, the data generation technique with computational electromagnetics software is described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
Blockchain technology and application : an overview
- Dong, Shi, Abbas, Khushnood, Li, Meixi, Kamruzzaman, Joarder
- Authors: Dong, Shi , Abbas, Khushnood , Li, Meixi , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: PeerJ Computer Science Vol. 9, no. (2023), p.
- Full Text:
- Reviewed:
- Description: In recent years, with the rise of digital currency, its underlying technology, blockchain, has become increasingly well-known. This technology has several key characteristics, including decentralization, time-stamped data, a consensus mechanism, traceability, programmability, security, and credibility, and block data is essentially tamper-proof. Due to these characteristics, blockchain can address the shortcomings of traditional financial institutions. As a result, this emerging technology has garnered significant attention from financial intermediaries, technology-based companies, and government agencies. This article offers an overview of the fundamentals of blockchain technology and its various applications. The introduction defines blockchain and explains its fundamental working principles, emphasizing features such as decentralization, immutability, and transparency. The article then traces the evolution of blockchain, from its inception in cryptocurrency to its development as a versatile tool with diverse potential applications. The main body of the article explores the fundamentals of blockchain systems, their limitations, various applications, and applicability. Finally, the study concludes by discussing the present state of blockchain technology and its future potential, as well as the challenges that must be surmounted to unlock its full potential. © Copyright 2023 Dong et al
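The tamper-evident chaining described above can be illustrated with a minimal, platform-agnostic example in which each block stores the hash of its predecessor, so altering an earlier block invalidates every later one.

```python
# Minimal, generic illustration of hash-chained blocks (not any specific platform).
import hashlib, json, time

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"from": "A", "to": "B", "amount": 5}, chain[-1]["hash"]))
chain.append(make_block({"from": "B", "to": "C", "amount": 2}, chain[-1]["hash"]))

# verify integrity: each block must reference the previous block's hash
ok = all(chain[i]["prev_hash"] == chain[i - 1]["hash"] for i in range(1, len(chain)))
print("chain valid:", ok)
```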
Cancer classification utilizing voting classifier with ensemble feature selection method and transcriptomic data
- Khatun, Rabea, Akter, Maksuda, Islam, Md Manowarul, Uddin, Md Ashraf, Talukder, Md Alamin, Kamruzzaman, Joarder, Azad, Akm, Paul, Bikash, Almoyad, Muhammad, Aryal, Sunil, Moni, Mohammad
- Authors: Khatun, Rabea , Akter, Maksuda , Islam, Md Manowarul , Uddin, Md Ashraf , Talukder, Md Alamin , Kamruzzaman, Joarder , Azad, Akm , Paul, Bikash , Almoyad, Muhammad , Aryal, Sunil , Moni, Mohammad
- Date: 2023
- Type: Text , Journal article
- Relation: Genes Vol. 14, no. 9 (2023), p.
- Full Text:
- Reviewed:
- Description: Biomarker-based cancer identification and classification tools are widely used in bioinformatics and machine learning fields. However, the high dimensionality of microarray gene expression data poses a challenge for identifying important genes in cancer diagnosis. Many feature selection algorithms optimize cancer diagnosis by selecting optimal features. This article proposes an ensemble rank-based feature selection method (EFSM) and an ensemble weighted average voting classifier (VT) to overcome this challenge. The EFSM uses a ranking method that aggregates features from individual selection methods to efficiently discover the most relevant and useful features. The VT combines support vector machine, k-nearest neighbor, and decision tree algorithms to create an ensemble model. The proposed method was tested on three benchmark datasets and compared to existing built-in ensemble models. The results show that our model achieved higher accuracy, with 100% for leukaemia, 94.74% for colon cancer, and 94.34% for the 11-tumor dataset. This study concludes by identifying a subset of the most important cancer-causing genes and demonstrating their significance compared to the original data. The proposed approach surpasses existing strategies in accuracy and stability, significantly impacting the development of ML-based gene analysis. It detects vital genes with higher precision and stability than other existing methods. © 2023 by the authors.
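A sketch of the weighted voting classifier (VT) described above, combining an SVM, k-NN, and a decision tree by weighted soft voting, is shown below; the dataset and voting weights are placeholders rather than the paper's microarray data or tuned settings.

```python
# Sketch: weighted soft-voting ensemble of SVM, k-NN and decision tree.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

vt = VotingClassifier(
    estimators=[("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
                ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
                ("dt", DecisionTreeClassifier(max_depth=5))],
    voting="soft",
    weights=[2, 1, 1],      # weighted average of class probabilities
)
print("CV accuracy:", round(cross_val_score(vt, X, y, cv=5).mean(), 3))
```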
Decentralized content sharing in mobile ad-hoc networks : a survey
- Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour, Rashid, Md Mamunur
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour , Rashid, Md Mamunur
- Date: 2023
- Type: Text , Journal article , Review
- Relation: Digital Communications and Networks Vol. 9, no. 6 (2023), p. 1363-1398
- Full Text:
- Reviewed:
- Description: The evolution of smart mobile devices has significantly impacted the way we generate and share contents and introduced a huge volume of Internet traffic. To address this issue and take advantage of the short-range communication capabilities of smart mobile devices, the decentralized content sharing approach has emerged as a suitable and promising alternative. Decentralized content sharing uses a peer-to-peer network among co-located smart mobile device users to fulfil content requests. Several articles have been published to date to address its different aspects including group management, interest extraction, message forwarding, participation incentive, and content replication. This survey paper summarizes and critically analyzes recent advancements in decentralized content sharing and highlights potential research issues that need further consideration. © 2022 Chongqing University of Posts and Telecommunications
Deep learning and federated learning for screening COVID-19 : a review
- Mondal, M., Bharati, Subrato, Podder, Prajoy, Kamruzzaman, Joarder
- Authors: Mondal, M. , Bharati, Subrato , Podder, Prajoy , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: BioMedInformatics Vol. 3, no. 3 (2023), p. 691-713
- Full Text:
- Reviewed:
- Description: Since December 2019, a novel coronavirus disease (COVID-19) has infected millions of individuals. This paper conducts a thorough study of the use of deep learning (DL) and federated learning (FL) approaches to COVID-19 screening. To begin, an evaluation of research articles published between 1 January 2020 and 28 June 2023 is presented, considering the preferred reporting items of systematic reviews and meta-analysis (PRISMA) guidelines. The review compares various datasets on medical imaging, including X-ray, computed tomography (CT) scans, and ultrasound images, in terms of the number of images, COVID-19 samples, and classes in the datasets. Following that, a description of existing DL algorithms applied to various datasets is offered. Additionally, a summary of recent work on FL for COVID-19 screening is provided. Efforts to improve the quality of FL models are comprehensively reviewed and objectively evaluated. © 2023 by the authors.
Detecting fake news of evolving events using machine learning : case of Russia-Ukraine war
- Ferdush, Jannatul, Kamruzzaman, Joarder, Karmakar, Gour, Gondal, Iqbal, Das, Raj
- Authors: Ferdush, Jannatul , Kamruzzaman, Joarder , Karmakar, Gour , Gondal, Iqbal , Das, Raj
- Date: 2023
- Type: Text , Conference paper
- Relation: 34th Australasian Conference on Information Systems, ACIS 2023, Wellington, 5-8 December 2023, Australasian Conference on Information Systems, ACIS 2023
- Full Text:
- Description: Fake news detection is important in the context of evolving events in today’s information-driven society. The current fake news detection literature focuses on static news articles, neglecting the challenges posed by the dynamic nature of evolving events. This research therefore contributes to the existing literature by addressing the specific challenges of fake news detection within evolving events. By incorporating machine learning techniques and considering the evolving nature of events, our approach offers a scalable and adaptable solution for detecting fake news in evolving situations by incrementally updating training data and retraining the model. For evaluation purposes, we also created a new fake news dataset on the Russia-Ukraine war from Twitter postings. Extensive evaluation demonstrates that the model achieves an overall accuracy of 94% in identifying fake/true news on the evolving Russia-Ukraine war event and outperforms two recent competing methods by a margin of 5%-10%. Copyright © 2023 Jannatul et al.
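The incremental-retraining loop can be sketched with a toy text classifier: newly verified posts about the evolving event are appended to the training set and the model is refit. The example texts, labels, and classifier below are assumptions, not the paper's dataset or model.

```python
# Sketch: append new labelled samples from an evolving event and retrain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["city X was captured yesterday", "official denies capture of city X"]
labels = [1, 0]                     # 1 = fake, 0 = true (toy labels)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

def update(new_texts, new_labels):
    """Append newly verified samples from the evolving event and retrain."""
    texts.extend(new_texts)
    labels.extend(new_labels)
    model.fit(texts, labels)

update(["aid convoy reached city X", "city X power grid destroyed by strike"], [0, 1])
print(model.predict(["capture of city X confirmed by officials"]))
```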
Dynamic trust boundary identification for the secure communications of the entities via 6G
- Basri, Rabeya, Karmakar, Gour, Kamruzzaman, Joarder, Newaz, S. H. Shah, Nguyen, Linh, Usman, Muhammad
- Authors: Basri, Rabeya , Karmakar, Gour , Kamruzzaman, Joarder , Newaz, S. H. Shah , Nguyen, Linh , Usman, Muhammad
- Date: 2023
- Type: Text , Conference paper
- Relation: 18th International Conference on Information Security Practice and Experience (ISPEC), 24-25 August 2023, Copenhagen, Denmark, International Conference on Information Security Practice and Experience: 18th International Conference, ISPEC 2023, Copenhagen, Denmark, August 24–25, 2023, Proceedings Vol. 14341, p. 194-208
- Full Text:
- Reviewed:
- Description: 6G is more likely prone to a range of known and unknown cyber-attacks because of its highly distributive nature. Current literature and research prove that a trust boundary can be used as a security door (e.g., gateway/firewall) to validate entities and applications attempting to access 6G networks. Trust boundaries allow these entities to connect or work with entities of other trust boundaries via 6G by dynamically monitoring their interactions, behaviors, and data transmissions. The importance of trust boundaries in security protection mechanisms demands a dynamic multi-trust boundary identification. There exists an automatic trust boundary identification for IoT data. However, it is a binary trust boundary classification and the dataset used in the approach is not suitable for dynamic trust boundary identification. Motivated by these facts, to provide automatic security protection for entities in 6G, in this paper, we propose a mechanism to identify dynamic and multiple trust boundaries based on trust values and geographical location coordinates of 6G communication entities. Our proposed mechanism uses unsupervised clustering and splitting and merging techniques. The experimental results show that entities can dynamically change their boundary location if their trust values and locations change over time. We also analyze the trust boundary identification accuracy in terms of our defined two performance metrics, i.e., trust consistency and the degree of gateway coverage. The proposed scheme allows us to distinguish between entities and control their access to the 6G network based on their trust levels to ensure secure and resilient communication.
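A hedged sketch of the clustering step is shown below: entities are grouped by location and trust value with k-means so that each cluster can act as a trust boundary. The coordinates, trust scores, and number of boundaries are synthetic assumptions, not the paper's data or exact algorithm.

```python
# Sketch: cluster 6G entities by position and trust value into trust boundaries.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(60, 2))           # entity x/y positions
trust = rng.uniform(0, 1, size=(60, 1))              # per-entity trust scores
features = MinMaxScaler().fit_transform(np.hstack([coords, trust]))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for b in range(3):
    members = labels == b
    print(f"boundary {b}: {members.sum()} entities, "
          f"mean trust {trust[members].mean():.2f}")
```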
Identification of fake news : a semantic driven technique for transfer domain
- Ferdush, Jannatul, Kamruzzaman, Joarder, Karmakar, Gour, Gondal, Iqbal, Das, Rajkumar
- Authors: Ferdush, Jannatul , Kamruzzaman, Joarder , Karmakar, Gour , Gondal, Iqbal , Das, Rajkumar
- Date: 2023
- Type: Text , Conference paper
- Relation: 29th International Conference on Neural Information Processing, ICONIP 2022, Virtual, online, 22-26 November 2022, Communications in Computer and Information Science Vol. 1793 CCIS, p. 564-575
- Full Text: false
- Reviewed:
- Description: Fake news spreads quickly on online social media and adversely impacts political, social, religious, and economic stability. This necessitates an efficient fake news detector which is now feasible due to advances in natural language processing and artificial intelligence. However, existing fake news detection (FND) systems are built on tokenization, embedding, and structure-based feature extraction, and fail drastically in real life because of the difference in vocabulary and its distribution across various domains. This article evaluates the effectiveness of various categories of traditional features in cross-domain FND and proposes a new method. Our proposed method shows significant improvement over recent methods in the literature for cross-domain fake news detection in terms of widely used performance metrics. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Process reliability analysis applied for continual improvement of large-scale alumina refineries
- Don, R. Welandage, Chattopadhyay, Gopinath, Kamruzzaman, Joarder
- Authors: Don, R. Welandage , Chattopadhyay, Gopinath , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Conference paper
- Relation: 7th International Congress and Workshop on Industrial AI and eMaintenance, IAI 2023, Lulea, Sweden, 13-15 June 2023, International Congress and Workshop on Industrial AI and eMaintenance 2023 Conference proceedings p. 665-677
- Full Text: false
- Reviewed:
- Description: Large-scale alumina refineries use strategic planning to forecast production plans for short-, medium-, and long-term operational decisions. However, actual production deviates from the forecast due to Supplier, Input, Process, Output and Contractor (SIPOC) related variations, including unplanned downtime, supply chain disruptions, staff availability, and demand fluctuations driven by numerous factors such as environmental changes. An unreliable production process results in lost revenue and adversely affects the corporate image. This paper presents a statistical approach applying the Weibull model to identify the causes of production deviation and find improvement opportunities for reducing costs and risks while enhancing performance. An illustrative example from a chemical alumina refinery plant in Australia is presented. The steps of the analysis are discussed through this example, in which production data are analysed and compared across different intervention options, giving managers a robust and effective method to understand performance gaps and to monitor and assure plant performance. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
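The Weibull analysis mentioned above can be sketched as follows: fit a two-parameter Weibull distribution to the intervals between production-interrupting events and read off its shape and scale parameters. The interval data below are invented for illustration, not refinery data.

```python
# Sketch: fit a two-parameter Weibull distribution to times between events.
import numpy as np
from scipy import stats

intervals_days = np.array([12, 30, 7, 45, 22, 60, 18, 33, 9, 51], dtype=float)

shape, loc, scale = stats.weibull_min.fit(intervals_days, floc=0)   # 2-parameter fit
print(f"shape (beta) = {shape:.2f}, scale (eta) = {scale:.1f} days")

# probability of running 30 days without an interruption under the fitted model
print("P(no event within 30 days) =",
      round(stats.weibull_min.sf(30, shape, 0, scale), 3))
```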