Digital twin mobility profiling : a spatio-temporal graph learning approach
- Authors: Chen, Xin , Hou, Mingliang , Tang, Tao , Kaur, Achhardeep , Xia, Feng
- Date: 2022
- Type: Text , Conference paper
- Relation: 23rd IEEE International Conference on High Performance Computing and Communications, 7th IEEE International Conference on Data Science and Systems, 19th IEEE International Conference on Smart City and 7th IEEE International Conference on Dependability in Sensor, Cloud and Big Data Systems and Applications, HPCC-DSS-SmartCity-DependSys 2021, Hainan, China, 20-22 December 2021, Proceedings 2021 IEEE 23rd International Conference on High Performance Computing & Communications, 7th International Conference on Data Science & Systems 19th International Conference on Smart City 7th International Conference on Dependability in Sensor, Cloud & Big Data Systems & Applications p. 1178-1187
- Full Text: false
- Reviewed:
- Description: With the arrival of the big data era, mobility profiling has become a viable method of utilizing enormous amounts of mobility data to create intelligent transportation systems. Mobility profiling can extract potential patterns in urban traffic from mobility data and is critical for a variety of traffic-related applications. However, due to the high level of complexity and the huge amount of data, mobility profiling faces significant challenges. Digital Twin (DT) technology paves the way for cost-effective and performance-optimised management by digitally creating a virtual representation of the network to simulate its behaviour. In order to capture the complex spatio-temporal features in traffic scenarios, we construct alignment diagrams to assist in completing the spatio-temporal correlation representation and design a dilated alignment convolution network (DACN) to learn the fine-grained correlations, i.e., spatio-temporal interactions. We propose a digital twin mobility profiling (DTMP) framework to learn node profiles on a mobility network DT model. Extensive experiments have been conducted on three real-world datasets. Experimental results demonstrate the effectiveness of DTMP. © 2021 IEEE.
Edge computing for Internet of Everything : a survey
- Authors: Kong, Xiangjie , Wu, Yuhan , Wang, Hui , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Internet of Things Journal Vol. 9, no. 23 (2022), p. 23472-23485
- Full Text:
- Reviewed:
- Description: In this era of the Internet of Everything (IoE), edge computing has emerged as the critical enabling technology to solve a series of issues caused by an increasing number of interconnected devices and large-scale data transmission. However, the deficiencies of the edge computing paradigm are gradually being magnified in the context of IoE, especially in terms of service migration, security and privacy preservation, and deployment issues of edge nodes. These issues cannot be well addressed by conventional approaches. Thanks to the rapid development of upcoming technologies, such as artificial intelligence (AI), blockchain, and microservices, novel and more effective solutions have emerged and been applied to solve existing challenges. In addition, edge computing can be deeply integrated with technologies in other domains (e.g., AI, blockchain, 6G, and digital twin) through interdisciplinary intersection and practice, releasing the potential for mutual benefit. These promising integrations need to be further explored and researched. Moreover, edge computing provides strong support in application scenarios such as remote working, new physical retail industries, and digital advertising, which has greatly changed the way we live, work, and study. In this article, we present an up-to-date survey of edge computing research. In addition to introducing the definition, model, and characteristics of edge computing, we discuss a set of key issues in edge computing and novel solutions supported by emerging technologies in the IoE era. Furthermore, we explore the potential and promising trends from the perspective of technology integration. Finally, new application scenarios and the final form of edge computing are discussed. © 2014 IEEE.
Edge data based trailer inception probabilistic matrix factorization for context-aware movie recommendation
- Authors: Chen, Honglong , Li, Zhe , Wang, Zhu , Ni, Zhichen , Li, Junjian , Xu, Ge , Aziz, Abdul , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: World Wide Web Vol. 25, no. 5 (2022), p. 1863-1882
- Full Text:
- Reviewed:
- Description: The rapid growth of edge data generated by mobile devices and applications deployed at the edge of the network has exacerbated the problem of information overload. As an effective way to alleviate information overload, recommender systems can improve the quality of various services by adding application data generated by users on edge devices, such as visual and textual information, on the basis of sparse rating data. The visual information in the movie trailer is a significant part of the movie recommender system. However, due to the complexity of visual information extraction, data sparsity cannot be remarkably alleviated by merely using rough visual features to improve the rating prediction accuracy. Fortunately, convolutional neural networks can be used to extract visual features precisely. Therefore, the end-to-end neural image caption (NIC) model can be utilized to obtain textual information describing the visual features of movie trailers. This paper proposes a trailer inception probabilistic matrix factorization model called Ti-PMF, which combines NIC, recurrent convolutional neural network, and probabilistic matrix factorization models as the rating prediction model. We implement the proposed Ti-PMF model with extensive experiments on three real-world datasets to validate its effectiveness. The experimental results illustrate that the proposed Ti-PMF outperforms existing models. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Educational anomaly analytics : features, methods, and challenges
- Authors: Guo, Teng , Bai, Xiaomei , Tian, Xue , Firmin, Sally , Xia, Feng
- Date: 2022
- Type: Text , Journal article , Review
- Relation: Frontiers in Big Data Vol. 4, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Anomalies in education affect the personal careers of students and universities' retention rates. Understanding the laws behind educational anomalies promotes the development of individual students and improves the overall quality of education. However, the inaccessibility of educational data hinders the development of the field. Previous research in this field used questionnaires, which are time-consuming and costly and hardly applicable to large-scale student cohorts. With the popularity of educational management systems and the rise of online education during the COVID-19 pandemic, a large amount of educational data is available online and offline, providing an unprecedented opportunity to explore educational anomalies from a data-driven perspective. As an emerging field, educational anomaly analytics is rapidly attracting scholars from a variety of fields, including education, psychology, sociology, and computer science. This paper intends to provide a comprehensive review of data-driven analytics of educational anomalies from a methodological standpoint. We focus on the five types of research that have received the most attention: course failure prediction, dropout prediction, detection of mental health problems, prediction of difficulty in graduation, and prediction of difficulty in employment. Then, we discuss the challenges of current related research. This study aims to provide references for educational policymaking while promoting the development of educational anomaly analytics as a growing field. Copyright © 2022 Guo, Bai, Tian, Firmin and Xia.
Efficient anomaly recognition using surveillance videos
- Authors: Saleem, Gulshan , Bajwa, Usama , Raza, Rana , Alqahtani, Fayez , Tolba, Amr , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: PeerJ Computer Science Vol. 8, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Smart surveillance is a difficult task that is gaining popularity due to its direct link to human safety. Today, many indoor and outdoor surveillance systems are in use at public places and in smart cities. Because these systems are expensive to deploy, they are out of reach for the vast majority of the public and private sectors. Due to the lack of a precise definition of an anomaly, automated surveillance is a challenging task, especially when large amounts of data, such as 24/7 CCTV footage, must be processed. When implementing such systems in real-time environments, the high computational resource requirements for automated surveillance become a major bottleneck. Another challenge is to recognize anomalies accurately, since achieving high accuracy while reducing computational cost is more challenging. To address these challenges, this research develops a system that is both efficient and cost-effective. Although 3D convolutional neural networks have proven to be accurate, they are prohibitively expensive for practical use, particularly in real-time surveillance. In this article, we present two contributions: a resource-efficient framework for anomaly recognition problems, and two-class and multi-class anomaly recognition on spatially augmented surveillance videos. This research aims to address the problem of computation overhead while maintaining recognition accuracy. The proposed Temporal based Anomaly Recognizer (TAR) framework combines a partial shift strategy with a 2D convolutional architecture-based model, namely MobileNetV2. Extensive experiments were carried out to evaluate the model's performance on the UCF Crime dataset, with MobileNetV2 as the baseline architecture; it achieved an accuracy of 88%, a 2.47% improvement over the available state of the art. The proposed framework achieves 52.7% accuracy for multi-class anomaly recognition on the UCF Crime2Local dataset. The proposed model has been tested in real-time camera stream settings and can handle six streams simultaneously without the need for additional resources. © Copyright 2022 Saleem et al.
Exploring human mobility for multi-pattern passenger prediction : a graph learning framework
- Authors: Kong, Xiangjie , Wang, Kailai , Hou, Mingliang , Xia, Feng , Karmakar, Gour , Li, Jianxin
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 23, no. 9 (2022), p. 16148-16160
- Full Text:
- Reviewed:
- Description: Traffic flow prediction is an integral part of an intelligent transportation system and thus fundamental for various traffic-related applications. Buses are an indispensable way of moving for urban residents, with fixed routes and schedules, which leads to latent travel regularity. However, human mobility patterns, specifically the complex relationships between bus passengers, are deeply hidden in this fixed mobility mode. Although many models exist to predict traffic flow, human mobility patterns have not been well explored in this regard. To address this research gap and learn human mobility knowledge from these fixed travel behaviors, we propose a multi-pattern passenger flow prediction framework, MPGCN, based on the Graph Convolutional Network (GCN). Firstly, we construct a novel sharing-stop network to model relationships between passengers based on bus record data. Then, we employ GCN to extract features from the graph by learning useful topology information and introduce a deep clustering method to recognize mobility patterns hidden in bus passengers. Furthermore, to fully utilize spatio-temporal information, we propose GCN2Flow to predict passenger flow based on various mobility patterns. To the best of our knowledge, this paper is the first work to adopt a multi-pattern approach to predicting bus passenger flow by taking advantage of graph learning. We also design a case study for optimizing routes. Extensive experiments on a real-world bus dataset demonstrate that MPGCN has potential efficacy in passenger flow prediction and route optimization. © 2000-2011 IEEE.
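The sharing-stop network described in this abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the record format, the function name `build_sharing_stop_network`, and the rule of weighting edges by the number of shared stops are not taken from the paper.

```python
# Hypothetical sketch: build a passenger graph from (passenger_id, stop_id)
# bus records, linking passengers who used at least one common stop.
from collections import defaultdict
from itertools import combinations

def build_sharing_stop_network(records):
    """Return weighted edges between passengers who share bus stops.

    records: iterable of (passenger_id, stop_id) pairs.
    Edge weight (an illustrative choice) = number of shared stops.
    """
    stops = defaultdict(set)  # passenger -> set of stops used
    for passenger, stop in records:
        stops[passenger].add(stop)
    edges = {}
    for p1, p2 in combinations(sorted(stops), 2):
        shared = len(stops[p1] & stops[p2])
        if shared:
            edges[(p1, p2)] = shared
    return edges
```

Such an adjacency structure could then feed a GCN for feature extraction, which is the role the sharing-stop network plays in the MPGCN pipeline.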
Expressing metaphorically, writing creatively: Metaphor identification for creativity assessment
- Authors: Zhang, Dongyu , Zhang, Minghao , Peng, Ciyuan , Xia, Feng
- Date: 2022
- Type: Text , Conference proceedings
- Relation: WWW '22: Companion Proceedings of the Web Conference , Virtual event , April 2022 p. 1198-
- Full Text:
- Reviewed:
- Description: Metaphor, which can implicitly express profound meanings and emotions, is a unique writing technique frequently used in human language. In writing, meaningful metaphorical expressions can enhance the literariness and creativity of texts. Therefore, the usage of metaphor is a significant factor when assessing the creativity and literariness of writing. However, few, if any, automatic writing assessment systems consider metaphorical expressions when scoring creativity. To improve the accuracy of automatic writing assessment, this paper proposes a novel creativity assessment model that imports a token-level metaphor identification method to extract metaphors as the indicators for creativity scoring. The experimental results show that our model can accurately assess the creativity of different texts with precise metaphor identification. To the best of our knowledge, we are the first to apply automatic metaphor identification to assess writing creativity. Moreover, identifying features (e.g., metaphors) that influence writing creativity using computational approaches can offer fair and reliable assessment methods for educational settings.
Familiarity-based collaborative team recognition in academic social networks
- Authors: Yu, Shuo , Xia, Feng , Zhang, Chen , Wei, Haoran , Keogh, Kathleen , Chen, Honglong
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Computational Social Systems Vol. 9, no. 5 (2022), p. 1432-1445
- Full Text:
- Reviewed:
- Description: Collaborative teamwork is key to major scientific discoveries. However, the prevalence of collaboration among researchers makes team recognition increasingly challenging. Previous studies have demonstrated that people are more likely to collaborate with individuals they are familiar with. In this work, we employ the definition of familiarity and propose the faMiliarity-based cOllaborative Team recOgnition (MOTO) algorithm to recognize collaborative teams. MOTO calculates the shortest distance matrix within the global collaboration network and the local density of each node. Central team members are initially recognized based on local density. Then, MOTO recognizes the remaining team members by using the familiarity metric and the shortest distance matrix. Extensive experiments have been conducted on a large-scale dataset. The experimental results show that, compared with baseline methods, MOTO can recognize the largest number of teams. The teams recognized by MOTO possess more cohesive team structures and lower team communication costs than those recognized by other methods. MOTO utilizes familiarity in team recognition to identify cohesive academic teams. The recognized teams are in line with real-world collaborative teamwork patterns. Based on team recognition using MOTO, the research team structure and performance are further analyzed for given time periods. The number of teams that consist of members from different institutions increases gradually. Such teams are found to perform better than those whose members are from the same institution. © 2014 IEEE.
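The two-phase recognition described in this abstract (density-based seeding of central members, then attachment of remaining members via distances) can be sketched as follows. This is a hypothetical illustration, not the paper's MOTO algorithm: the function names, the use of degree as local density, and the hop-limit stand-in for the familiarity metric are all assumptions.

```python
# Illustrative MOTO-style pass over a collaboration network given as an
# adjacency dict {node: [neighbors]}. Thresholds are arbitrary defaults.
from collections import deque

def bfs_distances(adj, source, max_dist):
    """Hop distances from source, truncated at max_dist."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] == max_dist:
            continue
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def recognize_teams(adj, density_threshold=3, max_dist=2):
    """Group nodes into teams around high-density central members."""
    # Phase 1: local density (approximated here by degree) seeds the teams.
    centers = [n for n in adj if len(adj[n]) >= density_threshold]
    dist_from = {c: bfs_distances(adj, c, max_dist) for c in centers}
    teams = {c: {c} for c in centers}
    # Phase 2: each remaining node joins its nearest reachable center,
    # a stand-in for attachment via the familiarity metric.
    for node in adj:
        if node in teams:
            continue
        reachable = [(d[node], c) for c, d in dist_from.items() if node in d]
        if reachable:
            _, best = min(reachable)
            teams[best].add(node)
    return list(teams.values())
```

On a toy network with two dense hubs, this sketch yields one team per hub, mirroring the cohesive structures the abstract reports.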
Graph augmentation learning
- Authors: Yu, Shuo , Huang, Huafei , Dao, Minh , Xia, Feng
- Date: 2022
- Type: Text , Conference paper
- Relation: 31st ACM Web Conference, WWW 2022, Virtual, online, 25 April 2022, WWW 2022 - Companion Proceedings of the Web Conference 2022 p. 1063-1072
- Full Text:
- Reviewed:
- Description: Graph Augmentation Learning (GAL) provides outstanding solutions for graph learning in handling incomplete data, noisy data, etc. Numerous GAL methods have been proposed for graph-based applications such as social network analysis and traffic flow forecasting. However, the underlying reasons for the effectiveness of these GAL methods are still unclear. As a consequence, how to choose the optimal graph augmentation strategy for a certain application scenario remains a black box. There is a lack of systematic, comprehensive, and experimentally validated guidelines on GAL for scholars. Therefore, in this survey, we review GAL techniques in depth at the macro (graph), meso (subgraph), and micro (node/edge) levels. We further illustrate in detail how GAL enhances data quality and model performance. The aggregation mechanisms of augmentation strategies and graph learning models are also discussed for different application scenarios, i.e., data-specific, model-specific, and hybrid scenarios. To better demonstrate the advantages of GAL, we experimentally validate the effectiveness and adaptability of different GAL strategies in different downstream tasks. Finally, we share our insights on several open issues of GAL, including heterogeneity, spatio-temporal dynamics, scalability, and generalization. © 2022 ACM.
Graph self-supervised learning : a survey
- Liu, Yixin, Jin, Ming, Pan, Shirui, Zhou, Chuan, Zheng, Yu, Xia, Feng, Yu, Philip
- Authors: Liu, Yixin , Jin, Ming , Pan, Shirui , Zhou, Chuan , Zheng, Yu , Xia, Feng , Yu, Philip
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Knowledge and Data Engineering Vol. 35, no. 6 (2022), p. 5879-5900
- Full Text:
- Reviewed:
- Description: Deep learning on graphs has attracted significant interest recently. However, most of the work has focused on (semi-)supervised learning, resulting in shortcomings including heavy label reliance, poor generalization, and weak robustness. To address these issues, self-supervised learning (SSL), which extracts informative knowledge through well-designed pretext tasks without relying on manual labels, has become a promising and trending learning paradigm for graph data. Different from SSL in other domains such as computer vision and natural language processing, SSL on graphs has an exclusive background, design ideas, and taxonomies. Under the umbrella of graph self-supervised learning, we present a timely and comprehensive review of the existing approaches which employ SSL techniques for graph data. We construct a unified framework that mathematically formalizes the paradigm of graph SSL. According to the objectives of pretext tasks, we divide these approaches into four categories: generation-based, auxiliary property-based, contrast-based, and hybrid approaches. We further describe the applications of graph SSL across various research fields and summarize the commonly used datasets, evaluation benchmarks, performance comparisons, and open-source code of graph SSL. Finally, we discuss the remaining challenges and potential future directions in this research field. © IEEE.
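The survey's contrast-based category centres on objectives of the InfoNCE family: pull two views of the same node together, push views of other nodes apart. The sketch below is a generic scalar version of such a loss, assumed here for illustration; the function names and the temperature value are not from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style objective: maximise agreement with the positive view
    relative to the negative views, scaled by temperature tau."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

anchor = [1.0, 0.0]
loss = contrastive_loss(anchor, [0.9, 0.1], [[0.0, 1.0], [-1.0, 0.0]])
```

In graph SSL, the anchor and positive would be embeddings of the same node under two augmented graph views.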
GraphLearning’22: 1st International Workshop on Graph Learning
- Xia, Feng, Lambiotte, Renaud, Aggarwal, Charu
- Authors: Xia, Feng , Lambiotte, Renaud , Aggarwal, Charu
- Date: 2022
- Type: Text , Conference proceedings
- Relation: WWW '22: Companion Proceedings of the Web Conference 2022, Virtual Event, Lyon France April 25 - 29, 2022 p. 1004-1005
- Full Text:
- Reviewed:
- Description: The First Workshop on Graph Learning aims to bring together researchers and practitioners from academia and industry to discuss recent advances and core challenges of graph learning. This workshop will be established as a platform for multiple disciplines such as computer science, applied mathematics, physics, social sciences, data science, complex networks, and systems engineering. Core challenges in regard to theory, methodology, and applications of graph learning will be the main center of discussions at the workshop.
International workshop on data-driven science of science
- Bu, Yi, Liu, Meijun, Zhai, Yujia, Ding, Ying, Xia, Feng, Acuña, Daniel, Zhang, Yi
- Authors: Bu, Yi , Liu, Meijun , Zhai, Yujia , Ding, Ying , Xia, Feng , Acuña, Daniel , Zhang, Yi
- Date: 2022
- Type: Text , Conference paper
- Relation: 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2022, Washington, USA, 14-18 August 2022, Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining p. 4856-4857
- Full Text: false
- Reviewed:
- Description: Citation data, along with other bibliographic datasets, have long been adopted by the knowledge and data discovery community as an important direction for presenting the validity and effectiveness of proposed algorithms and strategies. Many top computer scientists are also excellent researchers in the science of science. The purpose of this workshop is to bridge the two communities (i.e., the knowledge discovery community and the science of science community) together as the scholarly activities become salient web and social activities that start to generate a ripple effect on broader knowledge discovery communities. This workshop will showcase the current data-driven science of science research by highlighting several studies and constructing a community of researchers to explore questions critical to the future of data-driven science of science, especially a community of data-driven science of science in Data Science so as to facilitate collaboration and inspire innovation. Through discussion on emerging and critical topics in the science of science, this workshop aims to help generate effective solutions for addressing environmental, societal, and technological problems in the scientific community. © 2022 Owner/Author.
Layered malicious nodes detection with graph attention network in human-cyber-physical networks
- Lin, Yuhang, Huang, Yanze, Hsieh, Sun-Yuan, Lin, Limei, Xia, Feng
- Authors: Lin, Yuhang , Huang, Yanze , Hsieh, Sun-Yuan , Lin, Limei , Xia, Feng
- Date: 2022
- Type: Text , Conference paper
- Relation: 19th IEEE International Conference on Mobile Ad Hoc and Smart Systems, MASS 2022, Denver, USA, 20-22 October 2022, Proceedings 2022 IEEE 19th International Conference on Mobile Ad Hoc and Smart Systems MASS 2022 p. 523-529
- Full Text: false
- Reviewed:
- Description: With the advancement of network information technology and smart device technology, cyberspace is gradually evolving into Human-Cyber-Physical Networks (HCPNs). At the same time, the security problems caused by malicious nodes are becoming more and more serious, so an efficient approach for malicious node detection is urgently needed. In this paper, we apply a graph attention network (GAT) to detect malicious nodes layer by layer in HCPNs. In addition, we investigate the influence of graph structure features on detection performance in terms of accuracy, precision, recall, and F1-score by comparing with a graph convolutional network-based approach. Experimental results show that our approach generally achieves better performance as well as stronger generalizability than the graph convolutional network-based approach. © 2022 IEEE.
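The GAT mechanism the paper builds on can be sketched with a single attention head over scalar node features: score each neighbour with a LeakyReLU, softmax-normalise, then aggregate. This is a minimal stand-in for the standard GAT layer, not the paper's layered detection model; the additive scoring of scalar features is a simplifying assumption.

```python
import math

def leaky_relu(x, slope=0.2):
    """LeakyReLU as used for GAT attention scores."""
    return x if x > 0 else slope * x

def gat_aggregate(h_i, neighbors):
    """One GAT-style attention head over scalar features:
    score each neighbour, softmax-normalise, then aggregate."""
    raw = [leaky_relu(h_i + h_j) for h_j in neighbors]
    m = max(raw)  # subtract the max for numerical stability
    weights = [math.exp(r - m) for r in raw]
    z = sum(weights)
    alpha = [w / z for w in weights]
    out = sum(a * h_j for a, h_j in zip(alpha, neighbors))
    return out, alpha
```

A detector would stack such layers and classify each node's final representation as benign or malicious.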
MET-Meme : a multimodal meme dataset rich in metaphors
- Xu, Bo, Li, Tingting, Zheng, Junzhe, Naseriparsa, Mehdi, Zhao, Zhehuan, Lin, Hongfei, Xia, Feng
- Authors: Xu, Bo , Li, Tingting , Zheng, Junzhe , Naseriparsa, Mehdi , Zhao, Zhehuan , Lin, Hongfei , Xia, Feng
- Date: 2022
- Type: Text , Conference paper
- Relation: 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2022, Madrid, Spain, 11-15 July 2022, SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval p. 2887-2899
- Full Text: false
- Reviewed:
- Description: Memes have become a popular means of communication for Internet users worldwide. Understanding Internet memes is one of the trickiest challenges in natural language processing (NLP) due to their non-standard writing and network vocabulary. Recently, many linguists have suggested that memes contain rich metaphorical information. However, existing research ignores this key feature. Therefore, to incorporate informative metaphors into meme analysis, we introduce a novel multimodal meme dataset called MET-Meme, which is rich in metaphorical features. It contains 10045 text-image pairs, with manual annotations of metaphor occurrence, sentiment categories, intentions, and offensiveness degree. Moreover, we propose a range of strong baselines to demonstrate the importance of combining metaphorical features for meme sentiment analysis and semantic understanding tasks. MET-Meme and its code are released publicly for research at https://github.com/liaolianfoka/MET-Meme-A-Multi-modal-Meme-Dataset-Rich-in-Metaphors. © 2022 ACM.
Multimodal educational data fusion for students' mental health detection
- Guo, Teng, Zhao, Wenhong, Alrashoud, Mubarak, Tolba, Amr, Firmin, Sally, Xia, Feng
- Authors: Guo, Teng , Zhao, Wenhong , Alrashoud, Mubarak , Tolba, Amr , Firmin, Sally , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 70370-70382
- Full Text:
- Reviewed:
- Description: Mental health issues can lead to serious consequences like depression, self-mutilation, and worse, especially for university students who are not yet physically and mentally mature. Not all students with poor mental health are aware of their situation and actively seek help. Proactive detection of mental problems is a critical step in addressing this issue. However, accurate detection is hard to achieve due to the inherent complexity and heterogeneity of the unstructured multi-modal data generated by campus life. Against this background, we propose a framework for detecting students' mental health, named CASTLE (educational data fusion for mental health detection). The framework involves three parts. First, we utilize representation learning to fuse data on social life, academic performance, and physical appearance. An algorithm named MOON (multi-view social network embedding) is proposed to represent students' social life comprehensively by effectively fusing students' heterogeneous social relations. Second, the synthetic minority oversampling technique (SMOTE) is applied to the label imbalance issue. Finally, a DNN (deep neural network) model is utilized for the final detection. Extensive results demonstrate the promising performance of the proposed methods in comparison to an extensive range of state-of-the-art baselines. © 2013 IEEE.
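The SMOTE step the abstract mentions synthesises new minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. The toy version below illustrates that interpolation only; real pipelines (including, presumably, CASTLE's) would use a library implementation such as imbalanced-learn's, and the function name and parameters here are made up.

```python
import random

def smote_sample(minority, k=2, n_new=4, seed=0):
    """Toy SMOTE: synthesise minority-class points by interpolating between a
    random minority point and one of its k nearest minority neighbours."""
    rng = random.Random(seed)

    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sqdist(x, p))[:k]
        nn = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic
```

Because each synthetic point lies on a segment between two real minority points, the new samples stay inside the minority region rather than being arbitrary noise.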
Physics-informed graph learning
- Peng, Ciyuan, Xia, Feng, Saikrishna, Vidya, Liu, Huan
- Authors: Peng, Ciyuan , Xia, Feng , Saikrishna, Vidya , Liu, Huan
- Date: 2022
- Type: Text , Conference paper
- Relation: 22nd IEEE International Conference on Data Mining Workshops, ICDMW 2022, Orlando, Florida, 28 November to 1 December 2022, Proceedings: IEEE International Conference on Data Mining Workshops, ICDMW Vol. 2022-November, p. 732-739
- Full Text: false
- Reviewed:
- Description: The expeditious development of graph learning in recent years has found innumerable applications in several diversified fields. Among the main associated challenges are the volume and complexity of graph data, which leave graph learning models unable to learn graph information efficiently. To remedy this inefficacy, physics-informed graph learning (PIGL) is emerging. PIGL incorporates physics rules while performing graph learning, which has enormous benefits. This paper presents a systematic review of PIGL methods. We begin by introducing a unified framework of graph learning models and then examine existing PIGL methods in relation to that framework. We also discuss several future challenges for PIGL. This survey paper is expected to stimulate innovative research and development activities pertaining to PIGL. © 2022 IEEE.
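A common way to "incorporate physics rules" into learning, as PIGL methods do, is to add a weighted penalty on a physics residual to the ordinary data-fit loss. The scalar sketch below shows that composition in its simplest form; the function names, the toy conservation rule, and the weight `lam` are illustrative assumptions, not taken from the survey.

```python
def physics_informed_loss(pred, target, residual, lam=0.1):
    """Data-fit term plus a weighted penalty on the physics residual r(pred).

    `residual` encodes a physical rule the prediction should satisfy
    (e.g. a conservation constraint); lam trades off the two terms.
    """
    data_term = (pred - target) ** 2
    physics_term = residual(pred) ** 2
    return data_term + lam * physics_term

# Toy rule: valid predictions satisfy y = 0.5 (a stand-in conservation law).
conserve = lambda y: y - 0.5
```

During training, the residual term steers the model toward physically plausible outputs even where labelled data is scarce.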
Relational structure-aware knowledge graph representation in complex space
- Sun, Ke, Yu, Shuo, Peng, Ciyuan, Wang, Yueru, Alfarraj, Osama, Tolba, Amr, Xia, Feng
- Authors: Sun, Ke , Yu, Shuo , Peng, Ciyuan , Wang, Yueru , Alfarraj, Osama , Tolba, Amr , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: Mathematics Vol. 10, no. 11 (2022), p.
- Full Text:
- Reviewed:
- Description: Relations in knowledge graphs have rich relational structures and various binary relational patterns. Various relation modelling strategies have been proposed for embedding knowledge graphs, but they fail to fully capture both features of relations: rich relational structures and various binary relational patterns. To address the problem of insufficient embedding due to the complexity of relations, we propose a novel knowledge graph representation model in complex space, namely MARS, which exploits complex relations to embed knowledge graphs. MARS combines the mechanisms of complex numbers and message passing and embeds triplets into relation-specific complex hyperplanes. Thus, MARS can well preserve various relation patterns as well as structural information in knowledge graphs. In addition, we find that the scores generated from the score function approximate a Gaussian distribution, and the scores in the tail cannot effectively represent triplets. To address this particular issue and improve the precision of embeddings, we use the standard deviation to limit the dispersion of the score distribution, resulting in more accurate embeddings of triplets. Comprehensive experiments on multiple benchmarks demonstrate that our model significantly outperforms existing state-of-the-art models for link prediction and triple classification. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
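The abstract does not give MARS's score function, but the family of complex-space models it belongs to can be illustrated with a RotatE-style score, where each relation is a unit-modulus complex rotation and a plausible triplet rotates the head embedding onto the tail. This is an illustrative stand-in, not MARS itself.

```python
import cmath

def rotate_score(head, rel_phases, tail):
    """RotatE-style scoring (illustrative stand-in, not MARS): each relation
    is a unit-modulus complex rotation; the score is higher (closer to 0)
    the more exactly the rotated head matches the tail."""
    rotated = [h * cmath.exp(1j * p) for h, p in zip(head, rel_phases)]
    return -sum(abs(r - t) for r, t in zip(rotated, tail))
```

Because rotations compose and invert cleanly, such models can represent symmetric, antisymmetric, and inverse relation patterns, which is the kind of "relation pattern" preservation the abstract refers to.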
RMGen : a tri-layer vehicular trajectory data generation model exploring urban region division and mobility pattern
- Kong, Xiangjie, Chen, Qiao, Hou, Mingliang, Rahim, Azizur, Ma, Kai, Xia, Feng
- Authors: Kong, Xiangjie , Chen, Qiao , Hou, Mingliang , Rahim, Azizur , Ma, Kai , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Vehicular Technology Vol. 71, no. 9 (2022), p. 9225-9238
- Full Text:
- Reviewed:
- Description: As an important branch of the Internet of Things (IoT), the Internet of Vehicles (IoV) has attracted extensive attention in the research field. To deeply study the IoV and build a vehicle spatiotemporal interaction network, it is necessary to use the trajectory data of private cars. However, due to privacy and security protection policies and other reasons, the data set of private cars cannot be obtained, which hinders the research on the social attributes of vehicles in the IoV. Most of the previous work generated the same type of data, and how to generate private car data sets from various existing data sets is a huge challenge. In this paper, we propose a tri-layer framework to solve this problem. First, we propose a novel region division scheme that considers detailed inter-region relations connected by traffic flux. Second, a new spatial-temporal interaction model is developed to estimate the traffic flow between two regions. Third, we devise an evaluation pipeline to validate generation results from microscopic and macroscopic perspectives. Qualitative and quantitative results demonstrate that the data generated in heavy density scenarios can provide strong data support for downstream IoV and mobility research tasks. © 1967-2012 IEEE.
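The framework's second step estimates traffic flow between region pairs. A classic baseline for such inter-region flow estimation is the gravity model, sketched below; this is a generic stand-in for illustration, not RMGen's actual spatial-temporal interaction model, and the parameter values are assumptions.

```python
def gravity_flow(mass_i, mass_j, distance, k=1.0, beta=2.0):
    """Classic gravity-model estimate of flow between two regions:
    flow grows with each region's 'mass' (e.g. trip counts or population)
    and decays with distance raised to the exponent beta."""
    return k * mass_i * mass_j / distance ** beta
```

Fitting `k` and `beta` to observed traffic flux between region pairs gives a simple generator of plausible inter-region flows for synthetic trajectories.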
Robust graph neural networks via ensemble learning
- Lin, Qi, Yu, Shuo, Sun, Ke, Zhao, Wenhong, Alfarraj, Osama, Tolba, Amr, Xia, Feng
- Authors: Lin, Qi , Yu, Shuo , Sun, Ke , Zhao, Wenhong , Alfarraj, Osama , Tolba, Amr , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: Mathematics Vol. 10, no. 8 (2022), p.
- Full Text:
- Reviewed:
- Description: Graph neural networks (GNNs) have demonstrated a remarkable ability in the task of semi-supervised node classification. However, most existing GNNs suffer from nonrobustness issues, which pose a great challenge for applying GNNs to sensitive scenarios. Some researchers concentrate on constructing an ensemble model to mitigate the nonrobustness issues. Nevertheless, these methods ignore the interaction among base models, leading to similar graph representations. Moreover, due to the deterministic propagation applied in most existing GNNs, each node relies highly on its neighbors, leaving the nodes sensitive to perturbations. Therefore, in this paper, we propose a novel framework of graph ensemble learning based on knowledge passing (called GEL) to address the above issues. In order to achieve interaction, we consider the predictions of prior models as knowledge to obtain more reliable predictions. Moreover, we design a multilayer DropNode propagation strategy to reduce each node's dependence on particular neighbors. This strategy also empowers each node to aggregate information from diverse neighbors, alleviating oversmoothing issues. We conduct experiments on three benchmark datasets: Cora, Citeseer, and Pubmed. GEL outperforms GCN by more than 5% in terms of accuracy across all three datasets and also performs better than other state-of-the-art baselines. Extensive experimental results also show that GEL alleviates the nonrobustness and oversmoothing issues. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
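The DropNode idea the abstract describes can be sketched in one propagation step over scalar features: mask out entire nodes at random, then average each node over itself and its kept neighbours, so no node depends on any single neighbour across training runs. This is a simplified illustration (scalar features, plain averaging, no rescaling of kept nodes), not GEL's multilayer strategy.

```python
import random

def dropnode_propagate(features, adj, p=0.3, seed=0):
    """One DropNode-style propagation step: zero out whole nodes with
    probability p, then average each node over itself and its neighbours."""
    rng = random.Random(seed)
    keep = [rng.random() >= p for _ in features]
    masked = [f if k else 0.0 for f, k in zip(features, keep)]
    out = []
    for i, neigh in enumerate(adj):
        vals = [masked[j] for j in neigh] + [masked[i]]
        out.append(sum(vals) / len(vals))
    return out
```

Repeating the step with different random masks yields the diverse node views an ensemble like GEL can then combine.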
SEeMS : advanced artificial neural networks for employee learning motivation prediction
- Sin, Audrey, Islam, Sardar, Prentice, Catherine, Xia, Feng
- Authors: Sin, Audrey , Islam, Sardar , Prentice, Catherine , Xia, Feng
- Date: 2022
- Type: Text , Conference paper
- Relation: 7th IEEE International conference for Convergence in Technology, I2CT 2022, Pune, India, 7-9 April 2022, Proceedings 2022 IEEE 7th International conference for Convergence in Technology (I2CT)
- Full Text: false
- Reviewed:
- Description: Employee learning motivation is vital for employee professional development and organisational success. However, worldwide statistics show that employees are generally unmotivated to learn. This study aims to examine employee learning motivation signals to determine the best-fit model for early intervention. In this paper, we present SEeMS, a Smart Employee learning Motivation System that predicts employee learning motivation autonomously. An Advanced Artificial Neural Network (AANN) with a blended activation function of Sigmoid and ReLU (bSigReLu) is proposed and compared with other learning models. Experimental results demonstrate that the proposed model outperforms conventional state-of-the-art models. This novel study contributes to the fields of organisational behaviour and data science by extending the usage of kernels and customised activation functions to solve the employee learning motivation problem. The superiority of the algorithm makes SEeMS ideal for practical deployment. Based on its predictions, organisations can design better strategies to improve learning motivation for targeted employees. It is the first step towards achieving an eco-system of self-motivated employee learning that contributes to employee job satisfaction, performance, and well-being, indirectly contributing to employer competitiveness. © 2022 IEEE.
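The abstract names a blended Sigmoid/ReLU activation (bSigReLu) but does not give its formula, so the sketch below uses a hypothetical convex combination of the two functions purely to illustrate what "blended activation" means; the mixing weight `alpha` and the function name are assumptions, not the paper's definition.

```python
import math

def blended_sig_relu(x, alpha=0.5):
    """Hypothetical Sigmoid/ReLU blend (the paper's exact bSigReLu formula
    is not given in the abstract): a convex combination of the two."""
    sig = 1.0 / (1.0 + math.exp(-x))
    relu = max(0.0, x)
    return alpha * sig + (1 - alpha) * relu
```

Such a blend keeps the smooth, bounded behaviour of the sigmoid near zero while inheriting the unbounded linear growth of ReLU for large positive inputs.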