Metaphor research in the 21st century : a bibliographic analysis
- Authors: Zhang, Dongyu, Zhang, Minghao, Peng, Ciyuan, Jung, Jason, Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: Computer Science and Information Systems Vol. 18, no. 1 (2020), p. 303-322
- Full Text:
- Reviewed:
- Description: Metaphor is widely used in human communication. The cohort of scholars studying metaphor in various fields is continuously growing, but very little work has been done on the bibliographic analysis of metaphor research. This paper examines advancements in metaphor research from 2000 to 2017. Using data retrieved from Microsoft Academic Graph and Web of Science, it presents a macro-level analysis of metaphor research and expounds the underlying patterns of its development. Taking sub-fields of metaphor research into consideration, an internal analysis is then carried out from a micro perspective to reveal the evolution of research topics and the inherent relationships among them. This paper provides novel insights into the current state of the art of metaphor research as well as future trends in this field, which may spark new research interests in metaphor from both linguistic and interdisciplinary perspectives. © 2020, ComSIS Consortium. All rights reserved.
A federated learning-based license plate recognition scheme for 5G-enabled Internet of vehicles
- Authors: Kong, Xiangjie, Wang, Kailai, Hou, Mingliang, Hao, Xinyu, Shen, Guojiang, Chen, Xin, Xia, Feng
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 17, no. 12 (Dec 2021), p. 8523-8530
- Full Text:
- Reviewed:
- Description: The license plate is an essential characteristic for identifying vehicles in traffic management, and thus license plate recognition is important for the Internet of Vehicles. Since 5G coverage is now widespread, mobile devices are used to assist traffic management, a significant part of Industry 4.0. However, centralized model training has always carried privacy risks, and a trained model cannot be directly deployed on a mobile device because of its large number of parameters. In this article, we propose a federated learning-based license plate recognition framework (FedLPR) to solve these problems. We design detection and recognition models suitable for deployment on mobile devices. To preserve user privacy, individuals' data is harnessed on their own mobile devices, instead of on a server, to train models via federated learning. Extensive experiments demonstrate that FedLPR achieves high accuracy and acceptable communication cost while preserving user privacy.
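The federated learning scheme described in the abstract above keeps raw data on devices and shares only model parameters with the server. As a generic illustration of the idea (a FedAvg-style aggregation sketch, not the paper's actual FedLPR code), the server-side step might look like:

```python
# FedAvg-style aggregation sketch: each client trains on its own data and
# uploads only its model parameters; the server averages them, weighting
# each client by its number of local samples. Raw data never leaves devices.
def federated_average(client_weights, client_sizes):
    """client_weights: list of parameter vectors (lists of floats),
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    averaged = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i in range(dim):
            averaged[i] += weights[i] * (n / total)
    return averaged
```

A client holding three times as much data as another contributes three times the weight to the averaged model, which is the standard FedAvg weighting.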
Collaborative filtering with network representation learning for citation recommendation
- Authors: Wang, Wei, Tang, Tao, Xia, Feng, Gong, Zhiguo, Chen, Zhikui, Liu, Huan
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Big Data Vol. 8, no. 5 (2022), p. 1233-1246
- Full Text:
- Reviewed:
- Description: Citation recommendation plays an important role in the context of scholarly big data, where finding relevant papers has become more difficult because of information overload. Applying traditional collaborative filtering (CF) to citation recommendation is challenging due to the cold start problem and the lack of paper ratings. To address these challenges, in this article, we propose a collaborative filtering with network representation learning framework for citation recommendation, namely CNCRec, which is a hybrid user-based CF approach considering both paper content and network topology. It aims at recommending citations in heterogeneous academic information networks. CNCRec creates the paper rating matrix based on attributed citation network representation learning, where the attributes are topics extracted from the paper text information. Meanwhile, the learned representations of the attributed collaboration network are utilized to improve the selection of nearest neighbors. By harnessing the power of network representation learning, CNCRec is able to make full use of the whole citation network topology compared with previous context-aware network-based models. Extensive experiments on both DBLP and APS datasets show that the proposed method outperforms state-of-the-art methods in terms of precision, recall, and MRR (Mean Reciprocal Rank). Moreover, CNCRec can better solve the data sparsity problem compared with other CF-based baselines. © 2015 IEEE.
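The abstract above reports results in terms of MRR (Mean Reciprocal Rank). As a reminder of how that metric is computed (the standard definition, not code from the paper): for each query, take the reciprocal of the rank at which the first relevant item appears in the recommended list, then average over all queries.

```python
# Mean Reciprocal Rank: average over queries of 1 / (rank of the first
# relevant item); a query with no relevant item retrieved contributes 0.
def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """ranked_lists: one ranked list of items per query,
    relevant_sets: one set of relevant items per query."""
    total = 0.0
    for ranking, relevant in zip(ranked_lists, relevant_sets):
        for rank, item in enumerate(ranking, start=1):
            if item in relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)
```

For example, if the first relevant paper appears at rank 2 for one query and rank 1 for another, the MRR is (1/2 + 1/1) / 2 = 0.75.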
Edge computing for Internet of Everything : a survey
- Authors: Kong, Xiangjie, Wu, Yuhan, Wang, Hui, Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Internet of Things Journal Vol. 9, no. 23 (2022), p. 23472-23485
- Full Text:
- Reviewed:
- Description: In this era of the Internet of Everything (IoE), edge computing has emerged as the critical enabling technology for solving a series of issues caused by an increasing number of interconnected devices and large-scale data transmission. However, the deficiencies of the edge computing paradigm are gradually being magnified in the context of IoE, especially in terms of service migration, security and privacy preservation, and edge node deployment. These issues cannot be well addressed by conventional approaches. Thanks to the rapid development of emerging technologies such as artificial intelligence (AI), blockchain, and microservices, novel and more effective solutions have emerged and been applied to existing challenges. In addition, edge computing can be deeply integrated with technologies in other domains (e.g., AI, blockchain, 6G, and digital twin) through interdisciplinary intersection and practice, releasing the potential for mutual benefit. These promising integrations need to be further explored and researched. Edge computing also provides strong support in application scenarios such as remote working, new physical retail industries, and digital advertising, which has greatly changed the way we live, work, and study. In this article, we present an up-to-date survey of edge computing research. In addition to introducing the definition, model, and characteristics of edge computing, we discuss a set of key issues in edge computing and novel solutions supported by emerging technologies in the IoE era. Furthermore, we explore promising trends from the perspective of technology integration. Finally, new application scenarios and the final form of edge computing are discussed. © 2014 IEEE.
Edge data based trailer inception probabilistic matrix factorization for context-aware movie recommendation
- Authors: Chen, Honglong, Li, Zhe, Wang, Zhu, Ni, Zhichen, Li, Junjian, Xu, Ge, Aziz, Abdul, Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: World Wide Web Vol. 25, no. 5 (2022), p. 1863-1882
- Full Text:
- Reviewed:
- Description: The rapid growth of edge data generated by mobile devices and applications deployed at the edge of the network has exacerbated the problem of information overload. As an effective way to alleviate information overload, recommender systems can improve the quality of various services by adding application data generated by users on edge devices, such as visual and textual information, to sparse rating data. The visual information in the movie trailer is a significant part of the movie recommender system. However, due to the complexity of visual information extraction, data sparsity cannot be remarkably alleviated merely by using rough visual features to improve rating prediction accuracy. Fortunately, convolutional neural networks can be used to extract visual features precisely. Therefore, the end-to-end neural image caption (NIC) model can be utilized to obtain textual information describing the visual features of movie trailers. This paper proposes a trailer inception probabilistic matrix factorization model, called Ti-PMF, which combines NIC, recurrent convolutional neural network, and probabilistic matrix factorization models as the rating prediction model. We evaluate the proposed Ti-PMF model with extensive experiments on three real-world datasets to validate its effectiveness. The experimental results illustrate that Ti-PMF outperforms existing models. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Efficient anomaly recognition using surveillance videos
- Authors: Saleem, Gulshan, Bajwa, Usama, Raza, Rana, Alqahtani, Fayez, Tolba, Amr, Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: PeerJ Computer Science Vol. 8, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Smart surveillance is a difficult task that is gaining popularity due to its direct link to human safety. Today, many indoor and outdoor surveillance systems are in use at public places and in smart cities. Because these systems are expensive to deploy, they are out of reach for the vast majority of the public and private sectors. Due to the lack of a precise definition of an anomaly, automated surveillance is a challenging task, especially when large amounts of data, such as 24/7 CCTV footage, must be processed. When implementing such systems in real-time environments, the high computational resource requirements of automated surveillance become a major bottleneck. A further challenge is recognizing anomalies accurately, since achieving high accuracy while reducing computational cost is difficult. To address these challenges, this research develops a system that is both efficient and cost effective. Although 3D convolutional neural networks have proven to be accurate, they are prohibitively expensive for practical use, particularly in real-time surveillance. In this article, we present two contributions: a resource-efficient framework for anomaly recognition problems, and two-class and multi-class anomaly recognition on spatially augmented surveillance videos. This research aims to address the problem of computation overhead while maintaining recognition accuracy. The proposed Temporal based Anomaly Recognizer (TAR) framework combines a partial shift strategy with a 2D convolutional architecture-based model, namely MobileNetV2. Extensive experiments were carried out to evaluate the model's performance on the UCF Crime dataset, with MobileNetV2 as the baseline architecture; it achieved an accuracy of 88%, which is 2.47% higher than the available state of the art. The proposed framework achieves 52.7% accuracy for multi-class anomaly recognition on the UCF Crime2Local dataset.
The proposed model has been tested in real-time camera stream settings and can handle six streams simultaneously without the need for additional resources. © Copyright 2022 Saleem et al.
Graph self-supervised learning : a survey
- Authors: Liu, Yixin, Jin, Ming, Pan, Shirui, Zhou, Chuan, Zheng, Yu, Xia, Feng, Yu, Philip
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Knowledge and Data Engineering Vol. 35, no. 6 (2022), p. 5879-5900
- Full Text:
- Reviewed:
- Description: Deep learning on graphs has attracted significant interest recently. However, most of the work has focused on (semi-)supervised learning, resulting in shortcomings including heavy label reliance, poor generalization, and weak robustness. To address these issues, self-supervised learning (SSL), which extracts informative knowledge through well-designed pretext tasks without relying on manual labels, has become a promising and trending learning paradigm for graph data. Different from SSL in other domains such as computer vision and natural language processing, SSL on graphs has an exclusive background, design ideas, and taxonomies. Under the umbrella of graph self-supervised learning, we present a timely and comprehensive review of the existing approaches which employ SSL techniques for graph data. We construct a unified framework that mathematically formalizes the paradigm of graph SSL. According to the objectives of pretext tasks, we divide these approaches into four categories: generation-based, auxiliary property-based, contrast-based, and hybrid approaches. We further describe the applications of graph SSL across various research fields and summarize the commonly used datasets, evaluation benchmarks, performance comparisons, and open-source code of graph SSL. Finally, we discuss the remaining challenges and potential future directions in this research field. © IEEE.
Multimodal educational data fusion for students' mental health detection
- Authors: Guo, Teng, Zhao, Wenhong, Alrashoud, Mubarak, Tolba, Amr, Firmin, Sally, Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 70370-70382
- Full Text:
- Reviewed:
- Description: Mental health issues can lead to serious consequences like depression, self-mutilation, and worse, especially for university students who are not yet physically and mentally mature. Not all students with poor mental health are aware of their situation and actively seek help. Proactive detection of mental problems is a critical step in addressing this issue. However, accurate detection is hard to achieve due to the inherent complexity and heterogeneity of the unstructured multi-modal data generated by campus life. Against this background, we propose a framework for detecting students' mental health issues, named CASTLE (educational data fusion for mental health detection). Three parts are involved in this framework. First, we utilize representation learning to fuse data on social life, academic performance, and physical appearance. An algorithm named MOON (multi-view social network embedding) is proposed to represent students' social life in a comprehensive way by effectively fusing students' heterogeneous social relations. Second, the synthetic minority oversampling technique (SMOTE) is applied to the label imbalance issue. Finally, a deep neural network (DNN) model is utilized for the final detection. Extensive results demonstrate the promising performance of the proposed methods in comparison to a wide range of state-of-the-art baselines. © 2013 IEEE.
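SMOTE, which the abstract above applies to the label imbalance issue, synthesizes new minority-class samples by interpolating between a minority sample and one of its minority-class neighbours. A minimal sketch of that interpolation step (illustrative only, not the CASTLE implementation; real use would typically call a library such as imbalanced-learn):

```python
import random

# SMOTE-style synthesis sketch: place a new synthetic sample at a random
# point on the line segment between a minority sample and a neighbour,
# so each feature is interpolated by the same random gap in [0, 1).
def smote_sample(x, neighbor, rng=random):
    gap = rng.random()  # one gap shared across all features
    return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbor)]
```

Repeating this for many (sample, neighbour) pairs grows the minority class until the label distribution is balanced, without duplicating existing points exactly.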
RMGen : a tri-layer vehicular trajectory data generation model exploring urban region division and mobility pattern
- Authors: Kong, Xiangjie, Chen, Qiao, Hou, Mingliang, Rahim, Azizur, Ma, Kai, Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Vehicular Technology Vol. 71, no. 9 (2022), p. 9225-9238
- Full Text:
- Reviewed:
- Description: As an important branch of the Internet of Things (IoT), the Internet of Vehicles (IoV) has attracted extensive attention in the research field. To deeply study the IoV and build a vehicle spatiotemporal interaction network, it is necessary to use the trajectory data of private cars. However, due to privacy and security protection policies and other reasons, the data set of private cars cannot be obtained, which hinders the research on the social attributes of vehicles in the IoV. Most of the previous work generated the same type of data, and how to generate private car data sets from various existing data sets is a huge challenge. In this paper, we propose a tri-layer framework to solve this problem. First, we propose a novel region division scheme that considers detailed inter-region relations connected by traffic flux. Second, a new spatial-temporal interaction model is developed to estimate the traffic flow between two regions. Third, we devise an evaluation pipeline to validate generation results from microscopic and macroscopic perspectives. Qualitative and quantitative results demonstrate that the data generated in heavy density scenarios can provide strong data support for downstream IoV and mobility research tasks. © 1967-2012 IEEE.
Deep learning : survey of environmental and camera impacts on internet of things images
- Authors: Kaur, Roopdeep, Karmakar, Gour, Xia, Feng, Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Artificial Intelligence Review Vol. 56, no. 9 (2023), p. 9605-9638
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) images are attracting growing attention because of their wide range of applications, which require visual analysis to drive automation. However, IoT images are predominantly captured in outdoor environments and thus are inherently impacted by camera and environmental parameters, which can adversely affect corresponding applications. Deep Learning (DL) has been widely adopted in the field of image processing and computer vision and can reduce the impact of these parameters on IoT images. Although many DL-based techniques are available in the current literature for analyzing and reducing the environmental and camera impacts on IoT images, to the best of our knowledge, no survey paper presents state-of-the-art DL-based approaches for this purpose. Motivated by this, for the first time, we present a Systematic Literature Review (SLR) of existing DL techniques available for analyzing and reducing environmental and camera lens impacts on IoT images. As part of this SLR, firstly, we reiterate and highlight the significance of IoT images in their respective applications. Secondly, we describe the DL techniques employed for assessing the environmental and camera lens distortion impacts on IoT images. Thirdly, we illustrate how DL can be effective in reducing the impact of environmental and camera lens distortion in IoT images. Finally, along with a critical reflection on the advantages and limitations of the techniques, we present ways to address the research challenges of existing techniques and identify further research directions to advance the relevant research areas. © 2023, The Author(s).
Knowledge graphs : opportunities and challenges
- Peng, Ciyuan, Xia, Feng, Naseriparsa, Mehdi, Osborne, Francesco
- Authors: Peng, Ciyuan , Xia, Feng , Naseriparsa, Mehdi , Osborne, Francesco
- Date: 2023
- Type: Text , Journal article
- Relation: Artificial Intelligence Review Vol. 56, no. 11 (2023), p. 13071-13102
- Full Text:
- Reviewed:
- Description: With the explosive growth of artificial intelligence (AI) and big data, it has become vitally important to organize and represent the enormous volume of knowledge appropriately. As graph data, knowledge graphs accumulate and convey knowledge of the real world. It has been well recognized that knowledge graphs effectively represent complex information; hence, they have rapidly gained the attention of academia and industry in recent years. Thus, to develop a deeper understanding of knowledge graphs, this paper presents a systematic overview of this field. Specifically, we focus on the opportunities and challenges of knowledge graphs. We first review the opportunities of knowledge graphs in terms of two aspects: (1) AI systems built upon knowledge graphs; (2) potential application fields of knowledge graphs. Then, we thoroughly discuss severe technical challenges in this field, such as knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning. We expect that this survey will shed new light on future research and the development of knowledge graphs. © 2023, The Author(s).
MSCET : a multi-scenario offloading schedule for biomedical data processing and analysis in cloud-edge-terminal collaborative vehicular networks
- Ni, Zhichen, Chen, Honglong, Li, Zhe, Wang, Xiaomeng, Yan, Na, Liu, Weifeng, Xia, Feng
- Authors: Ni, Zhichen , Chen, Honglong , Li, Zhe , Wang, Xiaomeng , Yan, Na , Liu, Weifeng , Xia, Feng
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE/ACM Transactions on Computational Biology and Bioinformatics Vol. 20, no. 4 (2023), p. 2376-2386
- Full Text:
- Reviewed:
- Description: With the rapid development of Artificial Intelligence (AI) and the Internet of Things (IoT), an increasing number of computation-intensive or delay-sensitive biomedical data processing and analysis tasks are produced in vehicles, posing growing challenges to the biometric monitoring of drivers. Edge computing is a new paradigm that addresses these challenges by offloading tasks from resource-limited vehicles to Edge Servers (ESs) in Road Side Units (RSUs). However, most traditional offloading schedules for vehicular networks concentrate on the edge, while some tasks may be too complex for ESs to process. To this end, we consider a collaborative vehicular network in which the cloud, edge, and terminal cooperate to accomplish the tasks. The vehicles can offload computation-intensive tasks to the cloud to save edge resources. We further construct a virtual resource pool that can integrate the resources of multiple ESs, since some regions may be covered by multiple RSUs. In this paper, we propose a Multi-Scenario offloading schedule for biomedical data processing and analysis in Cloud-Edge-Terminal collaborative vehicular networks called MSCET. The parameters of the proposed MSCET are optimized to maximize the system utility. We also conduct extensive simulations to evaluate the proposed MSCET, and the results illustrate that MSCET outperforms other existing schedules. © 2004-2012 IEEE.