The evolution of Turing Award Collaboration Network : bibliometric-level and network-level metrics
- Kong, Xiangjie, Shi, Yajie, Wang, Wei, Ma, Kai, Wan, Liangtian, Xia, Feng
- Authors: Kong, Xiangjie , Shi, Yajie , Wang, Wei , Ma, Kai , Wan, Liangtian , Xia, Feng
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Computational Social Systems Vol. 6, no. 6 (2019), p. 1318-1328
- Full Text:
- Reviewed:
- Description: The year 2017 marked a milestone: the 50th anniversary of the Turing Award, the top award in the field of computer science. We study the long-term evolution of the Turing Award Collaboration Network, which can be considered a microcosm of the computer science field from 1974 to 2016. First, scholars tended to publish articles by themselves in the early stages and began to focus on tight collaboration from the late 1980s. Second, compared with a random network of the same scale, the Turing Award Collaboration Network has small-world properties but is not scale-free. The reason may be that the number of collaborators per scholar is limited, so scholars cannot attach to others freely (via preferential attachment) as in a scale-free network. Third, to measure how far a scholar is from the Turing Award, we propose a metric called the Turing Number (TN) and find that the TN decreases gradually over time. Meanwhile, we find that, as computer science has developed, scholars increasingly gather into groups to do research. This article presents a new way to explore the evolution of academic collaboration networks in the field of computer science by building and analyzing the Turing Award Collaboration Network over several decades. © 2014 IEEE. (A minimal sketch of computing the Turing Number follows this record.)
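As referenced in the description above, here is a minimal sketch of the Turing Number idea: a scholar's TN is the shortest co-authorship distance to any Turing Award laureate, computed by multi-source breadth-first search. The graph, names, and helper `turing_number` are illustrative assumptions, not code or data from the paper.

```python
from collections import deque

def turing_number(coauthors, laureates):
    """Map each reachable scholar to their TN: the shortest number of
    co-authorship hops to any Turing Award laureate."""
    dist = {name: 0 for name in laureates}   # laureates have TN = 0
    queue = deque(laureates)
    while queue:                             # multi-source BFS
        u = queue.popleft()
        for v in coauthors.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Hypothetical co-authorship adjacency list (undirected edges listed both ways).
graph = {
    "laureate": ["alice"],
    "alice": ["laureate", "bob"],
    "bob": ["alice"],
}
print(turing_number(graph, ["laureate"]))   # {'laureate': 0, 'alice': 1, 'bob': 2}
```

Analogous to the Erdős number, a falling average TN over time is consistent with the paper's finding that the network grows more tightly connected.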
Deep matrix factorization for trust-aware recommendation in social networks
- Wan, Liangtian, Xia, Feng, Kong, Xiangjie, Hsu, Ching-Hsien, Huang, Runhe, Ma, Jianhua
- Authors: Wan, Liangtian , Xia, Feng , Kong, Xiangjie , Hsu, Ching-Hsien , Huang, Runhe , Ma, Jianhua
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Network Science and Engineering Vol. 8, no. 1 (2021), p. 511-528
- Full Text:
- Reviewed:
- Description: Recent years have witnessed remarkable information overload in online social networks, and social-network-based approaches for recommender systems have been widely studied. The trust information among users in social networks is an important factor for improving recommendation performance. Many successful recommendation tasks are treated as matrix factorization problems. However, the prediction performance of matrix-factorization-based methods largely depends on the initialization of the user and item matrices. To address this challenge, we develop a novel trust-aware approach based on deep learning to alleviate the initialization dependence. First, we propose two deep matrix factorization (DMF) techniques, i.e., linear DMF and non-linear DMF, to extract features from the user-item rating matrix and improve the initialization accuracy. The trust relationship is integrated into the DMF model according to the preference similarity and the derivations of users on items. Second, we exploit a deep marginalized Denoising Autoencoder (Deep-MDAE) to extract the latent representation in the hidden layer from the trust relationship matrix, approximating the user factor matrix factorized from the user-item rating matrix. Community regularization is integrated into the joint optimization function to take neighbours' effects into consideration. The results of DMF are applied to initialize the updating variables of Deep-MDAE in order to further improve the recommendation performance. Finally, we validate that the proposed approach outperforms state-of-the-art baselines for recommendation, especially for cold-start users. © 2013 IEEE. (A minimal matrix factorization sketch follows this record.)
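As noted above, here is a minimal sketch of the matrix factorization step the abstract builds on: factor a user-item rating matrix R into user factors U and item factors V so that R ≈ U @ V.T. This is plain MF with random initialization, illustrating the initialization sensitivity the paper targets; it is not the paper's deep or trust-aware variant, and all dimensions, rates, and data are illustrative.

```python
import numpy as np

def factorize(R, k=2, lr=0.01, reg=0.05, epochs=500, seed=0):
    """Gradient-descent MF on the observed (nonzero) entries of R."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))   # prediction quality depends
    V = rng.normal(scale=0.1, size=(n_items, k))   # on this initialization
    mask = R > 0                                   # observed ratings only
    for _ in range(epochs):
        E = mask * (R - U @ V.T)                   # error on observed entries
        U += lr * (E @ V - reg * U)                # gradient step on U
        V += lr * (E.T @ U - reg * V)              # gradient step on V
    return U, V

R = np.array([[5, 3, 0], [4, 0, 1], [0, 2, 5]], dtype=float)
U, V = factorize(R)
print(np.round(U @ V.T, 2))   # predictions fill in the zero (unobserved) cells
```

The paper's point is that a better starting point for U and V (here, random) is exactly what its DMF and Deep-MDAE components supply.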
Random walks : a review of algorithms and applications
- Xia, Feng, Liu, Jiaying, Nie, Hansong, Fu, Yonghao, Wan, Liangtian, Kong, Xiangjie
- Authors: Xia, Feng , Liu, Jiaying , Nie, Hansong , Fu, Yonghao , Wan, Liangtian , Kong, Xiangjie
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Emerging Topics in Computational Intelligence Vol. 4, no. 2 (2020), p. 95-107
- Full Text:
- Reviewed:
- Description: A random walk is a random process describing a path composed of a succession of random steps in a mathematical space. It has become increasingly popular in various disciplines such as mathematics and computer science. Furthermore, in quantum mechanics, quantum walks can be regarded as quantum analogues of classical random walks. Classical random walks and quantum walks can be used to calculate the proximity between nodes and extract the topology of a network. Various random-walk-related models can be applied in different fields, which is of great significance to downstream tasks such as link prediction, recommendation, computer vision, semi-supervised learning, and network embedding. In this article, we aim to provide a comprehensive review of classical random walks and quantum walks. We first review classical random walks and quantum walks, including basic concepts and some typical algorithms. We also compare algorithms based on quantum walks and classical random walks from the perspective of time complexity. Then we introduce their applications in the field of computer science. Finally, we discuss the open issues from the perspectives of efficiency, main-memory volume, and computing time of existing algorithms. This study aims to contribute to this growing area of research by exploring random walks and quantum walks together. © 2017 IEEE. (A minimal random-walk proximity sketch follows this record.)
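Following the forward reference above, here is a minimal sketch of a classical random walk used for node proximity, one of the applications the review discusses: the fraction of walk steps from a source that land on each node approximates how topologically close it is to the source. The graph and parameters are illustrative.

```python
import random

def walk_proximity(adj, source, n_walks=2000, length=5, seed=42):
    """Estimate proximity to `source` as normalized visit frequency
    over many short uniform random walks."""
    random.seed(seed)
    visits = {v: 0 for v in adj}
    for _ in range(n_walks):
        node = source
        for _ in range(length):
            node = random.choice(adj[node])   # uniform step to a neighbor
            visits[node] += 1
    total = n_walks * length
    return {v: c / total for v, c in visits.items()}

adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(walk_proximity(adj, "a"))   # nodes near "a" get higher visit frequency
```

Quantum walks, which the review compares against this classical case, replace the stochastic step with a unitary evolution and can explore some graphs quadratically faster.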
Predictive representation learning in motif-based graph networks
- Zhang, Kaiyuan, Yu, Shuo, Wan, Liangtian, Li, Jianxin, Xia, Feng
- Authors: Zhang, Kaiyuan , Yu, Shuo , Wan, Liangtian , Li, Jianxin , Xia, Feng
- Date: 2019
- Type: Text , Book chapter
- Relation: AI 2019: Advances in Artificial Intelligence Chapter 15 p. 177-188
- Full Text: false
- Reviewed:
- Description: Link prediction is an important task in analyzing social networks, with further applications in areas such as bioinformatics and e-commerce. Network representation learning (NRL), which can significantly enhance link prediction performance, has attracted much attention in recent years. However, existing NRL methods mainly focus on observed network structures without considering hidden prediction knowledge in the representation space. Meanwhile, some random-walk-based NRL methods perform poorly at learning link knowledge in large, dense networks. In this paper, we propose a predictive representation learning (PRL) model, which unifies node representations and motif-based structures, to improve the prediction ability of NRL. We first enhance node representations based on motif-biased random walks and then employ an L2-SVM to learn motif-connected node pairs. By jointly optimizing two objectives over the representations of existent and nonexistent edges, we preserve more node information in the representation space through supervised learning. To evaluate the performance of our proposed model, we conduct experiments on five real data sets. Simulation results illustrate that our proposed model achieves better link prediction performance than other state-of-the-art methods. (A minimal motif-biased random walk sketch follows this record.)
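As referenced in the description, here is a minimal sketch of a motif-biased random walk: when stepping from the current node, upweight neighbors that close a triangle with the previous node, keeping walks inside dense motif structures. The bias rule, weights, and graph are illustrative assumptions, not the paper's exact PRL scheme (which further feeds such walks into representation learning and an L2-SVM).

```python
import random

def motif_biased_walk(adj, start, length=6, bias=3.0, seed=1):
    """Random walk that prefers steps closing a triangle (triad motif)
    with the previously visited node."""
    random.seed(seed)
    walk = [start]
    prev = None
    for _ in range(length):
        nbrs = adj[walk[-1]]
        # Upweight neighbors that are also neighbors of the previous node,
        # i.e. steps that close a triangle motif.
        weights = [bias if prev and v in adj[prev] else 1.0 for v in nbrs]
        prev = walk[-1]
        walk.append(random.choices(nbrs, weights=weights)[0])
    return walk

adj = {"a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b"], "d": ["b"]}
print(motif_biased_walk(adj, "a"))   # e.g. ['a', 'b', 'c', 'a', ...]
```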
Not every couple is a pair : a supervised approach for lifetime collaborator identification
- Wang, Wei, Wan, Liangtian, Kong, Xiangjie, Gong, Zhiguo, Xia, Feng
- Authors: Wang, Wei , Wan, Liangtian , Kong, Xiangjie , Gong, Zhiguo , Xia, Feng
- Date: 2019
- Type: Text , Conference paper
- Relation: 23rd Pacific Asia Conference on Information Systems: Secure ICT Platform for the 4th Industrial Revolution, PACIS 2019, Xian, 8-12 July 2019
- Full Text: false
- Reviewed:
- Description: While scientific collaboration can be critical for a scholar, some collaborators are more significant than others: lifetime collaborators. This work-in-progress investigates whether it is possible to predict/identify lifetime collaborators given a junior scholar's early profile. For this purpose, we propose a supervised approach that leverages scholars' local and network properties. Extensive experiments on the DBLP digital library demonstrate that lifetime collaborators can be accurately predicted. The proposed model outperforms baseline models with various predictors. Our study may shed light on the exploration of scientific collaborations from the perspective of life-long collaboration. © Proceedings of the 23rd Pacific Asia Conference on Information Systems: Secure ICT Platform for the 4th Industrial Revolution, PACIS 2019. (A minimal supervised-classification sketch follows this record.)
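As referenced above, here is a minimal sketch of the supervised formulation: each (scholar, collaborator) pair becomes a feature vector derived from the scholar's early profile, labeled by whether the pair became lifetime collaborators. The three features and the toy data are hypothetical stand-ins for the paper's local and network properties; any off-the-shelf classifier would do.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per (scholar, collaborator) pair:
# [papers coauthored in the first 5 years, shared coauthors, same affiliation (0/1)]
X = np.array([[6, 4, 1], [1, 0, 0], [4, 3, 1], [0, 1, 0], [5, 2, 0], [1, 1, 1]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = pair became lifetime collaborators

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[3, 2, 1]])[0, 1])   # P(lifetime) for a new pair
```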
Graph learning : a survey
- Xia, Feng, Sun, Ke, Yu, Shuo, Aziz, Abdul, Wan, Liangtian, Pan, Shirui, Liu, Huan
- Authors: Xia, Feng , Sun, Ke , Yu, Shuo , Aziz, Abdul , Wan, Liangtian , Pan, Shirui , Liu, Huan
- Date: 2021
- Type: Text , Journal article , Review
- Relation: IEEE Transactions on Artificial Intelligence Vol. 2, no. 2 (2021), p. 109-127
- Full Text:
- Reviewed:
- Description: Graphs are widely used as a popular representation of the network structure of connected data. Graph data can be found in a broad spectrum of application domains such as social systems, ecosystems, biological networks, knowledge graphs, and information systems. With the continuous penetration of artificial intelligence technologies, graph learning (i.e., machine learning on graphs) is gaining attention from both researchers and practitioners. Graph learning proves effective for many tasks, such as classification, link prediction, and matching. Generally, graph learning methods extract relevant features of graphs by taking advantage of machine learning algorithms. In this survey, we present a comprehensive overview of the state of the art in graph learning. Special attention is paid to four categories of existing graph learning methods: graph signal processing, matrix factorization, random walk, and deep learning. Major models and algorithms under each category are reviewed. We examine graph learning applications in areas such as text, images, science, knowledge graphs, and combinatorial optimization. In addition, we discuss several promising research directions in this field. Impact Statement: Real-world intelligent systems generally rely on machine learning algorithms handling data of various types. Despite their ubiquity, graph data have imposed unprecedented challenges on machine learning due to their inherent complexity. Unlike text, audio, and images, graph data are embedded in an irregular domain, making some essential operations of existing machine learning algorithms inapplicable. Many graph learning models and algorithms have been developed to tackle these challenges. This article presents a systematic review of state-of-the-art graph learning approaches as well as their potential applications. The article serves multiple purposes. First, it acts as a quick reference to graph learning for researchers and practitioners in different areas such as social computing, information retrieval, computer vision, bioinformatics, economics, and e-commerce. Second, it presents insights into open areas of research in the field. Third, it aims to stimulate new research ideas and more interest in graph learning. © IEEE Transactions on Artificial Intelligence 2020. (A minimal graph signal processing sketch follows this record.)
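As referenced in the description, here is a minimal sketch of one of the survey's four method families, graph signal processing: smooth a signal defined on graph nodes with a simple low-pass filter built from the graph Laplacian. The graph, signal, and step size are toy values chosen for illustration.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency of a 4-node graph
D = np.diag(A.sum(axis=1))                  # degree matrix
L = D - A                                   # combinatorial graph Laplacian

x = np.array([1.0, 0.0, 0.0, 0.0])          # a signal on the nodes
alpha = 0.2
x_smooth = (np.eye(4) - alpha * L) @ x      # one low-pass filtering step
print(np.round(x_smooth, 3))                # the signal diffuses to neighbors
```

Repeating the filtering step, or replacing the fixed filter with learned ones, leads toward the matrix factorization, random walk, and deep learning families the survey also covers.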