CenGCN : centralized convolutional networks with vertex imbalance for scale-free graphs
- Authors: Xia, Feng , Wang, Lei , Tang, Tao , Chen, Xin , Kong, Xiangjie , Oatley, Giles , King, Irwin
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Transactions on Knowledge and Data Engineering Vol. 35, no. 5 (2023), p. 4555-4569
- Full Text:
- Reviewed:
- Description: Graph Convolutional Networks (GCNs) have achieved impressive performance in a wide variety of areas, attracting considerable attention. The core step of GCNs is the information-passing framework, which treats all information passed from neighbors to the central vertex as equally important. Such equal importance, however, is inadequate for scale-free networks, where hub vertices propagate more dominant information due to vertex imbalance. In this paper, we propose a novel centrality-based framework named CenGCN to address this inequality of information. The framework first quantifies the similarity between hub vertices and their neighbors by label propagation with hub vertices. Based on this similarity and on centrality indices, it transforms the graph by increasing or decreasing the weights of edges connecting hub vertices and by adding self-connections to vertices. In each non-output layer of the GCN, the framework uses a hub attention mechanism to assign new weights to connected non-hub vertices based on their common information with hub vertices. We present two variants, CenGCN_D and CenGCN_E, based on degree centrality and eigenvector centrality, respectively. We also conduct comprehensive experiments, including vertex classification, link prediction, vertex clustering, and network visualization. The results demonstrate that the two variants significantly outperform state-of-the-art baselines. © 1989-2012 IEEE.
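The centrality-based graph transformation described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's CenGCN_D formulation: the hub threshold, the boost/damp factors, and the reweighting rule are assumptions chosen for clarity, while the label-propagation similarity and hub attention mechanism are omitted.

```python
import numpy as np

def centrality_reweight(adj, hub_quantile=0.9, boost=1.5, damp=0.5):
    """Sketch of degree-centrality-based edge reweighting before GCN
    propagation: edges between two hub vertices are strengthened,
    edges between a hub and a non-hub are weakened, self-connections
    are added, and the result is symmetrically normalized as in a
    standard GCN layer (D^{-1/2} W D^{-1/2})."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    deg = adj.sum(axis=1)                      # degree centrality
    is_hub = deg >= np.quantile(deg, hub_quantile)
    w = adj.copy()
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                if is_hub[i] and is_hub[j]:
                    w[i, j] *= boost           # hub-hub edge: strengthen
                elif is_hub[i] or is_hub[j]:
                    w[i, j] *= damp            # hub-to-non-hub: weaken
    w += np.eye(n)                             # add self-connections
    d_inv_sqrt = 1.0 / np.sqrt(w.sum(axis=1))
    return w * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```

The returned matrix plays the role of the normalized adjacency that a GCN layer multiplies with the feature matrix; only the edge weights feeding that normalization have been changed.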
Network embedding : taxonomies, frameworks and applications
- Authors: Hou, Mingliang , Ren, Jing , Zhang, Da , Kong, Xiangjie , Zhang, Dongyu , Xia, Feng
- Date: 2020
- Type: Text , Journal article , Review
- Relation: Computer Science Review Vol. 38, no. (2020), p.
- Full Text:
- Reviewed:
- Description: Networks are a general language for describing complex systems of interacting entities. In the real world, a network typically contains massive numbers of nodes and edges, along with additional complex information, which makes computation and analysis costly. Network embedding aims at transforming a network into a low-dimensional vector space that benefits downstream network analysis tasks. In this survey, we provide a systematic overview of network embedding techniques and the challenges they address. We first introduce the concepts and challenges of network embedding. Afterwards, we group network embedding methods into three categories: static homogeneous network embedding methods, static heterogeneous network embedding methods, and dynamic network embedding methods. Next, we summarize the datasets and evaluation tasks commonly used in network embedding. Finally, we discuss several future directions in this field. © 2020 Elsevier Inc.
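The core idea of network embedding, mapping each vertex to a low-dimensional vector, can be illustrated with a simple spectral method. This is an illustrative sketch, not one of the methods catalogued in the survey: it embeds vertices via the leading eigenvectors of the symmetrically normalized adjacency matrix.

```python
import numpy as np

def spectral_embed(adj, dim=2):
    """Minimal static homogeneous network embedding: vertices are
    represented by the top-`dim` eigenvectors of the symmetrically
    normalized adjacency, so structurally close vertices receive
    nearby low-dimensional vectors."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    norm_adj = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(norm_adj)  # eigenvalues in ascending order
    return vecs[:, -dim:]                  # one dim-dimensional vector per vertex
```

The resulting vectors can then be fed to any standard classifier or clustering algorithm, which is exactly the downstream-task pattern the survey describes.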