Graph augmentation learning
- Authors: Yu, Shuo , Huang, Huafei , Dao, Minh , Xia, Feng
- Date: 2022
- Type: Text , Conference paper
- Relation: 31st ACM Web Conference, WWW 2022, Virtual, online, 25 April 2022, WWW 2022 - Companion Proceedings of the Web Conference 2022 p. 1063-1072
- Full Text:
- Reviewed:
- Description: Graph Augmentation Learning (GAL) provides outstanding solutions for graph learning in handling incomplete data, noisy data, etc. Numerous GAL methods have been proposed for graph-based applications such as social network analysis and traffic flow forecasting. However, the underlying reasons for the effectiveness of these GAL methods are still unclear. As a consequence, choosing the optimal graph augmentation strategy for a given application scenario remains a black box. Scholars lack systematic, comprehensive, and experimentally validated guidelines for GAL. Therefore, in this survey, we review GAL techniques in depth at the macro (graph), meso (subgraph), and micro (node/edge) levels. We further illustrate in detail how GAL enhances data quality and model performance. The aggregation mechanism of augmentation strategies and graph learning models is also discussed for different application scenarios, i.e., data-specific, model-specific, and hybrid scenarios. To better demonstrate the advantages of GAL, we experimentally validate the effectiveness and adaptability of different GAL strategies in different downstream tasks. Finally, we share our insights on several open issues of GAL, including heterogeneity, spatio-temporal dynamics, scalability, and generalization. © 2022 ACM.
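The abstract mentions micro-level (node/edge) augmentation as one of the three levels the survey covers. A minimal sketch of one such strategy, random edge dropping, is shown below; the function name `drop_edges` and the example edge list are illustrative assumptions, not code from the paper.

```python
import random

def drop_edges(edges, drop_rate=0.2, seed=42):
    """Micro-level (edge) augmentation sketch: randomly remove a
    fraction of edges from an edge list to perturb the graph.

    `edges` is a list of (u, v) tuples; returns the retained subset.
    A fixed seed keeps the augmentation reproducible.
    """
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= drop_rate]

# Toy graph: a 4-cycle with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
augmented = drop_edges(edges, drop_rate=0.4)
```

In practice such perturbed graph views feed a downstream GNN or contrastive objective; the dropped edges act as noise the model must learn to be invariant to.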
Robust graph neural networks via ensemble learning
- Authors: Lin, Qi , Yu, Shuo , Sun, Ke , Zhao, Wenhong , Alfarraj, Osama , Tolba, Amr , Xia, Feng
- Date: 2022
- Type: Text , Journal article
- Relation: Mathematics Vol. 10, no. 8 (2022), p.
- Full Text:
- Reviewed:
- Description: Graph neural networks (GNNs) have demonstrated a remarkable ability in the task of semi-supervised node classification. However, most existing GNNs suffer from nonrobustness issues, which pose a great challenge for applying GNNs to sensitive scenarios. Some researchers concentrate on constructing ensemble models to mitigate these nonrobustness issues. Nevertheless, such methods ignore the interaction among base models, leading to similar graph representations. Moreover, due to the deterministic propagation applied in most existing GNNs, each node relies heavily on its neighbors, leaving nodes sensitive to perturbations. Therefore, in this paper, we propose a novel framework of graph ensemble learning based on knowledge passing (called GEL) to address the above issues. To achieve interaction, we treat the predictions of prior models as knowledge used to obtain more reliable predictions. Moreover, we design a multilayer DropNode propagation strategy to reduce each node's dependence on particular neighbors. This strategy also empowers each node to aggregate information from diverse neighbors, alleviating oversmoothing issues. We conduct experiments on three benchmark datasets: Cora, Citeseer, and Pubmed. GEL outperforms GCN by more than 5% in accuracy across all three datasets and also performs better than other state-of-the-art baselines. Extensive experimental results further show that GEL alleviates the nonrobustness and oversmoothing issues. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
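The abstract's multilayer DropNode propagation can be pictured with a simplified sketch: zero out entire node feature vectors at random, then repeatedly average each node's features with its neighbors'. This is an illustrative simplification under stated assumptions (adjacency lists, mean aggregation with a self-loop), not the paper's exact GEL implementation; the function name `dropnode_propagate` is hypothetical.

```python
import random

def dropnode_propagate(adj, feats, layers=2, drop_rate=0.5, seed=0):
    """Simplified multilayer DropNode propagation sketch.

    adj:   adjacency lists, adj[i] = neighbor indices of node i
    feats: feats[i] = feature vector (list of floats) of node i

    Whole node feature vectors are dropped (zeroed) at random, so no
    node can depend on any particular neighbor surviving; features are
    then mean-aggregated over {self} ∪ neighbors for several layers.
    """
    rng = random.Random(seed)
    dim = len(feats[0])
    # DropNode step: keep or zero each node's entire feature vector.
    h = [list(f) if rng.random() >= drop_rate else [0.0] * dim
         for f in feats]
    for _ in range(layers):
        new_h = []
        for i in range(len(h)):
            nbrs = [i] + list(adj[i])  # include a self-loop
            new_h.append([sum(h[j][d] for j in nbrs) / len(nbrs)
                          for d in range(dim)])
        h = new_h
    return h

# Toy path graph 0 - 1 - 2 with 2-dimensional features.
adj = [[1], [0, 2], [1]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
smoothed = dropnode_propagate(adj, feats, layers=2, drop_rate=0.5)
```

Because dropping is applied per node rather than per edge, surviving information spreads from diverse neighbors during propagation, which is the intuition the abstract gives for alleviating oversmoothing.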