Authors: Yanjie Fu, Yi Du, Pengyang Wang, Ziyue Qiao, Pengfei Wang
DOI:
Keywords: Theoretical computer science, Computer science, Graph neural networks, Feature vector, Graph, Tree structure, Graph (abstract data type), Embedding
Abstract: While Graph Neural Networks (GNNs) have shown superiority in learning node representations of homogeneous graphs, leveraging GNNs on heterogeneous graphs remains a challenging problem. The dominating reason is that a GNN learns by aggregating neighbors' information regardless of node types. Some works propose to alleviate this issue by exploiting relations or meta-paths to sample neighbors of distinct categories, and then use an attention mechanism to learn different importance weights for the categories. However, one limitation is that different types of nodes should own their own feature spaces, while all of the above works still project node representations into one common space. Moreover, after exploring massive heterogeneous graphs, we identify the fact that multiple nodes of the same type always connect to another node of a different type, which reveals a many-to-one schema, a.k.a. a hierarchical tree structure. Existing methods cannot preserve this tree structure, since the exact multi-hop path correlation from neighbors to the target node would be erased through aggregation. Therefore, to overcome the limitations of the existing literature, we propose T-GNN, a tree structure-aware graph neural network model for heterogeneous graph representation learning. Specifically, T-GNN consists of two modules: (1) an integrated hierarchical aggregation module and (2) a relational metric learning module. The integrated hierarchical aggregation module aims to preserve the tree structure by combining a GNN with a Gated Recurrent Unit to integrate the sequential, hierarchical neighborhood information into node representations. The relational metric learning module aims to preserve the heterogeneity by embedding each node type into a type-specific space based on similarity metrics.
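To make the aggregation idea concrete, below is a minimal sketch (not the authors' code) of one way to combine per-level neighbor aggregation with a GRU so that the multi-hop, tree-structured path order toward the target node is kept rather than erased. The class name, the mean-pooling stand-in for the GNN aggregator, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class HierarchicalGRUAggregator(nn.Module):
    """Aggregate a tree-structured neighborhood level by level, then run a GRU
    over the level sequence so the hierarchical order is preserved."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)  # per-level feature projection
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, level_feats: list) -> torch.Tensor:
        """level_feats[k]: (num_nodes_at_level_k, in_dim) features of one tree level,
        ordered from the farthest level down to the target node's direct neighbors."""
        # Mean-pool each level (a simple stand-in for a GNN aggregator), then
        # stack the pooled levels as a sequence for the GRU.
        seq = torch.stack([self.proj(x).mean(dim=0) for x in level_feats])  # (L, hidden)
        _, h_n = self.gru(seq.unsqueeze(0))  # run over the ordered level sequence
        return h_n.squeeze(0).squeeze(0)     # (hidden,) representation of the target node


# Toy usage: a 3-level tree neighborhood with 8-dimensional input features.
agg = HierarchicalGRUAggregator(in_dim=8, hidden_dim=16)
levels = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(2, 8)]
print(agg(levels).shape)  # torch.Size([16])
```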