
Cell Attention Networks

Sardellitti, Stefania;
2023-01-01

Abstract

Since their introduction, graph attention networks have achieved outstanding results in graph representation learning tasks. However, these networks consider only pairwise relations among the features associated with the nodes and are therefore unable to fully exploit the higher-order and long-range interactions present in many real-world datasets. In this paper, we introduce a neural architecture that operates on data defined over the nodes and the edges of a graph, represented as the 1-skeleton of a regular cell complex, and is able to capture insightful higher-order and long-range interactions. In particular, we exploit the lower and upper neighborhoods, as encoded in the cell complex, to design two independent masked self-attention mechanisms, thus generalizing the conventional graph attention strategy. The approach is hierarchical and incorporates the following steps: i) a lifting algorithm that learns (additional) edge features from node features; ii) a cell attention mechanism that finds the optimal combination of edge features over both lower and upper neighbors; iii) a hierarchical edge pooling mechanism that extracts a compact, meaningful set of features. The experimental results show that the method compares favorably with state-of-the-art results on graph-based learning tasks while maintaining low complexity.
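The core idea in the abstract — two independent masked self-attention mechanisms over the lower and upper edge neighborhoods of a cell complex — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the GAT-style attention form, the function and variable names (`masked_attention`, `low_mask`, `up_mask`), and the random placeholder adjacencies are all assumptions made for the example.

```python
import numpy as np

def masked_attention(X, W, a, mask):
    """GAT-style masked self-attention over edge features.
    X: (E, F) edge features; W: (F, H) projection; a: (2H,) attention vector;
    mask: (E, E) boolean neighborhood adjacency (lower OR upper)."""
    H = X @ W                                       # project edge features
    E = H.shape[0]
    # pairwise logits e_ij = LeakyReLU(a^T [h_i || h_j])
    logits = np.empty((E, E))
    for i in range(E):
        for j in range(E):
            z = a @ np.concatenate([H[i], H[j]])
            logits[i, j] = z if z > 0 else 0.2 * z  # LeakyReLU
    logits = np.where(mask, logits, -np.inf)        # attend only to neighbors
    # row-wise softmax; rows with no neighbors get all-zero weights
    m = np.where(mask.any(1, keepdims=True), logits.max(1, keepdims=True), 0.0)
    w = np.exp(logits - m) * mask
    denom = w.sum(1, keepdims=True)
    alpha = np.divide(w, denom, out=np.zeros_like(w), where=denom > 0)
    return alpha @ H                                # aggregate neighbor features

rng = np.random.default_rng(0)
E_edges, F, Hdim = 5, 4, 3
X = rng.normal(size=(E_edges, F))                   # input edge features
# independent parameters for the lower and upper attention mechanisms
W_low, W_up = rng.normal(size=(F, Hdim)), rng.normal(size=(F, Hdim))
a_low, a_up = rng.normal(size=2 * Hdim), rng.normal(size=2 * Hdim)
# placeholder neighborhoods; in practice these come from the cell complex
low_mask = rng.random((E_edges, E_edges)) < 0.5
up_mask = rng.random((E_edges, E_edges)) < 0.3
# combine the two independent aggregations (summation is one simple choice)
X_out = masked_attention(X, W_low, a_low, low_mask) + \
        masked_attention(X, W_up, a_up, up_mask)
```

Masking the softmax with each neighborhood separately is what lets the two mechanisms learn different weightings for lower and upper neighbors, which is the generalization of standard graph attention the abstract describes.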
2023
978-1-6654-8867-9
topological deep learning
geometric deep learning
attention networks
cell complexes
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12606/7653
Citations
  • Scopus 8