
Higher-order Sparse Convolutions in Graph Neural Networks

  • Additional Information
    • Contributors:
      Multimédia (MM); Laboratoire Traitement et Communication de l'Information (LTCI); Institut Mines-Télécom Paris (IMT)-Télécom Paris; Département Images, Données, Signal (IDS); Télécom ParisTech; Khalifa University; Information Technology University Lahore (ITU); Centre de vision numérique (CVN); OPtimisation Imagerie et Santé (OPIS); Inria Saclay - Ile de France; Institut National de Recherche en Informatique et en Automatique (Inria)-CentraleSupélec-Université Paris-Saclay; Université Paris-Saclay; Mathématiques, Image et Applications - EA 3165 (MIA); La Rochelle Université (ULR)
    • Publication Information:
      HAL CCSD
      IEEE
    • Publication Date:
      2023
    • Collection:
      HAL - Université de La Rochelle
    • Subject Terms:
      Rhodes Island, Greece
    • Abstract:
      Graph Neural Networks (GNNs) have been applied to many problems in computer science. Capturing higher-order relationships between nodes is crucial to increasing the expressive power of GNNs. However, existing methods for capturing these relationships can be infeasible for large-scale graphs. In this work, we introduce a new higher-order sparse convolution based on the Sobolev norm of graph signals. Our Sparse Sobolev GNN (S-SobGNN) computes a cascade of filters in each layer with increasing Hadamard powers to obtain a more diverse set of functions, and a linear combination layer then weights the embeddings of each filter. We evaluate S-SobGNN on several semi-supervised learning tasks, where it shows competitive performance against several state-of-the-art methods.
    • Relation:
      info:eu-repo/semantics/altIdentifier/arxiv/2302.10505; hal-04368752; https://hal.science/hal-04368752; https://hal.science/hal-04368752/document; https://hal.science/hal-04368752/file/2302.10505.pdf; ARXIV: 2302.10505
    • DOI:
      10.1109/ICASSP49357.2023.10096494
    • Online Access:
      https://doi.org/10.1109/ICASSP49357.2023.10096494
      https://hal.science/hal-04368752
      https://hal.science/hal-04368752/document
      https://hal.science/hal-04368752/file/2302.10505.pdf
    • Rights:
      info:eu-repo/semantics/OpenAccess
    • Accession Number:
      edsbas.31EA33AA
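
The core idea in the abstract — element-wise (Hadamard) powers of a Sobolev-style shift matrix, which preserve the sparsity pattern of the graph where ordinary matrix powers would not — can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function name, the parameter ε, and the filter count α are assumptions for the example.

```python
import numpy as np
import scipy.sparse as sp

def sparse_sobolev_filters(adj, eps=1.0, alpha=3):
    """Build a cascade of sparse filter matrices from Hadamard powers
    of (L + eps*I), where L is the combinatorial graph Laplacian.

    Hadamard (element-wise) powers keep the sparsity pattern of the
    graph, unlike ordinary matrix powers, which densify it.
    `eps` and `alpha` are illustrative defaults, not from the paper.
    """
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1)).ravel()
    lap = sp.diags(deg) - adj                     # combinatorial Laplacian
    base = (lap + eps * sp.identity(n)).tocsr()   # Sobolev-style shift
    filters = []
    power = base.copy()
    for _ in range(alpha):
        filters.append(power)
        power = power.multiply(base).tocsr()      # Hadamard power: same sparsity
    return filters
```

In an S-SobGNN-style layer, each of these filters would propagate the node features separately, and a learnable linear combination would then weight the resulting embeddings; that weighting step is omitted here for brevity.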