
Separating common from salient patterns with Contrastive Representation Learning

  • Additional Information
    • Contributors:
      IFR49 - Neurospin - CEA; Commissariat à l'énergie atomique et aux énergies alternatives (CEA); Image, Modélisation, Analyse, GEométrie, Synthèse (IMAGES); Laboratoire Traitement et Communication de l'Information (LTCI); Institut Mines-Télécom Paris (IMT)-Télécom Paris-Institut Mines-Télécom Paris (IMT)-Télécom Paris; Département Images, Données, Signal (IDS); Télécom ParisTech; Service NEUROSPIN (NEUROSPIN); Université Paris-Saclay-Institut des Sciences du Vivant Frédéric JOLIOT (JOLIOT); Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA); CEA- Saclay (CEA); Building large instruments for neuroimaging: from population imaging to ultra-high magnetic fields (BAOBAB); Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Université Paris-Saclay-Institut des Sciences du Vivant Frédéric JOLIOT (JOLIOT); Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Centre National de la Recherche Scientifique (CNRS)
    • Publication Information:
      HAL CCSD
    • Publication Date:
      2024
    • Collection:
      HAL-CEA (Commissariat à l'énergie atomique et aux énergies alternatives)
    • Abstract:
      Contrastive Analysis is a sub-field of Representation Learning that aims at separating common factors of variation between two datasets, a background (i.e., healthy subjects) and a target (i.e., diseased subjects), from the salient factors of variation, only present in the target dataset. Despite their relevance, current models based on Variational Auto-Encoders have shown poor performance in learning semantically expressive representations. On the other hand, Contrastive Representation Learning has shown tremendous performance leaps in various applications (classification, clustering, etc.). In this work, we propose to leverage the ability of Contrastive Learning to learn semantically expressive representations well adapted for Contrastive Analysis. We reformulate it under the lens of the InfoMax Principle and identify two Mutual Information terms to maximize and one to minimize. We decompose the first two terms into an Alignment and a Uniformity term, as commonly done in Contrastive Learning. Then, we motivate a novel Mutual Information minimization strategy to prevent information leakage between common and salient distributions. We validate our method, called SepCLR, on three visual datasets and three medical datasets, specifically conceived to assess the pattern separation capability in Contrastive Analysis.
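      The Alignment and Uniformity terms mentioned in the abstract follow the standard decomposition used in Contrastive Learning: alignment pulls embeddings of positive pairs together, while uniformity spreads all embeddings over the hypersphere. A minimal NumPy sketch of these two standard losses (function names and the temperature parameter `t` are illustrative choices, not taken from the paper):

      ```python
      import numpy as np

      def alignment(x, y):
          # x, y: (n, d) L2-normalized embeddings of n positive pairs.
          # Mean squared Euclidean distance between each pair.
          return np.mean(np.sum((x - y) ** 2, axis=1))

      def uniformity(x, t=2.0):
          # x: (n, d) L2-normalized embeddings.
          # Log of the average Gaussian potential over all distinct pairs;
          # lower (more negative) values mean embeddings are more spread out.
          sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
          n = x.shape[0]
          off_diag = sq_dists[~np.eye(n, dtype=bool)]  # drop self-distances
          return np.log(np.mean(np.exp(-t * off_diag)))
      ```

      For identical positive pairs the alignment loss is zero, and a set of well-separated embeddings (e.g., orthonormal vectors) attains a lower uniformity value than a collapsed set where all points coincide.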
    • Relation:
      hal-04537203; https://telecom-paris.hal.science/hal-04537203; https://telecom-paris.hal.science/hal-04537203/document; https://telecom-paris.hal.science/hal-04537203/file/Final.pdf
    • Online Access:
      https://telecom-paris.hal.science/hal-04537203
      https://telecom-paris.hal.science/hal-04537203/document
      https://telecom-paris.hal.science/hal-04537203/file/Final.pdf
    • Rights:
      info:eu-repo/semantics/OpenAccess
    • Accession Number:
      edsbas.C2189DBE