
Event-based Semantic-aided Motion Segmentation

  • Additional Information
    • Contributors:
      Heuristique et Diagnostic des Systèmes Complexes Compiègne (Heudiasyc); Université de Technologie de Compiègne (UTC)-Centre National de la Recherche Scientifique (CNRS); Laboratoire d'InfoRmatique en Image et Systèmes d'information (LIRIS); Université Lumière - Lyon 2 (UL2)-École Centrale de Lyon (ECL); Université de Lyon-Université de Lyon-Université Claude Bernard Lyon 1 (UCBL); Université de Lyon-Institut National des Sciences Appliquées de Lyon (INSA Lyon); Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Centre National de la Recherche Scientifique (CNRS); dotation stage Heudiasyc; SCITEVENTS; INSTICC; SCITEPRESS
    • Publication Information:
      HAL CCSD
    • Publication Date:
      2024
    • Collection:
      Université de Lyon: HAL
    • Abstract:
      Event cameras are emerging visual sensors inspired by biological systems. They capture intensity changes asynchronously with a temporal precision of up to the microsecond, in contrast to traditional frame-based imaging running at a fixed frequency of tens of Hz. However, effectively exploiting the data generated by these sensors requires the development of new algorithms and processing pipelines. In light of event cameras' significant advantages in capturing high-speed motion, researchers have turned their attention to event-based motion segmentation. Building upon the framework of (Mitrokhin et al., 2019), we propose leveraging semantic segmentation to enable the end-to-end network not only to segment moving objects from background motion, but also to achieve semantic segmentation of distinct moving objects. Remarkably, these capabilities are achieved while maintaining the network's low parameter count of 2.5M. To validate the effectiveness of our approach, we conduct experiments on the EVIMO dataset and the newer, more challenging EVIMO2 dataset (Burner et al., 2022). The results demonstrate the improvements attained by our method, showcasing its potential for event-based multi-object motion segmentation.
    • Relation:
      hal-04355661; https://hal.science/hal-04355661; https://hal.science/hal-04355661/document; https://hal.science/hal-04355661/file/VISAPP_2024_57_CR.pdf
    • Online Access:
      https://hal.science/hal-04355661
      https://hal.science/hal-04355661/document
      https://hal.science/hal-04355661/file/VISAPP_2024_57_CR.pdf
    • Rights:
      http://creativecommons.org/licenses/by-nc-nd/ ; info:eu-repo/semantics/OpenAccess
    • Accession Number:
      edsbas.5A4BE814
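
The abstract notes that event cameras emit asynchronous intensity-change events rather than fixed-rate frames, and that networks need adapted input representations. As a minimal illustrative sketch (not the paper's actual pipeline), events are commonly stored as (x, y, t, polarity) tuples and can be accumulated into a dense count image that a conventional segmentation network can consume; the event values below are made up for the example:

```python
import numpy as np

# Hypothetical event stream: each record is (x, y, timestamp, polarity),
# as reported asynchronously by an event camera for per-pixel brightness changes.
events = np.array(
    [(2, 3, 0.001, 1), (2, 3, 0.004, -1), (5, 1, 0.002, 1)],
    dtype=[("x", int), ("y", int), ("t", float), ("p", int)],
)

def event_count_image(events, height, width):
    """Accumulate per-pixel event counts into a dense frame,
    one common way to feed event data to a frame-based CNN."""
    img = np.zeros((height, width), dtype=np.int32)
    # np.add.at handles repeated indices correctly (unbuffered accumulation).
    np.add.at(img, (events["y"], events["x"]), 1)
    return img

frame = event_count_image(events, height=8, width=8)
print(frame[3, 2])  # two events accumulated at pixel (x=2, y=3)
```

Richer representations (e.g. per-polarity channels or timestamp surfaces) follow the same accumulation idea while preserving more of the temporal information the abstract emphasizes.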