Conference

Full Gradient Deep Reinforcement Learning for Average-Reward Criterion

Subjects: Markov decision processes; average reward; Full Gradient DQN algorithm. Location: Philadelphia, United States

  • Source: Proceedings of Machine Learning Research (PMLR) ; L4DC - The 5th Annual Learning for Dynamics and Control Conference ; https://inria.hal.science/hal-04372096

Conference

Beyond Average Return in Markov Decision Processes

Subjects: Markov Decision Process; Dynamic Programming; statistical functionals. Location: New Orleans, United States

  • Source: NeurIPS 2023 - Thirty-seventh Conference on Neural Information Processing Systems, Dec 2023, New Orleans, United States ; https://hal.science/hal-04264487

Conference

Does a sparse ReLU network training problem always admit an optimum?

Subjects: Topology; Best Approximation Property; Sparse ReLU Neural Network. Location: New Orleans (Louisiana), United States

  • Source: NeurIPS 2023 - Thirty-seventh Conference on Neural Information Processing Systems, Dec 2023, New Orleans, United States ; https://inria.hal.science/hal-04108849

Conference

Exploiting hidden structures in non-convex games for convergence to Nash equilibrium

Subjects: Non-convex games; Hidden games; Stochastic algorithms. Location: New Orleans (LA), United States

  • Source: Proceedings of the 37th International Conference on Neural Information Processing Systems ; NeurIPS 2023 - 37th Conference on Neural Information Processing Systems

Conference

Riemannian stochastic optimization methods avoid strict saddle points

Subjects: Optimization on manifolds; Stochastic approximation; Saddle-point avoidance. Location: New Orleans (LA), United States

  • Source: NeurIPS 2023 - 37th Conference on Neural Information Processing Systems ; https://inria.hal.science/hal-04307876

Conference

The equivalence of dynamic and strategic stability under regularized learning in games

Subjects: Regularized learning; Strategic stability; Asymptotic stability. Location: New Orleans (LA), United States

  • Source: Proceedings of the 37th International Conference on Neural Information Processing Systems ; NeurIPS 2023 - 37th Conference on Neural Information Processing Systems

Conference

GloptiNets: Scalable Non-Convex Optimization with Certificates

Subjects: Non-convex optimization; Kernel Sum-of-Squares; Polynomial optimization. Location: New Orleans, United States

  • Source: NeurIPS 2023 - 37th Conference on Neural Information Processing Systems, Dec 2023, New Orleans, United States ; https://inria.hal.science/hal-04138843

Conference

Payoff-based learning with matrix multiplicative weights in quantum games

Subjects: Quantum games; Nash equilibrium; Matrix multiplicative weights. Location: New Orleans (LA), United States

  • Source: Proceedings of the 37th International Conference on Neural Information Processing Systems ; NeurIPS 2023 - 37th Conference on Neural Information Processing Systems

Conference

An Optimization Framework for Dynamic Population Evacuation Problem Using a Vehicular Ad Hoc Network for Emergency Communication

Subjects: Network evacuation; disaster online management; Telecommunication network; VANET; shelter allocation; dynamic traffic assignment. Location: Washington D.C., United States

  • Source: TRB 2023, Transportation Research Board 102nd Annual Meeting, Jan 2023, Washington D.C., United States ; https://univ-eiffel.hal.science/hal-04489613

  • 1-25 of 1,129 results for "Mathematics"