Speaker: Mathieu Lacroix
Title: Learning Lagrangian Multipliers
Room: Espace One
Date: 09/02/2026
Abstract:
Lagrangian Relaxation is one of the most effective approaches for solving Mixed-Integer Linear Programs (MILPs) with difficult constraints.
Given a MILP, the relaxed Lagrangian problem is obtained by dualizing the difficult constraints and penalizing their violation in the objective via Lagrangian multipliers (LMs). For each choice of LMs, solving the relaxed problem yields a dual bound, and the Lagrangian dual problem seeks the LMs that provide the best such bound. This latter problem is typically solved using iterative methods (such as subgradient or bundle methods), which can be time-consuming due to slow convergence.
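As a sketch of this setup in generic notation (the symbols below are illustrative, not taken from the talk): for a MILP with "easy" constraints x ∈ X and difficult constraints Ax ≤ b, dualizing the latter with multipliers λ ≥ 0 gives

```latex
% Relaxed Lagrangian problem for fixed multipliers \lambda \ge 0
% (generic notation for z = \min \{ c^{\top} x : Ax \le b,\; x \in X \}):
\[
  L(\lambda) \;=\; \min_{x \in X} \; c^{\top} x \;+\; \lambda^{\top} (A x - b).
\]
% Every \lambda \ge 0 yields a dual bound, L(\lambda) \le z,
% and the Lagrangian dual problem is
\[
  \max_{\lambda \ge 0} \; L(\lambda).
\]
```

The dual function L is concave and piecewise linear, which is why subgradient and bundle methods are the standard solvers for it.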
In this talk, we present two machine-learning-based methods for predicting accurate LMs. These approaches can either replace iterative optimization algorithms, offering an efficient heuristic for solving the Lagrangian dual problem (the amortization method), or be used to initialize and stabilize these algorithms, thereby improving their performance.
The first model is an end-to-end approach: instance features are fed into the machine learning model, which directly predicts the associated LMs. The second model falls under the "machine learning alongside optimization" paradigm: here, we design a bundle method in which the master problem is replaced by a prediction at each iteration. We compare these two approaches on the Multi-Commodity Capacitated Network Design Problem.
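To illustrate the classical iterative baseline that the learned predictors replace or warm-start, here is a minimal subgradient method on a toy binary problem. The problem data, step rule, and function names are illustrative assumptions, not taken from the talk:

```python
# Subgradient ascent on the Lagrangian dual of a toy problem (illustrative
# sketch, not the speaker's method):
#   min  c.x   s.t.  a.x <= b,  x in {0,1}^n
# Dualizing a.x <= b with a multiplier lam >= 0 gives the relaxed problem
#   L(lam) = min_{x in {0,1}^n}  c.x + lam * (a.x - b),
# which is solved coordinate-wise: set x_i = 1 iff c_i + lam * a_i < 0.

def lagrangian_subproblem(c, a, b, lam):
    """Solve the relaxed Lagrangian problem for fixed lam; return (value, x)."""
    x = [1 if ci + lam * ai < 0 else 0 for ci, ai in zip(c, a)]
    value = (sum(ci * xi for ci, xi in zip(c, x))
             + lam * (sum(ai * xi for ai, xi in zip(a, x)) - b))
    return value, x

def subgradient_method(c, a, b, iters=100, step0=1.0):
    """Maximize the dual function L(lam) over lam >= 0 by subgradient ascent."""
    lam, best = 0.0, float("-inf")
    for k in range(1, iters + 1):
        value, x = lagrangian_subproblem(c, a, b, lam)
        best = max(best, value)  # each L(lam) is a valid dual (lower) bound
        g = sum(ai * xi for ai, xi in zip(a, x)) - b  # subgradient of L at lam
        lam = max(0.0, lam + (step0 / k) * g)         # projected ascent step
    return best, lam

c = [-4.0, -3.0, -2.0]  # objective coefficients (negative: items attractive)
a = [2.0, 2.0, 1.0]     # resource use per item
b = 3.0                 # capacity: the "difficult" constraint being dualized
bound, lam = subgradient_method(c, a, b)
```

The diminishing step size step0/k is one classical choice; its slow convergence on larger instances is exactly what motivates predicting good multipliers, either to use directly (amortization) or to initialize the iterative method.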