Scope and Topics

The AAAI Workshop on Learnable Optimization (LEANOPT) builds on the momentum that has gathered over the past six years, in both the operations research (OR) and machine learning (ML) communities, towards establishing modern ML methods as a “first-class citizen” at all levels of the OR toolkit.

While much progress has been made, many challenges remain due in part to data uncertainty, the hard constraints inherent to OR problems, and the high stakes involved. LEANOPT will serve as an interdisciplinary forum for researchers in OR and ML to discuss technical issues at this interface and present new ML approaches and software tools that accelerate classical optimization algorithms (e.g., for continuous, combinatorial, mixed-integer, stochastic optimization) as well as novel applications.

Topics

LEANOPT will place particular emphasis on:
  1. Learning to optimize (L2O) methods for solving constrained optimization problems.
  2. Predict-then-optimize/decision-focused learning.
  3. ML for heuristic and exact algorithms.
  4. New graph neural network (GNN) architectures for solving constrained optimization problems.
  5. Reinforcement learning (RL) approaches for dynamic decision-making.
  6. New applications that can benefit from learnable optimization under uncertainty.

Format

LEANOPT will be held in person at AAAI-24 as a one-day workshop consisting of a mix of events: multiple invited talks by recognized speakers from both OR and ML covering central theoretical, algorithmic, and practical challenges at this intersection; a poster session for accepted abstracts; and a hands-on programming session featuring two open-source libraries, NeuroMANCER and PyEPO.

Attendance

We aim to accommodate an audience of up to 50 attendees. The attendees will be a mix of workshop organizers, invited speakers, and invited researchers with accepted abstracts.

Important Dates

  • December 22, 2023 – Submission Deadline (extended from December 15, 2023)
  • January 12, 2024 – Acceptance Notification
  • February 26, 2024 – Workshop Date

Submission Information

Submission Types

We invite researchers to submit extended abstracts (2 pages including references) describing novel contributions or preliminary results on the topics above. Submissions tackling new problems or addressing more than one of the aforementioned topics simultaneously are encouraged.

  • Submission email
  • Registration

    Registration is required of all active participants in each workshop and is also open to all interested individuals. For more information, please refer to the AAAI-24 Workshop page.

    Schedule

    All times are in PST (UTC-8:00)
    • [09:00-09:05]: Workshop Opening
    • [09:05-09:50]: Invited Talk 1: Bistra Dilkina (University of Southern California)
    • [09:50-10:35]: Invited Talk 2: Bartolomeo Stellato (Princeton University)
    • [10:35-11:00]: Break (Light refreshments available near session rooms)
    • [11:00-12:30]: Poster Session
    • [12:30-14:00]: Lunch (on your own; no sponsored lunch provided)
    • [14:00-14:45]: Invited Talk 3: Andrea Lodi (Cornell Tech)
    • [14:45-15:30]: Invited Talk 4: Simone Garatti (Politecnico di Milano)
    • [15:30-16:00]: Break (Light refreshments available near session rooms)
    • [16:00-18:00]: Open-source Code Tutorial Session

    Invited Speakers

    Bistra Dilkina

    University of Southern California

    Title: Contrastive Learning for ML-guided MIP search [09:05-09:50]

    Abstract: Recent research has demonstrated the ability to significantly improve MIP solving by integrating ML-guided components and learning policies tailored to specific problem distributions. This directly benefits the real-world deployment of Mixed Integer Programming (MIP) models, as they are often used to repeatedly solve similar problems arising within a specific application context. We show that one important aspect of designing an effective ML integration in MIP solving is the training loss used to tune the ML model parameters. In particular, many important (heuristic) tasks in MIP solving involve choosing subsets of variables, and we demonstrate that contrastive loss is particularly well suited for this setting, as it learns from both positive and negative examples of candidate sets. We show the successful application of contrastive loss training in the context of Large Neighborhood Search for MIP, as well as backdoor selection for MIP, resulting in significant speed-ups across multiple domains.
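As an illustration of the training loss described above, here is a minimal sketch (plain PyTorch; all names are hypothetical, not from the speaker's codebase) of an InfoNCE-style contrastive loss over scored candidate variable subsets, where positives are subsets that worked well (e.g., improving LNS moves) and negatives are subsets that did not:

```python
import torch

def contrastive_subset_loss(scores, pos_mask, temperature=1.0):
    """InfoNCE-style loss: push scores of positive candidate variable
    subsets above the scores of all candidates (positives + negatives).

    scores:   (B,) tensor, one scalar per candidate subset, e.g., produced
              by a GNN over the MIP's variable-constraint graph
    pos_mask: (B,) boolean tensor marking the positive examples
    """
    logits = scores / temperature
    pos = torch.logsumexp(logits[pos_mask], dim=0)   # mass on positives
    all_ = torch.logsumexp(logits, dim=0)            # mass on all candidates
    return all_ - pos                                # = -log(pos / all)

# Toy usage: 6 candidate subsets, the first two are positives.
scores = torch.randn(6, requires_grad=True)
pos_mask = torch.tensor([True, True, False, False, False, False])
loss = contrastive_subset_loss(scores, pos_mask)
loss.backward()
```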

    Bio: Dr. Bistra Dilkina is an associate professor of computer science at the University of Southern California, co-director of the USC Center of AI in Society, and the inaugural Dr. Allen and Charlotte Ginsburg Early Career Chair at the USC Viterbi School of Engineering. Her research and teaching are centered around the integration of machine learning and discrete optimization, with a strong focus on AI applications in computational sustainability and social good. She received her Ph.D. from Cornell University in 2012 and was a post-doctoral associate at the Institute for Computational Sustainability. Her research has contributed significant advances to machine-learning-guided combinatorial solving, including mathematical programming and planning, as well as decision-focused learning, where combinatorial reasoning is integrated into machine learning pipelines. Her applied research in computational sustainability spans using AI for wildlife conservation planning, using AI to understand the impacts of climate change in terms of energy, water, habitat, and human migration, and using AI to optimize the fortification of lifeline infrastructures for disaster resilience. Her work has been supported by the National Science Foundation, the National Institutes of Health, the DHS Center of Excellence Critical Infrastructure Resilience Institute, the Paul G. Allen Family Foundation, Microsoft, and Qualcomm, among others. She has over 90 publications and has co-organized or chaired numerous workshops, tutorials, and special tracks at major conferences.

    Bartolomeo Stellato

    Princeton University

    Title: Learning Decision-Focused Uncertainty Sets for Robust Optimization [09:50-10:35]

    Abstract: We propose a data-driven technique to automatically learn the uncertainty sets in robust optimization based on the performance and constraint satisfaction guarantees of the optimal solutions. Our method reshapes the uncertainty sets by minimizing the expected performance across a family of problems while guaranteeing constraint satisfaction. We learn the uncertainty sets using a stochastic augmented Lagrangian method that relies on differentiating the solutions of the robust optimization problems with respect to the parameters of the uncertainty set. We show finite-sample probabilistic guarantees of constraint satisfaction using empirical process theory. Our approach is very flexible and can learn a wide variety of uncertainty sets while preserving tractability. Numerical experiments show that our method outperforms traditional approaches in robust and distributionally robust optimization in terms of out-of-sample performance and constraint satisfaction guarantees. We implemented our method in the open-source package LROPT.
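For intuition, the sketch below (illustrative only; not LROPT's actual API) shows the core mechanism for an ellipsoidal uncertainty set entering a linear constraint through its tractable robust counterpart, with the shape matrix P trained by differentiating through the solver via cvxpylayers; the toy instances and the omitted constraint-satisfaction penalties are assumptions:

```python
import numpy as np
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n = 3
a = np.array([1.0, 2.0, 0.5])     # nominal constraint coefficients
d = 1.0

x = cp.Variable(n)
P = cp.Parameter((n, n))          # shape of the ellipsoidal uncertainty set
c = cp.Parameter(n)               # per-instance objective coefficients
# Tractable counterpart of (a + P u)^T x <= d for all ||u||_2 <= 1:
prob = cp.Problem(cp.Maximize(c @ x),
                  [a @ x + cp.norm(P.T @ x, 2) <= d, x >= 0])
layer = CvxpyLayer(prob, parameters=[P, c], variables=[x])

P_t = 0.1 * torch.eye(n)
P_t.requires_grad_(True)
opt = torch.optim.Adam([P_t], lr=1e-2)
for _ in range(100):
    c_t = torch.rand(n) + 0.1     # a family of problem instances
    x_star, = layer(P_t, c_t)     # robust solution, differentiable in P
    # Training signal: expected performance of the robust solutions. The
    # talk's augmented Lagrangian adds penalties enforcing out-of-sample
    # constraint satisfaction, which keep P from collapsing to zero.
    loss = -(c_t @ x_star)
    opt.zero_grad(); loss.backward(); opt.step()
```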

    Bio: Bartolomeo Stellato is an Assistant Professor in the Department of Operations Research and Financial Engineering at Princeton University. Previously, he was a Postdoctoral Associate at the MIT Sloan School of Management and Operations Research Center. He received a DPhil (PhD) in Engineering Science from the University of Oxford, a MSc in Robotics, Systems and Control from ETH Zürich, and a BSc in Automation Engineering from Politecnico di Milano. He is the developer of OSQP, a widely used solver in mathematical optimization. Bartolomeo Stellato's awards include the NSF CAREER Award, the Franco Strazzabosco Young Investigator Award from ISSNAF, the Princeton SEAS Innovation Award in Data Science, the Best Paper Award in Mathematical Programming Computation, and the First Place Prize Paper Award in IEEE Transactions on Power Electronics. His research focuses on data-driven computational tools for mathematical optimization, machine learning, and optimal control.

    Andrea Lodi

    Cornell Tech

    Title: Structured Pruning of Neural Networks for Constraints Learning [14:00-14:45]

    Abstract: The last decade has witnessed the impressive development of machine learning (ML) techniques, successfully applied to traditional statistical learning tasks such as image recognition and leading to breakthroughs like the famous AlphaGo system. Motivated by those successes, many scientific disciplines have started to investigate the potential of crunching large amounts of data with ML techniques in their own context. Combinatorial optimization (CO) has been no exception to this trend, and the use of ML in CO has been analyzed from many different angles with varying levels of success. Today, we will discuss a tight integration between learning and optimization that is developed in three steps. First, Neural Networks (NNs) are used to learn the representation of some constraints of a CO problem. Second, mathematical programming techniques are used to prune the NNs to obtain a more manageable constraint representation. Third, the resulting CO problem with learned constraints is solved by a solver, in this specific case Gurobi. This is joint work with M. Cacciola and A. Frangioni.
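To make the second and third steps concrete, here is a minimal sketch (hypothetical weights; the standard big-M encoding, not the authors' implementation) of embedding a tiny trained ReLU layer as mixed-integer constraints in a gurobipy model; structured pruning shrinks W, and with it the number of constraints and binaries in this encoding:

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

W = np.array([[1.0, -2.0], [0.5, 1.5]])   # trained (possibly pruned) weights
b = np.array([0.0, -1.0])
M = 100.0                                  # big-M bound on pre-activations

m = gp.Model("learned_constraint")
x = m.addVars(2, lb=-10, ub=10, name="x")        # original decision variables
h = m.addVars(2, lb=0.0, name="h")               # ReLU outputs
z = m.addVars(2, vtype=GRB.BINARY, name="z")     # ReLU on/off indicators

for j in range(2):
    pre = gp.quicksum(W[j, i] * x[i] for i in range(2)) + b[j]
    m.addConstr(h[j] >= pre)                 # h_j = max(pre_j, 0) via big-M
    m.addConstr(h[j] <= pre + M * (1 - z[j]))
    m.addConstr(h[j] <= M * z[j])

m.addConstr(h[0] + h[1] <= 1.0)              # the learned constraint
m.setObjective(x[0] + x[1], GRB.MAXIMIZE)
m.optimize()
```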

    Bio: Andrea Lodi has been the Andrew H. and Ann R. Tisch Professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion since 2021. He received his PhD in System Engineering from the University of Bologna in 2000, and he has been a Herman Goldstine Fellow at the IBM Mathematical Sciences Department, full professor of Operations Research at the University of Bologna, and Canada Excellence Research Chair at Polytechnique Montréal. His main research interests are in Mixed-Integer Linear and Nonlinear Programming and Data Science. He has been recognized with IBM and Google faculty awards, the 2021 Farkas Prize from the INFORMS Optimization Society, and election as a 2023 INFORMS Fellow. Andrea Lodi has been network coordinator and principal investigator of EU and Canadian projects and a consultant to the IBM CPLEX research and development team (2006-2021).

    Simone Garatti

    Politecnico di Milano

    Title: Optimization meets AI: trustworthy decisions via the scenario approach [14:45-15:30]

    Abstract: Model-based design approaches are increasingly proving insufficient to cope with the growing complexity of contemporary science and engineering. This has led to the ascendancy of learning-based methods, which leverage a posteriori knowledge coming from observations to make designs without the need to reconstruct the underlying data-generation mechanism. However, ensuring the reliability of solutions obtained through learning-based approaches necessitates the development of truly new theoretical foundations. This presentation introduces the scenario approach, a relatively recent, yet firmly established, framework for learning-based optimization and decision-making. Within this framework, recent developments have unveiled a profound and broadly applicable connection between the "risk" - defined as the probability of underperforming on new, out-of-sample data - and an observable quantity called the "complexity". While this result reveals that data contain more information than expected, it also enables tight assessments of the risk, opening the door to a dependable use of learning-based methods in automated, human-free decision-making.
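A small numerical sketch of these ideas (a toy one-dimensional scenario program; all names are illustrative) solves a scenario program on N samples, bounds the risk with the classical a priori result for convex problems with d decision variables, P{risk(x*) > eps} <= sum_{i=0}^{d-1} C(N,i) eps^i (1-eps)^(N-i), and checks the bound empirically:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)

# Toy scenario program (d = 1): minimize x s.t. x >= delta_k for all samples.
N = 500
deltas = rng.normal(size=N)
x_star = deltas.max()          # scenario solution

def eps_bound(N, d, beta):
    """Smallest eps with sum_{i<d} C(N,i) eps^i (1-eps)^(N-i) <= beta,
    found by bisection; the sum is the Binomial(N, eps) CDF at d-1."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binom.cdf(d - 1, N, mid) <= beta:
            hi = mid
        else:
            lo = mid
    return hi

eps = eps_bound(N, d=1, beta=1e-6)
print(f"with prob. >= 1 - 1e-6, the risk of x* is at most {eps:.4f}")

# Empirical risk on fresh out-of-sample data:
test = rng.normal(size=100_000)
print("empirical risk:", np.mean(test > x_star))
```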

    Bio: Simone Garatti received his M.S. and Ph.D. in Information Technology from the Politecnico di Milano, Italy, in 2000 and 2004, respectively. After graduating, he joined the Faculty of the Politecnico di Milano, where he currently holds the position of Associate Professor in the Automatic Control area at the Dipartimento di Elettronica, Informazione e Bioingegneria. He has also held visiting positions at prestigious foreign universities, such as the University of California San Diego (UCSD) (as winner of a fellowship for the short-term mobility of researchers from the National Research Council of Italy), the Massachusetts Institute of Technology (MIT), and the University of Oxford. From 2013 to 2019 he served on the EUCA Conference Editorial Board; he is currently a member of the IEEE-CSS Conference Editorial Board and Associate Editor for the International Journal of Adaptive Control and Signal Processing and for the Machine Learning and Knowledge Extraction journal. He is also a member of the IFAC Technical Committee on Modeling, Identification and Signal Processing, the IEEE-CSS Technical Committee on Robust and Complex Systems, and the IEEE-CSS Technical Committee on System Identification and Adaptive Control. Simone Garatti is one of the founders of the theory of the scenario approach, a unitary framework for making designs in which the effect of uncertainty is controlled by knowledge drawn from past experience. In recognition of his contributions, he has been an invited speaker at various workshops, was a keynote speaker at the IEEE 3rd Conference on Norbert Wiener in the 21st Century in 2021, and gave a semi-plenary address at the 2022 European Conference on Stochastic Optimization and Computational Management Science (ECSO-CMS). Simone Garatti is the author/co-author of the book "Introduction to the Scenario Approach" published by SIAM in 2018 and of more than 100 contributions in international journals, international books, and proceedings of international conferences. Besides data-driven optimization and decision-making, his research interests include system identification, uncertainty quantification, and machine learning.

    Accepted Posters

    • Benedikt Schesch, Large Scale Constrained Clustering With Reinforcement Learning
    • Heavy-ball and Nesterov’s Accelerations for Communication-efficient Exact Diffusion Method
    • Zouitine Mehdi, Learning Heuristics for Combinatorial Optimization Problems on K-Partite Hypergraphs
    • Saurabh Mishra, Reducing Predict and Optimize to Convex Feasibility
    • Defeng Liu, Reducing Constraint Violations in MIP Large Neighborhood Search
    • Jinzhao Li, Solving Optimization Problems As Satisfiability Modulo Counting with Guarantees
    • Chris Cameron, Synthesizing SAT Solvers via Monte Carlo Forest Search
    • Truong Nghiem, Learning-enabled Framework for Communication-efficient Distributed Optimization
    • Dravyansh Sharma, Shifting regret for tuning combinatorial algorithms with applications to clustering
    • Dravyansh Sharma, Accelerating data-driven algorithm design using output-sensitive techniques
    • Sungwook Yang, Towards an Adaptable and Generalizable Optimization Engine in Decision and Control: A Meta Reinforcement Learning Approach
    • Kevin Zhou, Understanding and Improving Composite Bayesian Optimization
    • Junyang Cai, Learning Backdoors for Mixed Integer Programs with Contrastive Learning
    • Bo Tang, CaVE: A Cone-Aligned Approach for Fast Predict-then-optimize with Binary Linear Programs
    • Hai Xia, Enhancing MaxSAT-Based Bayesian Network Learning with Real-Time Tuning
    • Justin Dumouchelle, NEUR2RO: Neural Two-Stage Robust Optimization
    • Taoan Huang, Contrastive Predict-and-Search for Mixed Integer Linear Programs
    • Joaquim Masset Lacombe Dias Garcia, Application-Driven Learning: A Closed-Loop Prediction and Optimization Approach Applied to Dynamic Reserves and Demand Forecasting

    Open-source Code Tutorial Session

    This two-hour session will provide hands-on code tutorials in the form of well-documented Jupyter notebooks introducing two popular open-source libraries that integrate constrained optimization with deep learning.

    PyEPO - Bo Tang [16:00-17:00]

    Summary: PyEPO (PyTorch-based End-to-End Predict-then-Optimize Tool) is a Python-based, open-source software package that supports modeling and solving predict-then-optimize problems with linear objective functions. The core capability of PyEPO is to build an optimization model with GurobiPy, Pyomo, or other solvers and algorithms, and then embed it into an artificial neural network for end-to-end training. For this purpose, PyEPO implements various methods as PyTorch autograd modules. PyEPO Slides.
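A minimal end-to-end training loop in the spirit of PyEPO's documented shortest-path example is sketched below; the module and function names follow the PyEPO documentation but should be treated as assumptions and checked against the current release:

```python
import torch
import pyepo

# Synthetic shortest-path data on a 5x5 grid (40 edges).
grid = (5, 5)
feats, costs = pyepo.data.shortestpath.genData(1000, 5, grid)
optmodel = pyepo.model.grb.shortestPathModel(grid)

dataset = pyepo.data.dataset.optDataset(optmodel, feats, costs)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

pred = torch.nn.Linear(5, 40)                        # features -> edge costs
spoplus = pyepo.func.SPOPlus(optmodel, processes=1)  # SPO+ surrogate loss
opt = torch.optim.Adam(pred.parameters(), lr=1e-2)

for x, c, w, z in loader:        # features, true costs, true sols, true objs
    cp = pred(x)                 # predicted edge costs
    loss = spoplus(cp, c, w, z).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```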

    NeuroMANCER - Madelyn Shapiro and Jan Drgona [17:00-18:00]

    Summary: Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations (NeuroMANCER) is an open-source differentiable programming (DP) library for solving parametric constrained optimization problems, physics-informed system identification, and parametric model-based optimal control. NeuroMANCER is written in PyTorch and allows for systematic integration of machine learning with scientific computing for creating end-to-end differentiable models and algorithms embedded with prior knowledge and physics. Neuromancer Slides.
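The sketch below illustrates the underlying idea in plain PyTorch rather than through NeuroMANCER's own API: a network is trained offline to map problem parameters directly to solutions of a parametric constrained problem, with constraint violations handled by a penalty term in the loss (the toy objective and constraint are assumptions for illustration):

```python
import torch

# Learned solution map: problem parameters p -> candidate solution x.
solver = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
opt = torch.optim.Adam(solver.parameters(), lr=1e-3)
mu = 10.0                                  # penalty weight on violations

for _ in range(2000):
    p = torch.rand(128, 2) * 2 + 0.5       # sampled problem parameters
    x = solver(p)
    f = ((x - p) ** 2).sum(dim=1)          # objective f(x; p) = ||x - p||^2
    g = x.sum(dim=1) - 1.0                 # constraint g(x; p): x1 + x2 <= 1
    loss = (f + mu * torch.relu(g) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# At inference, solver(p) approximates the constrained minimizer for new p
# without calling an optimization solver in the loop.
```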

    Workshop Chairs

    Elias B. Khalil

    University of Toronto

    kha...@mie.utoronto.ca

    Ján Drgoňa

    Pacific Northwest National Laboratory

    jan...@pnnl.gov

    Ferdinando Fioretto

    Syracuse University

    ffi...@syr.edu

    Draguna Vrabie

    Pacific Northwest National Laboratory

    drag...@pnnl.gov