IJCAI 2026 · Tutorial Program · Bremen · August 15–17, 2026

LLMs for Optimization: From Automated Modeling to Algorithmic Discovery

A half-day tutorial surveying how large language models are reshaping mathematical optimization — translating natural-language problems into formal models, configuring solvers, and discovering new algorithms. Designed for a broad AI audience, with no prior optimization expertise required.

Venue
IJCAI 2026, Bremen, Germany
Dates
August 15–17, 2026
Length
Half day · 3 hours 30 minutes
Format
Two 1.5-hour sessions with a 30-minute break
§ 01 · Abstract

Mathematical optimization is a foundational pillar of modern AI, underpinning decision-making in supply chains, energy systems, finance, and scheduling. Despite its importance, building and deploying optimization models remains a challenging, expert-driven process that requires significant domain knowledge and technical expertise.

This tutorial surveys the emerging interface between LLMs and optimization along two synergistic themes. First, we examine how LLMs can act as copilots across the optimization pipeline — assisting with problem formulation, model construction, solver configuration, and validation. Second, we explore the growing role of LLMs in algorithmic discovery, generating and refining new optimization algorithms and heuristics.

The tutorial bridges machine learning and optimization, covering foundational concepts, surveying state-of-the-art methods and systems, and highlighting key challenges such as correctness, robustness, and handling ambiguous problem specifications.

3.5 h
Total duration
5
Program parts
20+
References
4
Presenters
§ 02 · Who & Why

Objectives and audience

The tutorial is aimed at three audiences: AI researchers new to optimization, OR researchers curious about how LLMs can assist modeling and algorithm design, and practitioners building AI-enabled decision-support systems.

Learning outcomes
  1. Understand the core optimization pipeline and where LLMs can intervene.
  2. Survey the state of the art in LLM-based auto-formulation, from prompting to fine-tuning to agentic systems.
  3. Reason about correctness and equivalence for LLM-generated optimization models.
  4. Recognize how LLMs can assist solver configuration without large training datasets.
  5. Map the taxonomy of LLM-driven algorithmic discovery and its open research directions.
Intended audience
AI researchers

Graduate students and researchers with little or no formal background in optimization.

OR researchers

Optimization and operations research researchers curious about how LLMs can assist modeling and algorithm design.

Practitioners

Builders of AI-enabled decision-support systems in logistics, planning, scheduling, and resource allocation.

Prerequisites
  1. Basic mathematical maturity (variables, constraints, functions).
  2. General exposure to modern AI / machine learning.
  3. No prior expertise in integer programming, solver engineering, or LLM training required.
§ 03 · Program

Detailed schedule

Total duration 3 hours 30 minutes with a 30-minute coffee break. Start times are placeholders relative to a 09:00 session start.

09:00 · Part 1
15 min

Introduction

A broad overview of mathematical optimization, centered on mixed integer linear programming (MILP) as a core AI and OR technique.

  • The optimization pipeline: understand → formulate → tune → validate.
  • Where current bottlenecks lie, and where LLMs can help.
  • Framing the two halves of the tutorial: copilot + discovery.
All
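To make MILP concrete for attendees without an optimization background, here is a toy model, with invented numbers, solved by brute-force enumeration over the integer grid. Production solvers such as Gurobi, CPLEX, or SCIP instead use branch and bound with cutting planes; this sketch only illustrates what a MILP is.

```python
from itertools import product

# Toy MILP: maximize 3x + 2y  subject to  x + y <= 4,  2x + y <= 6,
# with x, y non-negative integers. Enumeration stands in for a real solver.
def solve_toy_milp():
    best_value, best_point = None, None
    for x, y in product(range(7), repeat=2):  # generous bounding box
        if x + y <= 4 and 2 * x + y <= 6:     # feasibility check
            value = 3 * x + 2 * y
            if best_value is None or value > best_value:
                best_value, best_point = value, (x, y)
    return best_value, best_point

print(solve_toy_milp())  # (10, (2, 2)): both constraints tight at the optimum
```

Note that the optimum (2, 2) is not found by rounding the LP relaxation's solution, which is one reason integer programs are hard and solver machinery matters.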
09:15 · Part 2
45 min

Optimization Model Formulation

Translating informal problem descriptions into precise optimization models — a central bottleneck for domain experts without formal training in optimization.

  • Agentic frameworks: OptiMUS, LEAN-LLM-OPT, Chain-of-Experts, MCTS-based approaches.
  • Fine-tuned models specialized for optimization: ORLM, LLMOPT.
  • Benchmark datasets: textbook-style, synthetic, and real-world collections such as IndustryOR.
Lawless
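To make the auto-formulation target concrete, consider an invented word problem of the kind these systems handle: "A workshop makes two products. Product A earns \$3 and product B \$2; each unit of either uses one machine hour, A uses two labor hours and B one, and 4 machine hours and 6 labor hours are available." A system like OptiMUS or ORLM must recover a formal model such as:

```latex
% Decision variables: x_A, x_B = integer units of products A and B.
\begin{align*}
\max\quad        & 3x_A + 2x_B       && \text{(total profit)} \\
\text{s.t.}\quad & x_A + x_B \le 4   && \text{(machine hours)} \\
                 & 2x_A + x_B \le 6  && \text{(labor hours)} \\
                 & x_A, x_B \in \mathbb{Z}_{\ge 0}.
\end{align*}
```

The gap between the prose and this model — choosing variables, recognizing implicit integrality, mapping each resource to a constraint — is exactly the bottleneck this part of the tutorial addresses.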
10:00 · Part 3
30 min

Optimization Model Evaluation

How do we know a generated formulation actually solves the right problem? Recent frameworks for assessing correctness and equivalence.

  • Graph-isomorphism-based approaches (Xing et al., 2024).
  • Execution-based accuracy metrics (AhmadiTeshnizi et al., 2024).
  • EquivaMap: formal equivalence checking (Zhai et al., 2025).
Lawless
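The execution-based idea can be sketched in a few lines: solve both the ground-truth model and the candidate, then compare optimal objective values. The model encoding and enumeration "solver" below are illustrative stand-ins; real pipelines execute LLM-generated solver code against reference solutions.

```python
from itertools import product

# Execution-based accuracy, minimally: a model is (objective, constraints)
# over a small non-negative integer box; solve both and compare optima.
def solve(objective, constraints, box=range(7)):
    feasible = (p for p in product(box, repeat=2)
                if all(c(*p) for c in constraints))
    return max(objective(*p) for p in feasible)

ground_truth = (lambda x, y: 3 * x + 2 * y,
                [lambda x, y: x + y <= 4, lambda x, y: 2 * x + y <= 6])
# Candidate: the same feasible region written differently (y <= 4 - x).
candidate = (lambda x, y: 3 * x + 2 * y,
             [lambda x, y: y <= 4 - x, lambda x, y: 2 * x + y <= 6])

def execution_match(model_a, model_b, tol=1e-6):
    return abs(solve(*model_a) - solve(*model_b)) <= tol

print(execution_match(ground_truth, candidate))  # True: optima agree
```

The limitation is visible in the code: two genuinely different problems can share an optimal value, which is precisely the gap that formal equivalence checking aims to close.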
10:30 · Break
30 min

Coffee Break

11:00 · Part 4
45 min

Optimization Model Solving

Modern solvers (Gurobi, CPLEX) expose many configuration parameters whose tuning is time-consuming even for experts. Can LLMs help?

  • LLMs leveraging documentation, code, and prior research for cold-start configuration.
  • Contrast with traditional data-driven configuration methods.
  • Strengths, limitations, and open challenges in deployment.
Vitercik
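A minimal sketch of the cold-start pattern: stitch parameter documentation and instance features into a prompt, then parse the model's suggested configuration. `Cuts` and `Heuristics` are real Gurobi parameter names, but the documentation snippets, the suggested values, and `query_llm` are illustrative stand-ins, not recommendations or a real API.

```python
import json

# Cold-start solver configuration, sketched: no training data, just
# documentation and instance features assembled into a prompt.
PARAM_DOCS = {
    "Cuts": "Global cut aggressiveness: -1 auto, 0 off, up to 3 very aggressive.",
    "Heuristics": "Fraction of runtime spent on primal heuristics, in [0, 1].",
}

def build_prompt(instance_features):
    doc_text = "\n".join(f"- {name}: {doc}" for name, doc in PARAM_DOCS.items())
    return ("You are configuring a MILP solver.\n"
            f"Parameter documentation:\n{doc_text}\n"
            f"Instance features: {json.dumps(instance_features)}\n"
            "Reply with a JSON object mapping parameter names to values.")

def query_llm(prompt):
    # Stand-in for a real LLM call; a deployed system would send `prompt`
    # to a model API and receive a configuration in response.
    return '{"Cuts": 2, "Heuristics": 0.1}'

def suggest_configuration(instance_features):
    raw = query_llm(build_prompt(instance_features))
    config = json.loads(raw)
    # Guard against hallucinated parameter names before touching the solver.
    return {k: v for k, v in config.items() if k in PARAM_DOCS}

print(suggest_configuration({"num_vars": 5000, "num_constraints": 12000}))
```

Contrast with data-driven configuration: methods like algorithm configuration search need many solved instances, whereas this sketch needs none, at the cost of no performance guarantee on the suggestion.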
11:45 · Part 5
45 min

Algorithmic Discovery with LLMs

LLMs as a new paradigm for automating algorithm design — lowering the barrier to entry and exploring design spaces hard to navigate manually.

  • A taxonomy: LLM as optimizer · extractor · predictor · designer.
  • Methods: EoH, ReEvo, LLaMEA, HSEvo, MEoH, MLES.
  • Applications across combinatorial optimization, black-box optimization, ML, and scientific discovery.
  • Open challenges: domain LLMs, benchmarking, human–AI collaboration.
Liu
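The evolutionary pattern shared by EoH, ReEvo, and related methods can be caricatured in a short loop: keep a population of candidate heuristics, score them on instances, and ask an LLM to rewrite the weakest. In the sketch below the LLM is stubbed by a fixed pool of hand-written scoring rules for a greedy knapsack heuristic; in the real systems the mutation step sends parent code to an LLM and executes the program it returns.

```python
import random

# Knapsack instances: (capacity, [(value, weight), ...]); numbers invented.
INSTANCES = [
    (10, [(10, 5), (7, 4), (6, 3), (3, 3)]),
    (7,  [(9, 6), (6, 3), (5, 3), (2, 1)]),
]

def greedy(score, capacity, items):
    """Pack items in decreasing heuristic score; return total packed value."""
    total_value, load = 0, 0
    for value, weight in sorted(items, key=lambda it: -score(*it)):
        if load + weight <= capacity:
            total_value += value
            load += weight
    return total_value

def fitness(score):
    return sum(greedy(score, cap, items) for cap, items in INSTANCES)

# Stand-in for LLM-written heuristics; EoH-style systems generate and
# rewrite these programs with an LLM rather than drawing from a fixed pool.
CANDIDATE_POOL = [
    lambda v, w: v,       # highest value first
    lambda v, w: -w,      # lightest first
    lambda v, w: v / w,   # value density
    lambda v, w: v - w,   # value minus weight
]

def evolve(generations=20, pop_size=3, seed=0):
    rng = random.Random(seed)
    population = rng.sample(CANDIDATE_POOL, pop_size)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        # Replace the worst candidate; a real system prompts an LLM here.
        population[-1] = rng.choice(CANDIDATE_POOL)
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # 29: best total value across the two instances
```

Even this stub shows why the paradigm is attractive: the search operates over programs, so the discovered artifact is an interpretable heuristic rather than a black-box policy.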
§ 04 · Organizers

Presenters

Four researchers working at the boundary of machine learning, operations research, and human–computer interaction.

CL

Connor Lawless

Stanford University

Connor Lawless is a Human-Centered AI postdoctoral researcher at Stanford, with a PhD in Operations Research and Information Engineering from Cornell. His work blends machine learning, computational optimization, and human–computer interaction to create human-centered artificial intelligence.

FL

Fei Liu

University of Zurich · ETH Zurich

Fei Liu is a postdoctoral researcher focused on automated algorithm design, evolutionary algorithms, neural combinatorial optimization, and multiobjective optimization. Lead author of [LLM4AD](https://github.com/Optima-CityU/LLM4AD).

HQ

Hanzhang Qin

National University of Singapore

Assistant Professor at NUS. Operations researcher working on optimization, revenue management, and the application of large language models to large-scale optimization workflows.

EV

Ellen Vitercik

Stanford University

Assistant Professor at Stanford with a joint appointment between the Management Science & Engineering and Computer Science departments. Her research—which has been recognized with a Schmidt Sciences AI2050 Early Career Fellowship and an NSF CAREER award, among other honors—spans machine learning and discrete optimization.

§ 05 · Reading

Key references

A curated subset of the growing literature at the LLM × optimization interface.

Formulation

OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models

AhmadiTeshnizi et al., 2024
ICML 2024
Formulation

Autoformulation of Mathematical Optimization Models Using LLMs

Astorga et al., 2025
ICML 2025
Formulation

Chain-of-Experts: When LLMs Meet Complex Operations Research Problems

Xiao et al., 2024
ICLR 2024
Formulation

ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling

Huang et al., 2025
Operations Research 2025
Formulation

LLMOPT: Learning to Define and Solve General Optimization Problems from Scratch

Jiang et al., 2025
ICLR 2025
Formulation

Large-Scale Optimization Model Auto-Formulation: Harnessing LLM Flexibility via Structured Workflow

Liang et al., 2026
arXiv 2026
Evaluation

Towards Human-Aligned Evaluation for Linear Programming Word Problems

Xing et al., 2024
LREC-COLING 2024
Evaluation

EquivaMap: Leveraging LLMs for Automatic Equivalence Checking of Optimization Formulations

Zhai et al., 2025
ICML 2025
Solving

LLMs for Cold-Start Cutting Plane Separator Configuration

Lawless et al., 2025
CPAIOR 2025
Discovery

Large Language Models as Optimizers

Yang et al., 2024
ICLR 2024
Discovery

Large Language Model-Enhanced Algorithm Selection: Towards Comprehensive Algorithm Representation

Wu et al., 2024
IJCAI 2024
Discovery

LLM-TPF: Multiscale Temporal Periodicity-Semantic Fusion LLMs for Time Series Forecasting

Pan et al., 2025
IJCAI 2025
Discovery

Evolution of Heuristics (EoH): Towards Efficient Automatic Algorithm Design Using LLMs

Liu et al., 2024
ICML 2024
Discovery

ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution

Ye et al., 2024
NeurIPS 2024
Discovery

LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics

Van Stein & Bäck, 2024
IEEE TEVC 2024
Discovery

HSEvo: Elevating Automatic Heuristic Design with Diversity-Driven Harmony Search and Genetic Algorithm Using LLMs

Dat et al., 2025
AAAI 2025
Discovery

Multi-Objective Evolution of Heuristics Using Large Language Models

Yao et al., 2025
AAAI 2025
Discovery

Multimodal LLM-Assisted Evolutionary Search for Programmatic Control Policies

Hu et al., 2026
ICLR 2026
Survey

A Systematic Survey on Large Language Models for Algorithm Design

Liu et al., 2026
ACM Computing Surveys 2026
Background

Integer Programming

Conforti et al., 2014
Springer
§ 06 · History

Previous editions

A preliminary version was delivered at AAAI 2026. The content has been refined through invited talks and expanded with updated research for IJCAI 2026.

2026 · Jan
AAAI 2026 Tutorial

Preliminary version delivered, ~200 attendees.

2025
CEC 2025 · IJCCI 2025 · IEEE Web Seminar

Invited talks on automated algorithm design with LLMs; materials online.

2026 · Aug
IJCAI 2026 Tutorial

Expanded version with algorithmic discovery track.

§ 07 · Responsible use

Ethics and oversight

LLM-generated optimization artifacts are proposals, not authoritative solutions.

Incorrect or hallucinated formulations can lead to unsafe or harmful decisions in high-stakes applications — logistics, energy, finance, healthcare. LLMs may also inherit biases from training data and from problem descriptions, which can propagate into objectives, constraints, and recommendations.

The tutorial discusses these risks directly. We emphasize evaluation, validation, transparency, reproducibility, and human oversight, and highlight responsible-use considerations for deploying these methods in real-world decision systems.