arxiv:2604.12955

Modeling Copilots for Text-to-Model Translation

Published on Apr 16

AI-generated summary

Large language models are applied to translate natural-language combinatorial problems into formal models, with a unified architecture supporting both satisfaction and optimization tasks across domains.

Abstract

There is growing interest in leveraging large language models (LLMs) for text-to-model translation and optimization tasks. This paper aims to advance this line of research by introducing Text2Model and Text2Zinc. Text2Model is a suite of copilots based on several LLM strategies of varying complexity, along with an online leaderboard. Text2Zinc is a cross-domain dataset capturing optimization and satisfaction problems specified in natural language, along with an interactive editor with a built-in AI assistant. While there is an emerging literature on using LLMs to translate combinatorial problems into formal models, our work is the first attempt to integrate both satisfaction and optimization problems within a unified architecture and dataset. Moreover, our approach is solver-agnostic, unlike existing work that focuses on translation to a solver-specific model. To achieve this, we leverage MiniZinc's solver- and paradigm-agnostic modeling capabilities to formulate combinatorial problems. We conduct comprehensive experiments comparing execution and solution accuracy across several single- and multi-call strategies, including: zero-shot prompting, chain-of-thought reasoning, intermediate representations via knowledge graphs, grammar-based syntax encoding, and agentic approaches that decompose the modeling task into sequential sub-tasks. Our copilot strategies are competitive with, and in parts improve upon, recent research in this domain. Our findings indicate that while LLMs are promising, they are not yet a push-button technology for combinatorial modeling. We release the Text2Model copilots and leaderboard, and the Text2Zinc dataset and interactive editor, as open source to support closing this performance gap.
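To make the zero-shot strategy concrete, here is a minimal illustrative sketch (not taken from the paper) of how a natural-language problem might be wrapped in a zero-shot prompt requesting a solver-agnostic MiniZinc model, together with a cheap structural check on the candidate output. The prompt wording, the `looks_like_minizinc` heuristic, and the hand-written candidate model standing in for a real LLM call are all assumptions for illustration.

```python
# Illustrative sketch (not the paper's implementation): a zero-shot
# prompt for text-to-MiniZinc translation plus a crude sanity check.

def build_zero_shot_prompt(problem_text: str) -> str:
    """Wrap a natural-language problem statement in a zero-shot prompt
    asking for a solver-agnostic MiniZinc model (assumed wording)."""
    return (
        "Translate the following combinatorial problem into a valid "
        "MiniZinc model. Use a 'solve satisfy' or 'solve minimize/"
        "maximize' item as appropriate, and output only MiniZinc code.\n\n"
        f"Problem:\n{problem_text}\n"
    )

def looks_like_minizinc(model_text: str) -> bool:
    """Cheap structural check: a MiniZinc model should declare at least
    one decision variable and contain exactly one solve item."""
    has_var = "var " in model_text
    n_solve = model_text.count("solve ")
    return has_var and n_solve == 1

# A toy satisfaction problem and a hand-written "LLM output" standing
# in for a real model call (4-queens as a MiniZinc model).
prompt = build_zero_shot_prompt(
    "Place 4 queens on a 4x4 board so that no two queens attack each other."
)
candidate_model = (
    'include "alldifferent.mzn";\n'
    "array[1..4] of var 1..4: q;\n"
    "constraint alldifferent(q);\n"
    "constraint alldifferent([q[i] + i | i in 1..4]);\n"
    "constraint alldifferent([q[i] - i | i in 1..4]);\n"
    "solve satisfy;\n"
)
print(looks_like_minizinc(candidate_model))  # expect: True
```

In a full pipeline, a model failing such a check (or failing to compile/solve) would be re-prompted or routed to a multi-call repair strategy; the abstract's single- versus multi-call comparison contrasts exactly these kinds of designs.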

