SEG-ATP: Segment-Level Learning for LLM-Based Theorem Proving

This repository contains the policy model used in the paper "Rethinking Supervision Granularity: Segment-Level Learning for LLM-Based Theorem Proving".

Model

The model is fully fine-tuned from Qwen2.5-Math-7B using segment-level supervision constructed from Lean 4 proof trajectories.

Training Data

The model is trained using proof trajectories derived from STP, LeanWorkbook, and NuminaMath-LEAN.
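For context, the trajectories above consist of Lean 4 proofs. The following is a purely illustrative sketch of the kind of proof such trajectories are derived from; this specific theorem is an assumption for illustration and is not claimed to appear in any of the datasets:

```lean
-- Illustrative only: a minimal Lean 4 proof. Segment-level supervision
-- would be constructed from intermediate steps of proofs like this.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```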

Usage

Please refer to the GitHub repository for preprocessing, inference, and evaluation details:

https://github.com/NJUDeepEngine/SEG-ATP

Citation

@article{xu2026rethinking,
  title={Rethinking Supervision Granularity: Segment-Level Learning for LLM-Based Theorem Proving},
  author={Xu, Shuo and Zhang, Jiakun and Lai, Junyu and Cao, Chun and Xu, Jingwei},
  journal={arXiv preprint},
  year={2026}
}
