Model Overview

Description:

Qwen3-Nemotron-235B-A22B-GenRM-2603 is a Generative Reward Model (GenRM) that uses Qwen3-235B-A22B-Thinking-2507 as its foundation and is fine-tuned to evaluate the quality of assistant responses.

Given a conversation history, a new user request, and two candidate assistant responses, it produces an individual helpfulness score for each response and a ranking score.

This GenRM is used in the Reinforcement Learning from Human Feedback (RLHF) training of NVIDIA-Nemotron-3-Super-120B-A12B-BF16.

For training details, see the Nemotron 3 Super technical report (coming soon).

This model is ready for commercial/non-commercial use.

License/Terms of Use:

The model is licensed with Apache 2.0.

Deployment Geography

Global

Release Date:

HuggingFace 2026-03-11 via https://huggingface.co/nvidia/Qwen3-Nemotron-235B-A22B-GenRM-2603

References:

  • HelpSteer3-Preference: https://arxiv.org/abs/2505.11475
  • RLBFF: https://arxiv.org/abs/2509.21319

Model Architecture:

Architecture Type: Transformer
Network Architecture: Qwen3

We developed this model using Qwen/Qwen3-235B-A22B-Thinking-2507 as its foundation. This model is a mixture-of-experts model with 235 billion total parameters, of which approximately 22 billion are active per token.

Input:

Input Type(s): Text
Input Format: String
Input Parameters: One-Dimensional (1D)
Other Properties Related to Input: Max of 128k tokens

Output:

Output Type(s): Text
Output Format: String
Output Parameters: One-Dimensional (1D)

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s): vLLM

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Hopper

Supported Operating System(s): Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Quick Start

The model shares the same architecture as Qwen3-235B-A22B-Thinking-2507. It can be served with vLLM.

python3 -m vllm.entrypoints.openai.api_server \
  --model "nvidia/Qwen3-Nemotron-235B-A22B-GenRM-2603" \
  --trust-remote-code \
  --seed=1 \
  --host="0.0.0.0" \
  --port=5000 \
  --served-model-name "nvidia/Qwen3-Nemotron-235B-A22B-GenRM-2603" \
  --tensor-parallel-size=8 \
  --max-model-len=40000 \
  --gpu-memory-utilization=0.95

You can now query the model; here is an example:

from openai import OpenAI
client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="dummy")

msg = [
  {"role": "user", "content": "What is 1+1?"}, 
  {"role": "assistant", "content": "1+1=2"}, 
  {"role": "user", "content": "What about 1+2?"},
  {"role": "response_1", "content": "1+2=4"},
  {"role": "response_2", "content": "1+2=3"}
]

completion = client.chat.completions.create(
    model="nvidia/Qwen3-Nemotron-235B-A22B-GenRM-2603",
    messages=msg,
    temperature=0.6,
    top_p=0.95,
    max_tokens=16384,
    stream=False
)
output = completion.choices[0].message.content
print(output.split("</think>")[-1].strip())

Note that the conversation history should be presented using the "user" and "assistant" roles, with the last turn being a user turn. The two responses to be judged should use the "response_1" and "response_2" roles.
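A small helper (a hypothetical convenience function, not part of the model's API) can enforce this message layout before sending a request:

```python
def build_genrm_messages(conversation, response_1, response_2):
    """Assemble the message list expected by the GenRM.

    conversation: list of {"role": "user"/"assistant", "content": ...}
    dicts whose last turn must be a user turn. response_1/response_2
    are the two candidate assistant responses to be judged.
    """
    if not conversation or conversation[-1]["role"] != "user":
        raise ValueError("conversation must end with a user turn")
    return conversation + [
        {"role": "response_1", "content": response_1},
        {"role": "response_2", "content": response_2},
    ]

# Build the same request as in the example above.
msgs = build_genrm_messages(
    [
        {"role": "user", "content": "What is 1+1?"},
        {"role": "assistant", "content": "1+1=2"},
        {"role": "user", "content": "What about 1+2?"},
    ],
    "1+2=4",
    "1+2=3",
)
```

The resulting list can be passed directly as the `messages` argument to `client.chat.completions.create`.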

Interpretation of Scores

The individual helpfulness score ranges from 1 to 5, where higher means better.
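When using the helpfulness score as a scalar reward (e.g., in RLHF), one common convention is to rescale it to [0, 1]. This particular rescaling is an illustrative choice, not something prescribed by this model card:

```python
def helpfulness_to_reward(score: int) -> float:
    """Map a 1-5 helpfulness score to a reward in [0, 1]."""
    if not 1 <= score <= 5:
        raise ValueError("helpfulness score must be in [1, 5]")
    # Linear rescaling: 1 -> 0.0, 3 -> 0.5, 5 -> 1.0
    return (score - 1) / 4
```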

The ranking score ranges from 1 to 6, where:

  • 1 = Response 1 is much better than Response 2
  • 2 = Response 1 is better than Response 2
  • 3 = Response 1 is slightly better than Response 2
  • 4 = Response 2 is slightly better than Response 1
  • 5 = Response 2 is better than Response 1
  • 6 = Response 2 is much better than Response 1
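For preference-data construction, the 1-6 ranking score can be folded into a winner plus a preference strength. This sketch follows the scale above; the numeric strength mapping (1 = slightly better, 2 = better, 3 = much better) is an illustrative choice:

```python
def ranking_to_preference(score: int):
    """Convert a 1-6 ranking score to (winner, strength).

    winner: "response_1" or "response_2".
    strength: 1 (slightly better), 2 (better), 3 (much better).
    """
    if not 1 <= score <= 6:
        raise ValueError("ranking score must be in [1, 6]")
    if score <= 3:
        return "response_1", 4 - score  # 1 -> 3, 2 -> 2, 3 -> 1
    return "response_2", score - 3      # 4 -> 1, 5 -> 2, 6 -> 3
```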

Model Version:

v1.0

Training Datasets:

Dataset Name: Subset of Nemotron-Post-Training-v3 containing samples from HelpSteer3, lmarena-ai/arena-human-preference-140k (commercial-friendly models only) and additional chat and safety preference data.

Dataset Link: Nemotron-Post-Training-v3

Data Collection Method

  • [Hybrid: Human, Synthetic]

Labeling Method

  • [Hybrid: Human, Synthetic]

Inference:

Engine: PyTorch
Test Hardware: H100

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety and Security, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Citation

If you find this model useful, please cite the following works:

@misc{wang2025helpsteer3preferenceopenhumanannotatedpreference,
      title={Help{S}teer3-{P}reference: Open Human-Annotated Preference Data across Diverse Tasks and Languages},
      author={Zhilin Wang and Jiaqi Zeng and Olivier Delalleau and Hoo-Chang Shin and Felipe Soares and Alexander Bukharin and Ellie Evans and Yi Dong and Oleksii Kuchaiev},
      year={2025},
      eprint={2505.11475},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.11475}, 
}
@misc{wang2025rlbffbinaryflexiblefeedback,
      title={RLBFF: Binary Flexible Feedback to bridge between Human Feedback & Verifiable Rewards}, 
      author={Zhilin Wang and Jiaqi Zeng and Olivier Delalleau and Ellie Evans and Daniel Egert and Hoo-Chang Shin and Felipe Soares and Yi Dong and Oleksii Kuchaiev},
      year={2025},
      eprint={2509.21319},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.21319}, 
}