GPT-OSS-20B STEM Reasoning (Merged)
This is the full merged model (LoRA weights merged into the base model) for standalone use.
Author: Khadim Hussain
For the LoRA adapter version (smaller download), see: khadim-hussain/gpt-oss-20b-stem-reasoning
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
"khadim-hussain/gpt-oss-20b-stem-reasoning-merged",
torch_dtype="auto",
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
"khadim-hussain/gpt-oss-20b-stem-reasoning-merged",
trust_remote_code=True,
)
# GPT-OSS uses Harmony format
prompt = """<|start|>system<|message|>You are a helpful assistant trained by OpenAI.
Reasoning: medium
# Valid channels: analysis, commentary, final. Channel must be included for every message.<|end|><|start|>user<|message|>What is DNA?<|end|><|start|>assistant"""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
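The Harmony prompt above can also be assembled programmatically instead of hard-coding the string. A minimal sketch; the build_harmony_prompt helper is illustrative and not part of this repository:

```python
def build_harmony_prompt(user_message, reasoning="medium"):
    """Assemble a Harmony-format prompt for GPT-OSS (illustrative helper)."""
    system = (
        "<|start|>system<|message|>You are a helpful assistant trained by OpenAI.\n"
        f"Reasoning: {reasoning}\n"
        "# Valid channels: analysis, commentary, final. "
        "Channel must be included for every message.<|end|>"
    )
    user = f"<|start|>user<|message|>{user_message}<|end|>"
    # Leave the assistant turn open so the model continues from here.
    return system + user + "<|start|>assistant"

prompt = build_harmony_prompt("What is DNA?")
```

The returned string can be passed to the tokenizer exactly as in the snippet above.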
Model Details
| Property | Value |
|---|---|
| Base Model | openai/gpt-oss-20b |
| Type | Merged (LoRA merged into base weights) |
| Size | ~41GB (bf16) |
| Parameters | 21B total, 3.6B active (MoE) |
| Format | OpenAI Harmony |
| Training | QLoRA fine-tuning on STEM Q&A dataset |
Harmony Response Format
GPT-OSS uses the OpenAI Harmony format with channels:
- <|channel|>analysis - chain-of-thought reasoning (thinking)
- <|channel|>final - final user-facing response
Example output:
<|start|>assistant<|channel|>analysis<|message|>DNA stands for deoxyribonucleic acid. It is a molecule...<|end|>
<|start|>assistant<|channel|>final<|message|>DNA is a molecule that carries genetic instructions used in growth, development, and reproduction.<|return|>
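Because the model emits both channels, downstream code usually wants only the final message. A hedged parsing sketch; extract_final_message is a hypothetical helper, and the exact stop token may vary between generations:

```python
import re

def extract_final_message(decoded):
    """Pull the user-facing text from the Harmony 'final' channel (illustrative)."""
    # The final channel typically ends with <|return|>, sometimes <|end|>.
    match = re.search(
        r"<\|channel\|>final<\|message\|>(.*?)(?:<\|return\|>|<\|end\|>|$)",
        decoded,
        re.DOTALL,
    )
    return match.group(1).strip() if match else None

sample = (
    "<|start|>assistant<|channel|>analysis<|message|>DNA stands for "
    "deoxyribonucleic acid. It is a molecule...<|end|>"
    "<|start|>assistant<|channel|>final<|message|>DNA is a molecule that "
    "carries genetic instructions.<|return|>"
)
print(extract_final_message(sample))
```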
Training Metrics
| Metric | Value |
|---|---|
| Train Loss | 1.087 |
| Eval Loss | 0.837 |
| Training Examples | 4,260 |
| Evaluation Examples | 474 |
| Epochs | 1 |
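For context, a cross-entropy loss maps to perplexity via exp(loss); a quick check of the numbers in the table above:

```python
import math

train_loss = 1.087  # from the training metrics table
eval_loss = 0.837

train_ppl = math.exp(train_loss)  # perplexity = e^loss
eval_ppl = math.exp(eval_loss)

print(f"train perplexity: {train_ppl:.2f}")  # ~2.97
print(f"eval perplexity:  {eval_ppl:.2f}")   # ~2.31
```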
GGUF Version
A GGUF quantized version (Q8_0, ~21GB) is available for use with Ollama/llama.cpp:
# Using Ollama
ollama run khadim-hussain/gpt-oss-20b-stem-reasoning
See: khadim-hussain/gpt-oss-20b-stem-reasoning-GGUF
Acknowledgments
- OpenAI - GPT-OSS-20B base model
- Unsloth - Fast fine-tuning framework
- Hugging Face - TRL, PEFT, Transformers
- llama.cpp - GGUF conversion tools
Citation
If you use this model, please cite:
@misc{hussain2026gptoss-stem,
author = {Hussain, Khadim},
title = {GPT-OSS-20B STEM Reasoning: Fine-tuned for Science Q&A with Chain-of-Thought},
year = {2026},
publisher = {Hugging Face},
url = {https://huggingface.co/khadim-hussain/gpt-oss-20b-stem-reasoning-merged}
}
Also cite the original GPT-OSS model:
@misc{openai2025gptoss,
author = {OpenAI},
title = {GPT-OSS: Open-Weight Language Models},
year = {2025},
url = {https://github.com/openai/gpt-oss}
}
License
Apache 2.0 (inherited from GPT-OSS)