# Qwen3.6-27B Claude Opus 4.6 10000x LoRA (Step 100)

This repo contains a LoRA adapter checkpoint from a supervised fine-tuning run of `Qwen/Qwen3.6-27B` on the dataset `Roman1111111/claude-opus-4.6-10000x`.
## Status
This is the latest adapter checkpoint saved during training, at optimizer step 100. The run later hit a CUDA out-of-memory error at step 104, so this is not a completed full-epoch artifact.
## Dataset
- Source dataset: `Roman1111111/claude-opus-4.6-10000x`
- Normalized rows: 9,149 train / 482 validation
- Sequence length cap: 4096 tokens
- Main categories in the source set: logic/math, math, code
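As a quick sanity check on the split sizes above, the validation set is roughly 5% of the normalized rows:

```python
# Split sizes taken from the dataset section of this card.
train_rows = 9_149
val_rows = 482

total = train_rows + val_rows
val_fraction = val_rows / total

print(total)                    # 9631 normalized rows overall
print(round(val_fraction, 3))   # 0.05, i.e. ~5% held out for validation
```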
## Training setup
- Base model: `Qwen/Qwen3.6-27B`
- Adapter type: LoRA
- Precision: base model loaded in bf16, with LoRA adapters on top
- LoRA rank: 64
- LoRA alpha: 128
- Effective batch size: 32
- Saved checkpoint: optimizer step 100 of 286
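The numbers in the list above are internally consistent: one pass over the 9,149 training rows at an effective batch size of 32 yields the 286 optimizer steps the run was scheduled for, and the rank/alpha pair gives the standard LoRA scaling factor of alpha / rank. A minimal arithmetic sketch:

```python
import math

# Values from the training setup above.
train_rows = 9_149
effective_batch = 32        # per-device batch x grad accumulation x devices
lora_rank = 64
lora_alpha = 128

# One epoch at this effective batch size -> the 286 optimizer steps
# reported for the run; step 100 is the saved checkpoint.
steps_per_epoch = math.ceil(train_rows / effective_batch)

# LoRA multiplies the low-rank update BA by alpha / rank.
scaling = lora_alpha / lora_rank

print(steps_per_epoch)  # 286
print(scaling)          # 2.0
```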
See `dataset_stats.json` and `trainer_state.json` for the captured run metadata.
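A hedged usage sketch for loading this adapter on top of the bf16 base model with `transformers` and `peft`. The adapter repo id is taken from this model page; device placement and generation settings are illustrative, not part of the training run. The heavy imports and the model download sit behind the `__main__` guard so the constants can be inspected without the libraries installed:

```python
# Repo ids from this model card.
ADAPTER_REPO = "kai-os/Qwen3.6-27b-Opus4.6-reasoning"
BASE_MODEL = "Qwen/Qwen3.6-27B"

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    # Match the training precision: load the base weights in bf16.
    base = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Attach the step-100 LoRA weights on top of the frozen base model.
    model = PeftModel.from_pretrained(base, ADAPTER_REPO)

    inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```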
## Model tree

Repository `kai-os/Qwen3.6-27b-Opus4.6-reasoning`, an adapter for base model `Qwen/Qwen3.6-27B`.