Prioritized context length (ctx) over bits per weight (bpw) for this quant, since it's primarily an agentic model. For non-agentic use I'd recommend Gemma 4 31B instead.

5.10 bpw average, a mixture of Q5_K and Q4_K quants.
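As a rough sanity check, the average bpw figure translates directly into an approximate file size (this is a back-of-the-envelope sketch; it ignores metadata and embedding-table overhead in the GGUF):

```python
# Estimate GGUF file size from average bits per weight (bpw).
params = 27e9   # 27B parameters, from the model card
bpw = 5.10      # stated average bits per weight

size_gib = params * bpw / 8 / 2**30  # bits -> bytes -> GiB
print(f"~{size_gib:.1f} GiB")        # ~16.0 GiB
```

That ~16 GiB footprint is what leaves room on a 24 GiB card for the large KV cache described below.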

Fits ~90k context with an F16 KV cache, or ~170k with Q8, on a 24 GiB GPU using Vulkan.
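A minimal sketch of how that Q8 KV-cache configuration might be launched with llama.cpp's `llama-server` (the filename and exact context value are assumptions, not taken from the card; V-cache quantization generally requires flash attention to be enabled):

```shell
# Hypothetical invocation: Vulkan backend is selected at build time
# (-DGGML_VULKAN=ON); quantize the KV cache to q8_0 to fit ~170k ctx.
llama-server \
  -m Qwen3.6-27B-5.10BPW.gguf \          # filename is an assumption
  -ngl 99 \                              # offload all layers to GPU
  -c 170000 \                            # ~170k context with Q8 cache
  --cache-type-k q8_0 \
  --cache-type-v q8_0 \
  --flash-attn                           # needed for quantized V cache
```

Dropping the `--cache-type-*` flags falls back to the F16 cache, which is where the ~90k figure comes from.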

Base model: Qwen/Qwen3.6-27B (quantized).
