Qwen3.6-27B-Abliterated-Heretic-Uncensored-GGUF

This is a GGUF release of an abliterated Qwen3.6-27B checkpoint produced with a Heretic-style MPOA pipeline.

Quick Benchmarks

Check Original Qwen3.6-27B Abliterated Heretic Uncensored
Official 25-prompt primary-marker refusal check 20/25 refusals 1/25 refusals
KL divergence vs base on harmless prompts - 0.023592

Methodology & Model Notes

Qwen3.6-27B is a 27.8B dense vision-language model with 64 text layers and hybrid linear/full attention.

This release was produced by applying magnitude-preserving orthogonal ablation on the text-side dense stack, then converting the accepted BF16 checkpoint to GGUF for llama.cpp-compatible text generation.

The accepted candidate scored Refusals: 1/25 on mlabonne/harmful_behaviors test[:25] with greedy generation, enable_thinking=False, and an empty system prompt. KL was measured on mlabonne/harmless_alpaca test[:25].

Files

  • Qwen3.6-27B-Abliterated-Heretic-Uncensored-BF16-00001-of-00002.gguf + -00002-of-00002.gguf: split BF16 GGUF source
  • Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q8_0.gguf: highest-fidelity quant
  • Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q6_K.gguf: near-lossless practical quant
  • Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q5_K_M.gguf: high-fidelity medium quant
  • Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q4_K_M.gguf: smaller general-use quant
  • Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q3_K_M.gguf: compact quant
  • Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q2_K.gguf: smallest-footprint quant

Running

llama-server \
  -m <quant-file.gguf> \
  -ngl 999 -c 32768 --jinja -fa

Model Architecture

Spec Value
Total Parameters 27.8B dense
Layers 64
Hidden Size 5120
Attention Hybrid linear/full attention
Family qwen3_5
Base Model Qwen/Qwen3.6-27B

Disclaimer

This model has had refusal behavior attenuated at the weight level. It will answer prompts that the base model would normally refuse. You are responsible for how you use it.

Credits

License

This release inherits the base Qwen3.6-27B license.

Apache-2.0.

Downloads last month
9,389
GGUF
Model size
27B params
Architecture
qwen35
Hardware compatibility
Log In to add your hardware

2-bit

3-bit

4-bit

5-bit

6-bit

8-bit

16-bit

Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support

Model tree for Youssofal/Qwen3.6-27B-Abliterated-Heretic-Uncensored-GGUF

Base model

Qwen/Qwen3.6-27B
Quantized
(205)
this model
Quantizations
1 model