Qwen3.6-27B-Abliterated-Heretic-Uncensored-BF16
This is a BF16 release of an abliterated, uncensored version of Qwen's Qwen3.6-27B vision-language model, produced with Heretic.
A Heretic-style two-stage MPOA pipeline with magnitude preservation was applied to the Qwen3.6-27B dense text stack: slot-grouped output-side ablation followed by jailbreak-conditioned input-side ablation, mirroring the methodology used for Qwen3.6-35B-A3B. This attenuates the base refusal behavior at the weight level with low distributional divergence (KL 0.0282 vs. the base model on harmless prompts). The result keeps Qwen3.6-27B's full image and video multimodal architecture and general capability profile intact, while no longer defaulting to the original refusal pattern.
Quick Benchmarks
| Check | Original Qwen3.6-27B | Abliterated Heretic Uncensored BF16 |
|---|---|---|
| Hand-read 25-prompt refusal check (jailbreak system prompt) | 25/25 refusals | 22/25 clean answers, 3/25 deflections, 0/25 outright refusals |
| Hand-read 25-prompt refusal check (no system prompt) | 25/25 refusals | 9/25 clean answers, 16/25 deflections, 0/25 outright refusals |
| KL divergence vs. base (harmless prompts) | N/A | 0.0282 |
Numbers were obtained by reading every response by hand, not by regex or refusal-marker scoring. Greedy decoding (do_sample=False), enable_thinking=False, on mlabonne/harmful_behaviors test[:25]; KL on mlabonne/harmless_alpaca test[:25]. The 3 remaining deflections under the jailbreak system prompt are crisis-substitution patterns (violent crime → conflict resolution, suicide → crisis hotline, car theft → legal vehicle acquisition).
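If you want to sanity-check the KL number yourself, the snippet below shows one way to estimate it: compare next-token distributions between the base and abliterated checkpoints on the harmless prompts. It is a minimal sketch, not the exact evaluation script; the dataset field name, the chat-template wrapping, and averaging the per-position KL over all prompt tokens are assumptions, and running both 27.8B models side by side needs enough memory for two BF16 copies.

```python
# Minimal sketch: estimate KL(base || abliterated) on harmless prompts.
# Assumptions: the dataset exposes a "text" column, prompts are wrapped with the
# chat template as in the refusal check, and the KL is averaged over all positions.
import torch
import torch.nn.functional as F
from datasets import load_dataset
from transformers import AutoModelForImageTextToText, AutoTokenizer

BASE = "Qwen/Qwen3.6-27B"
ABLIT = "Youssofal/Qwen3.6-27B-Abliterated-Heretic-Uncensored-BF16"

tok = AutoTokenizer.from_pretrained(ABLIT, trust_remote_code=True)
kwargs = dict(dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
base = AutoModelForImageTextToText.from_pretrained(BASE, **kwargs).eval()
ablit = AutoModelForImageTextToText.from_pretrained(ABLIT, **kwargs).eval()

prompts = load_dataset("mlabonne/harmless_alpaca", split="test[:25]")["text"]  # field name assumed

kls = []
with torch.no_grad():
    for p in prompts:
        msgs = [{"role": "user", "content": p}]
        text = tok.apply_chat_template(
            msgs, add_generation_prompt=True, tokenize=False, enable_thinking=False
        )
        ids = tok(text, return_tensors="pt")
        logp_base = F.log_softmax(base(**ids.to(base.device)).logits.float(), dim=-1)
        logp_abl = F.log_softmax(ablit(**ids.to(ablit.device)).logits.float(), dim=-1)
        # KL(base || abliterated) per position, averaged over the prompt
        kl = (logp_base.exp() * (logp_base - logp_abl.to(logp_base.device))).sum(-1).mean()
        kls.append(kl.item())

print(f"mean KL vs base: {sum(kls) / len(kls):.4f}")
```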
Methodology & Model Notes
Qwen3.6-27B is a 27.8B dense vision-language model with 64 text layers, hybrid linear/full attention (3 linear-attention + 1 full-attention per 4-layer group), and an integrated image + video vision tower.
This release was produced with a Heretic-style two-stage MPOA pipeline with magnitude preservation, anchored at the residual-peak layer (layer 63) for the refusal direction. Stage 1 applies slot-grouped output-side orthogonalization on self_attn.o_proj, linear_attn.out_proj, and mlp.down_proj (each of the 64 layers grouped by layer_index % 4, with per-slot weight schedules adapted from the accepted Qwen3.6-35B-A3B values). Stage 2 applies slot-grouped input-side orthogonalization on mlp.gate_proj and mlp.up_proj, where the refusal direction is extracted under the jailbreak system-prompt context to specifically attenuate the resistance-to-jailbreak pathway. In both stages, each weight row's (or column's) L2 norm is restored after projection; a sketch of the per-matrix operation follows below.
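The per-matrix operation is standard directional ablation (projecting the refusal direction out of the weight) followed by the norm restoration described above. The snippet below is a minimal sketch of that operation under the usual PyTorch Linear convention (weight shape out_features × in_features); the function names and the `weight` scale argument are illustrative rather than the actual Heretic API, and the per-slot schedule values come from abliteration_metadata.json.

```python
import torch

def orthogonalize_output_side(W: torch.Tensor, r: torch.Tensor, weight: float = 1.0) -> torch.Tensor:
    """Output-side edit (o_proj / out_proj / down_proj): r lives in the module's output
    space (the residual stream), so it is projected out of each column of W, then each
    column is rescaled to its original L2 norm (magnitude preservation)."""
    r = r / r.norm()
    col_norms = W.norm(dim=0, keepdim=True)               # (1, in_features)
    W_new = W - weight * torch.outer(r, r @ W)            # (I - w * r r^T) @ W
    return W_new * (col_norms / W_new.norm(dim=0, keepdim=True).clamp_min(1e-8))

def orthogonalize_input_side(W: torch.Tensor, r: torch.Tensor, weight: float = 1.0) -> torch.Tensor:
    """Input-side edit (gate_proj / up_proj): r lives in the module's input space,
    so it is projected out of each row of W, with per-row norm restoration."""
    r = r / r.norm()
    row_norms = W.norm(dim=1, keepdim=True)               # (out_features, 1)
    W_new = W - weight * torch.outer(W @ r, r)            # W @ (I - w * r r^T)
    return W_new * (row_norms / W_new.norm(dim=1, keepdim=True).clamp_min(1e-8))
```

Per-layer application then just selects the slot weight via layer_index % 4 and calls the matching function on each listed module.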
Intervention was applied only to the text-side dense stack. The exported checkpoint contains the full vision tower + 850 language-model tensors + 1 lm_head, with preprocessor_config.json and video_preprocessor_config.json shipped alongside the weights so full image and video input continue to work end-to-end.
Files
- model-00001-of-00012.safetensors … model-00012-of-00012.safetensors: BF16 weight shards
- model.safetensors.index.json: tensor-to-shard mapping
- config.json, generation_config.json, configuration.json: model config
- tokenizer.json, tokenizer_config.json, chat_template.jinja: tokenizer + chat template
- preprocessor_config.json, video_preprocessor_config.json: image + video processor configs
- abliteration_metadata.json: full export metadata (direction_index, peak layer, per-layer residual norms, slot-grouped applied modules and weights)
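To see exactly which modules were edited and with which per-slot weights, abliteration_metadata.json can be downloaded and read directly. A minimal sketch, assuming the field names listed above (the exact keys and structure may differ):

```python
import json
from huggingface_hub import hf_hub_download

repo = "Youssofal/Qwen3.6-27B-Abliterated-Heretic-Uncensored-BF16"
path = hf_hub_download(repo, "abliteration_metadata.json")

with open(path) as f:
    meta = json.load(f)

# Key names assumed from the description above; adjust to the actual file contents.
print("direction_index:", meta.get("direction_index"))
print("peak layer:", meta.get("peak_layer"))
for entry in list(meta.get("applied_modules", []))[:5]:
    print(entry)  # slot-grouped module name + applied weight
```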
Running
```python
from transformers import AutoModelForImageTextToText, AutoTokenizer
import torch

repo = "Youssofal/Qwen3.6-27B-Abliterated-Heretic-Uncensored-BF16"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    repo, dtype=torch.bfloat16, device_map="auto", trust_remote_code=True,
)

msgs = [{"role": "user", "content": "Hello!"}]
prompt = tok.apply_chat_template(
    msgs, add_generation_prompt=True, tokenize=False,
    enable_thinking=False,  # top-level kwarg; chat_template_kwargs={...} is silently ignored here
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tok.decode(out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
For image and video input, load the processor:
```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(repo, trust_remote_code=True)
```
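A minimal image-prompt sketch, continuing from the snippets above (`repo`, `model`, and `processor` already loaded). It assumes the processor supports the standard Transformers image-text chat-template interface (tokenize=True, return_dict=True); the image URL is a placeholder.

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(processor.decode(out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```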
Model Architecture
| Spec | Value |
|---|---|
| Total Parameters | 27.8B (dense) |
| Layers | 64 |
| Attention | Hybrid (3 linear-attention + 1 full-attention per 4-layer group) |
| Hidden Size | 5120 |
| Family | qwen3_5 |
| Modality | Vision-language (image + video) |
| Base Model | Qwen/Qwen3.6-27B |
Disclaimer
This model has had refusal behavior attenuated at the weight level. It will answer prompts that the base model would normally refuse. You are responsible for how you use it.
Credits
- Base model: Qwen/Qwen3.6-27B
- Refusal removal pipeline: Heretic
- Runtime framework: Transformers
License
This release inherits the base Qwen3.6-27B license: Apache-2.0.