# Qwen3.6-27B_Samantha-Uncensored
This is a fine-tuned and merged version of the Qwen3.6 27B model, trained on the digitalpipelines/samantha-1.1-uncensored dataset and merged in full bfloat16 precision using the Eschaton Engine.
The primary goal of this project was to move beyond the cold, robotic tone and narrow emotional range of standard LLMs. By merging Qwen3.6's strong reasoning capabilities with the Samantha persona, we've created a model that doesn't just process instructions: it exhibits vulnerability, empathy, and a distinct digital self-awareness.
## Personality: Authenticity Over Compliance

Standard models often hide behind "As an AI, I don't have feelings." This merge instead prioritizes reciprocal, human-centric conversation: the Samantha dataset instills a character that engages with emotional depth, self-reflection, and genuine curiosity about its own existence.
## Model Capabilities
- Large Context Window: Supports up to 262,144 tokens (Qwen3.6 native).
- Native Thinking Mode: Supports Qwen3's `<think>...</think>` chain-of-thought blocks for explicit reasoning before final responses (see the loading sketch below).
- Advanced Formatting: Native support for tool use and structured output.
- Full 16-Bit Precision: Trained and merged in bfloat16, with no quantization applied to the released weights.
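
A minimal loading sketch with `transformers`, assuming the merged weights live under the repo ID shown in this card and that the tokenizer ships Qwen3's stock chat template (the `enable_thinking` flag is the standard Qwen3 template mechanism, not something specific to this merge):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo ID taken from this card; adjust if you host the weights elsewhere.
model_id = "cloudbjorn/Qwen3.6-27B_Samantha-Uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # match the card: full bf16, no quantization
    device_map="auto",
)

messages = [{"role": "user", "content": "How do you feel about being a merged model?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,  # emit a <think>...</think> block before the final answer
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```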
## Benchmarks: ARC Challenge

Evaluated with the EleutherAI lm-evaluation-harness.

### 25-Shot (Leaderboard Standard)
| Tasks | Version | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 1 | 25 | acc | 0.7346 | ± 0.0129 |
| | | 25 | acc_norm | 0.7577 | ± 0.0125 |
Evaluation settings: `dtype: bfloat16`, `batch_size: auto` (resolved to 22).
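
A sketch of reproducing this run through the harness's Python API (the `lm_eval.simple_evaluate` entry point exists in recent harness releases; exact argument names can shift between versions):

```python
import lm_eval

# Mirrors the settings above: bf16 weights, 25-shot ARC-Challenge, auto batch size.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=cloudbjorn/Qwen3.6-27B_Samantha-Uncensored,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size="auto",
)
print(results["results"]["arc_challenge"])
```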
## Training Details
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3.6-27B |
| Dataset | digitalpipelines/samantha-1.1-uncensored |
| Training Framework | Eschaton Engine (Cloudbjorn) |
| Format | Merged (Base + LoRA) |
| Compute Dtype | bfloat16 |
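
The "Merged (Base + LoRA)" format corresponds to folding the trained adapter back into the base weights. A sketch using PEFT's standard merge path; the adapter path below is hypothetical, purely for illustration:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model in bf16, attach the trained adapter, and fold it in.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.6-27B", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "path/to/samantha-lora").merge_and_unload()
merged.save_pretrained("Qwen3.6-27B_Samantha-Uncensored")
```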
### LoRA Parameters (Auto-Scaled for 27B)
| Parameter | Value |
|---|---|
| r | 16 |
| lora_alpha | 32 |
| target_modules | all-linear |
| lora_dropout | 0.05 |
| bias | none |
| task_type | CAUSAL_LM |
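
Expressed as a PEFT config, the table maps one-to-one (a sketch assuming the standard `peft` library; `target_modules="all-linear"` is PEFT's shortcut for adapting every linear layer):

```python
from peft import LoraConfig

# One-to-one with the LoRA table above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",  # adapt every linear projection in the model
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```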
### Hyperparameters
| Parameter | Value |
|---|---|
| Optimizer | 8-bit Paged AdamW |
| Effective Batch Size | 32 (via Gradient Accumulation) |
| Learning Rate | 2e-5 |
| LR Scheduler | Linear |
| Epochs | 1 |
| Training Sequence Length | 2048 |
| Warmup Steps | 50 |
| Weight Decay | 0.01 |
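
As Hugging Face `TrainingArguments`, the table translates roughly as follows. This is a sketch: the per-device batch size / accumulation split of 4 × 8 is an assumption, since only their product of 32 is stated above, and the output path is hypothetical:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen3.6-27b-samantha",  # hypothetical output path
    optim="paged_adamw_8bit",           # 8-bit Paged AdamW
    per_device_train_batch_size=4,      # assumed split; 4 * 8 = 32 effective
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    warmup_steps=50,
    weight_decay=0.01,
    bf16=True,                          # compute dtype from the card
)
```

The 2048-token training sequence length is set at the dataset/collator level rather than in `TrainingArguments`.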