Gestalt Lab 🇨🇦
Independent Canadian Open Reasoning Research
Gestalt Lab is the research home for the Nemostein, Ornstein, Harmonic, CHIMERA, Acta, Talos, and related model lines by DJLougen. We build, curate, and release open-weight artifacts for practical reasoning systems: multimodal assistants, tool-using agents, local GGUF deployments, and post-training experiments in capability preservation, refusal shaping, and high-signal synthetic data.
Current Focus
- Nemostein — Nemotron/Nemotron-Omni reasoning and tool-use fine-tunes, including GGUF releases for local inference
- Ornstein — multimodal and MoE reasoning models (Qwen/Gemma-derived) optimized for local deployment and agentic workflows
- CHIMERA — reasoning-oriented models and datasets emphasizing self-correction, chain-of-thought supervision, and agent traces
- SABER / RYS — post-training experiments around capability preservation, refusal boundaries, KLD drift, and model behavior editing
- Acta / Talos — curated agentic tool-use and coding-assistant traces for SFT and evaluation
- Local inference — GGUF, quantized, and deployment-friendly builds for llama.cpp and MLX workflows
Featured Model Lines
Nemostein
A Nemotron 3 Nano / Nemotron Omni line focused on chat reasoning, Hermes-style tool calls, and local deployment.
Nemostein-3-Nano-Omni-30b-a3b — Nemotron 3 Nano Omni lineage fine-tune for reasoning and tool-use chat
- BF16 / Transformers · GGUF
- GGUF variants: Q8_0, Q6_K, Q4_K_M, Q2_K
- Follow-on Acta Hermes tool-call SFT and SABER/KLD tuning are in progress.
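As a rough guide to which quant fits a given machine, GGUF file size scales with bits per weight. A minimal sketch, assuming approximate llama.cpp bits-per-weight averages for each scheme (these are ballpark figures, not exact values for any specific release):

```python
# Back-of-envelope on-disk size estimates for GGUF quantizations of a
# ~30B-parameter model. The bits-per-weight values are approximate
# averages for each llama.cpp quantization scheme (an assumption, not
# measured from any released file).
PARAMS = 30e9
BPW = {"Q8_0": 8.5, "Q6_K": 6.56, "Q4_K_M": 4.85, "Q2_K": 2.96}

sizes_gib = {name: PARAMS * bpw / 8 / 2**30 for name, bpw in BPW.items()}
for name, gib in sizes_gib.items():
    print(f"{name}: ~{gib:.1f} GiB")
```

The pattern generalizes: halving bits per weight roughly halves the download, at the cost of quantization error that grows steeply below ~4 bits.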
Ornstein
A multimodal and MoE reasoning line at 27B–35B scales, shipped in formats for common local-inference stacks.
Ornstein-3.6-27B — multimodal base (Qwen 3.6 27B)
Ornstein-Hermes-3.6-27b — Hermes-format function-calling fine-tune (multimodal, tool-use)
Ornstein3.6-35B-A3B — text-only Mixture-of-Experts (34.7B total, ~3B active)
Ornstein-27B-v2 — earlier 27B revision
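Hermes-format function calling, as used by the Hermes fine-tunes above, wraps each tool invocation in `<tool_call>` tags containing a JSON object with `name` and `arguments` keys. A minimal parsing sketch; the reply text and the `get_weather` tool are illustrative stand-ins, not part of any release:

```python
import json
import re

# Hypothetical assistant reply in the Hermes function-calling format:
# tool invocations are emitted inside <tool_call> tags as JSON objects
# with "name" and "arguments" keys.
reply = (
    "I'll look that up.\n"
    "<tool_call>\n"
    '{"name": "get_weather", "arguments": {"city": "Ottawa"}}\n'
    "</tool_call>"
)

def extract_tool_calls(text: str) -> list[dict]:
    """Pull every <tool_call> JSON payload out of a model reply."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(m) for m in pattern.findall(text)]

calls = extract_tool_calls(reply)
print(calls)
```

An agent runtime would dispatch each parsed call to the named tool and return the result to the model in a follow-up turn.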
CHIMERA
Coming soon — reasoning-focused models with explicit self-correction and agent trace supervision.
SABER / RYS
Experimental post-training branches studying refusal boundaries and capability-preserving edits.
Code & Agent Infrastructure
- SABER — refusal-boundary and KLD-drift tuning toolkit for capability-preserving behavior edits
- RBForge / Ornstein ToolForge — runtime tool creation for Ornstein, Hermes, and SABER agents, with forged tools persisted into RBMEM memory
- Rust-Brain — Rust rbmem CLI and structured memory format for timestamp-protected sections, hierarchy, graph relations, Hermes integration, and Markdown sync
- MPKnet — bio-inspired CNN modeling the M/P/K pathways of the lateral geniculate nucleus; competitive accuracy with 52× fewer parameters than ResNet18, designed for edge/embedded deployment
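The 52× parameter claim for MPKnet can be sanity-checked against the commonly cited ~11.7M-parameter count of torchvision's ResNet-18 (MPKnet's exact parameter count is not stated on this card, so the result below is an estimate):

```python
# Back-of-envelope check of the "52x fewer parameters than ResNet18"
# claim. The ResNet-18 count is the commonly cited ~11.7M figure
# (torchvision's resnet18 is ~11.69M); the MPKnet size is inferred,
# not published here.
RESNET18_PARAMS = 11_700_000
mpknet_params = RESNET18_PARAMS / 52
print(f"MPKnet is on the order of ~{mpknet_params / 1e3:.0f}K parameters")
```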
Featured Datasets
- Ornstein Curated 100K — curriculum-sorted multi-domain reasoning data
- Hermes Agent Traces Filtered — quality-filtered agent reasoning traces
- Harmonic Reasoning v1 — compact synthetic reasoning data for math, code, and self-correction
- WittgenSite — prompt consistency benchmark for AI coding agents
- Acta — curated agentic tool-use conversations, including Hermes-style tool-call traces
- Talos Scenarios — agentic task scenarios for synthetic trace generation
Principles
- Ship open artifacts that people can inspect, run, and adapt
- Pair model releases with practical local inference formats (GGUF, MLX, etc.)
- Treat datasets as first-class research objects, not just training fuel
- Explore behavior editing while measuring capability drift and documenting limitations
- Build for researchers, tinkerers, and builders who want systems they can actually run locally
Contact
Open a discussion on any repository, or reach out to DJLougen directly.