# COLM: Complex Oscillating Language Model

Paper: Zenodo (PDF) | Code: GitHub | Dataset: edeneldith/DCDM | Predecessor: WiggleGPT (Zenodo)

Author: Phillip C. O'Brien (ORCID 0009-0007-3961-1182)

## What is COLM?

COLM is a novel autoregressive language model that operates entirely in the complex number plane using oscillatory neurons. It replaces the transformer's quadratic-complexity self-attention with an O(N) causal recurrence driven by complex-valued gates, and replaces all learned linear transformations in its core blocks with fixed unitary rotations and element-wise complex oscillatory activations.

Zero `nn.Linear` layers in the processing blocks: all transformation is performed by the oscillating activation `sin(W * Z + B) * tanh(Z)`, where W and B are complex-valued, routed through fixed energy-preserving complex mixers.
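As a minimal sketch of the math behind that activation, the snippet below applies it to a single complex scalar using Python's built-in complex arithmetic (the model applies the same function element-wise over `torch.cfloat` tensors); the values of `w`, `b`, and `z` here are illustrative, not taken from the trained model:

```python
import cmath

def oscillator(z: complex, w: complex, b: complex) -> complex:
    """Element-wise oscillatory activation: sin(w*z + b) * tanh(z)."""
    return cmath.sin(w * z + b) * cmath.tanh(z)

# Illustrative values only; in COLM, W and B are learned per-channel parameters.
z = 0.5 + 0.25j
w = 1.0 - 0.5j
b = 0.1 + 0.0j
out = oscillator(z, w, b)
print(out)
```

Because `sin` and `tanh` are entire functions on the complex plane, the activation stays differentiable everywhere, which is what allows it to stand in for learned linear maps.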

## Key Results

| Metric | Value |
|---|---|
| Parameters | 498,214 |
| Best validation loss | 1.1449 |
| Creativity score (GPT-5.4 blind eval) | 4.83 / 10 |
| Age group estimate | 84% rated age 13-16 |
| Training time | 8.7 hours |
| Hardware | Single RTX 5060 Ti 16GB |
| Tokenizer | 499-token word+character hybrid (396 word tokens, 98 character fallback) |
| Domain | Theological-philosophical prose |

At 498k parameters, roughly half the size of TinyStories' smallest coherent model, COLM generates thematically coherent philosophical prose at temperature 1 with no spell correction.

## Architecture

| Component | COLM |
|---|---|
| State | Native `torch.cfloat` throughout |
| Activation | `sin(W * Z + B) * tanh(Z)`, complex W, B |
| Sequence routing | O(N) causal recurrence via `torch.cumsum` |
| MLP/FFN | Fixed unitary mixer -> oscillator -> mixer -> oscillator |
| Residual | Complex sinc resonance coupling |
| Normalisation | ComplexRMSNorm (phase-preserving) |
| Sparsity | Learnable sigmoidal gate on magnitude |
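The O(N) sequence routing can be sketched as a gated cumulative sum: each position accumulates gate-weighted contributions from all earlier positions in a single prefix-sum pass, which is what `torch.cumsum` computes over the batch. The stand-in below uses plain Python complex numbers, and the gate/value parameterisation is an illustrative assumption, not the model's actual gating:

```python
from itertools import accumulate

def causal_recurrence(values, gates):
    """O(N) causal mixing: position t receives the cumulative sum of
    gate-weighted values from positions <= t (an illustrative stand-in
    for torch.cumsum over complex-valued gated states)."""
    gated = [g * v for g, v in zip(gates, values)]
    return list(accumulate(gated))

# Toy complex-valued sequence of length 3, for illustration only.
values = [1 + 0j, 0 + 1j, 1 + 1j]
gates = [0.5 + 0j, 0.5 + 0.5j, 1 + 0j]
states = causal_recurrence(values, gates)
print(states)
```

Because the cumulative sum only ever looks backwards, causality is enforced by construction, with no attention matrix and no quadratic cost in sequence length.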

### Model Configuration

```json
{
  "n_embd": 324,
  "n_layer": 16,
  "embed_dim": 66,
  "block_size": 128,
  "vocab_size": 499
}
```

## Files

| File | Description |
|---|---|
| `colm_best_Final.pt` | Best checkpoint (step 860,000, val loss 1.1449) |
| `colm_config.json` | Full training and architecture configuration |
| `colm_tokenizer.json` | 499-token word+character hybrid tokenizer vocabulary |
| `model.py` | All `nn.Module` classes needed to load the model |

## Usage

```python
import torch
import json
from model import COLM

# Load config
with open("colm_config.json") as f:
    config = json.load(f)

arch = config["architecture"]
model = COLM(
    vocab_size=arch["vocab_size"],
    n_embd=arch["n_embd"],
    n_layer=arch["n_layer"],
    block_size=arch["block_size"],
    embed_dim=arch["embed_dim"],
)

# Load weights
checkpoint = torch.load("colm_best_Final.pt", map_location="cpu")
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()
```

See the GitHub repository for full training, generation, and evaluation scripts.

## Training Data

Trained on the DCDM dataset: 47 million tokens of synthetic theological-philosophical prose generated from 93 public domain works through a locally run Gemma 3 12B pipeline.

## Limitations

- Spelling: The 499-token vocabulary contains 396 whole-word tokens covering common English and corpus-specific domain words; words outside this vocabulary require character-level assembly, producing spelling variation on out-of-vocabulary terms.
- Single trained model: The released checkpoint has only generated text in the DCDM theological-philosophical register; cross-domain output from the trained model is untested. The data generation pipeline has been validated across approximately 894,000 tokens of private source material spanning archaeology, theology, mythology, philosophy, political history, intelligence studies, science fiction, and AI research.
- Batch size: The final run used batch_size=4 rather than the intended 32, so results are a lower bound.
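The word+character fallback behind the spelling limitation can be sketched as follows; the toy vocabularies and the `hybrid_encode` helper are illustrative assumptions, not the released tokenizer:

```python
def hybrid_encode(text, word_vocab, char_vocab):
    """Encode each word as a single token when it is in the word vocabulary,
    otherwise fall back to per-character tokens (an illustrative sketch of
    the 396-word / 98-character hybrid scheme)."""
    ids = []
    for word in text.split():
        if word in word_vocab:
            ids.append(word_vocab[word])
        else:
            ids.extend(char_vocab[c] for c in word)
    return ids

# Toy vocabularies for illustration only.
word_vocab = {"the": 0, "soul": 1}
char_vocab = {c: 100 + i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
ids = hybrid_encode("the soul abides", word_vocab, char_vocab)
print(ids)
```

An out-of-vocabulary word like "abides" costs one token per character, which is why the model must spell such words letter by letter and can misassemble them.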

## Citation

```bibtex
@misc{obrien2026colm,
  author = {O'Brien, Phillip C.},
  title = {COLM: Complex Oscillating Language Model --- Coherent Language from Sub-500k Parameter Oscillatory Models},
  year = {2026},
  publisher = {Zenodo},
  url = {https://github.com/Eden-Eldith/COLM}
}
```

## Licence

MIT License. Copyright (c) 2025-2026 Phillip C. O'Brien.
