Instructions to use MediaStreamAI/MOTHER_CORE_V2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers

How to use MediaStreamAI/MOTHER_CORE_V2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MediaStreamAI/MOTHER_CORE_V2", trust_remote_code=True)

# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("MediaStreamAI/MOTHER_CORE_V2", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use MediaStreamAI/MOTHER_CORE_V2 with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MediaStreamAI/MOTHER_CORE_V2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MediaStreamAI/MOTHER_CORE_V2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/MediaStreamAI/MOTHER_CORE_V2
```
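The curl call above can also be issued from Python using only the standard library. This is a minimal sketch assuming the vLLM server is already running on localhost:8000; `build_payload` and `query_mother` are illustrative names, not part of any released code:

```python
import json
import urllib.request

def build_payload(prompt: str) -> dict:
    # Same request fields as the curl example above
    return {
        "model": "MediaStreamAI/MOTHER_CORE_V2",
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }

def query_mother(prompt: str, url: str = "http://localhost:8000/v1/completions") -> str:
    # POST the JSON payload to the OpenAI-compatible completions endpoint
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Completion text lives under choices[0].text in the OpenAI schema
    return body["choices"][0]["text"]

payload = build_payload("Once upon a time,")
```

`query_mother("Once upon a time,")` would return the completion text once the server is up; the same sketch works for the SGLang server below by changing the port to 30000.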
- SGLang
How to use MediaStreamAI/MOTHER_CORE_V2 with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "MediaStreamAI/MOTHER_CORE_V2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MediaStreamAI/MOTHER_CORE_V2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "MediaStreamAI/MOTHER_CORE_V2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MediaStreamAI/MOTHER_CORE_V2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use MediaStreamAI/MOTHER_CORE_V2 with Docker Model Runner:
docker model run hf.co/MediaStreamAI/MOTHER_CORE_V2
MOTHER CORE V2 – chunk 600 (W2.8 cutover base)
Sovereign UK AI built from scratch by MediaStream AI Limited (MSAI).
This is MOTHER CORE BASE – the frozen foundation checkpoint at chunk 600 of the W2.7 → W2.8 training programme. All downstream MOTHER models (DEFENCE, ROBOTICS, LLM, CODE) build on this base.
- Founder & CEO and Lead AI Architect: Christopher Kenna
- Parameters: 6.88B (FP32 source, BF16 weights here)
- Architecture: 48 layers, dim 3072, 24 heads, 6 KV heads (GQA 4:1), RoPE θ=10000, RMS norm, tied embeddings
- Context: 4096 tokens
- Training: From-scratch sovereign UK build – no fine-tuning of external models
- Source SHA256:
0b1ef35ec60af4a7ad0648498de8526cb775a19501dda94dfbda1713e0475b60
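The head geometry implied by the architecture line above can be checked with a couple of lines. This is a minimal sketch using the figures quoted on this card, not values read from the model's config file:

```python
# Architecture figures quoted above (from this card, not from config.json)
n_layers, d_model, n_heads, n_kv_heads = 48, 3072, 24, 6

head_dim = d_model // n_heads      # dimension of each attention head
gqa_ratio = n_heads // n_kv_heads  # query heads sharing each KV head

print(head_dim)   # 128
print(gqa_ratio)  # 4, matching the "GQA 4:1" grouping
```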
Training journey
| Milestone | Eval (105-question harness) |
|---|---|
| Chunk 450 (initial W2.7 baseline) | 47/105 (45%) |
| Chunk 506 (post LR-fix rollback) | 44/105 (42%) |
| Chunk 550 (recovery, LR-capped) | 46/105 (44%) |
| Chunk 600 (BASE freeze) | 49/105 (47%) |
Scope
MOTHER CORE handles: math, science, reasoning, chain-of-thought, UK knowledge, MOTHER identity, tool calling (agents, RAG, memory, workflows), multilingual responses (English, Welsh, Irish, Scottish Gaelic), safety refusals.
MOTHER CORE does NOT handle (separate sister models):
- MOTHER CODE – software engineering, code generation
- MOTHER LLM – long-form creative writing, instruction-tuned content
- MOTHER DEFENCE – defence reasoning and strategy (W3 programme, builds on this BASE)
- MOTHER ROBOTICS – humanoid robot embodiment (W4 programme, builds on this BASE)
Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tok = AutoTokenizer.from_pretrained("MediaStreamAI/MOTHER_CORE_V2")
model = AutoModelForCausalLM.from_pretrained(
    "MediaStreamAI/MOTHER_CORE_V2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Question:\n\nWhat is the capital of Wales?\n\nAnswer:"
inputs = tok(prompt, return_tensors="pt", add_special_tokens=True).to(model.device)

out = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=False,
    repetition_penalty=1.3,
    no_repeat_ngram_size=4,
    pad_token_id=tok.pad_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))
```
Critical inference rules:
- Prompt wrap: "Question:\n\n{q}\n\nAnswer:" (exact whitespace)
- BOS token: 1 (required, add_bos_token=True)
- EOS token: 2
- PAD token: 0
- Use greedy decoding only; sampling produces gibberish.
- Repetition penalty: 1.3, frequency-scaled
- No-repeat n-gram size: 4
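The prompt-wrap rule can be captured in a small helper so the exact whitespace is never typed by hand. A minimal sketch; `wrap_prompt` is an illustrative name, not part of the released code:

```python
def wrap_prompt(q: str) -> str:
    """Apply the exact template MOTHER CORE BASE expects: blank lines around the question."""
    return f"Question:\n\n{q}\n\nAnswer:"

# Special-token IDs stated in the rules above
BOS_ID, EOS_ID, PAD_ID = 1, 2, 0

prompt = wrap_prompt("What is the capital of Wales?")
print(repr(prompt))
# 'Question:\n\nWhat is the capital of Wales?\n\nAnswer:'
```

Passing the wrapped string to the tokenizer with add_special_tokens=True (as in the Usage example) prepends BOS automatically.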
Programme context
- W2.7 (complete) – Core capability training: math, science, reasoning, identity, UK knowledge, multilingual, agent tool-calling, RAG, chat, memory, workflows
- W2.8 (in progress) – Document routing, argument validation, agent verifier loops, multi-step orchestration
- W3 – MOTHER DEFENCE (defence reasoning and strategy)
- W4 – MOTHER ROBOTICS (embodied awareness for humanoid platforms)
UK sovereign infrastructure: Manchester (HQ), Dundee (flagship DC), Durham. Phase 2 expansion in H2 2026 to Düsseldorf, South Africa, and Jamaica.
License
MSAI Sovereign License. See LICENSE file. Built sovereign in the UK, not derived from any externally-licensed pre-trained model.
Contact
MediaStream AI Limited, West Tower, 371 Deansgate, Manchester M15 4UR, United Kingdom