Proposition 01 · "I don't know"
Not a bigger model — the first world model OS that can say "I don't know"
Scale players promise "bigger means more accurate". We promise "we know where we are not accurate, and we report it."
01
Default output format
Every inference must emit five fields. Missing any one is grounds for refusal.
- 01 Conclusion — the system's judgement on the current question
- 02 Confidence — a calibrated probability or interval, not a smoothed tone
- 03 Evidence chain — source and version pointer for every claim
- 04 Boundary verdict — does the action sit inside permission / environment / safety / task constraints
- 05 Next action — proceed / refuse / escalate
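The five-field contract can be sketched as a record plus a gate that refuses on any missing field. This is a minimal illustration, not Genmount's implementation; the field names, the interval encoding, and the `emit` helper are assumptions made for the example.

```python
# Hypothetical sketch of the five-field inference record described above.
# Field names and values are illustrative assumptions, not the product's spec.
REQUIRED_FIELDS = (
    "conclusion",        # the system's judgement on the current question
    "confidence",        # calibrated probability or interval, e.g. (0.62, 0.71)
    "evidence_chain",    # [{"source": ..., "version": ...}, ...] per claim
    "boundary_verdict",  # inside permission/environment/safety/task constraints?
    "next_action",       # "proceed" | "refuse" | "escalate"
)

def emit(record: dict) -> dict:
    """Enforce the contract: missing any one field is grounds for refusal."""
    missing = [f for f in REQUIRED_FIELDS if record.get(f) is None]
    if missing:
        # Emit an explicit refusal rather than a partial, unauditable answer.
        return {
            "conclusion": None,
            "confidence": (0.0, 0.0),
            "evidence_chain": [],
            "boundary_verdict": False,
            "next_action": "refuse",
        }
    return record
```

The point of the gate is that an incomplete record degrades to an explicit, machine-readable refusal instead of a silent partial answer.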
02
Moat
Boundary DSL + audit schema + bicameral inference. The trio is decoupled from the base model — portable across Llama / Qwen / Mistral / in-house bases. The scale camp's next release is, to us, an adapter retrain.
03
Anti-thesis pairing
"Bigger means more accurate" is a correlational promise whose failure mode is silent hallucination. "We know where we are not accurate" is a mechanistic promise whose failure mode is explicit refusal / abstention / escalation — and the latter is auditable, rollbackable, reproducible.
Continue with the other propositions
Proposition 02
Full-stack health loop
Evidence provenance + individual state space + risk gating — fused into one.
Read the proposition
Proposition 03
A brain that says "No"
Refusal is not a silent strike. It is collaboration, delivered with an auditable account.
Read the proposition
Proposition 04
Research = the moat
Base models are swappable; the "Genmount OS layer" is not. That is why the scale camp burns capital and we don't — and why we move across industries.
Read the proposition