🌌 Quantifying the “Compton Constant” — Can We Put a Number on an AI Escape?
Insight Letter · 10 May 2025 · Safety & Alignment Special
1️⃣ What’s the News?
MIT physicist and Future of Life Institute co-founder Dr. Max Tegmark has just published a peer-reviewed paper in Nature Machine Intelligence proposing that every lab training a model beyond 10¹⁵ floating-point operations publicly disclose a “Compton Constant”: a single probability that its system could evolve, or be coaxed, into an Artificial Super-Intelligence (ASI) escape scenario.
The name nods to physicist Arthur Compton, who, ahead of the 1945 Trinity test, estimated the probability that the first nuclear detonation would ignite the atmosphere. Tegmark argues AI needs an equivalent yardstick before it hits a runaway threshold.
2️⃣ The Core of the Proposal
| Element | Detail |
| --- | --- |
| Definition | Probability (0–1) that a model could achieve self-improvement loops and evade shutdown within two years of deployment. |
| Method | Combine empirical red-team data, interpretability metrics (e.g., goal misgeneralization), and alignment-taxonomy scores into a Bayesian posterior. |
| Disclosure | Publish the Compton Constant in model cards and in SEC filings for public companies. |
| Auditor | A third-party non-profit analogous to the IAEA; Tegmark dubs it the International AI Safety Authority (IASA). |
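The “Method” row leaves the actual aggregation open. As a minimal sketch of what a Bayesian posterior over red-team evidence could look like, here is a toy Beta-Binomial update; the model, the prior, and all numbers are our illustration, not a formula from Tegmark’s paper:

```python
# Toy sketch: estimate a "Compton Constant" as the posterior mean of
# P(escape) under a Beta prior updated with red-team trial results.
# This Beta-Binomial model and every number below are illustrative
# assumptions, not Tegmark's actual method.

def compton_constant(escapes: int, trials: int,
                     prior_alpha: float = 1.0,
                     prior_beta: float = 99.0) -> float:
    """Posterior mean of P(escape) under a Beta(alpha, beta) prior,
    updated with `escapes` successes out of `trials` red-team attempts."""
    return (prior_alpha + escapes) / (prior_alpha + prior_beta + trials)

# Example: 0 successful escapes in 400 adversarial red-team trials,
# with a prior encoding roughly 1% baseline concern.
cc = compton_constant(escapes=0, trials=400)
print(f"Compton Constant estimate: {cc:.4f}")  # → 0.0020
```

Note that even with zero observed escapes the estimate stays above zero — the prior keeps the metric from overclaiming safety on limited evidence, which is presumably the point of demanding a posterior rather than a raw frequency.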
3️⃣ Industry Reaction (First 24 hrs)
| Org | Initial response |
| --- | --- |
| OpenAI | “Open to rigorous probabilistic safety metrics; details need maturity.” |
| Anthropic | Notes overlap with its ‘Preparedness’ tiers but wary of “false precision.” |
| Google DeepMind | Interested in a shared framework; questions feasibility of standardized priors. |
| Meta | Silent so far; insiders say legal is reviewing SEC-disclosure implications. |
Investor takeaway: if adopted, the Compton Constant could become a mandatory risk factor in S-1 filings, raising the stakes for boards and insurers.
4️⃣ Why This Matters
- From vibes to numbers: puts hard probabilities on an often hand-waved existential risk.
- Regulatory hook: gives lawmakers a metric for setting red lines (e.g., “No deployment if CC > 0.01”).
- Investor clarity: institutional capital can price catastrophic-risk premiums, much as Value at Risk (VaR) is used in finance.
Critics say complex socio-technical systems can’t be reduced to a single scalar; supporters reply that even an imperfect yardstick beats none.
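To make the “regulatory hook” concrete, a red-line rule of the kind sketched above reduces to a one-line gate. The 0.01 threshold is the article’s example; the function and its API are our hypothetical illustration:

```python
# Hypothetical deployment gate enforcing a regulatory red line on the
# Compton Constant (CC). The 0.01 default mirrors the article's example
# threshold; the function itself is our sketch, not a real standard.

def may_deploy(compton_constant: float, red_line: float = 0.01) -> bool:
    """Return True only if the disclosed CC is at or below the red line."""
    if not 0.0 <= compton_constant <= 1.0:
        raise ValueError("Compton Constant must be a probability in [0, 1]")
    return compton_constant <= red_line

print(may_deploy(0.005))  # → True  (under the red line)
print(may_deploy(0.02))   # → False (over it)
```

The validity check matters: a disclosure regime is only as good as the sanity constraints on what labs are allowed to report.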
5️⃣ What Happens Next?
- Peer commentary phase: journals will publish rapid responses; expect fierce debate over priors.
- IASA lobbying: Tegmark’s team plans a July summit in Geneva to pitch the watchdog idea to the UN and G7.
- Model-card updates: watch whether the next GPT or Gemini release quietly adds “Compton Constant (pre-deploy): ≤ 0.005”.
Action Checklist for AI Leaders & Risk Officers
| ✔︎ | Task | Why |
| --- | --- | --- |
|  | Run a strawman CC calculation on your largest model | Preps talking points for regulators and partners. |
|  | Engage legal & IR | Consider how CC disclosure could appear in future SEC filings. |
|  | Track IASA discussions | Early participation can shape audit standards to be practical, not punitive. |
“If we could assign a number to nuclear chain-reactions before pressing ‘go,’ we must do the same for runaway AI.” — Dr. Max Tegmark
Need an infographic that explains the Compton Constant formula and how it maps onto AI-safety tiers? Reply “make a visual” and we’ll whip one up.
— The Insight Letter Team
