Robert Miles AI Safety
AI Safety, Alignment Theory, and Neural Networks with a focus on technical risk analysis and academic research.
Nutrition Label
Robert Miles provides high-fidelity breakdowns of AI safety research, translating dense academic papers into accessible, rigorous explainers. Viewers can expect deep dives into alignment theory, mesa-optimization, and instrumental convergence, often illustrated with precise analogies and code. His content prioritizes educational accuracy and technical nuance over hype.
Notes
- Citations are rigorous; check video descriptions for direct links to the academic papers and datasets discussed.
- Content focuses on theoretical alignment and safety risks rather than consumer product reviews or tutorials.
Why this score
“Once you have AI systems intelligently pursuing their own goals, you have the first ever technology which isn't just about enabling people to get what they want, but about the technology itself getting what it wants.”
Demonstrates precise command of alignment theory concepts (instrumental convergence, agency) and synthesizes them accurately without jargon overload.
Trust Breakdown
Mixed / General Lens: Scored with the default trust weighting.
Confidence pending. Based on 10 long-form videos.
These six Trust Core outputs drive the public creator rating. Communication affects discovery ranking separately. Methodology →
Recent Videos
- Tech is Good, AI Will Be Different
- AI Safety Career Advice! (And So Can You!)
- Using Dangerous AI, But Safely?
- AI Ruined My Year
- Why Does AI Lie, and What Can We Do About It?
- Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think...
- The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment
- We Were Right! Real Inner Misalignment
- Intro to AI Safety, Remastered
- Quantilizers: AI That Doesn't Try Too Hard
Shareable card
A compact ReReview card and short URL for sharing this trust score.