Jordan Harrod
Covers Security & Privacy, Research Tools, and AI Assistants, with a focus on ethics and technical literacy.
Nutrition Label
Jordan Harrod provides a research-backed perspective on the AI landscape, bridging the gap between academic papers and consumer understanding. Her content excels at deconstructing complex topics like alignment faking, open-source ethics, and model behavior with high clarity and transparency. While she occasionally covers tools, the channel prioritizes deep analysis of the technology's societal and technical implications over standard product reviews.
Notes
- Content prioritizes theoretical analysis and ethical implications over hands-on software tutorials.
- Check the description for research citations, which are consistently provided to support claims.
Rating Breakdown
Breakdown across the key dimensions we rate. Methodology →
Recent Videos

- this video will be deleted in 24 hours
- Should AI Be Open Source?
- Don't Use Deep Research (Until You Watch This) | Gemini, OpenAI, and Perplexity Deep Research
- Is AI "Too Woke"? | The Woke AI Controversy
- I Didn’t Use AI for a Day (Well, I Tried)
- Is ChatGPT Lying To You? | Alignment Faking + In-Context Scheming
- AI Knows What You Want for Black Friday... (But Should You Trust It?)
- Are AI Humanizers Accurate?
- My Unfiltered Opinions on AI in 2023
- Do American Possibilities Include AI? | A White House Demo Day Vlog
- Why I Don't Use (Most) AI Tools
Why this rating
Evidence receipts showing why each dimension is rated the way it is.
“So, of course, I forgot to film an outro for this video. And so I just got back to Boston, so I'm filming it now.”[16:47] →
Demonstrates high authenticity by including logistical friction points and filming errors, reinforcing that this is a genuine vlog of her personal experience.
“Teachable who is sponsoring this video reached out to me... I've teamed up with Teachable so that my followers can get a 30-day extended free trial”[11:16] →
The creator provides a clear, verbal disclosure of the sponsorship and explains the context of the partnership.
“An MIT study from 2024 found that reward models... display a clear left-leaning bias, even when those reward models are trained on only truthful, fact-based datasets.”[17:14] →
She cites specific academic research to support claims about bias, distinguishing between opinion and structural model behavior.