Jordan Harrod
Security & Privacy, Research Tools, and AI Assistants with a focus on ethics and technical literacy.
Nutrition Label
Jordan Harrod provides a research-backed perspective on the AI landscape, bridging the gap between academic papers and consumer understanding. Her content excels at deconstructing complex topics like alignment faking, open-source ethics, and model behavior with high clarity and transparency. While she occasionally covers tools, the channel prioritizes deep analysis of the technology's societal and technical implications over standard product reviews.
Notes
- Content prioritizes theoretical analysis and ethical implications over hands-on software tutorials.
- Check the video description for research citations, which are consistently provided to support claims.
Why this score
“So, of course, I forgot to film an outro for this video. And so I just got back to Boston, so I'm filming it now.”
This admission of logistical friction and filming errors demonstrates high authenticity, reinforcing that the video is a genuine account of her personal experience.
Trust Breakdown
Mixed / General Lens: Scored with the default trust weighting.
Confidence pending. Based on 10 long-form videos.
These six Trust Core outputs drive the public creator rating. Communication affects discovery ranking separately. Methodology →
Recent Videos
- this video will be deleted in 24 hours
- Should AI Be Open Source?
- Don't Use Deep Research (Until You Watch This) | Gemini, OpenAI, and Perplexity Deep Research
- Is AI "Too Woke"? | The Woke AI Controversy
- I Didn’t Use AI for a Day (Well, I Tried)
- Is ChatGPT Lying To You? | Alignment Faking + In-Context Scheming
- AI Knows What You Want for Black Friday... (But Should You Trust It?)
- Are AI Humanizers Accurate?
- My Unfiltered Opinions on AI in 2023
- Do American Possibilities Include AI? | A White House Demo Day Vlog
- Why I Don't Use (Most) AI Tools