Methodology

ReReview Trust is built to surface trustworthy creators through recent long-form evidence, public receipts, and a format-aware scoring lens. Here's how the Trust Score and the five public tiers work.

ReReview's 6 trust-core dimensions are the scoring layer on top of AgentCDN's ARVP standard. ARVP is the schema; ReReview is one human-readable implementation of that schema.

TL;DR
  • Trust Score is about evidence, disclosure, and judgment, not popularity.
  • We score recent long-form videos and calibrate the score to the kind of creator we're evaluating.
  • We publish receipts so you can verify how each score was reached.
  • Trust Marks and scored labels summarize the five public tiers at a glance.

Important Note

The Trust Score is based on 6 trust-core dimensions. We also apply a channel lens so reviews, explainers, news, and opinion creators are not judged as if they all reveal trust in the same way. Communication Effectiveness is tracked separately as a readability and discovery signal.

ARVP relationship

AgentCDN's ARVP is the agent-readable schema for identity, authority, rights, delivery, and trust fields. ReReview is one human-readable scoring implementation expressed through that schema: 6 trust-core dimensions, plus a communication signal for discovery, that can feed future AgentCDN Trust Score surfaces. See AgentCDN Trust.

The Trust Core

We score 6 trust-core dimensions. Each public output is designed to be legible to humans and backed by evidence from recent long-form uploads.

Evidence Quality
Firsthand Access
Analysis Quality
Depth & Nuance
Disclosure Clarity
Framing Integrity
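As a minimal sketch of how the trust core could be represented (the field names and the 0–100 scale are illustrative assumptions, not part of the ARVP spec or ReReview's published internals):

```python
# Hypothetical sketch: the six trust-core dimensions as a scored record.
# Dimension keys and the 0-100 scale are assumptions for illustration only.
TRUST_CORE_DIMENSIONS = [
    "evidence_quality",
    "firsthand_access",
    "analysis_quality",
    "depth_and_nuance",
    "disclosure_clarity",
    "framing_integrity",
]

def trust_core_average(scores: dict[str, float]) -> float:
    """Unweighted mean over the six dimensions (lens weighting comes later)."""
    missing = [d for d in TRUST_CORE_DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(scores[d] for d in TRUST_CORE_DIMENSIONS) / len(TRUST_CORE_DIMENSIONS)

scores = {
    "evidence_quality": 88,
    "firsthand_access": 92,
    "analysis_quality": 81,
    "depth_and_nuance": 77,
    "disclosure_clarity": 90,
    "framing_integrity": 84,
}
print(trust_core_average(scores))
```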

Channel Lenses

Different formats reveal trust differently. ReReview keeps one public Trust Score, but calibrates it for a creator's dominant format so a reviewer, explainer, reporter, and opinion host are not all judged on identical surface signals.

Review Lens
Best for hands-on channels where testing, demonstration, and visible tradeoffs matter most.
Explainer Lens
Best for creators who earn trust through synthesis, clarity, and grounded reasoning.
News Lens
Best for creators where sourcing, attribution, updates, and restraint matter most.
Opinion Lens
Best for creators where interpretation is central and trust depends on fairness and clear signaling.
Mixed / General Lens
Default weighting for mixed-format channels until enough evidence supports a more specific lens.
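The lens idea can be pictured as a reweighting of the same six dimensions. The weights below are invented for illustration; ReReview does not publish its actual lens weightings:

```python
# Hypothetical lens weights over the six trust-core dimensions.
# These numbers are illustrative assumptions, not ReReview's real weightings.
LENS_WEIGHTS = {
    "review": {  # hands-on testing and firsthand demonstration matter most
        "evidence_quality": 0.25, "firsthand_access": 0.25,
        "analysis_quality": 0.15, "depth_and_nuance": 0.15,
        "disclosure_clarity": 0.10, "framing_integrity": 0.10,
    },
    "mixed": {  # default: equal weighting until a dominant format is clear
        d: 1 / 6 for d in (
            "evidence_quality", "firsthand_access", "analysis_quality",
            "depth_and_nuance", "disclosure_clarity", "framing_integrity",
        )
    },
}

def lens_score(dims: dict[str, float], lens: str = "mixed") -> float:
    """Weighted mean of dimension scores under the given lens."""
    return sum(dims[d] * w for d, w in LENS_WEIGHTS[lens].items())

# A hands-on reviewer with strong firsthand evidence scores slightly
# higher under the review lens than under the default mixed lens.
dims = {
    "evidence_quality": 90, "firsthand_access": 95,
    "analysis_quality": 70, "depth_and_nuance": 70,
    "disclosure_clarity": 80, "framing_integrity": 80,
}
print(lens_score(dims, "review"), lens_score(dims, "mixed"))
```

The point of the sketch: the underlying dimension scores stay the same, only the emphasis shifts with the creator's dominant format.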

Receipts (Evidence)

Receipts are the “why” behind a score: short quotes with timestamps (or the YouTube description when relevant) that justify the trust breakdown. If you disagree, you can verify quickly.

In the wild

RTINGS Home Theater · Rigor & Evidence

“Real Scene Peak Brightness: 753 cd/m². Peak 10% Window: 3,238 cd/m².”

Receipt: 3:54 in a TV review, cited because the review uses precise measurements instead of vague claims.

Open receipt →
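A receipt is essentially a short quote plus a pointer into the video. As an illustrative sketch (the record fields are assumptions, not ReReview's data model), a timestamp like the `3:54` above can be turned into a seekable offset in seconds:

```python
def timestamp_to_seconds(ts: str) -> int:
    """Convert a 'M:SS' or 'H:MM:SS' receipt timestamp to total seconds."""
    seconds = 0
    for part in ts.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

# Hypothetical receipt record shaped like the RTINGS example above.
receipt = {
    "quote": "Real Scene Peak Brightness: 753 cd/m².",
    "timestamp": "3:54",
    "dimension": "evidence_quality",  # assumed dimension tag, for illustration
}
print(timestamp_to_seconds(receipt["timestamp"]))  # prints 234
```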

Trust Marks

Trust Marks summarize the overall Trust Score level in a way that’s easy to scan. Unrated sits outside the five scored tiers until we have enough evidence to publish a score.

🛡️🛡️🛡️ Must-Follow
🛡️🛡️ Worth Prioritizing
🛡️ Worth a Watch
Use Discretion
Proceed Cautiously
Unrated means not enough scored material yet.

Corrections and Updates

Scores can change as creators evolve. When we find an error in our interpretation or evidence, we correct it. If you think something is wrong, reach out.