The first individual AI governance score for people leaders. One number. Six dimensions. Honest answers.
When an AI system makes a decision about someone on your team — and something goes wrong — your board will ask whether a leader governed that decision. Today, nobody has a clean answer. HARI L™ is that answer.
"Did your leaders personally govern what AI decided about your people?"
30 questions. 6 dimensions. One personal score, produced with an ICF-credentialed coach. Board-presentable. Globally valid. Confidential.
Three reverse-coded items make it impossible for leaders to perform their way to a good score. HARI L™ measures what you do, not what you plan to.
When an AI people-decision goes wrong, whose name is on it? The question leaders answer last — and must answer first.
The gap between "I review AI recommendations" and "I catch the ones that are wrong." On average: 18 points.
Scrolling AI newsletters is not learning AI. Universally the lowest-scoring dimension — and the most addressable.
When your team disagrees with an AI recommendation, do they tell you? D4 predicts whether bias surfaces — or stays hidden.
Can you explain an AI decision in plain language to the person it affected? If not, you cannot defend it.
Not whether you value equity. Whether you audit for demographic drift at a defined cadence. A behavior, not a belief.
AI readiness assessments tell you where your organization sits. HARI L™ tells you where your leaders sit, personally, right now.
HARI L™ opens Thursday, April 30, 2026 at 7:30 AM ET.