AI fluency now has a number.
Measure yours across five domains through a single conversation. Get a personalized roadmap to close the gaps.
When asked to rate their own AI skills, most professionals say seven out of ten. The average assessment score is 4.2.
A conversation, not a quiz. Here’s what you get:
01
Your real score
Five domains, each rated 1 to 10. Every number backed by something you actually said.
02
The gap, documented
The gap between what you think you’re doing and what you’re actually doing.
03
Your single bottleneck
The one thing that, if you fixed it, would move every other score.
04
A 90-day roadmap
Twelve weeks of specific exercises. Not “learn more about prompting” — actual tasks with deadlines.
Five domains, weighted by real-world impact.
Measuring a team?
Give every member the same assessment. See where real capability sits, find your operators, and get targeted upskill plans — backed by evidence, not self-reporting.
Learn about AIsance for Teams
Find out if it’s right.
Twenty-five minutes. Five domains. Every score backed by something you said — not a multiple-choice quiz. A week-by-week roadmap.
Built for product, engineering, and consulting teams who need to know where they actually stand.