
AI fluency now has a number.

Measure yours across five domains through a single conversation. Get a personalized roadmap to close the gaps.

When asked to rate their own AI skills, most professionals say seven out of ten. The average score on this assessment is 4.2.

Sample conversation
AIsance: Tell me about the last time you used AI to solve a real problem at work. What was the context, and what specifically did you do?

You: I used Claude to draft a competitive analysis. I gave it our product specs and asked it to compare against three competitors.

AIsance: Interesting. When you reviewed the output, how did you verify the competitive claims were accurate? Did you have a systematic process for that?

You: Honestly, I skimmed it and fixed the parts that felt off. I didn't have a formal checklist or anything.

AIsance: That's a very common pattern. Let's dig into that — when something "felt off," what specifically were you looking for?

What you walk away with

A conversation, not a quiz. Then:

01. Your real score: Five domains, each rated 1 to 10. Every number backed by something you actually said.

02. The gap, documented: The distance between what you think you’re doing and what you’re actually doing.

03. Your single bottleneck: The one thing that, if you fixed it, would move every other score.

04. A 90-day roadmap: Twelve weeks of specific exercises. Not “learn more about prompting” — actual tasks with deadlines.

What we measure

Five domains, weighted by real-world impact.

Prompt Mastery (40%): Getting the right output on the first try — and knowing why it worked.

Practical Application (20%): AI embedded in daily work, not just opened occasionally.

Technical Understanding (15%): Diagnosing what went wrong instead of just retrying.

Critical Evaluation (15%): Distinguishing confident AI output from correct AI output.

Workflow Design (10%): Documented processes a colleague could replicate without you.
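For illustration, here is a minimal sketch of how weights like these could roll up into a single number, assuming the overall score is a simple weighted average of the five domain ratings (an assumption; the assessment's actual scoring logic isn't described here). The domain names and weights come from the list above; the example profile is hypothetical.

```python
# Illustrative only: assumes the overall score is a weighted average of the
# five domain ratings (1-10). AIsance's real scoring method may differ.
DOMAIN_WEIGHTS = {
    "Prompt Mastery": 0.40,
    "Practical Application": 0.20,
    "Technical Understanding": 0.15,
    "Critical Evaluation": 0.15,
    "Workflow Design": 0.10,
}

def overall_score(domain_scores: dict[str, float]) -> float:
    """Combine per-domain ratings into one weighted score, rounded to one decimal."""
    return round(sum(DOMAIN_WEIGHTS[d] * s for d, s in domain_scores.items()), 1)

# Hypothetical profile: strong prompting, weak workflow design.
print(overall_score({
    "Prompt Mastery": 8,
    "Practical Application": 6,
    "Technical Understanding": 5,
    "Critical Evaluation": 5,
    "Workflow Design": 4,
}))  # -> 6.3 under this simplified model
```

Under this simplified model, the heavy Prompt Mastery weight means improving prompting moves the overall score at least twice as fast as improving any other single domain.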

Sample result

A product manager who uses AI every day scored 5.4 out of 10.

5.4/10 · Systematic approach · Medium improvement velocity

Prompt Mastery: 7
Practical Application: 5
Technical Understanding: 6
Critical Evaluation: 4
Workflow Design: 3

Daily AI user. Three years of experience with LLMs.

The brutal truth

You prompt well but verify poorly. Your iteration process is informal — you know when output "feels right" but can't articulate why. The biggest risk: you're shipping AI-assisted work without a systematic quality check, which means errors are a matter of when, not if.

Your first 48 hours

Pick your 3 most-used AI prompts. For each, write down: what context you gave, what constraints you set, and how you verified the output. If you can't answer all three, you've found your gap.

Week 1 exercise from the 90-day roadmap

Document 3 prompts you used this week. For each: what context did you provide? What constraints? How many iterations? Write this down — it's the foundation for everything that follows.

The full roadmap, with 12 weeks of exercises, is included in your results.

Measuring a team?

Give every member the same assessment. See where real capability sits, find your operators, and get targeted upskill plans — backed by evidence, not self-reporting.

Learn about AIsance for Teams

You already have an opinion about your AI skills. Find out if it’s right.

Twenty-five minutes. Five domains. Every score backed by something you said — not a multiple-choice quiz. A week-by-week roadmap.

Built for product, engineering, and consulting teams who need to know where they actually stand.