Methodology · Open and audited

How the AI Ready Score is actually computed.

Most workforce tools are black boxes. Ours is not. This page documents the full methodology: the data graph underneath, the four pillars of the score, the formula that combines them, and the limitations we're candid about. If you're a CHRO evaluating a vendor, you should be able to read this page and know exactly what you'd be buying.

Foundation · 6 open data sources, fully attributed
Score inputs · 4 pillars, weighted explainably
Task coverage · ~20,000 O*NET tasks, all mapped
Refresh · Quarterly with AEI data drops
01 The data graph

Six authoritative sources, one working graph.

Every score we produce starts with a normalized graph. We ingest six public or commercially licensable data sources and project them into a single graph where nodes are occupations, tasks, and skills, and edges carry weights for transferability, exposure, and adjacency.

O*NET provides the occupational backbone. ESCO adds the multilingual skills layer. Lightcast brings real-time job posting signal. The Anthropic Economic Index provides actual task-level AI adoption evidence at scale. WEF supplies macro context. BLS grounds the wage and employment base in public record.

The graph is where the crosswalks live. O*NET to ISCO to ESCO to Lightcast. Wage bands from BLS joined to role nodes. AEI task usage patterns joined to task nodes. That is also why single-source products fall short: a skills taxonomy alone can't tell you about exposure, and exposure data alone can't tell you where to route.

Diagram · six sources feeding the JobRoute graph (AI enrichment):
  • O*NET 30.2 database · occupations + tasks
  • AEI (Anthropic) · live task-level AI use
  • ESCO (EU Commission) · skills in 28 languages
  • Lightcast Open Skills · live posting signal
  • WEF Future of Jobs · macro direction
  • BLS U.S. labor statistics · wages + employment
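As a concrete illustration of the projection, here is a minimal sketch of how occupations, tasks, and skills could sit in one weighted graph, assuming a standard graph library (networkx). The node identifiers, titles, and weights are illustrative placeholders, not JobRoute's actual enrichment pipeline or data.

```python
# Illustrative only: node IDs, titles, and weights are made up, not JobRoute data.
import networkx as nx

G = nx.DiGraph()

# Occupation and task nodes come from O*NET; skill nodes from ESCO.
G.add_node("onet:15-1252.00", kind="occupation", title="Software Developers")
G.add_node("onet:task:18334", kind="task", statement="Write or review code")
G.add_node("esco:skill:python", kind="skill", label="Python (computer programming)")

# Occupation -> task edge, annotated with task-level AI exposure from AEI.
G.add_edge("onet:15-1252.00", "onet:task:18334", relation="performs", exposure=0.62)

# Task -> skill edge via the O*NET/ESCO crosswalk, carrying a transferability weight.
G.add_edge("onet:task:18334", "esco:skill:python", relation="requires", transferability=0.80)

# Occupation -> occupation adjacency edge, weighted by skill overlap and
# Lightcast transition frequency.
G.add_node("onet:15-2051.00", kind="occupation", title="Data Scientists")
G.add_edge("onet:15-1252.00", "onet:15-2051.00", relation="adjacent", overlap=0.55)

# BLS wage and employment figures attach directly to role nodes (values illustrative).
G.nodes["onet:15-1252.00"].update(median_wage=130_000, employment=1_500_000)
```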
02 The four pillars

The score is built from four measurable things.

Pillar 01

Task exposure

Weight · 35%

For each task in your role (from O*NET), we compute the share of that task's observed execution that already shows automation or augmentation patterns in the Anthropic Economic Index. Higher exposure means higher score, which means higher risk.

Inputs
  • O*NET task inventory for your role
  • AEI task usage patterns (quarterly)
  • Automation vs augmentation split
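A minimal sketch of how this pillar could be computed, assuming each task arrives with an O*NET importance weight and an AEI usage share. The field names and the importance weighting are illustrative assumptions, not the published implementation.

```python
# Sketch of the task-exposure pillar. Field names (importance, aei_share) and the
# importance weighting are assumptions for illustration.
def task_exposure(tasks: list[dict]) -> float:
    """Return a 0-100 exposure score for one role.

    importance: O*NET task importance weight
    aei_share:  share of that task's observed executions showing automation
                or augmentation patterns in the AEI (0-1)
    """
    total = sum(t["importance"] for t in tasks)
    if total == 0:
        return 0.0
    weighted = sum(t["importance"] * t["aei_share"] for t in tasks)
    return 100.0 * weighted / total

role_tasks = [
    {"importance": 4.5, "aei_share": 0.70},  # drafting routine reports
    {"importance": 3.0, "aei_share": 0.20},  # stakeholder meetings
    {"importance": 2.5, "aei_share": 0.55},  # data cleanup
]
print(task_exposure(role_tasks))  # ~51: over half of the weighted task mix is exposed
```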
Pillar 02

Skill durability

Weight · 25%

Per-skill durability based on Lightcast posting-trend direction and ESCO skill category. Skills losing demand in postings compound the score. Skills gaining demand mitigate it. For example, stakeholder communication trends up while report drafting trends down.

Inputs
  • Lightcast skill posting delta
  • ESCO skill taxonomy categorization
  • Skill complement vs substitute signal
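A hedged sketch of how per-skill posting deltas could roll up into a 0–100 pillar value, where eroding skills push the number up. The 1.5× substitute penalty, the clipping, and the simple averaging are assumptions for illustration, not the published method.

```python
# Sketch of the skill-durability pillar. The substitute penalty and averaging
# are illustrative assumptions.
def skill_durability_risk(skills: list[dict]) -> float:
    """posting_delta: year-over-year change in Lightcast posting share (-1..1)
    substitute:    True if the skill behaves as an AI substitute, not a complement
    """
    if not skills:
        return 0.0
    per_skill = []
    for s in skills:
        risk = max(0.0, -s["posting_delta"])  # declining postings raise risk
        if s["substitute"]:
            risk *= 1.5                       # substitutable skills hit harder
        per_skill.append(min(risk, 1.0))
    return 100.0 * sum(per_skill) / len(per_skill)

print(skill_durability_risk([
    {"posting_delta": -0.30, "substitute": True},   # report drafting, trending down
    {"posting_delta": +0.15, "substitute": False},  # stakeholder communication, up
]))  # 22.5: one eroding skill, one healthy one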
Pillar 03

Role trajectory

Weight · 20%

BLS employment projections for your occupation family and wage band direction over the last 8 quarters, cross-checked against WEF macro signals. A role with shrinking employment and flat wages gets a worse trajectory score than one growing in both.

Inputs
  • BLS OES wage series
  • BLS employment projections
  • WEF macro transformation vector
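A rough sketch of how employment and wage direction could combine into a trajectory value. The 50/50 split, the ±10% clamp, and the additive WEF adjustment are assumptions used only to illustrate the shape of the pillar.

```python
# Sketch of the role-trajectory pillar. The split, clamp, and WEF adjustment
# are illustrative assumptions.
def role_trajectory_risk(employment_growth_pct: float,
                         wage_trend_pct: float,
                         wef_pressure: float = 0.0) -> float:
    """employment_growth_pct: projected employment change for the occupation family (BLS)
    wage_trend_pct:         real wage change over the last 8 quarters (BLS OES)
    wef_pressure:           optional macro adjustment from WEF signals, in [-10, 10]
    """
    def to_risk(pct: float) -> float:
        # -10% or worse maps to 100 (worst); +10% or better maps to 0 (best).
        return min(100.0, max(0.0, 50.0 - 5.0 * pct))

    base = 0.5 * to_risk(employment_growth_pct) + 0.5 * to_risk(wage_trend_pct)
    return min(100.0, max(0.0, base + wef_pressure))

print(role_trajectory_risk(-4.0, 0.0))  # 60.0: shrinking employment, flat wages
print(role_trajectory_risk(+6.0, 3.0))  # 27.5: growing in both, much lower risk
```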
Pillar 04

Adjacency breadth

Weight · 20%

How many adjacent roles you can transition to with limited retraining, weighted inversely by their own exposure. A worker with many low-exposure escape hatches has a better score than one with few high-exposure ones. This is the optimism pillar.

Inputs
  • Role-to-role skill overlap graph
  • Lightcast transition frequency signal
  • Retraining pathway cost estimates
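A minimal sketch of the inversion this pillar describes: count the roles reachable with limited retraining, discount each by its own exposure, then invert so that many safe escape hatches produce a low (good) value. The retraining cutoff and the saturation constant are assumptions used only to shape the curve.

```python
# Sketch of the adjacency pillar. The 6-month cutoff and saturation constant
# are illustrative assumptions.
def adjacency_risk(adjacent_roles: list[dict],
                   max_retraining_months: int = 6,
                   saturation: float = 4.0) -> float:
    """exposure:          the adjacent role's own task-exposure score (0-100)
    retraining_months: estimated retraining pathway cost to reach it
    """
    # Effective breadth: reachable roles, each discounted by its own exposure.
    breadth = sum(
        1.0 - r["exposure"] / 100.0
        for r in adjacent_roles
        if r["retraining_months"] <= max_retraining_months
    )
    # Invert: no escape hatches -> 100 (worst); many safe ones -> approaches 0.
    return 100.0 * saturation / (saturation + breadth)

print(adjacency_risk([
    {"exposure": 20, "retraining_months": 3},
    {"exposure": 35, "retraining_months": 5},
    {"exposure": 80, "retraining_months": 2},
]))  # ~71: three reachable roles, but one of them is itself highly exposed
```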
03 The formula

The score is a linear combination, not a black box.

We deliberately do not use an opaque neural network to produce the top-line score. The four pillars are weighted and combined in a single transparent formula so that any visitor, CHRO, or auditor can reproduce the math and challenge the weights.

The neural work happens underneath the pillars, in the graph enrichment layer. The top-line score stays legible.

AIReadyScore =
    0.35 × TaskExposure · share of your tasks showing AI usage in AEI
  + 0.25 × SkillDurability · skill-posting decline in Lightcast (low durability raises the term)
  + 0.20 × RoleTrajectory · BLS wage & employment direction for your occupation
  + 0.20 × AdjacencyRisk · inverse of reachable low-exposure alternatives

Each pillar normalized 0–100
Final score 0–100, lower is better
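The same combination, written out as a short sketch. The weights are the published ones above; the dictionary keys and example pillar values are illustrative.

```python
# Direct transcription of the published weights; keys and example values are
# illustrative. Each pillar value is its 0-100 risk-oriented score.
WEIGHTS = {
    "task_exposure":    0.35,
    "skill_durability": 0.25,  # entered as the decline/erosion signal
    "role_trajectory":  0.20,
    "adjacency_risk":   0.20,
}

def ai_ready_score(pillars: dict[str, float]) -> float:
    """Weighted sum of the four pillar scores; 0-100, lower is better."""
    return sum(weight * pillars[name] for name, weight in WEIGHTS.items())

print(round(ai_ready_score({
    "task_exposure": 72,      # heavily exposed task mix
    "skill_durability": 40,   # moderate skill erosion
    "role_trajectory": 55,    # flat-to-shrinking occupation
    "adjacency_risk": 30,     # plenty of low-exposure escape hatches
}), 1))  # 52.2
```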
04 Validation and limitations

What it gets right, and what it does not.

✓ What it gets right

Task-level grounding from real usage.

The exposure pillar uses Anthropic's anonymized analysis of millions of real Claude conversations mapped to O*NET tasks. This is evidence of how AI is actually used, not how vendors claim it will be used. That's a meaningful upgrade from consultant narratives.

  • Task exposure signal refreshed every AEI drop (roughly quarterly)
  • Skills signal refreshed every 2 weeks via Lightcast
  • All inputs are publicly auditable against the original sources
  • The combining formula is published on this page, not hidden
! Honest limitations

It is a directional read, not a prophecy.

AI capability is advancing faster than any data source can refresh. The score is best understood as a well-grounded estimate of near-term displacement risk, not a long-horizon forecast. It cannot predict specific layoffs, capture firm-specific dynamics, or replace a conversation with your manager.

  • AEI reflects Claude usage, not all AI; can underweight non-English or non-coding tasks
  • Lightcast posting data lags by 2–4 weeks and skews toward English-language markets
  • Score does not incorporate your individual performance or career capital
  • Enterprise use requires firm-specific calibration we are still building

05 Refresh cadence

How often the underlying data refreshes.

Transparency on freshness matters as much as transparency on method. Different inputs refresh at different cadences. The score inherits the slowest one.

Every 2 weeks

Lightcast Open Skills

Skill posting trends, real-time demand signal, new skill emergence.

Quarterly

Anthropic Economic Index

Task-level usage patterns. New data drops accompany each AEI report release.

Quarterly

O*NET 30.2 · BLS

Occupation taxonomy updates, wage bands, employment projections.
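To make "the score inherits the slowest one" concrete, here is a small sketch with made-up refresh dates: the effective freshness of a score is the age of its stalest input.

```python
# Illustration only: refresh dates are made up to match the cadences listed above.
from datetime import date

last_refresh = {
    "lightcast": date(2025, 5, 26),  # ~every 2 weeks
    "aei":       date(2025, 4, 10),  # quarterly drop
    "onet_bls":  date(2025, 3, 1),   # quarterly
}

def score_freshness_days(today: date) -> int:
    """Days since the stalest underlying source was refreshed."""
    return max((today - refreshed).days for refreshed in last_refresh.values())

print(score_freshness_days(date(2025, 6, 1)))  # 92: set by the O*NET/BLS refresh
```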

Have a methodology question we did not address?

We'd rather answer the hard questions before the procurement cycle than during it. If you're a CHRO, workforce analyst, or researcher with a specific challenge to our approach, we want to hear it.