AI Readiness Diagnostic

The reason AI stalls inside your firm is rarely tool access.
It is organizational absorption.

A senior-led readiness assessment for asset managers, mid-market banks, insurers, and PE-backed financial firms. Ten dimensions. Five maturity levels. One executive conversation that surfaces where AI is actually getting stuck, and what to do about it in the next ninety days.

Format
Half-day workshop
Audience
Executive team, in the room
Output
Executive readout, ≤ 3 pages
Built for
Mid-size FS firms

The Premise

Tool access is the easy lever. Absorption capacity (the culture, manager fluency, talent practices, workflow design, and operating model around the tools) is what separates firms whose AI investment compounds from firms where it disperses.

01

Licenses are not adoption. The firms with the highest AI license penetration are not the firms with the highest measurable AI value. We see the gap on every engagement.

02

Manager fluency is the throttle. Front-line managers who cannot coach AI use are the constraint on adoption, not employees, not infrastructure.

03

Workflow redesign beats workflow acceleration. Speeding up a broken process is a small win. Redesigning the work around the model is where compounding value sits.

04

Operating model decides the ceiling. If no business leader owns the outcome of AI, no amount of platform investment closes the gap.

What We Measure

Ten dimensions of organizational absorption.

The diagnostic is twenty questions across the ten dimensions below. We publish the dimensions, the signal we look for, and the executive question that surfaces it. The instrument itself (the full question set, rubric, and scoring math) is the proprietary part we run live with you.

01
Leadership Alignment

A documented executive thesis tied to enterprise strategy, with named accountability, not a delegation to IT.

Workshop tell

If we removed every AI tool tomorrow, which three business outcomes would suffer first?

02
Culture & Change Readiness

Active permission to redesign work, not just to use tools as-is. Anchored in real workflows, not generic training.

Workshop tell

When did someone last change how they worked because of AI, and was that change recognized?

03
Manager Support

Front-line managers who can coach, not just authorize. Manager fluency drives adoption more than user training does.

Workshop tell

Could your managers, today, hold a credible one-on-one about how someone should be using AI?

04
Talent & Skills

A current view of which roles are most exposed to AI redesign over the next 18 months, with a plan tied to that view.

Workshop tell

Which three roles will look most different two years from now, and what are you doing about that today?

05
Workflow Redesign

Use cases framed around the workflow and the outcome, with the AI capability chosen second. Not tool-first.

Workshop tell

Walk us through one workflow you would redesign first if AI were free and unlimited. Why that one?

06
Data & Knowledge Access

A defensible data foundation with clear ownership and quality baselines for the domains AI actually needs.

Workshop tell

If you built a high-value AI use case tomorrow, would the data exist, be accessible, and be trustworthy?

07
Governance & Risk Controls

Right-sized governance: enough to defend, not so much that nothing ships. Tiered by risk, not blanket-applied.

Workshop tell

How long does it take, end to end, for a new AI use case to get approved here? Is that the right answer?

08
AI Tool Adoption

Selective, role-aware deployment with adoption signals tied to real workflows, not licenses purchased.

Workshop tell

Of the AI licenses you have purchased, what percentage are actively used each week, and by whom?

09
Measurement & Value Tracking

Use cases with named value owners and metrics that surface in the business reviews that actually matter.

Workshop tell

Show us your most successful AI use case. Where does its value show up on a P&L or operating metric?

10
Operating Model Ownership

A named AI operating model (who decides, who builds, who runs, who governs) accepted across business and technology.

Workshop tell

Who, by name, owns the business outcome of your most important AI initiative?

Where Firms Land

Five maturity levels, named for the executive conversation.

Most mid-size financial services firms we meet sit at Level II or Level III. Very few are at Level IV; almost none at Level V. The level itself is less interesting than the dimension-level pattern that produces it, which is why the readout we leave behind is dimensional, not a single number.

I
Level I
Fragmented
Profile
AI activity exists in pockets. No firm-level thesis. Tooling, governance, and ownership are inconsistent or absent.
Recommended move
Establish an executive AI thesis. Name a single accountable executive for outcomes, not delivery.
II
Level II
Experimenting
Profile
Visible pilots and a recognized intent to do more. Coordination, value tracking, and operating model are still informal.
Recommended move
Move from pilots to a managed portfolio with named owners. Equip managers explicitly. Generic training does not move adoption.
III
Level III
Coordinated
Profile
A defined AI position with named ownership. Adoption is real but uneven across business units. Governance is in place.
Recommended move
Workflow redesign in the lagging functions. Calibrate governance for speed: tier approvals so low-risk use cases ship fast.
IV
Level IV
Scaling
Profile
AI is embedded in priority workflows, with measurable outcomes and active manager engagement. The remaining work is institutionalization.
Recommended move
Bake AI expectations into hiring, performance, and promotion. Reduce key-person risk by formalizing what champions carry informally.
V
Level V
AI-ready Operating Model
Profile
AI is part of how the firm runs. Built into hiring, performance, governance, and business reviews. Not dependent on champions.
Recommended move
Shift from adoption to advantage. Identify where AI-enabled work design becomes a competitive moat.

Note: we deliberately do not publish the score-to-level math, the per-question rubric, or the question set itself. Those are the parts of the instrument that produce a defensible result, and they belong inside the engagement. What we publish is the framework you can use to talk to your own leadership team about where you think you sit.

How It's Run

Three ways to deploy it.

The diagnostic is the same instrument in each mode. What changes is the depth of evidence we collect alongside it. Most engagements start with a workshop and either stop there or convert into a paid readiness assessment with executive interviews and a formal readout.

i.

Sales conversation · 30-45 min

Walk a leadership team through 5-8 questions live. The conversation itself surfaces the gaps and the disagreements, which are the most useful finding.

ii.

Executive workshop · Half-day

The full assessment with the leadership team in the room. Score together; the variance on scores is the alignment finding worth capturing.

iii.

Engagement kickoff · 1-2 weeks

The diagnostic as the baseline output of a paid readiness assessment. Includes interviews, the workshop, and the formal executive summary.

Scoring, Practically

The disagreement is the finding.

i.

We score with the executive team in the room

Not from interviews compiled later. The conversation is the evidence; the variance across the table is the most useful data we collect.

ii.

We take the lower score when scores diverge

Disagreement on a dimension is an alignment gap. Until that gap is closed, the higher scorer’s position is not the firm’s position.

iii.

We read the pattern, not the total

A firm at Level IV overall but Level II on governance has a specific, addressable problem. The dimension-level shape is what gets written into the readout.
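The two published principles above (take the lower score when the table diverges, read the dimension-level pattern rather than the total) can be sketched as a few lines of Python. This is an illustrative sketch only: the `consolidate` function and its thresholds are hypothetical, and the real instrument's per-question rubric and score-to-level math are proprietary and deliberately not reproduced here.

```python
# Illustrative sketch of two published scoring principles:
# 1) when executives' scores diverge, the lower score stands
#    until the alignment gap is closed;
# 2) the output is a dimension-level profile, not a single total.
# Function name and the divergence threshold are assumptions.

DIMENSIONS = [
    "Leadership Alignment", "Culture & Change Readiness", "Manager Support",
    "Talent & Skills", "Workflow Redesign", "Data & Knowledge Access",
    "Governance & Risk Controls", "AI Tool Adoption",
    "Measurement & Value Tracking", "Operating Model Ownership",
]

def consolidate(scores_by_exec: dict[str, dict[str, int]]) -> dict:
    """scores_by_exec maps executive name -> {dimension: level 1-5}."""
    profile = {}  # consolidated score per dimension
    gaps = {}     # dimensions where the table meaningfully disagreed
    for dim in DIMENSIONS:
        votes = [scores[dim] for scores in scores_by_exec.values()]
        profile[dim] = min(votes)  # lower score wins until aligned
        if max(votes) - min(votes) >= 2:  # wide spread = alignment gap
            gaps[dim] = (min(votes), max(votes))
    return {"profile": profile, "alignment_gaps": gaps}
```

The readout then works from `profile` and `alignment_gaps` together: a dimension flagged in `alignment_gaps` is itself a finding, independent of its score.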

What You Walk Away With

An executive readout you can use on Monday.

Three pages or fewer, written in the language your board already speaks. Designed to be forwarded between executives, not reread by you alone.

Headline finding

One sentence the executive team can repeat in their next board update. Specific, defensible, and actionable.

Strengths and blockers

The three things this organization is meaningfully ahead on, and the three places it is most exposed. Ranked by impact, with evidence.

Dimension-level map

A 10-dimension readout showing where the firm is consistent, where it is uneven, and where the executive team disagrees with itself.

30-day actions

Five named moves the executive team can make in the next month without a full transformation program.

90-day roadmap

The structural moves (Align, Prioritize, Operationalize) sequenced so each phase makes the next one easier.

Sample readout · illustrative
Mid-cap asset manager, 280 staff
Maturity: Level II, Experimenting
readout · v3.2 · sample
[10-dimension readout chart: Lead · Cult · Mgr · Tlnt · Wflw · Data · Gov · Adopt · Meas · OpMdl]

Adoption is concentrated in two desks. Manager-level fluency is the binding constraint, not data, and not governance, though governance will become one inside six months if the use-case pipeline is not actively managed.

Top blocker
Manager Support · Talent & Skills
Top strength
Governance & Risk Controls
Recommended path
Manager track + 3 use cases, 90 days

Get In Touch

Run the diagnostic with your leadership team.

Half a day, in the room with your executive team. We bring the instrument and the moderator; you bring the leaders whose disagreement actually matters. You leave with a readout your board will read.

Start the conversation

Direct line

AD
Adam Davis
Co-founder · Data & AI
adam@leap-ts.com · 516-526-0890
HV
Hortense Viard
Co-founder · Risk & Governance
hortense@leap-ts.com · 347-559-9448