Capability Gap Analysis
A structured assessment of the skills and capabilities a communications team has today versus what it needs for effective AI-powered communications work — producing a clear picture of gaps, learning priorities, and a development plan.
What it is
Technology adoption without skills development fails. The most common way AI adoption falls short in communications teams is not that the tools don’t work — it’s that the people using them don’t have the skills to use them well. Effective AI-powered communications requires a different capability set from traditional communications practice: the ability to write effective prompts, critically evaluate AI outputs, integrate AI into professional workflows, manage the governance and risk considerations, and — increasingly — think strategically about where AI adds value and where it doesn’t.
The Capability Gap Analysis maps the current capability of a team (or individual) against the capabilities needed for effective AI-powered communications work. It covers five areas: AI literacy and tool skills, prompt design and output quality, critical evaluation and editorial judgement, workflow integration, and strategic understanding of AI in communications.
The output is a development plan: a prioritised picture of which skills to build, in which order, through what means. It is not a performance assessment — it is a growth map, and it works best when approached as one.
When to use it
Use this template when:
- You’re about to begin an AI adoption programme and want to understand the baseline before designing training
- You’re commissioning training and want to ensure it addresses actual gaps rather than assumed ones
- A team member’s AI tool usage isn’t generating the expected quality or efficiency, and you want to understand why
- You’re presenting a learning and development investment case to leadership
- You’re running a consulting engagement and assessing a client team’s AI capability
Don’t use this template when:
- You’re assessing the whole function’s readiness across technology, process, and culture (use the AI Readiness Assessment for that)
- You’re trying to make a performance management judgement — this template is for development, not appraisal
- You need to assess only one very specific skill in depth (this is a broader capability map)
- You don’t have time to act on the findings — a gap analysis without a development response is a pointless exercise
Inputs needed
- Willingness to be honest: from individuals completing a self-assessment, and from managers completing an assessment of their team
- Context on the AI adoption plan: which tools and workflows are being prioritised, so capability gaps can be mapped to near-term needs
- Time for at least 2–3 qualitative conversations to supplement the structured assessment — some of the most important capability insights don’t appear in scorecards
- Optional: existing role profiles or job descriptions, to compare current expectations with AI-era requirements
The template
Capability Gap Analysis
Organisation: [Name]
Person or team assessed: [Name(s) and role(s)]
Completed by: [Name and role — self / manager / both]
Date: [Date]
Context: [What prompted this assessment? What will it feed into?]
How to use this assessment
Score each capability on a 1–4 scale — and where possible, complete both a self-assessment and a manager assessment for each item. Gaps between self and manager scores are often the most informative data points.
- 1 — No foundation: Not yet developed; would need significant support to use this capability
- 2 — Early stage: Basic awareness; can apply with guidance but not independently
- 3 — Developing: Can apply independently in familiar contexts; needs development for more complex or novel situations
- 4 — Confident: Applies this capability consistently and can help others develop it
Capability Area 1: AI literacy and foundational knowledge
Does the person understand what AI tools are, how they work at a conceptual level, what they can and can’t do, and the professional context in which they operate?
| Capability | Self score (1–4) | Manager score (1–4) | Gap | Notes |
|---|---|---|---|---|
| Can explain what large language models do (without technical jargon) | ||||
| Understands the difference between AI tool types (generative, analytical, monitoring, etc.) | ||||
| Aware of AI limitations: hallucination, knowledge cutoff, bias, inconsistency | ||||
| Understands the copyright and ownership implications of AI-generated content | ||||
| Aware of GDPR and data handling considerations for content entered into AI tools | ||||
| Understands the concept of prompt engineering and why input quality affects output quality | ||||
Area 1 average (self): /4 | Area 1 average (manager): /4
Key observations:
Capability Area 2: Prompt design and output quality
Can the person write prompts that produce outputs of professional quality, consistently, across a range of communications tasks?
| Capability | Self score (1–4) | Manager score (1–4) | Gap | Notes |
|---|---|---|---|---|
| Writes clear, specific prompts that define the task, audience, tone, and constraints | ||||
| Uses context effectively in prompts (provides relevant background, brand information, examples) | ||||
| Iterates prompts when outputs aren’t right, rather than accepting inadequate results | ||||
| Uses few-shot prompting (providing examples within the prompt) to improve output consistency | ||||
| Adapts prompt approach for different output types (long-form vs. short-form, analytical vs. creative) | ||||
| Maintains a library of effective prompts for recurring tasks | ||||
Area 2 average (self): /4 | Area 2 average (manager): /4
Key observations:
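Two of the capabilities above, structured prompting and a shared prompt library, can be made concrete with a small example. Below is one possible shape for a prompt-library entry, sketched in Python: the fields mirror the table (task, audience, tone, constraints, plus a worked example for few-shot prompting), while the function name, template text, and defaults are illustrative assumptions, not a prescribed format.

```python
# One possible shape for a reusable prompt-library entry; names, template text,
# and defaults are illustrative. The fields mirror the capabilities above:
# task, audience, tone, and constraints are stated explicitly, and a worked
# example of the expected output style is included (few-shot prompting).

PRESS_SUMMARY_PROMPT = """\
Task: Summarise the attached press coverage for an internal monitoring brief.
Audience: {audience}
Tone: {tone}
Constraints: Maximum {word_limit} words. Quote outlets verbatim; flag any
claim you cannot verify from the source text.

Example of the expected style:
{example}

Coverage to summarise:
{coverage}
"""

DEFAULT_EXAMPLE = (
    "Example Daily (4 April) reports the price cap decision, quoting the CEO: "
    "'This strikes a fair balance.' Coverage is neutral; no corrections needed."
)

def build_prompt(coverage: str,
                 audience: str = "Head of Communications",
                 tone: str = "neutral, factual",
                 word_limit: int = 300,
                 example: str = DEFAULT_EXAMPLE) -> str:
    """Fill the shared template so every brief uses the same structure."""
    return PRESS_SUMMARY_PROMPT.format(
        audience=audience, tone=tone, word_limit=word_limit,
        example=example, coverage=coverage,
    )

print(build_prompt("Regulator approves price cap rise (Example Daily, 4 April)."))
```

Kept in a shared file or repository, entries like this give the team a reviewable, improvable prompt library rather than prompts scattered across individual chat histories.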
Capability Area 3: Critical evaluation and editorial judgement
Does the person apply appropriate critical evaluation to AI outputs — neither accepting them uncritically nor being so cautious they fail to benefit from them?
| Capability | Self score (1–4) | Manager score (1–4) | Gap | Notes |
|---|---|---|---|---|
| Fact-checks claims in AI outputs before using them | ||||
| Identifies when AI outputs don’t match the brand voice or established positioning | ||||
| Detects AI-generated content patterns (generic phrasing, plausible-but-wrong specifics) | ||||
| Can edit AI output to professional standard efficiently, without over-reliance or under-use | ||||
| Maintains original editorial voice and perspective when working with AI drafts | ||||
| Understands when to use AI output as a starting point vs. when to start from scratch | ||||
Area 3 average (self): /4 | Area 3 average (manager): /4
Key observations:
Capability Area 4: Workflow integration
Does the person use AI tools effectively within professional workflows — not as a novelty, but as a consistent, integrated part of how they work?
| Capability | Self score (1–4) | Manager score (1–4) | Gap | Notes |
|---|---|---|---|---|
| Uses AI tools consistently in appropriate workflows (not just occasionally) | ||||
| Has clear processes for when AI tools are and aren’t used in their work | ||||
| Uses AI tools at the right stage of a workflow (not just for final drafts) | ||||
| Maintains quality and accuracy standards when AI is part of the workflow | ||||
| Can explain to colleagues how they integrate AI into their specific tasks | ||||
| Identifies new opportunities to integrate AI as they encounter them | ||||
Area 4 average (self): /4 | Area 4 average (manager): /4
Key observations:
Capability Area 5: Strategic thinking about AI in communications
Does the person think strategically about AI’s role in communications — understanding what it enables, where its limits lie, and how to lead others through AI adoption?
| Capability | Self score (1–4) | Manager score (1–4) | Gap | Notes |
|---|---|---|---|---|
| Can articulate where AI adds most value in communications work | ||||
| Understands the reputational and governance risks of AI use in professional communications | ||||
| Can advise colleagues or clients on AI adoption approaches | ||||
| Stays current with developments in AI tools relevant to communications | ||||
| Can design or evaluate AI-integrated workflows, not just use tools within them | ||||
| Thinks critically about AI capability claims rather than accepting vendor narratives | ||||
Area 5 average (self): /4 | Area 5 average (manager): /4
Key observations:
Summary profile
| Capability area | Self average /4 | Manager average /4 | Self/manager gap | Priority for development |
|---|---|---|---|---|
| 1. AI literacy and knowledge | | | | High / Medium / Low |
| 2. Prompt design and output quality | | | | |
| 3. Critical evaluation and editorial judgement | | | | |
| 4. Workflow integration | | | | |
| 5. Strategic thinking about AI | | | | |
| Overall | | | | |
Highest capability areas: [Where the person/team is strongest — this is where they’re ready to go deeper or support others]
Biggest gaps: [Where the gap between current capability and what’s needed is greatest — development priority]
Most critical gap for near-term AI adoption: [Given the specific workflows and tools being prioritised, which single gap is most important to close first?]
Self-manager gap analysis
Significant differences between self-assessment and manager assessment (>1 point on any item) often reveal more than the scores themselves.
| Area | Self vs. manager gap | Interpretation | Action |
|---|---|---|---|
| | | Person overestimates / underestimates their capability | Calibration conversation needed / Validation needed |
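Where assessments for several people are collated in a spreadsheet export, the area averages and gap flags can be produced automatically rather than by hand. A minimal sketch in Python, assuming scores are 1–4 integers keyed by capability item; the data structures and names are illustrative, and the threshold implements the >1-point rule above.

```python
# Minimal sketch: compute an area average and flag self/manager gaps of more
# than 1 point, per the rule above. Assumes both assessments cover the same
# items with 1-4 integer scores; structures and names are illustrative.

GAP_THRESHOLD = 1  # flag items where |self - manager| exceeds this

def area_average(scores: dict[str, int]) -> float:
    """Average score for one capability area (self or manager)."""
    return round(sum(scores.values()) / len(scores), 1)

def flag_gaps(self_scores: dict[str, int],
              manager_scores: dict[str, int]) -> list[str]:
    """Items where self and manager scores differ by more than 1 point."""
    flagged = []
    for item, self_score in self_scores.items():
        gap = self_score - manager_scores[item]
        if abs(gap) > GAP_THRESHOLD:
            direction = "overestimates" if gap > 0 else "underestimates"
            flagged.append(f"{item}: self {self_score} vs manager "
                           f"{manager_scores[item]} ({direction})")
    return flagged

# Example: three Area 2 items for one person
self_scores = {"clear prompts": 4, "uses context": 3, "iterates prompts": 2}
manager_scores = {"clear prompts": 2, "uses context": 3, "iterates prompts": 2}

print("Self average:", area_average(self_scores))        # 3.0
print("Manager average:", area_average(manager_scores))  # 2.3
for line in flag_gaps(self_scores, manager_scores):
    print(line)  # "clear prompts" is flagged for a calibration conversation
```

The same logic extends straightforwardly to filling the summary profile table for a whole team.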
Development plan
Priority 1 (address in next 30 days):
| Gap | Development action | Method | Time required | Owner | Success measure |
|---|---|---|---|---|---|
| | | Training / Coaching / Practice / Peer learning | | | |
Priority 2 (address in 30–90 days):
| Gap | Development action | Method | Time required | Owner | Success measure |
|---|---|---|---|---|---|
Priority 3 (address in 90 days+):
| Gap | Development action | Method | Time required | Owner | Success measure |
|---|---|---|---|---|---|
Team-wide training needs identified: [Any gaps that appear across multiple team members and warrant a shared training response]
Reassessment date: [When will this assessment be revisited to measure progress?]
AI prompt
Base prompt
I've completed a capability gap analysis for a communications team and want to turn the findings into a practical development plan.
Team context: [DESCRIBE: size, seniority mix, current AI tool usage]
AI adoption context: [WHAT workflows and tools are being prioritised for adoption]
Assessment findings summary:
Capability Area 1 (AI literacy): [TEAM AVERAGE AND KEY GAPS]
Capability Area 2 (Prompt design): [TEAM AVERAGE AND KEY GAPS]
Capability Area 3 (Critical evaluation): [TEAM AVERAGE AND KEY GAPS]
Capability Area 4 (Workflow integration): [TEAM AVERAGE AND KEY GAPS]
Capability Area 5 (Strategic thinking): [TEAM AVERAGE AND KEY GAPS]
Key context:
[Any important individual differences, team dynamics, or constraints on learning time or budget]
Please:
1. Identify the 3 most important gaps to close first, given the AI adoption context
2. Suggest specific training approaches for each gap — not just "do a course" but what type of learning activity is most likely to build this capability
3. Draft a 90-day capability development plan that is realistic for a team with limited learning time
4. Suggest how to measure whether capability has improved at the 90-day mark
5. Identify any gaps that are likely to require external support (training provider, coaching) versus those that can be addressed internally
Be specific. Generic learning and development advice isn't useful. This plan should be actionable by a communications manager with a limited budget.
Prompt variations
Variation 1: Individual development conversation
I'm having a development conversation with a team member about their AI capability. Here's their profile:
Role: [TITLE AND BRIEF DESCRIPTION]
Self-assessment summary: [DESCRIBE SCORES AND PATTERNS]
Manager assessment: [DESCRIBE WHERE MANAGER SEES DIFFERENTLY]
Key gaps identified: [LIST THE 2–3 MOST IMPORTANT GAPS]
This person's specific AI adoption goals: [WHAT THEY NEED TO DO WITH AI IN THEIR ROLE]
Please help me:
1. Frame the development gaps constructively — as growth opportunities, not deficits
2. Suggest 3 specific development activities suited to an individual, not a group training programme
3. Draft 3 questions I could ask in the conversation to understand their perspective on these gaps
4. Suggest how to set shared development goals that are motivating and achievable
5. Recommend how to structure a monthly check-in to track progress
Tone should be supportive and development-focused, not evaluative.
Variation 2: Training programme design
I need to design a training programme for a communications team to close AI capability gaps. Here's what I know:
Team size: [NUMBER]
Team seniority mix: [DESCRIBE]
Budget available: [ROUGH ENVELOPE]
Time available for training: [E.G., 1 day, half-days over 4 weeks, self-directed learning]
Key gaps identified: [LIST THE TOP 3–5 GAPS FROM THE ASSESSMENT]
Current AI tool usage: [WHAT THEY USE NOW]
Please design a training programme outline that:
1. Addresses the identified gaps in priority order
2. Mixes different learning methods (workshop, coaching, practice, peer learning)
3. Is achievable within the time and budget available
4. Includes specific exercises and activities, not just topic headings
5. Has clear outcomes — what participants will be able to do at the end of each session
Make the programme practical and grounded in real communications tasks, not theoretical AI education.
Variation 3: Benchmarking across seniority levels
I want to understand whether AI capability gaps differ by seniority level in the communications team.
Summary of assessment findings:
Senior leaders (Director/VP level): [CAPABILITY PROFILE SUMMARY]
Mid-level managers: [CAPABILITY PROFILE SUMMARY]
Practitioners (executives/associates): [CAPABILITY PROFILE SUMMARY]
Key context:
[Any patterns you've noticed — e.g., senior leaders over-confident on strategic AI understanding, practitioners stronger on practical tool use]
Please:
1. Describe what you'd typically expect to see across seniority levels in capability gap profiles (are our findings typical or unusual?)
2. Identify which gaps are most important to address at each level, given their different roles
3. Suggest differentiated development approaches for each level
4. Identify any gaps that create specific risks when they appear at leadership level (e.g., strategic AI overconfidence in a Director)
The goal is a development plan that is appropriately differentiated, not a one-size-fits-all programme.
Human review checklist
- Both perspectives captured: The assessment includes both self-assessment and manager assessment where possible — one perspective is insufficient
- Gaps are prioritised: The development plan addresses the most critical gaps first, not all gaps simultaneously
- Connection to use cases explicit: The development priorities are linked to the specific AI workflows and tools the team is adopting
- Development actions are specific: “Do a course on AI” is not a development action; “Complete the Faur AI in Communications masterclass, focusing on Module 3 on prompt design” is
- Success measures defined: Each development priority has a way of knowing whether it’s been achieved
- Realistic timelines: Development plans that require 5 hours of learning per week from a busy team are aspirational, not real; calibrate to what’s actually possible
- Individual differences noted: If assessing a team, individual patterns are noted — not just the average
- Honest about low scores: Areas scoring 1–2 are flagged as requiring significant development, not smoothed over
- Reassessment scheduled: A date is set to re-run the assessment and measure progress
- Not a performance appraisal: The framing and language throughout is developmental, not evaluative
Example output
Capability Gap Analysis
Team: Comms function, National Energy Regulator
Assessed by: Head of Communications (self/manager combined)
Date: April 2026
Context: Preparing for AI adoption starting with content drafting and monitoring workflows
Summary profile
| Area | Average /4 | Priority |
|---|---|---|
| 1. AI literacy | 2.8 | Medium |
| 2. Prompt design | 1.9 | High |
| 3. Critical evaluation | 2.4 | Medium |
| 4. Workflow integration | 1.7 | High |
| 5. Strategic thinking | 2.6 | Medium |
Key findings
Strongest area: AI literacy — the team has a reasonable conceptual understanding of AI and its limitations, driven partly by media coverage and partly by informal exploration.
Biggest gaps: Prompt design and workflow integration. Most team members have experimented with AI tools casually but have never been trained to write effective prompts or integrate AI systematically into professional workflows. The result is inconsistent output quality and patchy adoption.
Most critical near-term gap: Prompt design. Without structured prompt skills, even adopting good tools will produce weak outputs. This is the foundation everything else builds on.
90-day development plan
Month 1: Prompt design intensive
- All-team half-day workshop: practical prompt design for communications tasks (research synthesis, first-draft writing, report summarisation)
- Individual practice: each team member completes 3 real work tasks using AI with structured prompt templates
- Output: shared prompt library for common team tasks
Month 2: Workflow integration
- Map 3 core workflows for AI integration (using the Comms Workflow Audit template)
- Introduce AI into the monitoring brief workflow as the first live integration
- Manager coaches each team member through one AI-integrated workflow
Month 3: Critical evaluation and quality
- Focused session on AI output quality assessment: what to check, how to edit, what to reject
- Review of month 1–2 AI outputs against quality standards
- Identify 1–2 team members to become “AI champions” — advanced users who can support peers
Related templates
- AI Readiness Assessment — The broader functional assessment; this template expands on its skills dimension in depth
- Comms Workflow Audit — Identifies which workflows the skills need to serve — essential context for prioritising the development plan
- AI Tool Evaluation Framework — Once capability gaps are understood, tool selection can match team readiness
Tips for success
Run it as a conversation, not just a form
The structured scoring is useful but the real insight often comes from the conversation around it: “Where do you feel most confident? Where do you feel out of your depth? Where have you had a bad experience that made you cautious?” Combine the assessment with at least a brief 1:1 conversation.
Self-assessment runs high on knowledge, low on practice
People tend to overrate their conceptual understanding of AI and underrate their practical skill at using it. This shows up as higher scores on Area 1 (literacy) and lower scores on Areas 2 and 4 (prompt design and workflow integration). If you see this pattern, it’s likely accurate.
Map gaps to roles, not just to the abstract
A junior content producer needs different AI capabilities than a communications director. When assessing a team, consider what each role actually needs to do with AI — the priority gaps differ by role even when the scores are similar.
Development plans fail if learning isn’t protected
The most common reason capability development plans don’t deliver is that learning time is consumed by day-to-day work. When designing the plan, identify how learning time will be protected. “Half a day per month” written into diaries in advance is more reliable than “find time when you can.”
Celebrate progress, not perfection
AI capability development is a continuing journey. There is no “done”. Reassessing every six months and recognising how much has changed is more motivating — and more accurate — than measuring against an aspirational endpoint that keeps moving.
Common pitfalls
Mistaking confidence for competence
High self-assessment scores sometimes reflect confidence built on limited experience. Someone who has used ChatGPT to draft a few emails may score themselves a 4 on prompt design when their actual skill level is a 2. Where self-scores seem surprisingly high, probe with a practical test: “Can you show me how you’d prompt the tool for this specific task?”
Addressing all gaps at once
A development plan that tries to close every gap simultaneously closes none. Capability builds in layers. Start with the foundations (AI literacy and prompt design) before building more advanced capabilities. A sequenced plan that finishes each layer before starting the next delivers more than a broad-fronted approach that spreads learning too thin.
Ignoring the cultural context
Capability development assumes people want to develop the relevant capability. If there is cultural resistance to AI adoption in the team — anxiety about job security, scepticism about the technology, distrust of leadership’s intentions — those concerns need addressing before development can land. Training into a resistant culture doesn’t work.
Not revisiting after six months
Capability changes. A team that has been actively using AI tools for six months will look different from the assessment conducted before adoption. Reassess regularly, update the development plan, and make the progress visible. Progress that isn’t acknowledged is progress that loses momentum.
Treating this as a compliance exercise
If the assessment is completed only to satisfy a requirement — to show it was done, not to act on it — the team will know, and it will undermine trust in any subsequent development activity. Only run this assessment if there is genuine commitment to responding to what it reveals.