Category: Governance & Approvals | Level: Advanced | Time: 30 minutes for audit; 2–3 hours for full documentation

Brand Voice Audit Checklist

A structured audit and documentation process for assessing the consistency of your organisation's communications voice – and building guidelines to maintain it.

Version 1.0 Updated 25 March 2026

What it is

A two-part tool: first, a systematic audit of your organisation’s current communications voice across channels; second, a documentation framework for capturing and codifying voice guidelines that teams can actually use.

Most voice and tone guides are written once, filed in a shared drive, and never read again. This template is designed to produce a living document – short enough to be used, specific enough to be useful, and structured so it can be updated as the organisation evolves.

When to use it

Use when:

  • Your communications feel inconsistent across different channels or contributors
  • You are onboarding new writers, agencies, or communications team members
  • You are going through a rebrand, repositioning, or leadership change
  • Your AI-assisted content output is drifting from your human voice
  • You have a voice guide already but it is not being used – and you need to understand why
  • A client or stakeholder has raised concerns about tone inconsistency

Don’t use when:

  • You are in the middle of a crisis and tone can wait – address facts first
  • Your organisation has no existing communications output to audit (build a voice from scratch instead – this audit assumes existing materials)
  • Voice inconsistency is actually a structural problem – different teams publishing without coordination – which requires governance, not just a style guide

Inputs needed

Before starting, gather:

  1. Communications sample set: 20–30 pieces of recent content across at least four channels (email, LinkedIn, press releases, website copy, internal comms – whatever your main channels are)
  2. Any existing voice documentation: Even informal notes, brand guidelines, or style guides
  3. Stakeholder input: Ideally, ask 2–3 internal stakeholders what they think the voice currently is and what it should be
  4. Comparison materials: 3–5 pieces of content you consider exemplary – from your own archive or external reference points
  5. Time: The audit itself takes 30 minutes; documentation takes 2–3 hours if done properly

The template

Part 1: Voice Audit

Section A: Sample Collection and Scoring

Collect your 20–30 sample pieces and score each against the five dimensions below. Use a simple 1–5 scale (1 = strongly disagree, 5 = strongly agree).


Dimension 1: Clarity

The communication is easy to understand on first read. The main point is clear without re-reading.

  • Sample 1: __/5
  • Sample 2: __/5
  • Average across samples: __/5

Dimension 2: Consistency

The tone and style feel consistent with other materials from the same organisation. A reader could identify this as coming from the same source.

  • Sample 1: __/5
  • Sample 2: __/5
  • Average across samples: __/5

Dimension 3: Specificity

The communication uses specific examples, numbers, or concrete details rather than generalised claims.

  • Sample 1: __/5
  • Sample 2: __/5
  • Average across samples: __/5

Dimension 4: Human warmth

The communication feels like it was written by a person, not generated by committee. It has some warmth, character, or personality.

  • Sample 1: __/5
  • Sample 2: __/5
  • Average across samples: __/5

Dimension 5: Audience appropriateness

The tone and language are right for the intended audience. It doesn’t talk down to its reader or over their head.

  • Sample 1: __/5
  • Sample 2: __/5
  • Average across samples: __/5

Section B: Pattern Analysis

After scoring, look across your sample set and answer the following:

Where are you consistently strong? (Score average 4+ across most samples):

[DIMENSION]: [NOTES ON WHY]

Where are you consistently weak? (Score average below 3 across most samples):

[DIMENSION]: [NOTES ON WHY]

Where is there most variation? (Wide range of scores across samples):

[DIMENSION]: [THIS INDICATES INCONSISTENCY – INVESTIGATE FURTHER]
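For larger sample sets, the scoring and pattern analysis above can be sketched in a few lines of Python. This is a minimal illustration only – the dimension names and scores are placeholders for your own audit data, and the thresholds (average 4+, average below 3, spread above 1.2) mirror the guidance in this section:

```python
from statistics import mean, pstdev

# Placeholder scores: one list of 1-5 ratings per dimension, one entry per sample
scores = {
    "clarity":         [4, 5, 3, 4, 5],
    "consistency":     [2, 5, 3, 1, 4],
    "specificity":     [3, 3, 4, 3, 3],
    "warmth":          [4, 4, 5, 4, 4],
    "appropriateness": [3, 2, 4, 3, 3],
}

for dimension, values in scores.items():
    avg, spread = mean(values), pstdev(values)
    notes = []
    if avg >= 4:
        notes.append("consistent strength")
    elif avg < 3:
        notes.append("consistent weakness")
    if spread >= 1.2:  # wide spread signals inconsistency worth investigating
        notes.append("high variation - investigate further")
    print(f"{dimension:16} avg {avg:.1f}  spread {spread:.2f}  {'; '.join(notes)}")
```

With the placeholder data, consistency averages exactly 3.0 but shows the widest spread – the kind of pattern the variation question above is designed to surface.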


Section C: Language Pattern Inventory

From your sample set, catalogue the following:

Words and phrases used frequently (positive – these reflect your voice):

Words and phrases used frequently (negative – these are habits to break):

Sentence length average: Are most sentences under 20 words, 20–30 words, or over 30 words? _____

Active vs passive voice ratio: Estimate the proportion of sentences using active voice: _____%

Jargon inventory: List technical or industry terms used without explanation:

Formatting habits: What punctuation, capitalisation, or structural patterns appear consistently?
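Parts of this inventory can be estimated automatically rather than counted by hand. The sketch below is a rough heuristic, not a grammar checker – the sample text is hypothetical, and the passive-voice pattern (a "to be" verb followed by a word ending in "ed") will miss irregular participles and flag some false positives:

```python
import re
from collections import Counter

# Hypothetical sample text - replace with content from your own sample set
text = (
    "We are pleased to announce our world-class platform. "
    "It was designed to deliver industry-leading results. "
    "Specifically, onboarding time fell from ten days to three."
)

sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
lengths = [len(s.split()) for s in sentences]
print("average sentence length:", sum(lengths) / len(lengths))

# Rough passive-voice heuristic: "to be" verb followed by an -ed word
passive = sum(bool(re.search(r"\b(is|are|was|were|been|being)\s+\w+ed\b", s))
              for s in sentences)
print(f"active voice (rough estimate): {100 * (1 - passive / len(sentences)):.0f}%")

# Frequent two-word phrases - candidates for the use/avoid lists
words = re.findall(r"[a-z'-]+", text.lower())
bigrams = Counter(zip(words, words[1:]))
print("most common phrases:", bigrams.most_common(3))
```

Treat the output as a starting estimate to confirm by eye, not as the final inventory.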


Section D: Channel Comparison

For organisations that communicate across multiple channels, note how voice varies:

Channel        | Clarity (avg) | Consistency (avg) | Specificity (avg) | Warmth (avg) | Appropriate (avg)
Website        |               |                   |                   |              |
Email          |               |                   |                   |              |
LinkedIn       |               |                   |                   |              |
Press releases |               |                   |                   |              |
Internal comms |               |                   |                   |              |
Other:         |               |                   |                   |              |

Key insight: Where is the gap between your highest and lowest scoring channel? That gap is where voice guidance will have the most impact.
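Finding that gap is simple arithmetic once the table is filled in. A minimal sketch, using made-up channel averages for a single dimension:

```python
# Placeholder channel averages for one dimension (e.g. warmth) - replace with yours
channel_avg = {
    "website": 4.1,
    "email": 3.8,
    "linkedin": 4.4,
    "press releases": 2.6,
    "internal comms": 3.2,
}

lo = min(channel_avg, key=channel_avg.get)
hi = max(channel_avg, key=channel_avg.get)
gap = channel_avg[hi] - channel_avg[lo]
print(f"largest gap: {hi} ({channel_avg[hi]}) vs {lo} ({channel_avg[lo]}) = {gap:.1f}")
```

In this hypothetical set, press releases lag the other channels – so that is where voice guidance would pay off first.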


Part 2: Voice Documentation

Use the audit findings to complete this voice guide. The completed document should be no more than two pages – if it’s longer, it won’t be used.


Our voice in three words

Choose three adjectives that describe how your communications should sound – not aspirationally, but in practice. These should be words you could test content against.

Examples: Direct. Warm. Specific. | Confident. Human. Straightforward. | Expert. Accessible. Grounded.

Our three words: __________ | __________ | __________


What we sound like

Write 3–5 sentences describing your voice in practice. Use concrete terms, not abstract ideals. Reference your scoring findings.

Example: We write in short, direct sentences. We lead with the point and support it with evidence – not the other way round. We use technical language where the audience expects it, but always explain what it means in practice. We acknowledge uncertainty rather than overstating confidence. We are warm without being informal.


What we never sound like

The inverse is often more useful than the ideal. List 4–6 specific things your voice is not.

Examples:

  • We don’t open with the organisation’s name or “we are pleased to announce”
  • We don’t use superlatives without evidence (“world-class”, “industry-leading”)
  • We don’t use passive voice to obscure accountability (“mistakes were made”)
  • We don’t use jargon without explanation
  • We don’t use AI-generated content markers: contrast patterns, excessive bullet points, corporate warmth

Our never-list:

  • We don’t…
  • We don’t…
  • We don’t…
  • We don’t…

Words we use / words we avoid

Use       | Avoid     | Why
[EXAMPLE] | [EXAMPLE] | [REASON]

UK / US English note: Confirm your standard. UK English uses “organisation” not “organization”, “analyse” not “analyze”, “realise” not “realize”. Be consistent throughout.


Channel adaptations

Not all channels need the same tone. Document where your voice shifts legitimately – and where it should not.

Channel        | How voice adapts | What stays constant
Website        |                  |
Email          |                  |
LinkedIn       |                  |
Press releases |                  |
Internal comms |                  |

The “Does this sound like us?” test

A practical checklist for anyone reviewing content before publication. Keep this to 6–8 items maximum or it won’t be used.

  • Is the main point clear in the first two sentences?
  • Is there a specific example or proof point?
  • Is it free of superlatives without evidence?
  • Is it in active voice throughout?
  • Does it use our standard UK English?
  • Is it distinct enough that a reader would not mistake it for a competitor’s content?
  • [ADD YOUR SPECIFIC CHECK]
  • [ADD YOUR SPECIFIC CHECK]

AI content note

If your organisation uses AI tools to assist with content creation, add specific guidance here on how to check AI-generated drafts against voice standards.

Suggested language: AI-assisted drafts should be reviewed against the “Does this sound like us?” checklist before publication. Specific patterns to check and remove include: contrast constructions (“We don’t just X, we Y”), excessive bullet formatting, vague warmth without specificity, and any phrase that would not sound natural if spoken aloud by a real person.
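The patterns named above can also be flagged mechanically before human review. The sketch below is illustrative only – the regular expressions are placeholder heuristics covering three of the markers mentioned, and you would extend the dictionary with phrases from your own never-list:

```python
import re

# Illustrative patterns only - extend with phrases from your own never-list
AI_MARKERS = {
    "contrast construction": r"\bnot just \w+[^.]*\bbut\b|\bdon't just\b",
    "vague superlative": r"\b(world-class|industry-leading|cutting-edge)\b",
    "corporate warmth": r"\b(delighted|thrilled|pleased) to (announce|share)\b",
}

def flag_ai_patterns(draft: str) -> list[str]:
    """Return the names of markers whose pattern appears in the draft."""
    return [name for name, pattern in AI_MARKERS.items()
            if re.search(pattern, draft, re.IGNORECASE)]

draft = "We are thrilled to announce our cutting-edge platform."
print(flag_ai_patterns(draft))
```

A flagged phrase is a prompt for a human editor to look closer, not an automatic rejection – the final judgement stays with the “Does this sound like us?” checklist.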


AI prompt

Base prompt

You are helping conduct a brand voice audit.

Here is a sample of our recent communications:
[PASTE 3–5 SAMPLES OF DIFFERENT LENGTHS AND CHANNELS]

Score each sample on these five dimensions (1–5):
1. Clarity – main point clear on first read
2. Consistency – feels like same organisation across samples
3. Specificity – concrete details rather than generic claims
4. Human warmth – feels written by a person, not a committee
5. Audience appropriateness – tone and language right for the reader

After scoring, identify:
- The three strongest patterns across all samples
- The three weakest patterns
- Five specific words or phrases that appear frequently (positive or negative)
- One concrete recommendation for improving voice consistency

Do not make general observations. Be specific about what you found in the actual samples provided.

Prompt variations

Variation 1: Rapid channel comparison

Compare these two samples from different channels of the same organisation: [SAMPLE A – CHANNEL] and [SAMPLE B – CHANNEL]. How consistent is the voice? What specific differences exist? Which sounds more like a real person? Give three concrete observations and one recommendation.

Variation 2: Voice guide first draft

Based on these communications samples [PASTE SAMPLES] and this brief description of our organisation [DESCRIPTION], draft a two-page voice guide. Include: voice in three words, what we sound like, what we don't sound like, a words-to-use and words-to-avoid list, and a practical "does this sound like us?" checklist. Be specific, not aspirational. The guide should be usable by a new writer on day one.

Variation 3: AI content voice check

Review this content for AI-generated language patterns: [PASTE CONTENT]. Flag any phrases that signal AI generation: contrast patterns, excessive bullet points, corporate warmth without specificity, vague superlatives. Then rewrite the three weakest sentences to sound more human and specific. Explain each change.

Variation 4: Before and after rewrite

Our voice guide says we should be [THREE WORDS]. Here is a piece of content that doesn't match: [PASTE]. Rewrite it to match our voice. Show the before and after side by side. Add three notes explaining the specific changes you made and why.

Human review checklist

  • Sample size adequate: Is the audit based on at least 20 samples (ideally the full 20–30)? Fewer will skew results
  • Channels represented: Does the sample set include all major channels, not just the ones you’re proud of?
  • Stakeholder input gathered: Have 2–3 internal stakeholders confirmed the audit findings match their experience?
  • Voice guide length appropriate: Is the completed guide two pages or fewer? If longer, it will not be used
  • “Never” list specific: Are the “never” items concrete enough to test against, or are they vague ideals?
  • Words-to-use list genuinely useful: Could a new writer consult this list on day one and actually use it?
  • AI content section included: If AI tools are used in content production, is there specific guidance on checking AI output?
  • UK English confirmed: Is the English standard explicitly stated and applied throughout?
  • Owner assigned: Who is responsible for maintaining and updating this guide?
  • Next review date set: Voice guides go stale – is there a date in the diary to revisit?

Example output

Example voice profile extract

Our three words: Direct. Specific. Grounded.

What we sound like: We write in plain, active sentences. We lead with the point – never with the organisation name or a preamble about how pleased we are. We use concrete examples and numbers where we have them. We acknowledge what we don’t know rather than overstating certainty. We write as if speaking to a smart, time-pressed professional who doesn’t need flattering.

What we never sound like:

  • We don’t open with “We are delighted/thrilled/pleased to announce”
  • We don’t use “world-class”, “industry-leading”, or “cutting-edge” without evidence
  • We don’t hide behind passive voice: “it was decided” becomes “we decided” or “[NAME] decided”
  • We don’t use bullet points for everything – prose reads as more considered

Words we use: “because”, “specifically”, “evidence shows”, “in practice”, “here’s what that means”

Words we avoid: “leverage” (as a verb), “synergy”, “journey”, “exciting”, “space” (meaning sector), “ecosystem” (unless literal)



Need this implemented in your organisation?

Faur helps communications teams build frameworks, train teams, and embed consistent practices across channels.
