Using AI Safely for Comms Work
A practical guide to what to share, what to keep out, and which tools to use for which tasks. This isn't a lecture — it's the decision-making framework experienced comms professionals use.
"If you wouldn't email it to a journalist, don't paste it into a public AI tool."
Public AI tools (including free tiers of ChatGPT, Claude, Gemini, and others) may use your inputs to improve their models. Even where they don't, you've shared data with a third-party service. Treat them accordingly.
What not to put into AI
This list covers the content types that should never go into a public AI tool — and should only go into an enterprise tool if your organisation has reviewed the terms and your legal or compliance team has signed off.
Embargoed announcements
Unannounced products, funding rounds, partnerships, or executive changes before public disclosure.
Personal data
Named individuals — employees, customers, journalists, stakeholders — including contact details, performance information, or anything that would trigger GDPR obligations.
Pre-announcement financial information
Revenue figures, trading updates, or any information that would be price-sensitive if released early.
Legal documents and advice
Draft contracts, settlement terms, litigation strategy, regulatory correspondence, or legal opinions.
Crisis communications content
Holding statements, incident notes, or any communication related to an active crisis, at least until the situation has been resolved and the facts confirmed.
NDA-protected information
Client details, partnership terms, or proprietary information covered by a non-disclosure agreement.
Medical or health information
Patient data, employee health records, or any individually identifiable health information.
Security incident details
Information about an active security breach, vulnerability, or incident response — even after containment.
Which tool for which task
Not all AI use carries the same risk. Use this framework to decide which type of tool is appropriate for each task.
Public information, general research, non-sensitive first drafts
Research on public topics, drafting from publicly available information, editing generic copy, brainstorming messaging frameworks, generating placeholder content.
Tool: Any AI tool (ChatGPT, Claude, Gemini, Copilot, etc.) — free or paid tier.
Internal documents, draft strategies, non-sensitive stakeholder communications
Internal briefings, communications strategies, messaging frameworks containing non-public company direction, stakeholder mapping containing role-level (not personal) information.
Tool: Enterprise version of your preferred AI tool (Claude for Enterprise, ChatGPT Team/Enterprise, Copilot for Microsoft 365). Confirm your organisation has a Data Processing Agreement (DPA) in place and that inputs are not used for model training.
Crisis communications, M&A, regulatory, board-level, legally sensitive
Security incident response, crisis holding statements, M&A communications, regulatory submissions, board communications, personal data, legal content.
Tool: Enterprise tool with a DPA in place, reviewed by legal and compliance before use. Consider whether AI is appropriate at all — some high-risk content is better drafted without AI assistance and reviewed by a human expert from the start.
Quick triage guide
Match your situation to the recommended approach before reaching for an AI tool.
| Situation | Recommended approach |
|---|---|
| Generic drafts, public information, no sensitive content | Public AI tool acceptable |
| Internal comms, named colleagues, non-public announcements | Enterprise tool with data processing agreement |
| Crisis content, legal/regulatory matters, financial information | Enterprise tool + legal/comms lead sign-off before any AI use |
| Embargoed announcements, M&A, security incidents | Do not use AI tools — draft manually and review with legal |
| Unsure | Treat as the row above — escalate to your most senior reviewer |
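The triage table above can be sketched as a small helper. The category names and flag keys below are illustrative assumptions for this sketch, not an official taxonomy; the one rule taken directly from the table is that "unsure" is treated as the strictest row.

```python
# Illustrative triage helper mirroring the table above.
# Category names and flag keys are assumptions, not an official taxonomy.

RISK_LEVELS = [
    "public_tool_ok",           # generic drafts, public information
    "enterprise_tool_dpa",      # internal comms, named colleagues
    "enterprise_plus_signoff",  # crisis, legal/regulatory, financial
    "no_ai_draft_manually",     # embargoed announcements, M&A, security incidents
]

def triage(situation_flags):
    """Return the recommended approach for a set of risk flags.

    Takes the highest risk level that applies; per the table,
    "unsure" is treated as the strictest row (no AI, escalate).
    """
    level = 0
    if situation_flags.get("internal_or_named_individuals"):
        level = max(level, 1)
    if situation_flags.get("crisis_legal_or_financial"):
        level = max(level, 2)
    if situation_flags.get("embargoed_ma_or_security"):
        level = max(level, 3)
    if situation_flags.get("unsure"):
        level = len(RISK_LEVELS) - 1
    return RISK_LEVELS[level]
```

For example, `triage({"crisis_legal_or_financial": True})` returns `"enterprise_plus_signoff"`; flagging several categories at once yields the strictest applicable row.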
Quick check before you paste
Run through this before putting anything into an AI tool you're not certain is enterprise-secured.
Copy this checklist
[ ] Is any of this information not already publicly available?
[ ] Does this contain any named individuals?
[ ] Is this covered by an NDA or confidentiality obligation?
[ ] Does this relate to an unannounced product, deal, or event?
[ ] Would this cause harm if it appeared in a news article tomorrow?
[ ] Is this content outside the scope of your organisation's AI use policy (or do you have no policy at all)?
If you answered YES to any of these, use an enterprise tool — or don't use AI for this task.
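The checklist works as a simple gate: every question is normalised so that a YES answer signals risk, and any single YES rules out a public tool. A minimal sketch, with illustrative question keys:

```python
# Minimal sketch of the "quick check before you paste" gate.
# Question keys are illustrative; each is phrased so that a YES
# (True) answer signals risk.

CHECKLIST = [
    "not_publicly_available",
    "contains_named_individuals",
    "covered_by_nda",
    "relates_to_unannounced_event",
    "harmful_if_in_news_tomorrow",
    "outside_or_without_ai_policy",
]

def safe_for_public_tool(answers):
    """Return True only if every checklist answer is an explicit NO.

    Unanswered questions default to True (risky): if you haven't
    checked, assume the content is not safe to paste.
    """
    return not any(answers.get(q, True) for q in CHECKLIST)
```

The fail-closed default is deliberate: an empty or partial set of answers is treated as unsafe, matching the guide's advice to escalate when unsure.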
If your organisation doesn't have an AI policy yet
Most organisations are behind on this. An AI use policy for communications teams doesn't need to be a 40-page governance document — it needs to answer four questions clearly:
- Which AI tools are approved for use, and which are not?
- What categories of content can and cannot go into AI tools?
- Who owns responsibility for reviewing AI-generated output before it goes live?
- What is the escalation path if someone is unsure?
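The four questions above can even be captured as a short, structured policy rather than a long document. A minimal sketch, in which every tool name, category, and role is a placeholder to replace with your own:

```python
# A minimal AI use policy expressed as data, answering the four
# questions above. All tool names, categories, and roles below are
# placeholders, not recommendations.

AI_USE_POLICY = {
    "approved_tools": ["Example Enterprise Assistant"],      # placeholder
    "prohibited_tools": ["any free or public-tier AI tool"],
    "content_rules": {
        "public_information": "allowed",
        "internal_non_sensitive": "enterprise tool only",
        "personal_data": "prohibited",
        "embargoed_or_crisis": "prohibited",
    },
    "output_review_owner": "comms lead",                     # placeholder role
    "escalation_path": ["line manager", "legal/compliance"], # placeholder roles
}

def answers_four_questions(policy):
    """Sanity check: does the policy answer all four questions?"""
    required = [
        "approved_tools", "prohibited_tools", "content_rules",
        "output_review_owner", "escalation_path",
    ]
    return all(policy.get(k) for k in required)
```

A one-page policy in this shape is easy to review, version, and circulate; the sanity check simply confirms that none of the four answers has been left blank.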
If you need hands-on help building that framework — whether as a policy document, a training session for your team, or a full AI governance review — Faur works with communications teams on exactly this.
You can also request a template — if there's a specific AI governance tool you're looking for that isn't on the site yet, we'll consider building it.