
Your team is already using AI. The question is whether they're using it with enough judgement for the environment they're working in.

In high-stakes communication, the risks are real, not theoretical: errors of fact, inappropriate tone, over-reliance on output that sounds confident but isn't accurate. In environments where reputation, trust and accountability matter, that's not just embarrassing. It's a problem.

WHY I BUILT THIS WORKSHOP

It all started with a lorikeet.

[Image: rainbow lorikeet]

Not long ago, I found an injured fledgling lorikeet in my backyard during a heatwave. Wildlife services were overwhelmed, so I turned to AI for guidance on how to care for it. The advice was detailed, confident and reassuring. It told me the bird was ready to be released. I nearly followed it.


Something didn't sit right. The bird's wings were short. Its tail was underdeveloped. My instinct said the advice didn't match what I was seeing. I sought a second opinion from a licensed wildlife carer who looked at the photos and said immediately: this bird is too young. It would not have survived release.


The AI hadn't lied. It had pattern-matched to the most common scenario and delivered its answer with complete confidence. It never flagged what it didn't know. It never said "escalate." And if I hadn't questioned it, a living thing would have died.


That experience is now a case study I use in every workshop. Because what happened in my backyard happens in organisations every day, just with higher stakes. A media release that goes out with an error. A briefing that misrepresents a policy position. A constituent response that gets the tone catastrophically wrong.


AI doesn't know what it doesn't know. That's your team's job. And most teams haven't been shown how to do it.

THE WORKSHOP


Responsible and Practical Use of AI for High-Stakes Communication

A practical, hands-on workshop for small groups working in environments where accuracy, tone and reputation are non-negotiable. This is not technology training. It's judgement training, with AI as the lens.


Already delivered for parliamentary offices and professional services teams. Fully tailored to your organisation, your environment and your team's specific risks.

"What I needed in that moment was not reassurance. It was judgement."

Who this is for

Government & Public Sector

Local councils, ministerial offices, electorate teams, public-facing agencies. Environments where every communication carries accountability and tone can never be an afterthought.

Aged Care and Health

Sectors where communication errors carry human cost, not just reputational risk. Where tone of voice, accuracy and empathy all have to work together under pressure.

Corporate Communications 

Marketing, communications and PR professionals who need to move faster without sacrificing the quality and accuracy their stakeholders expect.

Professional Services

Legal, financial, consulting and advisory firms where client trust is built on precision, and where a confident-sounding error is far more dangerous than an obvious one.

WHAT WE COVER

Five practical modules, built around real scenarios from your environment.

Setting foundations and guardrails: What AI is and isn't. Confidence versus accuracy. Human judgement, accountability and verification. Where the line is and why it matters in your context.

Research, fact-checking and verification: Using AI to summarise and research without outsourcing judgement. Identifying assumptions, gaps and errors. Practical verification techniques your team can use immediately.

Prompts, roles and written communication: Structuring effective prompts using context, role, tone and constraints. Building reusable prompt templates for your most common tasks. Maintaining consistency of voice across AI-assisted output.

Visual and public-facing content: Using AI to support visual communication without generating inappropriate or inaccurate imagery. Assessing output for suitability, tone and audience.

Applied practice and real scenarios: Hands-on exercises using your team's real, non-confidential examples. Facilitated discussion of risks, judgement calls and the decisions that actually matter in your environment.

WHAT YOU GET

Participants will learn


01

The RAPID framework

A practical prompting structure that helps teams use AI more effectively while building in the verification and judgement steps that high-stakes work demands.

02

Guardrails they can apply immediately

Clear boundaries on when AI should and shouldn't be used, and a human-in-the-loop decision process that works in the real pressure of day-to-day work.

03

Improved risk literacy

A working understanding of hallucinations, bias, tone risk and privacy exposure, not as abstract concepts but as things they now know how to spot and manage.

04

Confidence, not compliance

Teams leave knowing how to use AI well, not just knowing they shouldn't use it badly. That's the difference between a policy and a capability.

THE DETAIL

Responsible and Practical Use of AI for High-Stakes Communication

The workshop is delivered as a full-day face-to-face session for small groups, typically six to eight participants. Sessions are tailored to your organisation's context, sector and specific risk areas. Pre-session questionnaires ensure the day uses real, relevant examples rather than generic scenarios.


Investment starts at $3,500 plus GST for a full-day session of up to six participants. Larger groups and multi-session programs are available on enquiry. Available in Melbourne and across Australia. Virtual delivery available for distributed teams.

If your team is using AI in high-stakes communication, this conversation is overdue.

Let's talk about what responsible AI use looks like in your environment, and what it would take to build that capability in your team.
