You can tell a presentation has landed before the applause — by the quiet, thoughtful faces, the scribbled notes, the questions that start to sound practical rather than polite. Presentation effectiveness isn’t a mystical quality reserved for TED-stage magicians. It’s measurable, improvable and, crucially, predictable. Over many years working with boards in Melbourne, workshop groups in Brisbane and nervous sales teams in Perth, I’ve learned to treat presentations like products: they need defined objectives, user testing, performance metrics and iteration. And yes, even a good-looking slide deck won’t save you if you haven’t nailed the fundamentals.

Why this matters now

Two quick facts to frame why this should be a priority. First: Microsoft’s 2015 attention study — still widely cited — suggested the modern human attention span averages about eight seconds. It’s an imperfect figure and a little dramatic, but the message stands: attention is scarce. Second: Ebbinghaus’s classic finding on memory — the forgetting curve — suggests people can lose a large chunk of new information within hours unless it’s reinforced. The practical implication: if your audience forgets what you wanted them to do, your work was cosmetic, not catalytic.

Three pillars of effectiveness: audience, message, and the speaker

If I had to reduce this to a short brief for a client, I’d hand them three words: audience, message, speaker. Everything else is a tool.

- Audience: Know what they already believe, what they’re prepared to learn and what action you want from them. Not “all staff” — a narrow segment: frontline retail managers in Adelaide? Mid-level project leads in Canberra? Different audiences demand different strategies.
- Message: Simplicity and coherence win. A presentation should have one clear throughline. If you can’t summarise it in a sentence that matters to the audience, you’re not aligned.
- Speaker: Delivery is where intent meets impact. That’s voice, body language, pacing, and the real-time decision to drop or adapt content based on audience cues.

This is obvious. But most corporate presentations become a mash-up of status updates, aspirational statements and crowded slides. Which is why measurement matters: it moves opinion to evidence.

What does “effective” actually mean?

Too many organisations treat effectiveness as “no complaints and nobody fell asleep”. That’s a low bar. I define effectiveness across three outcomes:

1. Comprehension — Did the audience understand the main idea?
2. Retention — Do they still remember it tomorrow, next week, next month?
3. Action — Did they do something different because of it?

You can debate the weighting. I’ll admit: I bias towards action. Some readers will disagree — they’ll say knowledge transfer is enough. Fair. But in commercial settings, knowledge without behaviour change rarely moves the needle on KPIs.

Quantitative tools that give clarity

Numbers matter. They make performance less subjective.

- Short post-session surveys: ask three sharp questions — clarity, relevance, and likelihood to act. Use a 1–7 scale rather than 1–5; finer granularity helps spot movement over time.
- Knowledge checks: two or three multiple-choice questions tied to the session’s objectives, administered immediately and again after a week.
- Behavioural metrics: registrations for follow-up actions, resource downloads, uptake of a new process, conversion rates after a sales presentation.

If you’re running frequent briefings, trend the data.
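As a minimal sketch of what “trending the data” can look like in practice — the session names, scores and field names below are illustrative assumptions, not a prescribed tool — a few lines of Python are enough to track survey averages and one-week retention across a briefing series:

```python
# Illustrative sketch: trending post-session data across briefings.
# All names and figures here are made up for demonstration.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SessionResult:
    name: str
    survey_scores: list[int]   # 1-7 ratings: clarity, relevance, likelihood to act
    quiz_immediate: float      # proportion correct straight after the session
    quiz_week_later: float     # proportion correct on the one-week re-check

def summarise(results: list[SessionResult]) -> None:
    """Print a simple trend line for each briefing in the series."""
    for r in results:
        retention = r.quiz_week_later / r.quiz_immediate if r.quiz_immediate else 0.0
        print(f"{r.name}: survey avg {mean(r.survey_scores):.1f}/7, "
              f"quiz {r.quiz_immediate:.0%} -> {r.quiz_week_later:.0%} "
              f"(retention {retention:.0%})")

summarise([
    SessionResult("March briefing", [5, 6, 4], 0.80, 0.55),
    SessionResult("April briefing", [6, 6, 5], 0.85, 0.70),
])
```

Even a rough script like this turns “I think the sessions are improving” into a series you can inspect month on month.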
We recently ran a monthly board update series across two divisions — simple NPS-style engagement scores showed a steady 12% lift after we replaced text-heavy decks with visual summaries and problem/solution case studies.

Qualitative signals that tell the real story

Numbers don’t tell everything. Watch hands, eyes, and posture. Listen to the tone of questions. Do attendees ask clarifying, tactical questions? Great. Do they ask for definitions or basic information? Not so great; you’ve missed your baseline.

Open-ended feedback is gold. Encourage attendees to answer one question: “What will you do differently after this session?” The responses force people to translate insight into action. Also collect observational notes: was the room full, were people on laptops, did the group disengage at a particular slide? Patterns matter.

A balanced approach: triangulating data

The most robust assessment blends both worlds. Triangulate: compare survey scores with knowledge checks and behavioural indicators. Add an observer — a neutral colleague or a learning partner — to record non-verbal engagement. If all three point in the same direction, you can be confident.

A technique I favour: the “three windows” model.

- Window 1: immediate reaction (survey, observations)
- Window 2: short-term learning (quiz after one week)
- Window 3: medium-term behaviour (measure uptake at one month)

If you see a sharp drop from window 1 to window 2, your presentation was engaging but shallow. If window 2 is solid but window 3 collapses, you failed to design for transfer and accountability. (A short scripted version of this diagnosis appears after the follow-up notes below.)

Design choices that actually influence outcomes

Some design choices matter more than others.

- Start with the action. What do you want people to do? Build the narrative backwards from there.
- Use fewer slides and larger visuals. People remember images more easily than bullet points. Yes, I know some trainers will rail against “less content”, but content is not the same as clarity.
- Tell a story tied to context. Case studies from local businesses — a retailer in Sydney or a hospital team in Melbourne — land better than generic examples.
- Build accountability into the session. Ask attendees to commit to one small, specific action and capture it. Follow up.

Controversial opinion #1: long webinars still work if done properly

Microlearning is fashionable. Bite-sized is powerful. But stacked microlearning without coherence? Useless. I believe — and some L&D purists won’t like this — that a well-structured, longer webinar that incorporates practice, breakouts and accountability beats a dozen unrelated 10-minute modules. Not always, but often.

Controversial opinion #2: PowerPoint isn’t the enemy — poor choices are

I’ll say it: PowerPoint is fine. It’s the habit of loading slides with text that kills presentations. If you use PowerPoint as visual support — images, diagrams, highlighted quotes — it remains incredibly effective. That said, you must test.

The role of follow-up

Too many presenters treat the session as the finish line. It isn’t. Follow-up is the engine of retention and behaviour change. Quick wins:

- Send a one-page summary and a checklist within 24 hours.
- Circulate the committed actions and their owners.
- Offer a 10-minute office-hours drop-in for questions.
- Schedule a micro-refresher or a checkpoint two weeks later.

These are simple but decisive. They convert the ephemeral motivation from a presentation into sustained practice.
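As promised above, here is a minimal sketch of the three-windows diagnosis. The 0–1 scores and the 0.25 drop threshold are illustrative assumptions, not calibrated benchmarks:

```python
# Hedged sketch of the "three windows" diagnosis described earlier.
# Scores are normalised to 0-1; the threshold is an illustrative assumption.

def diagnose(window1: float, window2: float, window3: float,
             drop_threshold: float = 0.25) -> str:
    """window1: immediate reaction, window2: one-week learning,
    window3: one-month behaviour uptake."""
    if window1 - window2 > drop_threshold:
        return "Engaging but shallow: strong reaction, weak learning."
    if window2 - window3 > drop_threshold:
        return "Transfer failure: learning held, but behaviour didn't follow."
    return "Holding up across all three windows."

print(diagnose(window1=0.9, window2=0.5, window3=0.4))    # engaging but shallow
print(diagnose(window1=0.8, window2=0.75, window3=0.3))   # transfer failure
```

The point is not the script; it is that the diagnosis becomes mechanical once all three windows are measured on a comparable scale.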
Common traps and how to avoid them

- Trap: Presenting to satisfy internal stakeholders rather than audience needs. Fix: Ask a stakeholder and an end-user — then prioritise the end-user.
- Trap: Measuring only satisfaction. Fix: Add knowledge and behaviour metrics.
- Trap: Over-reliance on slides. Fix: Use interactivity — polls, breakout tasks, worked examples.
- Trap: One-off effort. Fix: Plan a mini-curriculum. Presentations often sit inside longer learning journeys.

Practical checklist to assess your next presentation

Before you present:

- Is there one clear objective? Can it be stated in one sentence?
- Who are the five people in the room you most want to influence?
- What will they do differently after this?

During the presentation:

- Are you getting two-way signals — questions, nods, note-taking?
- Is your pace varied? Are you pausing intentionally?
- Are you adjusting based on audience cues?

After the presentation:

- Send a short survey (3–5 questions).
- Run a knowledge check after one week.
- Track one behaviour metric for a month.
- Review all feedback and look for patterns.

A quick real-world example

We worked with a mid-sized manufacturing firm in Geelong. They wanted their production supervisors to spot and solve small process faults before they became costly. The initial workshop was a slide-driven lecture; engagement scores were mediocre and supervisors didn’t change practice. We restructured it: fewer slides, hands-on problem-solving scenarios based on the firm’s own plant, immediate commitments captured on sticky notes, and a 14-day follow-up checklist emailed automatically. The behavioural metric — the number of faults flagged by supervisors — doubled in six weeks. The presentation changed from a tick-box compliance exercise into a practical tool.

The cost of not measuring properly

If you aren’t measuring, you’re guessing. Guessing means continued wasted time and budget. In many Australian companies, learning spend is under pressure. We need to show return, not just feel-good numbers. That means designing evaluations that link to business results — fewer defects, faster sales cycles, reduced time to competency.

Final thought — imperfect but necessary

Assessment doesn’t have to be perfect to be useful. Start small. One clear objective. One quick survey. One follow-up. Learn. Improve. Repeat.

We often remind clients that a presentation is not a performance — it’s a transaction: you give information, they give attention, and ideally the organisation gains value. Get that transaction right consistently and the rest — the slides, the speaking notes, the rehearsals — falls into place.

You don’t need to be theatrical to be effective. You need to be thoughtful, measured and disciplined about assessment. And if you refuse to measure, don’t be surprised when nothing changes.

Sources & Notes

- Microsoft Canada. (2015). Attention Spans: Consumer Insights.
- Ebbinghaus, H. (1885). On Memory: A Contribution to Experimental Psychology. (Classic research on the forgetting curve.)
- Australian Bureau of Statistics. (2020). Survey of Workplace Training. Report on training provision and uptake in Australian businesses.