One Message to Rule Them All: Atomic WebSocket Broadcasts and Mode-Aware AI

Paul Pounder
April 8, 2026
7 min read

We had a problem. During live Premier League matches, our React 19 frontend was silently dropping WebSocket messages. The score would update but the pressure chart wouldn't. Or the match clock would freeze while goals kept coming through. The root cause? We were sending three separate messages per polling cycle — STATE_UPDATE, NEW_EVENT, SCORE_UPDATE — and React's automatic batching was swallowing intermediate frames before they could be processed.

This is the story of how we solved that, and how the fix led us down a path to something much bigger: a broadcasting platform that knows whether it's building anticipation before kick-off, narrating the action, or wrapping up the drama after the final whistle.

What We Built

This release (Phases 1.5 and 2) delivered four major capabilities:

  1. COMPOSITE_UPDATE — A single atomic WebSocket message per fixture per polling cycle, containing state, scores, incremental pressure delta, server-computed elapsed time, match events, and status changes. One message, one React state update, zero dropped frames.
  2. Server-side elapsed computation — Instead of the frontend guessing match time from kickoff timestamps (which breaks badly for stoppage time, half-time, and extra time), the backend now uses SportMonks periods data to compute authoritative elapsed time. The output is a clean object: { minute: 45, extra_time: 3, display: "45+3'" }. A 60-second client-side safety net only fires when the WebSocket goes stale for more than 90 seconds.
  3. Mode-aware AI showrunner — The showrunner_director Lambda now detects whether the current schedule item is PRE_MATCH, LIVE, or POST_MATCH and selects a dedicated prompt template. Pre-match scripts focus on predictions and form guides. Post-match scripts cover reactions and key moments. Live scripts are real-time commentary. Each mode has its own DynamoDB-stored prompt, editable via the Prompt Studio admin UI.
  4. Automatic schedule wrapping — The daily_schedule_analyzer now auto-inserts 30-minute PRE_MATCH and POST_MATCH blocks around every LIVE fixture. When two matches are close together, the gap is split evenly between the post-match of the first and the pre-match of the second. This fills the dead zones that previously left the platform silent.
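To make the first capability concrete, here is a rough sketch of what a COMPOSITE_UPDATE payload might contain. The field names are illustrative, not the production schema — the point is that one message carries everything the frontend needs for one render.

```python
# Hypothetical shape of a COMPOSITE_UPDATE message. Field names are
# illustrative, not the production schema.
def build_composite_update(fixture_id, state, scores, pressure_delta,
                           elapsed, events, status_changed):
    """Bundle everything the frontend needs into one atomic message."""
    return {
        "type": "COMPOSITE_UPDATE",
        "fixture_id": fixture_id,
        "state": state,                    # e.g. "INPLAY_2ND_HALF"
        "scores": scores,                  # e.g. {"home": 2, "away": 1}
        "pressure_delta": pressure_delta,  # incremental ticks, not the full series
        "elapsed": elapsed,                # {"minute": 45, "extra_time": 3, "display": "45+3'"}
        "events": events,                  # goals, cards, VAR decisions, penalties
        "status_changed": status_changed,
    }

msg = build_composite_update(
    12345, "INPLAY_2ND_HALF", {"home": 2, "away": 1},
    [{"minute": 67, "home": 0.6}],
    {"minute": 67, "extra_time": 0, "display": "67'"},
    [{"type": "GOAL", "minute": 66}], False,
)
```

Because the poller sends exactly one of these per fixture per cycle, the frontend can apply it as a single state update and batching is no longer a hazard.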

Key Decisions

Why one message instead of three?

The original design made intuitive sense: separate messages for separate concerns. A state change is different from a new event is different from a score update. But React 19's automatic batching meant that if three messages arrived in quick succession (which they always did — they were sent in the same Lambda invocation), only the last one's state update would survive.

We considered debouncing on the frontend, or using flushSync to force synchronous updates, but both felt like fighting the framework. The cleaner answer was to make the backend speak in complete thoughts: one fixture update, one message, everything the frontend needs to render the current state.

Why compute elapsed on the server?

We tried three approaches to showing match time:

  1. Client-side from kickoff timestamp — Breaks for stoppage time ("45+3'" is unknowable without period data), breaks during half-time (should show "HT" not "47'"), and drifts when the user's device clock is wrong.
  2. From pressure data ticks — Unreliable during extra time (SportMonks returns cumulative real-time minutes, not match minutes) and absent during breaks.
  3. Server-side from periods include — SportMonks provides period start/end timestamps. Combined with the match state (1ST_HALF, INPLAY_2ND_HALF, INPLAY_ET, etc.), we can compute exact match minutes with stoppage time. This is what we shipped.

The compute_elapsed() function handles 13+ match states, including the tricky "BREAK" state (which is ambiguous: it could mean half-time or the break before extra time, so we check whether the second half has ended). If period timestamps are missing, it falls back to the maximum minute seen in the pressure data.
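As a minimal sketch of the idea (not the shipped function, which covers 13+ states, the BREAK disambiguation, and the pressure fallback), assume each period record carries a Unix "started" timestamp:

```python
import time

def compute_elapsed(state, periods, now=None):
    """Simplified server-side elapsed time. `periods` is assumed to be a
    list of dicts with "started" Unix timestamps; the real function handles
    many more states, the ambiguous BREAK, and a pressure-data fallback."""
    now = now or time.time()
    if state == "HT":
        return {"minute": 45, "extra_time": 0, "display": "HT"}
    # Base minute and cap depend on which half we are in.
    base, cap = (0, 45) if state == "1ST_HALF" else (45, 90)
    started = periods[-1]["started"]
    minute = base + int((now - started) // 60) + 1  # 10 min played = 11th minute
    if minute <= cap:
        return {"minute": minute, "extra_time": 0, "display": f"{minute}'"}
    extra = minute - cap
    return {"minute": cap, "extra_time": extra, "display": f"{cap}+{extra}'"}
```

The key property is that stoppage time falls out naturally: once the computed minute passes the cap for the current period, the overflow becomes the "+n" component rather than an impossible "93'".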

Why mode-aware prompts instead of one smart prompt?

We could have given the showrunner a single prompt and asked it to adapt based on context. But distinct prompts for each mode have two advantages:

  • Prompt length stays manageable — Each mode's prompt is focused and specific, rather than one mega-prompt trying to cover every scenario
  • Independent iteration — The editorial team (well, me) can tune pre-match tone without risking regression on live commentary. The Prompt Studio admin UI makes this a one-click operation with variable interpolation support
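The selection logic itself is tiny. A sketch — the PRE_MATCH and POST_MATCH key names appear later in this post; the LIVE key and the helper name are assumptions:

```python
# Mode-aware prompt selection. PROMPT#SHOWRUNNER_PRE_MATCH and
# PROMPT#SHOWRUNNER_POST_MATCH are the keys named in this post; the
# LIVE key and function name are illustrative assumptions.
PROMPT_KEYS = {
    "PRE_MATCH": "PROMPT#SHOWRUNNER_PRE_MATCH",
    "LIVE": "PROMPT#SHOWRUNNER_LIVE",
    "POST_MATCH": "PROMPT#SHOWRUNNER_POST_MATCH",
}

def select_prompt_key(schedule_item):
    """Map the current schedule item's mode to its DynamoDB prompt key,
    defaulting to live commentary if the mode is missing or unknown."""
    mode = schedule_item.get("mode", "LIVE")
    return PROMPT_KEYS.get(mode, PROMPT_KEYS["LIVE"])
```

Keeping the mapping this dumb is the point: all the editorial nuance lives in the DynamoDB-stored prompts, where the Prompt Studio can edit it without a deploy.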

The showrunner was also upgraded from Claude 3.5 Sonnet to Claude Sonnet 4.5 — noticeably better at maintaining presenter voice and generating varied commentary across consecutive invocations.

How It Works

Here's the flow for a typical match evening:

T-30 minutes: The schedule analyzer has already wrapped tonight's fixtures with PRE_MATCH blocks. The showrunner detects the current schedule mode is PRE_MATCH and loads PROMPT#SHOWRUNNER_PRE_MATCH. It generates preview scripts: team form, key players to watch, tactical predictions.
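The even gap split the schedule analyzer performs can be sketched like this, with times as minutes since midnight for simplicity; the 30-minute block length is from this post, the function name is illustrative:

```python
WRAP_MINUTES = 30  # pre/post block length used by the schedule analyzer

def wrap_blocks(first_end, second_start):
    """Split the gap between two fixtures evenly: the post-match of the
    first gets the first half of the gap, the pre-match of the second
    gets the rest, each capped at 30 minutes. Times are minutes since
    midnight; returns ((post_start, post_end), (pre_start, pre_end))."""
    gap = second_start - first_end
    post_len = min(WRAP_MINUTES, gap // 2)
    pre_len = min(WRAP_MINUTES, gap - post_len)
    post = (first_end, first_end + post_len)
    pre = (second_start - pre_len, second_start)
    return post, pre
```

When the gap is 60 minutes or more, both blocks get their full 30 minutes; when matches sit closer together, each side gets half of whatever room exists.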

Kick-off: The schedule transitions to LIVE. The poller fires every minute, computes MD5 hashes across events + scores + state + pressure. When a delta is detected, it writes the full fixture state to DynamoDB, computes elapsed from periods data, and broadcasts a single COMPOSITE_UPDATE via WebSocket.
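The delta detection can be sketched as hashing a canonical serialization of the watched fields — names and structure here are illustrative, not the production code:

```python
import hashlib
import json

def fixture_hash(events, scores, state, pressure):
    """Hash the pieces the poller compares. sort_keys makes the JSON
    canonical, so identical data always produces the same digest."""
    payload = json.dumps(
        {"events": events, "scores": scores, "state": state, "pressure": pressure},
        sort_keys=True,
    )
    return hashlib.md5(payload.encode()).hexdigest()

def has_delta(prev_hash, events, scores, state, pressure):
    """True when anything the frontend cares about has changed."""
    return fixture_hash(events, scores, state, pressure) != prev_hash
```

Only when `has_delta` fires does the poller pay the cost of a DynamoDB write, the elapsed computation, and a WebSocket broadcast; quiet minutes cost one hash comparison.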

The frontend's use-live-ticker hook receives the composite message and applies everything atomically: state snapshot (status, elapsed, pressure append, scores) and events (goals, cards) in one pass. The vidiprinter now renders five event types — goals (amber), red cards (red), VAR decisions (blue), penalties scored (green), and penalties missed (orange) — with stoppage-time-aware chronological sorting.
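The stoppage-time-aware sorting deserves a note: "45+3'" must sort after "45'" but before "46'", which a string sort gets wrong. A sketch, assuming each event carries minute and extra_time fields (names illustrative):

```python
def event_sort_key(event):
    """Sort on the (minute, extra_time) pair rather than the display
    string, so 45' < 45+3' < 46' comes out in true chronological order."""
    return (event["minute"], event.get("extra_time", 0))

events = [
    {"minute": 46, "extra_time": 0, "type": "GOAL"},
    {"minute": 45, "extra_time": 3, "type": "RED_CARD"},
    {"minute": 45, "extra_time": 0, "type": "GOAL"},
]
ordered = sorted(events, key=event_sort_key)
```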

Full-time: The schedule transitions to POST_MATCH. The showrunner switches to PROMPT#SHOWRUNNER_POST_MATCH and generates wrap-up scripts: key moments, talking points, what the result means for the table.

All scripts are stored in DynamoDB with a 7-day TTL, so the showrunner can check what it said recently and avoid repeating itself.
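DynamoDB TTL works by storing an expiry timestamp (epoch seconds) on each item and letting the service delete it after that time passes. A sketch of what the script items might look like; the 7-day retention is from this post, the attribute names are illustrative:

```python
import time

SCRIPT_TTL_SECONDS = 7 * 24 * 60 * 60  # 7-day retention

def script_item(fixture_id, mode, text, now=None):
    """Build a DynamoDB item with a `ttl` attribute. DynamoDB deletes
    items whose ttl (epoch seconds) has passed, so old scripts age out
    without a cleanup job. Attribute names are illustrative."""
    now = now or int(time.time())
    return {
        "pk": f"SCRIPT#{fixture_id}",
        "sk": f"{mode}#{now}",
        "text": text,
        "ttl": now + SCRIPT_TTL_SECONDS,
    }
```

Within that window, the showrunner can query recent items under the same partition key and feed its own prior output back into the prompt as "don't repeat this" context.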

What I Learned

The frontend told us what the backend should send. We kept trying to solve the message-loss problem on the frontend (debouncing, forced sync, refs) when the real fix was changing the contract. The backend shouldn't send fragments; it should send complete snapshots. This is obvious in retrospect but took a few live match debugging sessions to internalise.

Stoppage time is surprisingly hard. "45+3'" seems simple until you're computing it from Unix timestamps across 13 different match states, including ambiguous ones like "BREAK" that mean different things depending on which period just ended. The SportMonks periods include was the key — but it required careful handling of both ISO and Unix timestamp formats, because the API isn't consistent.

Mode-awareness unlocked 20 hours of content. Before Phase 2, the platform was silent when no matches were live — roughly 20 hours a day on most matchdays. Adding 30-minute pre and post blocks, with dedicated AI scripts for each, means the broadcasting platform has something to say almost continuously on match days.

What's Next

Phase 3 is the WebGL Avatar — a 3D talking head rendered in the browser using react-three-fiber and Avaturn GLB models with ARKit 52 blendshapes. The showrunner scripts will drive the avatar's speech via browser TTS, turning the text-based commentary into a visual presenter. This replaces the skeleton placeholder currently sitting in the LiveLayout.

After that: push notifications for goals and red cards (Phase 4), social media automation (Phase 5), and eventually the full Unreal Engine MetaHuman virtual studio with cinematic direction. But first — the avatar needs a face.
