
Every time I finish a Claude session on the Aigentique platform, I face the same friction: the codebase has moved forward, but the documentation hasn't. Notion pages are stale. And the interesting decisions — why we chose COMPOSITE_UPDATE over individual WebSocket messages, or how the schedule collision handling works — live only in the conversation transcript, slowly scrolling out of reach.
Today I decided to fix that permanently.
The Problem
I've been building Aigentique Sports — an AI-powered live sports broadcasting platform — across dozens of Claude sessions. Each session produces real engineering work: infrastructure changes, frontend layouts, backend logic, architectural pivots. But the documentation update always happens as an afterthought, if it happens at all.
Worse, I wanted to start writing build-in-public blog posts about the process. Not polished marketing content — honest accounts of what was built, what decisions were made, and what I learned along the way. The raw material was all there in the transcripts. I just needed a way to extract it without spending an hour after every session doing manual write-ups.
The Solution: Two Skills
I built two custom Claude skills that turn the end of every session into a one-command workflow.
Skill 1: /document-session
This skill reads the session transcript, figures out what changed, and does two things. First, it updates the relevant Notion documentation pages — I have about 20 pages covering everything from the live pipeline architecture to the DynamoDB single-table design to the AI agent hub. The skill knows which pages cover what, so if I changed the showrunner director's prompt logic, it updates the Intelligence Layer page, the Script pipeline page, the AI Hub technical docs, and the Roadmap.
Second, it creates a blog post draft in my Notion Blog Posts database. It writes from my perspective, captures the key decisions and trade-offs, calculates reading time, picks the right category and tags, and generates branded cover images using a Python script that produces properly sized assets for both the main image and thumbnail.
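The reading-time calculation is simple enough to sketch. This is a minimal version under an assumed pace of 200 words per minute (the rate the skill actually uses isn't specified), rounding up so every post shows at least one minute:

```python
import math

def reading_time_minutes(text: str, wpm: int = 200) -> int:
    """Estimate reading time from a word count, rounding up
    to a minimum of one minute."""
    words = len(text.split())
    return max(1, math.ceil(words / wpm))
```
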
Skill 2: /publish-blog
Once I've reviewed and approved a post in Notion (just changing the status to "Review"), this skill syncs it to my Webflow CMS. It handles the full pipeline: uploading images to Cloudinary for CDN hosting, passing those URLs to Webflow's CMS API (which auto-imports them as Webflow assets), mapping Notion relations to Webflow references for categories and authors, converting Notion markdown to Webflow rich text HTML, and publishing the CMS item. Notion stays the single source of truth.
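The markdown-to-rich-text step can be sketched with a tiny hand-rolled converter. This is not the skill's actual implementation, just an illustration of the shape of the transform for the subset of markdown a typical post uses (headings, bold, links, paragraphs); Webflow rich text fields accept plain HTML:

```python
import re

def md_to_html(md: str) -> str:
    """Convert a small markdown subset (headings, bold, links,
    paragraphs) to the HTML a Webflow rich-text field accepts."""
    html_parts = []
    for block in md.strip().split("\n\n"):
        block = block.strip()
        heading = re.match(r"^(#{1,6})\s+(.*)$", block)
        if heading:
            level = len(heading.group(1))
            html_parts.append(f"<h{level}>{heading.group(2)}</h{level}>")
            continue
        text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", block)
        text = re.sub(r"\[(.+?)\]\((.+?)\)", r'<a href="\2">\1</a>', text)
        html_parts.append(f"<p>{text}</p>")
    return "\n".join(html_parts)
```
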
Key Decisions Along the Way
The architecture conversation was interesting. I initially considered three options: an all-in-one skill that goes straight from transcript to Webflow, a skill-plus-n8n approach where Claude handles the intelligent work and n8n handles the mechanical publishing, or the two-skill approach I landed on.
I chose two separate skills because the concerns are genuinely different. Understanding a transcript and writing a blog post is an intelligence task — it needs the LLM. Pushing structured data from one CMS to another is a mechanical task. Keeping them separate means I can run documentation updates without publishing, or batch-publish several posts at once.
The Webflow MCP connection was a pleasant surprise. I expected to need n8n as middleware, but Webflow has an official MCP with full CMS tools — collection management, item creation, asset uploads, and publishing. That eliminated an entire layer of infrastructure.
For featured images, Claude can't generate images natively and there's no image generation MCP available. Rather than adding another subscription, I built a Python script using Pillow that generates clean, branded cover images. Each category gets its own accent colour from the Triple P Digital palette, and the images scale properly for both the 1200x630 main image and the 800x450 thumbnail. It's not AI-generated art, but it's consistent, on-brand, and free.
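The generator itself is a straightforward Pillow script. The sketch below is a simplified stand-in, not the real script: the palette colours, layout, and font handling are assumptions (the actual Triple P Digital colours and typography differ), but the structure is the same, one flat render saved at whichever of the two sizes is needed:

```python
from PIL import Image, ImageDraw, ImageFont

# Hypothetical accent palette; the real Triple P Digital colours differ.
CATEGORY_COLOURS = {
    "Build Log": "#E8590C",
    "Architecture": "#1C7ED6",
}

def make_cover(title: str, category: str, path: str,
               size: tuple[int, int] = (1200, 630)) -> None:
    """Render a flat branded cover: dark background, a category
    accent bar, and the post title, saved as a compressed JPEG."""
    img = Image.new("RGB", size, "#101418")
    draw = ImageDraw.Draw(img)
    accent = CATEGORY_COLOURS.get(category, "#868E96")
    draw.rectangle([0, 0, size[0], 12], fill=accent)  # top accent bar
    font = ImageFont.load_default()  # stand-in for the brand font
    draw.text((60, size[1] // 2), title, font=font, fill="#F8F9FA")
    img.save(path, "JPEG", quality=85)

# Thumbnail is the same render at the smaller size:
# make_cover(title, category, "thumb.jpg", size=(800, 450))
```
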
Adding Cloudinary to the Pipeline
The original publish-blog skill uploaded images directly to Webflow's asset system. That worked, but it had friction — the Webflow asset tool wasn't always reliable for programmatic uploads, and the base64 data needed to pass through the conversation context, which has token limits.
I added Cloudinary as an image CDN layer. The workflow is now: generate the image with Python, compress to JPEG, convert to a base64 data URI, and upload to Cloudinary via their MCP server. Cloudinary returns a secure URL, and here's the nice discovery — when you pass an external URL to a Webflow CMS Image field, Webflow automatically imports it into its own asset system. So passing a Cloudinary URL like https://res.cloudinary.com/df4elhhgi/image/upload/blog/my-post-cover.jpg to the main-image field just works. Webflow fetches it, stores it as a Webflow asset, and returns its own CDN URL.
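The compress-and-encode step before the Cloudinary upload can be sketched as follows. The function name and the 1200px cap are my illustration, not the skill's exact code; the point is that resizing and recompressing to JPEG happens before base64 encoding, so the data URI stays small enough for the conversation context:

```python
import base64
import io
from PIL import Image

def image_to_data_uri(path: str, max_width: int = 1200,
                      quality: int = 80) -> str:
    """Downscale and recompress an image, then wrap it as a base64
    JPEG data URI compact enough to pass through context limits."""
    img = Image.open(path).convert("RGB")
    if img.width > max_width:
        ratio = max_width / img.width
        img = img.resize((max_width, int(img.height * ratio)))
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=quality)
    b64 = base64.b64encode(buf.getvalue()).decode("ascii")
    return f"data:image/jpeg;base64,{b64}"
```
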
This means the skill now has a clean three-layer architecture: Notion as the content source, Cloudinary as the image CDN, and Webflow as the publishing frontend. Each layer does what it's good at.
The Cloudinary integration also opens up image transformations via URL parameters — I can upload a compressed image and serve it at any size or quality via their transformation syntax. That's useful for responsive images and keeping page load times fast.
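Those transformations are just path segments inserted after /upload/ in the delivery URL, so resizing at serve time needs no re-upload. A minimal helper (the example URL is illustrative):

```python
def transformed(url: str, width: int, quality: str = "auto") -> str:
    """Insert a Cloudinary transformation segment (e.g. w_800,q_auto)
    into a delivery URL, directly after the /upload/ segment."""
    return url.replace("/upload/", f"/upload/w_{width},q_{quality}/", 1)

# transformed("https://res.cloudinary.com/demo/image/upload/blog/cover.jpg", 800)
# → "https://res.cloudinary.com/demo/image/upload/w_800,q_auto/blog/cover.jpg"
```
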
How the Field Mapping Works
One of the more tedious but important parts was aligning the Notion and Webflow schemas. My Notion Blog Posts database and Webflow Blog Posts collection needed to match exactly. I pulled both schemas, compared them field by field, and found several gaps: Webflow was missing Publish Date and SEO fields, and Notion was missing a Thumbnail Image field. After trimming Notion to match what Webflow actually needs and adding the Thumbnail field, the mapping is clean — 11 fields that map directly, plus the page content, which converts from Notion markdown to Webflow rich text HTML.
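Once the schemas line up, the mapping reduces to a lookup table. The field names below are hypothetical placeholders (my actual Notion properties and Webflow slugs differ), but they show the shape of an 11-field direct mapping, with relations translating to references:

```python
# Hypothetical names: the real Notion properties and Webflow slugs differ.
NOTION_TO_WEBFLOW = {
    "Name": "name",
    "Slug": "slug",
    "Summary": "post-summary",
    "Category": "category",            # Notion relation -> Webflow reference
    "Author": "author",                # Notion relation -> Webflow reference
    "Publish Date": "publish-date",
    "Reading Time": "reading-time",
    "Main Image": "main-image",        # Cloudinary URL, auto-imported
    "Thumbnail Image": "thumbnail-image",
    "SEO Title": "seo-title",
    "SEO Description": "seo-description",
}

def to_webflow_fields(notion_props: dict) -> dict:
    """Translate Notion properties into a Webflow fieldData payload,
    dropping anything without a mapped field."""
    return {NOTION_TO_WEBFLOW[k]: v for k, v in notion_props.items()
            if k in NOTION_TO_WEBFLOW}
```
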
Categories and Authors were already aligned from when I set up the databases, which saved time.
Setting Up MCP Servers for Claude Code
One thing that caught me out: the MCP connections I'd set up in Cowork (the desktop app) don't carry over to Claude Code. They're managed at the app level — there's no settings file to copy. For Claude Code, you configure MCP servers separately, either with the claude mcp add command (scoped to your user or to a project) or in a .mcp.json file at the project root. Each server needs its own API credentials.
I set up Cloudinary, Webflow, and Notion as global MCP servers so they're available across all my projects. The config is straightforward — each server gets a command (npx with the relevant package), and the API keys go in environment variables. Once that's done, Claude Code picks them up on startup and the skills work identically to how they do in Cowork.
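The config shape looks roughly like this. The package names and environment variable names below are placeholders; the real ones come from each vendor's MCP server docs:

```json
{
  "mcpServers": {
    "cloudinary": {
      "command": "npx",
      "args": ["-y", "<cloudinary-mcp-package>"],
      "env": { "CLOUDINARY_URL": "cloudinary://<key>:<secret>@<cloud-name>" }
    },
    "webflow": {
      "command": "npx",
      "args": ["-y", "<webflow-mcp-package>"],
      "env": { "WEBFLOW_TOKEN": "<api-token>" }
    }
  }
}
```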
What I Learned
The biggest insight was how much value is locked in session transcripts. Every Claude conversation contains decisions, reasoning, dead ends, and solutions that are genuinely useful content — both for keeping documentation current and for writing about the build process. The bottleneck was never the content; it was the extraction.
I also learned that Notion's API has some quirks with content inside callout blocks and column layouts — string matching for updates doesn't work on those block types. The skill works around this by adding update notes in nearby editable sections instead.
On the image pipeline: base64 encoding large images can blow past context token limits. The solution is to compress to JPEG and resize before encoding — Cloudinary handles serving at full quality anyway. And it's worth testing the full upload-to-CMS chain end-to-end before committing to a workflow, because the way Webflow auto-imports external URLs wasn't documented anywhere obvious.
What's Next
The pipeline is fully operational now — Notion to Cloudinary to Webflow, with Claude orchestrating the whole thing. The next step is to refine the cover image generator. The current version is clean but basic. Adding subtle variations based on the post content — maybe different geometric patterns per category, or visual elements that hint at the topic — would make the blog grid more visually interesting without needing AI image generation.
I also want to test the /document-session skill on a proper coding session rather than a documentation-focused one. That's the real test — can it extract the interesting story from a session that's mostly code changes and debugging?
This post was created and updated using the /document-session and /publish-blog skills described above. The documentation updates, blog draft, cover images, and CMS publishing were all handled from session transcripts.

