March has been a busy one. We tracked how AI platforms talk about brands, explored the next generation of agentic CRM, renewed our Cyber Essentials Plus certification, and continued rolling out a fresh visual identity for ERT. And Courage House is slowly starting to look less like a building site. Here’s what we’ve been up to.
As well as creating the icons, we produced a comprehensive secondary colour palette and animated versions of the icons to complete the visual identity. Over the coming months we’ll be rolling out these icons across various activations for ERT.
But it’s not just about the direct market opportunity. Christina covers the knock-on benefits too. Accessible websites tend to perform better in search, load faster, and are better equipped for voice search and AI-driven assistants. There’s also the legal angle, with the UK Equality Act and the EU’s European Accessibility Act (enforceable since June 2025) meaning the cost of inaction is only going up.
The takeaway is pretty clear: accessible design isn’t a nice-to-have bolted on at the end of a project. Done properly, it can make your product better for everyone. Read Christina’s post here.
We have successfully renewed our Cyber Essentials Plus certification, reaffirming our commitment to high standards of cybersecurity.
Cyber Essentials Plus goes beyond the foundational protections of the standard Cyber Essentials certification — which covers firewalls, secure configuration, user access control, malware protection, and patch management — by requiring an independent qualified auditor to verify that all necessary security measures are not only in place, but effectively implemented and fully operational.
As part of the renewal audit, both internal and external vulnerability scans were carried out across our workstations, mobile devices, and servers, confirming that our security configurations remain correctly applied and robust against both internal and external threats.
The approach was built around what we call “bursts.” Rather than running each prompt once and treating the result as definitive, we repeated every prompt multiple times across providers at varying temperature settings. This matters because large language models are probabilistic. A single response is a single dice roll. If your brand has a 30% chance of appearing, one prompt might miss it entirely. Five repetitions start to give you a pattern rather than a single data point, which makes it easier to tell whether a result is meaningful or just noise.
We rotated three temperature levels (0.3, 0.7, and 1.0) across a 10-day cycle. Low-temperature responses reflect what the model is most confident about (its core training recall). Higher temperatures reveal the wider range of what the model knows but doesn’t always surface. This distinction turned out to be quite useful: it helps separate whether a visibility problem sits with what AI has learned or with what it chooses to say.
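The sampling logic behind bursts can be sketched in a few lines of Python. This is an illustration of the statistics, not our actual tooling: `run_burst` and the 30% appearance rate are hypothetical stand-ins for a real prompt run against a model.

```python
import random

TEMPERATURES = [0.3, 0.7, 1.0]  # the three levels rotated across the 10-day cycle

def run_burst(appearance_probability, repetitions=5, seed=None):
    """Simulate one 'burst': the same prompt repeated several times.

    A single response is one dice roll; repeating the prompt turns a
    yes/no outcome into an observed appearance *rate* for the brand.
    """
    rng = random.Random(seed)
    hits = sum(rng.random() < appearance_probability for _ in range(repetitions))
    return hits / repetitions

# A brand with a 30% chance of appearing: any single run can miss it
# entirely, but averaging over many bursts converges on the true rate.
rates = [run_burst(0.30, repetitions=5, seed=i) for i in range(1000)]
print(sum(rates) / len(rates))  # close to 0.30
```

Five repetitions per prompt won’t pin the rate down precisely, but it’s enough to tell a consistent signal from a one-off fluke, which is the point of the burst design.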
We also tested in two languages, since AI models can behave quite differently depending on query language. Separately, we cross-validated our English findings against two independent AI monitoring platforms to make sure the results held up outside our own methodology.
A few things stood out. There’s an interesting distinction between an AI model knowing who you are when asked directly, and actually recommending you when someone asks a topical question without mentioning your name. Those turned out to be quite different things.
We also found the format that content is published in matters more than you might expect. PDF-only publications are largely invisible to AI retrieval systems. And there’s an interesting overlap with accessibility here. The same things that make a PDF harder for screen readers and assistive technology to parse (poor heading structure, no alt text, content locked in images) also make it harder for AI to read and cite. Structured HTML with clear headings, summaries, and schema markup is what gets parsed, cited, and recommended. The organisations appearing most frequently in AI responses weren’t necessarily the most authoritative on a given topic; they were the most retrievable.
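To make the schema markup point concrete, here is a minimal sketch in Python that builds the kind of schema.org JSON-LD block a page might embed. The headline, organisation name, and date are placeholder values, not a real publication; the field names follow schema.org’s Article type.

```python
import json

# Placeholder article metadata using schema.org's Article vocabulary.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Why accessible content is also AI-retrievable",
    "abstract": "Structured HTML with clear headings is easier to parse and cite.",
    "author": {"@type": "Organization", "name": "Example Org"},
    "datePublished": "2026-03-01",
}

# Embedded in a page inside <script type="application/ld+json">...</script>,
# this hands both assistive technology and AI retrieval systems an explicit,
# machine-readable description of the content instead of a flat PDF.
json_ld = json.dumps(article, indent=2)
print(json_ld)
```

The same structure that helps a screen reader (explicit headings, text rather than images, declared metadata) is what makes a page easy for a retrieval system to chunk and cite.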
It’s still early days for AI visibility as a discipline. There’s no established playbook and the landscape is shifting quickly. But even at this stage, the data is clear enough to start shaping how you think about content strategy, which is more than we had a month ago.
This opens up a much more dynamic way of communicating. Messages can be triggered at the right moment, content can change based on intent, and activity can be coordinated across channels like email, SMS, WhatsApp, push notifications, and even targeted ads. At the same time, customer data is continuously updated in the background, keeping profiles aligned with the latest behaviour.
The impact for businesses is clear. Teams can spend less time managing campaigns and more time improving messaging, testing journeys, and optimising experiences, while the CRM keeps everything accurate, up to date, and running consistently in the background.
March has been a productive one. With research underway, new work out in the world, and Courage House inching closer to something we’re really proud of, we’re heading into April with plenty of momentum. See you next month!