It’s easy to forget what auditors used to do.
Ticking boxes, checking spreadsheets, receipts, and signatures, making sure a company’s version of reality wasn’t drifting too far from the truth. Very clear, very measurable stuff.
Now? They’re auditing AI, which sounds mad, until you realise what’s actually happening.
Companies are building these huge, complex algorithms that are making real decisions, about people, about money, about health, and nobody really knows how they’re doing it. Not in any way they could explain without leaving your CFO staring blankly, wondering if this is still accounting or something closer to alchemy. So what do you do when something is powerful but murky? You get someone official-looking to say it’s fine.
That’s where the Big Four come in. They are preparing to launch AI assurance services, hoping to profit from clients’ demand for proof that their AI systems are effective.
Yes, those Big Four…the ones who’ve made a habit of missing financial black holes the size of small countries. Let’s be clear…they’re not writing the code…and they’re not regulating it either. But they’ve found a way to sit nearby with a clipboard, offering solemn nods and glossy reports. The algorithms might be mysterious, but the branding isn’t.
And the wild part? It’s working!
Because we want it to work. We want to believe that someone understands what’s going on. That there’s a process. That the machine hasn’t just guessed its way into a life-changing decision. And so we reach for reassurance. For legitimacy. For something, anything, that looks like a system.
But what does that system actually involve?
AI doesn’t behave; it adapts, forgets, and re-learns. It pulls in our messiest instincts and reflects them back with statistical confidence. So when a firm says it’s audited a model, you have to ask: what did they check? And who was paying them to check it?
Because let’s not pretend these firms are impartial guardians of the public good. They’re consultants. They sell reassurance. And when reassurance is no longer a virtue but a service, something to be priced, packaged, and sold, then the line between care and convenience quietly begins to blur.
It would be unfair to call them malicious. The truth is more banal and more human:
They are ambitious, as most of us are, and a little deaf to the deeper implications of what they’re being asked to legitimise.
They see a gap in the market, a regulatory vacuum, and they fill it, fast. Because if there’s one thing the Big Four are brilliant at, it’s turning uncertainty into billable hours. But here’s the uncomfortable bit: we kind of want them to do it. We need someone to tell us these systems aren’t going to go rogue. We want a stamp, a signature, a bit of adult supervision. And that’s why this is working.
Still, it’s worth asking: when the people doing the certifying are also chasing the contracts, are we getting real assurance, or just the illusion of it?
Because if we’re now trusting machines to make decisions, then trusting the people who bless those machines really matters.
And history suggests we should be a bit more careful about who we hand that job to.
It is a strange and almost spiritual task. Because AI, especially the kind now creeping into hiring decisions, medical diagnoses, and credit scoring, is not something easily grasped. It learns from billions of data points, trains on histories we barely understand, and produces results that no human can fully explain. It has become, in a quiet but profound way, our new oracle. And in response, we have turned to the familiar figures of trust.
So, here’s the awkward part:
To the firms: Do not mistake the absence of regulation for freedom. What you audit now will shape what society tolerates later. You are not just reviewing models; you are writing, by precedent, the rules for our digital moral universe.
To the regulators: Move faster. The pace of AI integration cannot wait for post-hoc policy. Establish standards. Demand transparency. Define what assurance must mean before it’s too late.
To the public: Be sceptical. A certified algorithm is not necessarily a just one. Ask not only whether it passed the audit, but who designed the criteria, and what interests were served.
In the end, trust doesn’t come with a logo or a line item. It’s not something you can invoice. It’s something you build, slowly, quietly, and by getting things right when no one’s watching. And if code is going to shape the world in ways we barely understand, then the people signing off on it need to be more than consultants with a checklist. They need to be people we’d trust in the dark, because increasingly, that’s where these systems are making their decisions.
They must be, in spirit if not in title, worthy of the role we have begun to assign them, watchers not just of algorithms, but of us.
