Previous work

Making AI useful inside a regulated business.

In a previous role as Vice CPTO, I led an AI adoption programme inside a 60-person B2B fintech/regtech company. Within six months, AI use had become embedded in normal working practice across the business.

The company could see that AI mattered, but early adoption was fragmented. Teams were experimenting in different places, yet security concerns, slow governance, a lack of protected time, scepticism, and worries about role impact meant AI was not yet changing day-to-day work.

The challenge was less about finding another AI use case and more about creating the conditions for AI to become safe, trusted, and practical enough to affect normal work.

What was getting in the way

There were useful experiments happening, but no clear operating model for moving from isolated trials to repeatable working practices.

Some of the blockers were technical and procedural. Security and governance processes were designed for control, not fast learning. That made sense in a regulated environment, but it also meant that sensible AI experiments were slowed down before teams could find out what was actually useful.

Some of the blockers were human. Engineers and UX specialists were understandably concerned about what AI might mean for their roles. Some leaders and engineers were sceptical that AI would produce enough value to justify the time and attention. There was also a capacity problem. When AI work sat alongside existing delivery commitments, it became too easy for experimentation to be squeezed out.

What I changed

I designed and led a cross-company programme called Operation 10x. I owned the initiative and was accountable for its outcomes. Because meaningful AI adoption needed to take root across engineering, product, sales, and customer success, not only the functions I directly led, building buy-in with senior leaders in each area was a deliberate part of my approach from the outset.

My role was not to build every tool. It was to create the operating model that allowed useful adoption to happen.

That included:

  • creating a clearer model for approved tools, safe experimentation, and shared learning
  • changing parts of the governance process so sensible experiments could move faster
  • ring-fencing time and budget for innovation
  • building an internal champion model so adoption did not depend on one central person
  • keeping the conversation grounded in trust, role clarity, and practical workflow improvement

The most useful changes came from focusing on real work rather than abstract AI strategy.

In engineering, AI changed the shape of some development work. Engineers moved from writing all boilerplate code themselves to reviewing, adapting, and improving AI-generated boilerplate, particularly on newer software.

In product, AI helped reduce a bottleneck around early design. Product managers could create credible first-draft designs and mock-ups themselves, test ideas earlier with customers, and involve UX and engineering later with better-shaped thinking.

In sales, AI helped the team reach more prospects and connect with the right decision makers earlier in the process. That made it easier to build a better-qualified pipeline without adding headcount.

What changed

Within six months, AI had moved from scattered experimentation into practical use across engineering, product, and sales.

Teams had clearer routes to try things safely. Leaders had more confidence that the work was being handled responsibly. People had more examples of AI helping with real work, rather than sitting as a vague strategic theme.

The estimated impact was equivalent to eight additional FTE in a 60-person organisation. Five of those were in engineering — at approximately £80k in annual salary per head, that represented around £400k in salary cost avoided each year, before employer costs and benefits. The remaining three fell across sales and customer success at a lower cost per head.

That was not an audited productivity number. It was based on comparing what teams were able to achieve with AI-enabled workflows against what would likely have required extra headcount, taken much longer, or not happened at all.

Some of the value came from time saved. Some came from new capability. For example, when a product manager can create credible early designs without waiting for dedicated UX capacity, the organisation has not just saved time. It has changed the flow of product discovery.

Why it mattered

The most useful outcome was not the specific tooling.

The organisation built a better way to adopt new capability: practical safeguards, faster experimentation, shared learning, internal champions, and enough trust for people to try new ways of working without feeling that change was being done to them.

At that point the capability was AI. In future, it could be something else.

For me, this is the part that matters most. AI adoption is rarely just a tooling problem. It is usually a business system problem involving workflows, governance, leadership confidence, team habits, and trust. When those things are not addressed, experiments stay isolated. When they are addressed properly, new technology has a much better chance of becoming useful in normal work.

Start with a clearer view of what is getting in the way.

If your product, technology, or AI work feels harder than it should, HRVN can help you talk through the situation, make sense of the problem, and decide what kind of support would be most useful.