There were useful experiments happening, but no clear operating model for moving from isolated trials to repeatable working practices.
Some of the blockers were technical and procedural. Security and governance processes were designed for control, not fast learning. That made sense in a regulated environment, but it also meant that sensible AI experiments stalled before teams could learn what was actually useful.
Some of the blockers were human. Engineers and UX specialists were understandably concerned about what AI might mean for their roles. Some leaders and engineers were sceptical that AI would produce enough value to justify the time and attention. There was also a capacity problem: when AI work sat alongside existing delivery commitments, experimentation was too easily squeezed out.