A UK challenger bank gives its AI the power to move money. The world’s largest sovereign wealth fund lets AI analyse data for investment decisions but keeps humans firmly in charge. Both are deploying agentic AI. Both have made very different choices about where the human sits, and both illustrate why that choice is now one of the most consequential governance decisions an organisation can make.
What Starling and Norway’s Wealth Fund Actually Did
Starling Bank launched what it claims is the UK’s first agentic AI financial assistant in March 2026. Built on Google Gemini and running on Google Cloud, Starling Assistant responds to voice and natural language prompts and then executes banking tasks directly on the customer’s behalf. A customer planning a holiday can instruct it to calculate a savings schedule and set up automatic transfers to a dedicated savings pot. Someone wanting to restructure finances on payday can ask it to create spending categories and route specific amounts to each. The assistant is opt-in, and Starling states that customer data remains within its Google Cloud environment and is not used to train the underlying models.
At the same time, Norges Bank Investment Management, which manages Norway’s $2.1 trillion sovereign wealth fund, the largest in the world, disclosed that roughly half of its 700 employees are coding their own AI tools using Anthropic’s Claude. Those tools currently gather information to support human decisions: monitoring ESG and financial risk across 7,000 portfolio companies, simulating contract negotiations, preparing for company meetings. Stian Kirkeberg, the fund’s head of machine learning and AI, was direct about the direction of travel: in time, some AI agents will be allowed to make limited decisions autonomously. Not yet, he said, because the tools still make errors. CEO Nicolai Tangen put the fund’s position plainly: it is a long-term investor, not a high-frequency trader, and it is not under pressure to automate decisions.
The contrast is instructive. Starling has given an agent permission to act. Norway has given agents permission to analyse. Both call it agentic AI. The governance question is whether those two things require the same oversight framework. They do not.
Why Agentic AI Governance Is a Different Problem
Advisory AI recommends. Agentic AI acts. That distinction sounds simple; its governance implications are not.
Most AI governance frameworks, including the EU AI Act’s human oversight requirements under Article 14, were designed with the advisory model in mind. The Act requires that high-risk AI systems be built so that humans can properly understand the system’s limitations, monitor its operation, detect anomalies and decide, in any situation, not to use the system’s output. That is a sensible framework for a system that recommends a credit decision or flags a compliance risk. It is a more demanding requirement when the system has already moved the money.
Agentic AI governance forces three questions that advisory AI does not. First, what constitutes meaningful oversight when the action is already complete? Reviewing a transfer after it has executed is audit, not oversight. Second, who is accountable when an autonomous action causes harm? The EU AI Act’s deployer obligations under Article 26 require organisations to monitor AI system operation and inform providers if a risk emerges, but the accountability chain becomes harder to reconstruct when an agent has taken a sequence of steps without human review at each stage. Third, how do you audit a decision made by an agent? A human decision-maker can be interviewed. An agent leaves logs, if you have set up logging properly, and reasoning chains that may not be fully interpretable.
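The first question has a concrete architectural consequence: oversight has to sit before the irreversible step, not after it. A minimal sketch of the distinction in Python, with every name hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    amount: float
    destination: str

def execute(transfer: Transfer) -> None:
    """Stand-in for the irreversible step, e.g. a call to a payments API."""
    print(f"moved {transfer.amount} to {transfer.destination}")

# Oversight: a human decision point sits before the irreversible step.
def with_oversight(transfer: Transfer, approve) -> bool:
    if approve(transfer):            # blocks until a human decides
        execute(transfer)
        return True
    return False                     # declined: the action never happened

# Audit: the irreversible step has already run. Reviewing the log later
# can assign accountability or trigger remediation, but cannot prevent
# the action.
def with_audit_only(transfer: Transfer, log: list) -> None:
    execute(transfer)
    log.append(transfer)
```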
The Automation Bias Problem
There is a further risk that Article 14 explicitly names: automation bias, the tendency to over-rely on AI output. In an advisory system, automation bias is a human failure mode: the human rubber-stamps the recommendation without scrutiny. In an agentic system, there is no rubber-stamp moment. The action happens first; the scrutiny, if it comes, is retrospective. That shifts the entire weight of governance from the decision point to the audit trail.
Three Governance Layers Every Agentic AI Deployment Needs
Organisations deploying agentic AI need three governance layers that advisory AI systems do not require in the same form.
Action Boundaries
Define, in writing, what the agent can and cannot do without human authorisation. Starling has done this implicitly through its product scope: the assistant can set up savings goals and organise bill payments, but it is not making credit decisions or initiating large transfers without customer intent. That boundary is a governance decision, not just a product decision. Your agentic AI governance framework needs to document it as such, with clear rationale for where the line sits and a process for reviewing it as capabilities expand.
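One way to make that documentation enforceable is to encode the boundary as a deny-by-default policy object that the agent’s runtime consults before every action. This is a sketch under stated assumptions, not Starling’s actual scope: the action classes, policy contents and field names below are all illustrative.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ActionClass(Enum):
    CREATE_SAVINGS_GOAL = auto()
    SCHEDULE_INTERNAL_TRANSFER = auto()
    INITIATE_EXTERNAL_PAYMENT = auto()
    CREDIT_DECISION = auto()

@dataclass(frozen=True)
class ActionBoundary:
    """The documented line: what the agent may do autonomously and what
    always requires human authorisation, with rationale and review date."""
    autonomous: frozenset
    human_only: frozenset
    rationale: str
    review_due: str  # ISO date for the next scheduled boundary review

BOUNDARY_V1 = ActionBoundary(
    autonomous=frozenset({ActionClass.CREATE_SAVINGS_GOAL,
                          ActionClass.SCHEDULE_INTERNAL_TRANSFER}),
    human_only=frozenset({ActionClass.INITIATE_EXTERNAL_PAYMENT,
                          ActionClass.CREDIT_DECISION}),
    rationale="Reversible account housekeeping only; anything that moves "
              "money outside the customer's own accounts needs a human.",
    review_due="2026-09-01",
)

def may_act_autonomously(action: ActionClass, boundary: ActionBoundary) -> bool:
    """Deny by default: an action class missing from both sets is refused."""
    return action in boundary.autonomous
```

Keeping the rationale and the review date inside the policy object itself means the governance artefact and the enforced rule cannot drift apart silently.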
Escalation Triggers
Specify the conditions that force human review before an agent executes. These might be monetary thresholds, transaction types, counterparty categories or detected anomalies in the agent’s reasoning chain. The Norwegian fund’s current position, where AI analyses and humans decide, is essentially a blanket escalation trigger applied to the entire system. That is a defensible starting position. As organisations move toward allowing agents to act autonomously in limited contexts, those blanket triggers need to be replaced with precisely defined conditions, each of which has been risk-assessed and approved.
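Expressed in code, replacing a blanket trigger with precisely defined conditions is a change in policy data rather than architecture, which is what makes the progression governable. A hedged sketch, assuming hypothetical field names and placeholder thresholds:

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class ProposedAction:
    action_type: str             # e.g. "transfer", "bill_payment"
    amount: Decimal
    counterparty_category: str   # e.g. "own_account", "new_payee"
    anomaly_score: float         # from a monitor over the reasoning chain

@dataclass(frozen=True)
class EscalationPolicy:
    monetary_threshold: Decimal
    restricted_types: frozenset
    restricted_counterparties: frozenset
    anomaly_threshold: float

def fired_triggers(action: ProposedAction, policy: EscalationPolicy) -> list:
    """Every trigger the action fires; an empty list means the agent may
    execute without prior human review."""
    reasons = []
    if action.amount > policy.monetary_threshold:
        reasons.append(f"amount {action.amount} exceeds monetary threshold")
    if action.action_type in policy.restricted_types:
        reasons.append(f"action type '{action.action_type}' requires review")
    if action.counterparty_category in policy.restricted_counterparties:
        reasons.append(f"counterparty '{action.counterparty_category}' requires review")
    if action.anomaly_score >= policy.anomaly_threshold:
        reasons.append("anomaly detected in reasoning chain")
    return reasons

# The blanket trigger, like the fund's current analyse-only stance, is the
# degenerate case: zero thresholds mean every action escalates to a human.
ANALYSE_ONLY = EscalationPolicy(
    monetary_threshold=Decimal("0"),
    restricted_types=frozenset(),
    restricted_counterparties=frozenset(),
    anomaly_threshold=0.0,
)
```

Returning the full list of fired triggers, rather than a single boolean, gives the reviewing human the context that Article 14-style oversight presumes they have.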
Post-Action Audit
Log every autonomous action with enough context to reconstruct the decision chain after the fact. This is not optional under any serious interpretation of the EU AI Act’s record-keeping requirements, and it is the minimum condition for meaningful accountability. The log needs to capture not just what the agent did, but what inputs it received, what state it assessed and what reasoning it applied. An audit trail that records only the output is not an audit trail; it is a receipt.
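A sketch of what such a record might contain, again with illustrative field names; the substantive requirement from the paragraph above is that inputs, assessed state and reasoning are captured alongside the action itself:

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class AuditRecord:
    """One autonomous action, logged with enough context to reconstruct
    the decision chain after the fact."""
    action_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)
    inputs: dict = field(default_factory=dict)          # what the agent received
    assessed_state: dict = field(default_factory=dict)  # what it believed to be true
    reasoning: list = field(default_factory=list)       # the steps it applied
    action_taken: dict = field(default_factory=dict)    # what it actually did
    policy_version: str = ""   # which boundary and escalation rules were in force

def write_audit(record: AuditRecord, sink) -> None:
    """Append one structured, machine-readable line per action. A log that
    captured only `action_taken` would be a receipt, not an audit trail."""
    sink.write(json.dumps(asdict(record), default=str) + "\n")
```

Recording the policy version in each entry links every autonomous action back to the boundary and escalation rules that were in force when it ran.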
The Governance Model Change Most Organisations Have Not Made
The shift from advisory to agentic AI is not a technology upgrade. It is a governance model change that most AI policies have not yet caught up with. Organisations that have documented their AI governance around a human-reviews-output model are operating with an incomplete framework the moment an agent starts acting on that output.
Both Starling and Norges Bank Investment Management have made explicit choices about where the human sits in their agentic systems. That explicitness is itself a governance decision: one that should be deliberate, documented and revisited as the systems evolve. If your organisation is deploying or planning to deploy agentic AI, the question is not whether you have an AI policy. The question is whether that policy was written with an agent in mind.
If you are working through what agentic AI deployment means for your oversight architecture, Future Prep’s AI Governance Prep Track covers monitoring, incident governance and escalation design for AI systems that act.