Brief

Executive summary
- Too often, risk and control teams are asked to approve AI use cases late in the process, resulting in delays, defensive behavior, and dead ends.
- The traditional operating model isn’t built for AI. Legacy processes, unclear roles, and different risk postures have led to fragmented approaches.
- Smart guardrails can enable speed, safety, and scale. Leading companies are embedding risk and control functions early in AI development and empowering teams to make informed decisions.
- Six steps can help organizations adopt agile, adaptive processes that reduce bottlenecks and encourage innovation.
Ask a general counsel or chief risk officer to approve an AI use case, and the answer might be “no.” But shifting the conversation to “how”—by defining minimum risk requirements and tapping into existing risk assessments—will likely spark a more strategic, productive exchange.
As business and tech leaders race to capture the full potential of AI, their enthusiasm is often met with caution—or outright friction—from control functions like legal, risk, compliance, and even finance and HR. The problem? Executives typically bring in risk and control leaders when it’s too late to help inform and steer these strategic initiatives. When risk and control teams’ only role is final-stage approval, they aren’t incentivized to “lean in.” They likely lack the transparency or guardrails that they need to weigh value and risk. As a result, they may default to defensive behavior. Business and tech executives ask yes-or-no questions (“Can we launch this?”), which forces risk and control leaders to either ask more questions or simply play it safe.
Without the right cross-functional model to engage risk and control teams, AI deployments—whether traditional AI/ML, generative, or, increasingly, agentic—stall or, worse, advance in ways that heighten risk exposure. Innovation slows. The organization’s ability to achieve its aspirations is compromised.
Executives need a playbook that turns risk and control leaders into allies, not adversaries. With a pragmatic path forward, they can design an AI operating model that accelerates innovation rather than inhibiting it.
The problem with most AI operating models
AI does not inherently introduce new forms of customer harm. But it does amplify the operational, model, ethical, and reputational risks that organizations already manage—at a new speed and scale. AI’s nature and pace of change are also exposing the cracks in traditional governance models when organizations apply them to AI use cases. For firms in heavily regulated sectors, like financial services, healthcare, and electric utilities, the pressure is especially high. Yet, moving too slowly in deploying AI can also be a strategic risk.
Executives leading AI efforts are up against a unique set of organizational, operational, and governance hurdles:
- Decentralization. Shadow IT and distributed data management make coordinated oversight more difficult—but also more essential.
- Unclear roles and responsibilities. Ownership of AI-relevant risks may not be adequately captured in current taxonomies and frameworks. AI use cases rarely have a single owner.
- Legacy validation processes. Most organizations still rely on model validation practices from a pre-generative-AI world. Rather than one-time testing and approval prior to use, new models need ongoing oversight and continuous empirical validation.
- Vendor and third-party risks. Vendor models with embedded generative AI features and functionality require more rigorous vetting by procurement and third-party operational risk teams. Companies must also beware of “accidental AI” in third-party software.
- A shifting regulatory environment. AI operating models must evolve to align with shifting AI rules and regulations across jurisdictions, as well as evolving industry best practices.
The result is often an uncoordinated, fragmented approach to evaluating, prioritizing, and approving AI use cases. Core processes like risk assessments aren’t adapted for fast-changing technology and agile ways of working. Most legal, risk, and compliance teams aren’t accustomed to agile rhythms, nor are they appropriately staffed to participate in every sprint team or stand-up.
What’s more, as a result of their prior experience and training, some individuals within these functions define their oversight role as one that requires independence and objectivity. They view themselves as protectors rather than advisers.
Meanwhile, institutional and structural factors play a huge role in how risk and control teams engage. These include:
- Risk appetite, or the amount and type of risk that an organization is willing to take to meet strategic objectives. If this is well defined and appropriately quantified, an organization can treat it as a line of demarcation for risk-taking behaviors.
- Risk culture, or how an organization recognizes and rewards good risk behavior, as well as how it addresses bad behavior. It’s typically reflected in performance management and incentive compensation.
- Role definition, or the scope and responsibilities of the second-line risk and control teams. This definition often influences whether team members engage as enablers or take the more defensive posture of protectors.
Designing guardrails for speed
Leaders in AI adoption are flipping the script. Rather than asking for permission, their business and technology teams are asking how to “get to yes,” so they can move beyond experimentation to scaling.
What does this look like in practice? First, it’s critical to have the right AI guardrails. Guardrails encompass both governance and controls. Governance includes organizational aspects, like convening councils that centrally approve AI use cases, as well as tooling aspects, such as frameworks and policies around the technology. Controls span detective and preventative measures across IT, access, and business processes. The most effective are automated controls, which can be embedded within the IT architecture of an AI use case or system.
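To make the idea of an automated, preventative control concrete, here is a minimal sketch of a guardrail check embedded in an AI application’s request path. The use-case registry, data classifications, and function names are illustrative assumptions, not references to any specific platform or vendor API.

```python
# Minimal sketch of an automated, preventative guardrail embedded in the
# request path of an AI application. All identifiers and policies below are
# illustrative assumptions, not a specific vendor's API.

from dataclasses import dataclass

# Use cases the AI council has approved, keyed by an internal ID (hypothetical).
APPROVED_USE_CASES = {"uc-credit-memo-drafting", "uc-call-summarization"}

# Data classifications this model is cleared to process (hypothetical policy).
ALLOWED_DATA_CLASSES = {"public", "internal"}


@dataclass
class AIRequest:
    use_case_id: str
    data_classification: str  # e.g., "public", "internal", "confidential"
    prompt: str


class GuardrailViolation(Exception):
    """Raised when a request fails a preventative control check."""


def enforce_guardrails(request: AIRequest) -> None:
    """Block the model call before it happens if any control fails."""
    if request.use_case_id not in APPROVED_USE_CASES:
        raise GuardrailViolation(
            f"Use case {request.use_case_id!r} is not council-approved."
        )
    if request.data_classification not in ALLOWED_DATA_CLASSES:
        raise GuardrailViolation(
            f"Data classified as {request.data_classification!r} may not be sent to this model."
        )


def handle_request(request: AIRequest) -> str:
    enforce_guardrails(request)        # preventative control: runs on every call
    return call_model(request.prompt)  # reached only if all checks pass


def call_model(prompt: str) -> str:
    # Placeholder for the actual model invocation.
    return f"[model response to: {prompt[:40]}]"
```

A detective counterpart would log the same checks for after-the-fact review rather than blocking the call; in practice, organizations often layer both.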
Good guardrails are supported by new ways of working. In particular, we’ve seen companies set themselves up for success with four moves.
Establish an AI council. Understanding the invaluable insights that risk and control leaders bring to the table, some organizations are including them in cross-functional steering committees that evaluate value and risk trade-offs. Through these AI councils, risk and control leaders are also involved in prioritizing use cases at the enterprise and business function level. This gives them visibility and voice at the point of strategic decision making, not just after-the-fact review.
Engage risk and control partners early. Leaders are assigning a “risk partner” to teams at the ideation, design, and prototyping stage. The risk partner helps business and technology sponsors navigate the risk approval process, rather than just signing off at the end. For instance, one bank added risk specialists to product squads, resulting in fewer handoffs and faster delivery. These advisers can help solve issues while the fixes are less expensive, leading to fewer last-minute surprises. This move also shifts the organizational mindset, helping risk and control leaders act as partners to innovators.
Clarify minimum risk requirements. Rather than delegating everything to risk and control functions, which often struggle with tight resources, AI leaders are also equipping teams with essential risk management know-how. “Risk-lite” training for product managers and tech leads can help them identify heightened risks, evaluate controls, and escalate AI use cases for formal review. This can be paired with risk-differentiated approval paths, such as fast-track processes for lower-risk use cases. Some companies have also employed short e-learning modules, office hours for Q&A, risk checklists, and one-page cheat sheets on topics like “top 10 risk triggers.” The goal is to raise the baseline: fewer unnecessary escalations, higher-quality submissions, and faster approvals. After introducing rapid compliance training and cheat sheets, one online bank slashed noncompliant customer communications and last-minute escalations.
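As a sketch of what a risk-differentiated approval path might look like, the snippet below routes a proposed use case to a fast track or to deeper review based on a few illustrative triggers. The trigger names and review tiers are assumptions, not a prescribed taxonomy.

```python
# Minimal sketch of risk-differentiated approval routing. Trigger names and
# review tiers are illustrative assumptions, not a prescribed taxonomy.

def triage_use_case(uses_customer_data: bool,
                    customer_facing: bool,
                    autonomous_execution: bool) -> str:
    """Return the approval path for a proposed AI use case."""
    if autonomous_execution:
        return "full review"      # autonomy always gets the deepest scrutiny
    triggers = sum([uses_customer_data, customer_facing])
    if triggers == 0:
        return "fast track"       # e.g., internal assistant on non-sensitive data
    if triggers == 1:
        return "risk-partner review"
    return "full review"


# Example: an internal drafting assistant that touches no customer data.
path = triage_use_case(uses_customer_data=False,
                       customer_facing=False,
                       autonomous_execution=False)
print(path)  # -> fast track
```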
Adopt a fit-for-purpose maturity model. Consider, for example, three simple levels of AI maturity: AI as a tool or assistant, AI executing with a human in the loop, and AI executing autonomously. Most companies are still operating at levels 1 or 2, but leaders in agentic AI are accelerating toward a future where agents can make cross-system decisions. While “human in the loop” has served as a form of control on AI usage, enhanced controls on models and processes will be needed as humans are disintermediated from the workflow. That’s why leading organizations are defining control approaches and requirements consistent with each level of AI maturity.
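One way to codify this is a simple mapping from maturity level to minimum control requirements, along the lines of the sketch below. The specific controls listed are assumptions meant to show the structure, not a definitive control library.

```python
# Illustrative mapping of the three maturity levels described above to minimum
# control requirements. The controls listed are assumptions, not a definitive
# control library.

MATURITY_CONTROLS = {
    1: {  # AI as a tool or assistant
        "human_review": "required for all outputs",
        "logging": "prompt and response logging",
    },
    2: {  # AI executing with a human in the loop
        "human_review": "approval required before any action executes",
        "logging": "full action audit trail",
        "access": "scoped credentials per use case",
    },
    3: {  # AI executing autonomously
        "human_review": "exception-based, after the fact",
        "logging": "full action audit trail with real-time alerting",
        "access": "least-privilege credentials per agent",
        "kill_switch": "automated rollback or shutdown on guardrail breach",
    },
}


def required_controls(maturity_level: int) -> dict:
    """Look up the minimum controls for a use case at a given maturity level."""
    return MATURITY_CONTROLS[maturity_level]
```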
Many agentic platforms are already building tools to monitor, evaluate, and fine-tune agent behavior and adherence to guardrails. The tech will keep getting better, but organizations shouldn’t wait. Now is the time to think critically about where agents fit into oversight and operations. Companies can start small, piloting in low-risk areas to learn what works. They’ll build the muscle to scale agents safely, ensuring they become a force of risk mitigation rather than risk exposure.
Getting started
Companies serious about scaling AI safely and swiftly can take six steps:
- Support control functions from the top. Use leadership messaging, performance management, and favorable discretionary compensation to signal the importance of their role.
- Redesign business, tech, risk, and HR processes. Assess to what extent current AI controls apply and what enhancements are necessary to address agentic workflows that support greater autonomy.
- Codify the nonnegotiables. What needs to be true for a use case to move forward? Without clear definitions, approval and prioritization are impossible, leaving use cases stuck in limbo.
- Use a risk-differentiated approach for evaluation and approval. Employ a tiered oversight model to fast-track low-risk use cases as preapprovals. Higher-risk ones warrant deeper scrutiny, better controls, and the appropriate level of oversight.
- Build in test-and-learn cycles. Every six months, perform a quality control review of how the AI governance model is working to pinpoint and address areas for improvement. It’s also useful to audit a sample of three to five use cases to determine whether new controls are warranted, whether risk types are receiving the right level of scrutiny, and whether risks are accurately categorized.
- Implement continuous post-approval monitoring. Keep risk and control leaders involved to monitor whether risks and benefits materialize as expected during scaling. Be ready to pivot if risks evolve or value falls short.
The best companies don’t view risk and control as a roadblock—they treat it as a strategic enabler. This behavioral shift empowers control functions to operate with agility, ensuring responsible use, speedier scaling, and continuous innovation. Companies that are laying the groundwork now aren’t just avoiding problems; they’re accelerating performance.