

As companies race to embed A.I. into their operations, the governance debate has stalled in the wrong place. Regulators deliberate over mandates, policymakers debate guardrails and developers argue over technical controls. These questions are important, but they overlook the most immediate driver of responsible A.I. governance: the people using these systems every day. Without investing in workforce capability, organizations risk embedding harm into their operations and finding themselves liable when things go wrong.
A.I. adoption is not waiting for governance to catch up
Companies are integrating A.I. tools wherever they can to capture efficiency and revenue gains, with or without oversight frameworks in place. Recent news from the U.K. illustrates this tension between governance and innovation. In the same week that the Treasury Committee warned that the financial sector’s ad hoc adoption of A.I. risked causing “serious harm” to society and the economy, Lloyds Banking Group announced that A.I. adoption increased its 2025 revenue by £50 million ($66.8 million).
The governance risk, then, is not only that A.I. is advancing quickly. It is also that A.I. is being embedded into workplaces where employees are not equipped to understand its limitations, failure modes or compliance implications. That gap is where new governance concerns are emerging.
The governance risks of deploying A.I. without literacy
The most predictable consequence of poorly governed A.I. adoption is what practitioners call “shadow A.I.” Without formal training, employees turn to unapproved consumer-grade tools to complete professional tasks, often without disclosure. In the U.K., 81 percent of A.I. users do not disclose that use to their managers. Sensitive corporate data can be entered into public models that retain or reuse inputs for further training, creating new regulatory and reputational risks.
The problem compounds when employees misunderstand how A.I. actually works. Staff may treat A.I. as a fact-based search engine rather than a pattern-based prediction engine, failing to critically assess the accuracy of its outputs. Take, for example, widely reported cases of lawyers sanctioned for submitting A.I.-generated “hallucinations” in court filings. When users cannot evaluate A.I. outputs effectively, it is their employer who bears the liability, undermining trust with clients and regulators.
Bias presents another governance frontier. A.I. systems inherit patterns from their training data. If employees fail to recognize discriminatory outputs, they risk embedding systemic bias into operational decisions. In 2021, this issue was brought to the fore in the U.S. by reporting that found automated lending systems rejected up to 80 percent of mortgage applications from Black applicants. Similar failures have since emerged in algorithms used to assess welfare applications and job applications. From a governance perspective, this creates significant ethical, legal and reputational risks, to say nothing of broader impacts on human rights and social justice.
Even where harms do not materialize, under-skilled deployment limits return on investment. Technology rollouts are not synonymous with digital transformation. Without redesigned workflows and trained employees, A.I. produces fragmented productivity gains rather than company-wide impact.
Building governance from the ground up
In Europe, the workforce dimension of governance is already recognized. The EU A.I. Act embeds A.I. literacy as a legal requirement for staff engaging with A.I. systems. In the absence of equivalent regulation in the U.S., companies must lead this effort themselves. Based on our experience advising organizations on A.I. governance, a credible bottom-up approach rests on three interconnected foundations.
The first is A.I. literacy, differentiated by role. For executives, literacy means knowing which questions to ask: How are we monitoring for bias? Who is accountable for model performance? When does human review override A.I. outputs? Leaders must be able to assess whether A.I. is a strategically appropriate response to a business problem, rather than a convenient one.
For technical teams, A.I. literacy means responsible data governance, model validation, performance monitoring and documentation. For end users in other roles, such as recruiters using A.I. screening tools, marketers drafting A.I.-assisted campaigns or analysts using generative A.I. as research assistants, literacy is practical and procedural. It involves understanding approved tools, verifying outputs, knowing how to escalate concerns and applying human judgment.
The organizations we’ve worked with that are ahead of the curve differentiate literacy training by role, treating it as an operational skill tied to accountability.
The second foundation is updated policies and procedures. Clear acceptable use policies reduce the likelihood of shadow A.I., prevent over-reliance on outputs and clarify accountability for A.I.-assisted decisions.
A.I. supply chains and procurement also require scrutiny. A.I. vendors should be subject to structured due diligence covering training, data governance, bias mitigation processes, monitoring capabilities and contractual clarity around liability. As we’ve written in the context of corporate sustainability, even well-intentioned organizations can undermine their governance efforts by relying on a poorly vetted supply chain.
The third pillar is clear accountability structures across the A.I. lifecycle. This may include cross-functional A.I. governance committees, Responsible A.I. leads, board-level risk oversight or engaging independent assurance providers. The structure will vary by organization size and sector. What matters is that responsibility is clear, and that governance is integrated into product development, procurement, compliance and risk management rather than treated as a separate exercise.
Responsible A.I. governance as an investment, not a constraint
A.I. governance debates will continue at the regulatory level. Standards will evolve, and enforcement landscapes will shift. Many of these factors remain outside any single company’s control. Workforce capability does not.
Reframing A.I. governance around employee investment, updated policies and clear accountability shifts agency back to business leaders. It also offers a constructive counterweight to concerns about A.I.-driven job displacement: rather than replacing workers, responsible A.I. governance equips and upskills them. Organizations that take this seriously will be better placed to maintain trust with clients, regulators and the public as scrutiny of A.I. adoption continues to grow.
Amelia Williams is a Senior Research Impact Officer at Trilateral Research with expertise in scientific communication at the intersection of emerging technologies, environmental issues, ethics and policy. At Trilateral, she supports the development and implementation of research projects alongside policy, media, and industry engagement.

