
Scaling AI's Trust Wall

Andreessen Horowitz's Sakina Arsiwala published a sharp piece last week arguing that AI's next billion users won't arrive through better models. They'll arrive through trust. The internet was borderless. Intelligence will not be.

I want to take that argument one level deeper. Because from where I sit, building Carver, the trust problem has a specific shape that most AI companies haven't fully reckoned with yet.

Not one wall. Many.

When we talk about AI trust, we tend to think of it as a single barrier: brand credibility, user confidence, data privacy. Those matter, but one category of trust is different from all the others: regulatory trust.

Regulators carry legal authority. What they mandate isn't a preference or a best practice; it holds up in court. You don't climb that wall because it improves your product metrics. You climb it because the alternative is being locked out of the market entirely, or operating in it with your customers exposed.

And it is not one wall. It is a lattice.

There are walls at the country level, each regulator asserting its own standards, shaped by its own legal history and political culture. Walls at the sector level, where financial services regulators think about risk completely differently from employment regulators or health authorities. Walls at the function level, where what a CRO needs to demonstrate to a board is different from what in-house counsel needs to defend in an enforcement action.

Every dimension has its own wall. And they all have to be climbed simultaneously.

The walls aren't going away.

We are in a fragmentation phase of the global economy. Not a temporary dislocation. A structural shift. Geopolitical decoupling, economic nationalism, cultural assertion. Every jurisdiction is reinforcing its regulatory identity, not dissolving it into some harmonized global standard.

AI companies are global by nature. They serve clients across markets by default. But the trust infrastructure those companies need is defined locally, enforced locally, and shaped by decades of institutional history that no product launch can shortcut.

The walls reflect accumulated culture, politics, and accountability. They will not be negotiated away.

There is one more dynamic that makes this harder. Regulators coordinate. The ICO talks to MAS. ESMA influences how SEBI thinks. The FSB sets frameworks that two dozen jurisdictions implement in parallel. That coordination is good for standard-setting, but it means a failure in one jurisdiction doesn't stay local. Regulatory credibility, and regulatory damage, propagates across the lattice.

You cannot climb one wall in one market and call it done.

Who actually decides.

The first generation of enterprise AI was sold past the people who mattered most.

Risk and compliance professionals are not blockers. They are the gatekeepers and protectors of institutional value. When something goes wrong, legally or reputationally, they are the ones who answer for it. Their judgment about what intelligence is trustworthy enough to act on is not a procurement hurdle. It is the standard.

The next wave of professional AI adoption runs through them. And they do not adopt technology because it is impressive. They adopt it when they trust it. In their world, trust is not a feeling. It is a standard that regulators set, that courts enforce, and that professional accountability structures reinforce every day.

What Carver is building.

This is the problem Carver was built to solve.

We process regulatory signals from hundreds of regulators globally (enforcement actions, guidance documents, thematic reviews, consultation papers) and surface structured intelligence to the risk and compliance teams that need to act on it.

But the product is not a feed. It is a trust-scaling system.

Carver RegWatch Signal Detail

Four things specifically make that possible.

  1. Global regulatory intelligence, shaped to the agent. A compliance team in Singapore operating in financial services needs different signals than a legal team in Frankfurt covering employment law. Raw regulatory output is noise without structure. The intelligence has to arrive already calibrated to the jurisdiction, the sector, and the function of the team receiving it.
  2. Continuous evolution. The walls move. Regulators update guidance. Enforcement posture shifts after a high-profile action. A new administration changes the tone of supervision. AI-specific regulation is being written in real time, layered on top of existing frameworks, with no settled consensus yet on where AI governance ends and data protection begins. Regulatory intelligence that captures a moment in time is already wrong by the time anyone acts on it. Climbing the wall means tracking how it is changing, everywhere it matters.
  3. Predictive operations. Knowing what a regulator said last week is necessary but not sufficient. A consultation paper today is an enforcement priority in eighteen months. A thematic review in one jurisdiction becomes a template that three others follow. The teams that see the wall ahead of time can build for it. The ones who only see it when it arrives are always reacting.
  4. Cross-functional coordination. Regulatory walls do not respect org charts. The ICO's guidance on automated hiring decisions lands simultaneously on Compliance, Legal, HR, and Technology. Each function has a different response requirement. Climbing the wall means the intelligence arrives pre-mapped to the people who need to act on it, not routed into an inbox and left there.
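The calibration and routing ideas in points 1 and 4 can be sketched as a simple match over the three wall dimensions: jurisdiction, sector, and function. Everything below (class names, fields, the sample teams and signal) is illustrative, not Carver's actual data model or API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signal:
    # One regulatory signal, e.g. an enforcement action or guidance note.
    source: str              # issuing regulator, e.g. "MAS"
    jurisdiction: str        # e.g. "SG"
    sector: str              # e.g. "financial_services"
    functions: tuple         # org functions that must respond
    summary: str

@dataclass(frozen=True)
class TeamProfile:
    # What one team covers across the three wall dimensions.
    jurisdictions: frozenset
    sectors: frozenset
    function: str

def route(signal, teams):
    """Return the names of teams whose profile matches the signal
    on all three dimensions at once."""
    return [
        name for name, p in teams.items()
        if signal.jurisdiction in p.jurisdictions
        and signal.sector in p.sectors
        and p.function in signal.functions
    ]

# Hypothetical subscribers: the two teams from point 1.
teams = {
    "sg-compliance": TeamProfile(frozenset({"SG"}),
                                 frozenset({"financial_services"}),
                                 "Compliance"),
    "de-legal": TeamProfile(frozenset({"DE"}),
                            frozenset({"employment"}),
                            "Legal"),
}

sig = Signal("MAS", "SG", "financial_services",
             ("Compliance", "Technology"),
             "Thematic review of AI model governance")

print(route(sig, teams))  # ['sg-compliance']
```

The point of the sketch is that a signal arrives pre-mapped: the Frankfurt legal team never sees the Singapore financial-services signal, and a signal tagged for several functions fans out to each of them rather than landing in one inbox.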


The bottom line.

A16z is right that AI scales through trust. In the regulatory domain, that trust is not optional, not temporary, and not simple. It is a lattice of country walls, sector walls, and function walls, all moving, all coordinated, all legally enforceable.

The companies that figure out how to climb it globally will be the ones that reach the next generation of professional AI users. The rest will keep hitting the wall.

Find out more about Carver's regulatory intelligence at https://carveragents.ai