
Andreessen Horowitz's Sakina Arsiwala published a sharp piece last week arguing that AI's next billion users won't arrive through better models. They'll arrive through trust. The internet was borderless. Intelligence will not be.
I want to take that argument one level deeper. Because from where I sit, building Carver, the trust problem has a specific shape that most AI companies haven't fully reckoned with yet.
When we talk about AI trust, we tend to treat it as a single barrier: brand credibility, user confidence, data privacy. Those matter, but there is one category of trust that is different from all the others: regulatory trust.
Regulators speak with legal authority. What they mandate isn't a preference or a best practice. It holds up in court. You don't climb that wall because it improves your product metrics. You climb it because the alternative is being locked out of the market entirely, or operating in it with your customers exposed.
And it is not one wall. It is a lattice.
There are walls at the country level, each regulator asserting its own standards, shaped by its own legal history and political culture. Walls at the sector level, where financial services regulators think about risk completely differently from employment regulators or health authorities. Walls at the function level, where what a CRO needs to demonstrate to a board is different from what in-house counsel needs to defend in an enforcement action.
Every dimension has its own wall. And they all have to be climbed simultaneously.
We are in a fragmentation phase of the global economy. Not a temporary dislocation. A structural shift. Geopolitical decoupling, economic nationalism, cultural assertion. Every jurisdiction is reinforcing its regulatory identity, not dissolving it into some harmonized global standard.
AI companies are global by nature. They serve clients across markets by default. But the trust infrastructure those companies need is defined locally, enforced locally, and shaped by decades of institutional history that no product launch can shortcut.
The walls reflect accumulated culture, politics, and accountability. They will not be negotiated away.
There is one more dynamic that makes this harder. Regulators coordinate. The ICO talks to MAS. ESMA influences how SEBI thinks. The FSB sets frameworks that two dozen jurisdictions implement in parallel. That coordination is good for standard-setting, but it means a failure in one jurisdiction doesn't stay local. Regulatory credibility, and regulatory damage, propagates across the lattice.
The first generation of enterprise AI was sold past the people who mattered most.
Risk and compliance professionals are not blockers. They are the gatekeepers and protectors of institutional value. When something goes wrong, legally or reputationally, they are the ones who answer for it. Their judgment about what intelligence is trustworthy enough to act on is not a procurement hurdle. It is the standard.
The next wave of professional AI adoption runs through them. And they do not adopt technology because it is impressive. They adopt it when they trust it. In their world, trust is not a feeling. It is a standard that regulators set, that courts enforce, and that professional accountability structures reinforce every day.
This is the problem Carver was built to solve.
We process regulatory signals from hundreds of regulators globally, including enforcement actions, guidance documents, thematic reviews, and consultation papers, and surface structured intelligence to the risk and compliance teams that need to act on them.
But the product is not a feed. It is a trust-scaling system.

Four things in particular make that possible.

A16z is right that AI scales through trust. In the regulatory domain, that trust is not optional, not temporary, and not simple. It is a lattice of country walls, sector walls, and function walls, all moving, all coordinated, all legally enforceable.
The companies that figure out how to climb it globally will be the ones that reach the next generation of professional AI users. The rest will keep hitting the wall.
Find out more about Carver's regulatory intelligence at https://carveragents.ai