What Held, What Diverged, and What the Room Left Open
A candid afternoon with European ODD practitioners — pensions, funds of funds, banks, consultants, and family offices — on the risks that are shifting, the standards that are diverging, and the verification problem no one has yet solved.
Thank you for joining us in London on 7 May 2026. These takeaways capture the key themes from the conversation, shared back with participants as a resource and reflection of the discussion you helped shape.
Roundtable participants represented the breadth of the European allocator community — pensions, funds of funds, bank platforms, consultants, and family offices. All insights are attributed to the group. No individual firm or participant is identified.
What made this roundtable distinctive was the candour. The practitioners in the room live inside these workflows daily, and the friction they described was real, structural, and in several cases quietly compounding.
- Callback procedures are breaking down. Voice-based verification has been a cornerstone of fraud prevention — AI-enabled deepfakes are eroding its reliability faster than firms are replacing it, and no credible replacement standard has yet emerged.
- Scope is the blind spot in service provider diligence. Reviewing the actual fund administration agreement is the only reliable way to know who is doing what. Exceptions such as SOC 1 qualified opinions warrant direct follow-up with the service provider and the manager.
- Manager pushback is rarely about the merits. When managers resist a finding, they are running a calculated test of allocator conviction. The only credible response is genuine willingness to walk away, backed by internal alignment.
- US and European ODD standards are diverging. AIFMD and FCA oversight set a different expectation than the US framework — particularly on valuation independence, electronic communications, and conduct risk. The gap is structural, not stylistic.
- AI adoption is uneven; governance is lagging. Roughly 70% of allocators are now asking managers about AI use as a baseline disclosure question. The trajectory mirrors how cybersecurity diligence evolved — disclosure becomes governance, and governance becomes a threshold.
Risk Is Shifting — and the Direction Is Clear
The conversation opened with a practical question: which risks are receding, and which are growing?
On the receding side, expert network governance has tightened materially across the industry. Transcription services have further reduced MNPI exposure. Disaster recovery, long a staple of ODD questionnaires, has largely normalised post-COVID — most managers are now fully cloud-based. The exceptions: certain systematic and CTA managers still maintain legacy physical DR facilities, and some managers keep DR facilities uncomfortably close to their primary sites (under 50 miles).
The growing risks were more pointed; they occupied the rest of the afternoon and are taken up in the sections below.
Service Provider Diligence: Scope Is the Blind Spot
The discussion on fund administrators surfaced a persistent issue: confirming the actual scope of a service provider relationship matters as much as confirming the relationship exists.
Even full-service administrators frequently do not provide all services to a given manager. Investor relations, NAV calculations, transfer agency, and cash control may each sit with different parties — or not be covered at all — regardless of what the administrator's advertised capabilities imply. Reviewing the actual agreement is the only reliable approach.
Specific concerns from the room:
For private markets managers, the independent valuation agent is the most consequential service provider to verify. In VC, administrators often exercise limited control over valuations — practical authority frequently rests with the manager, creating the very conflict the structure is meant to prevent.
Not all administrators hold ISO and SOC 1 certifications. Where SOC reports exist, confirm the report is Type II rather than Type I, and scrutinise the audit scope — firms can limit what is covered, and a narrow scope can obscure meaningful gaps. A qualified opinion on a SOC 1 should not be treated as a routine finding. Two questions warrant direct follow-up: Is this consistent across review periods? And what is the manager actually doing about it?
The framework for evaluating an administrator has moved beyond certifications. Team size, systems capability, and the manager's actual influence over that administrator are all material inputs.
Leverage, Timing, and the Dynamics of Remediation
The room was frank about how power dynamics shape ODD outcomes.
Managers frequently tell allocators that a given question has never been asked before. The room reached a quiet consensus: they almost certainly say the same thing to every allocator.
Timing is the more consequential dynamic. ODD teams arriving late in a fundraising cycle find that leverage shifts considerably. But there is a subtler dynamic at play even when timing is right: when managers push back on a recommendation or decline to remediate a finding, they are frequently not disagreeing on the merits. They are testing whether the allocator will actually walk away. It is a calculated read on conviction — and managers have become sophisticated at making it. The only credible response is genuine willingness to act on the concern, backed by internal alignment that makes the walk-away threat real rather than rhetorical.
When a manager resists a recommendation, they are rarely disagreeing on the merits. They are testing whether you will walk away.
Larger managers have accumulated significantly more negotiating power than they held seven or eight years ago. The primary lever that remains is exclusion from the next re-up. That threat has teeth, but only with genuine internal alignment behind it.
On remediation: allocators own the decision; consultants recommend. Raising issues at the tail end of a fundraising process is difficult, because that is when resolution is hardest — unless the allocation at stake is significant. The room agreed that arriving earlier changes almost everything.
The Atlantic Gap: European and US ODD Standards
Regional differences ran as a consistent undercurrent.
European allocators operate within a more structured environment. AIFMD drives requirements around valuation independence and risk management. Valuation practice was a specific point of divergence. The room considered current US standards weaker than European norms — both in the independence of the process and the conservatism of the marks.
SEC oversight shapes how US-based allocators think about electronic communications; FCA oversight shapes the focus on conduct risk and documentation standards. The room's assessment was plain: US ODD standards are more relaxed in some areas.
European allocators also travel less frequently for on-site visits than their US counterparts. That difference shapes what can be assessed in person, and has become a point of structural distinction between the two approaches rather than simply a carbon footprint preference.
No unified cross-regional framework exists, and there was no expectation of one.
AI in ODD Practice: Adoption Is Uneven, Governance Is Lagging
This generated the most discussion, and the most variance in experience. Firms are at meaningfully different stages — some have recently deployed general-purpose LLM tools such as Copilot, ChatGPT, and Claude for the first time; others have built proprietary AI applications for specific workflows. Several firms have implemented AI usage trackers, requiring staff to demonstrate how AI is embedded in their work rather than simply affirming that it is.
Junior staff are being managed inconsistently. Some firms have asked junior employees to hold off on AI tools entirely. Others are actively using AI as a training scaffold. The room acknowledged that junior staff often identify novel use cases faster than senior colleagues — and that blanket caution may carry its own costs.
Hallucinations are a material concern. Most firms have imposed token limits as a practical cost guardrail. A minority have not. The skill gap — knowing what to look for in AI output, and how to prompt effectively — is not being closed fast enough.
Cyber training must catch up. Staff awareness of AI-enabled threats — deepfakes, synthetic impersonation, more convincing phishing — is an active gap. The threat is evolving faster than most training programmes.
Headcount is not falling — yet. AI is not reducing team sizes today. Whether that changes as tools mature was left open.
Who pays for AI? The cost question remains unresolved: fund expense or management company expense? No consensus, and no expectation of one soon.
What Allocators Are Now Asking Managers About AI
Our poll data from a recent AI webinar for asset managers captures what is landing on their side of the table. The leading question — raised by roughly 70% of allocators — is simply: do you use AI? Disclosure is the baseline.
Poll Result · Session Data: "What AI-related questions are allocators currently asking you?"
The pattern is familiar to anyone who watched cybersecurity diligence evolve over the past decade. It begins as a disclosure question. It becomes a governance question. Then it becomes a threshold. Managers treating AI disclosure as a compliance exercise are likely underestimating the trajectory.
Three Cities, One Conversation — What Held and What Diverged
DiligenceVault's 2026 roundtable programme has now run from New York in March through insights from Grand Cayman in April to London in May. Reading across all three, a consistent set of concerns has surfaced — but how they are being framed and acted upon varies in ways that matter.
On AI, the progression is clear: New York was grappling with a headcount paradox — AI automating without yet replacing, but the trend visible in declining operations headcount. Cayman put the adoption figure at over 90% of managers, then immediately questioned what "using AI" actually means in practice, concluding that most remain in assistive mode, that data quality is the binding constraint, and that AI-generated DDQ responses can misinterpret a manager's own policies. London was the most sceptical of the three — less interested in the adoption number and more focused on the governance gap, the hallucination problem, and the liability question.
On verification, the three cities arrived at the same problem from different angles. New York's response was to move background checks to always-on continuous monitoring. Cayman's was to return to in-person site visits, reasoning that video can be faked and polished documents can be generated. London left the question genuinely open: callback procedures are failing, and no replacement standard has yet credibly emerged. That unresolved gap is the most important thread connecting all three conversations.
On valuation, the cross-market contrast was sharpest. New York focused on the mechanics — cross-referencing marks across managers holding the same asset, demanding back-testing of prior vintages against actual realisations. Cayman worked through a detailed case study of a PIK situation where a 5% markdown was applied while portfolio NAV was reported as flat, making the point that structure, not valuation methodology, is the hardest challenge in private credit ODD. London elevated the conversation to a standards question: US valuation practice is structurally weaker than European norms, both in process independence and the conservatism of marks, and AIFMD sets a bar that SEC oversight currently does not.
On manager leverage and remediation, the framing evolved across the three. New York surfaced pass-through drag and the SEC exam findings letter as a veto signal. Cayman offered the cleaner principle: you can outsource the function; you cannot outsource the risk. London added the most human observation of the three — that when managers resist a finding, they are rarely disagreeing on the merits. They are running a calculated test of allocator conviction. The answer to that test is not a better argument. It is genuine willingness to walk away.
Questions the Room Left Open
- As callback procedures fail against AI-enabled fraud, what verification standard credibly replaces them?
- Where SOC scope is narrow and a qualified opinion persists across review cycles, at what point does that become a disqualifying finding?
- If US regulatory posture continues to relax, should European allocators apply a different standard to US managers than they receive under their own domestic frameworks?
- As AI accelerates junior workflow, what replaces the judgment that repetitive manual work used to develop?