Understanding how trust is built among people, institutions, and technologies is essential for thinking about how AI systems can be trusted to reliably address human needs while mitigating risks.
While the concept of trust is widely referenced in AI policy discussions, consensus remains elusive on its meaning, its role in governance, and how it should be integrated into AI development—especially in public service contexts.
To address this, the Schwartz Reisman Institute for Technology and Society (SRI) hosted a roundtable discussion on February 11, 2025, as part of the official side events at the AI Action Summit in Paris. Titled Building Trust in AI: A Multifaceted Approach, the discussion centered on insights from an upcoming SRI paper, Trust in Human-Machine Learning Interactions: A Multifaceted Approach, led by SRI Research Lead Beth Coleman, which examines multidisciplinary approaches to fostering trust in human-machine learning interactions.
Joining Coleman on the panel was a diverse set of experts, including Monique Crichlow of SRI, Donato Ricci of Sciences Po Medialab, Katharina Zügel of the Forum on Information and Democracy (the Forum), and Duncan Cass-Beggs and Matthew da Mota of the Global AI Risks Initiative at the Centre for International Governance Innovation. Each brought unique perspectives on trust in AI and its implications for governance, certification, and public adoption.
A central conclusion from the discussion was that trust is an action—continuously performed, reappraised, and evaluated between human actors and institutions. As Coleman noted, “We must move toward a better understanding of AI’s limitations and capabilities rather than getting caught in the noise around its potential.” Coleman added that while technical reliability plays a role in trustworthiness, actual trust in AI deployment depends more on public perception of the institutions and individuals managing these systems.
The discussion highlighted that as governments incorporate AI into public services and infrastructure, the regulatory frameworks that emerge, whether treaties, legislation, voluntary codes, or technical standards, will play a crucial role in shaping societal trust in AI. Coleman underscored this urgency, stating, “Trust is not an object but an ongoing negotiation. Understanding how AI fits within existing trust relationships is key to responsible deployment.”
How trust is conceived and interpreted in the development of AI systems and governance tools will be central to how societies frame the future goals of AI adoption and set thresholds for the safety, transparency, and fairness of AI systems.
The roundtable’s insights tied into the summit’s broader debates on AI governance, commercialization, and global cooperation.
Notable summit outcomes included:
While enthusiasm for formal international AI agreements appears to have tempered, these initiatives reflect continued collaboration across different aspects of AI governance.
While the AI Action Summit was praised by some for shifting discussions beyond safety and risk to consider AI adoption and economic impact, critics viewed this shift as a departure from the safety focus established at prior international meetings, such as the Seoul and Bletchley Park summits.
The discussion on trust in AI is far from over. As governments and industries race to deploy AI across sectors, ensuring that AI systems are not just technically reliable but also socially and ethically grounded will be a key challenge. The AI Action Summit reinforced that building trust in AI is not solely a technical issue—it is a governance, societal, and philosophical challenge that will shape the future of AI and society worldwide.