By Fernanda Arreola and Jean-Christophe Lassalle
In order to gain acceptance among the general public, AI systems must not only deliver correct information but also earn people's confidence on a variety of other, less tangible fronts. Arguably, mass transportation offers an excellent crucible for examining the issues at stake.
Artificial intelligence is increasingly embedded in the systems that guide everyday decisions, from online shopping to financial services and healthcare. Public transportation is no exception.1 In mass transit networks, AI now predicts delays, recommends alternative routes, and interacts with passengers through conversational interfaces.
But a fundamental question remains: Do people trust AI to provide relevant information, especially in high-stakes, time-sensitive situations such as daily commuting?
Drawing on insights from a study2 of the mass transportation industry, this article explores the conditions under which passengers trust or reject AI-driven information systems. The findings reveal a paradox: people welcome AI when it improves clarity and reduces uncertainty, yet they become skeptical when personalization feels opaque, manipulative, or intrusive. Trust in AI, particularly in essential public services, is not automatic. It must be deliberately designed.
Why Mass Transportation Is a Perfect Trust Laboratory
Mass transit systems offer a unique lens through which to examine trust in AI. Every day, millions of passengers rely on real-time information to make rapid decisions: Should I wait? Should I change lines? Will I miss my connection? Unlike entertainment or retail platforms, mobility decisions are constrained by time, physical presence, and limited alternatives. A poor recommendation has immediate consequences.
In dense urban rail systems such as metros, suburban trains, and regional express networks, delays are structurally unavoidable. Infrastructure saturation and operational complexity make perfect punctuality unrealistic. In this environment, information becomes critical: what passengers need is not perfection but predictability, and AI emerges as a powerful tool to meet that need. Yet the technology's success ultimately depends on user trust.
Mass transit is thus a real-world stress test for AI-generated information.3 What determines user confidence is not perfect punctuality but the quality of the information provided when disruption occurs. AI promises to reduce uncertainty, but only if passengers believe it works in their interest.
How AI Changes Passenger Information
AI reshapes mobility information in three essential ways.4 First, systems become predictive. They anticipate delays and congestion instead of merely reporting them. Second, information becomes personalized. Recommendations adapt to individual constraints such as time sensitivity or accessibility needs. Third, interaction becomes conversational. Chatbots and voice assistants reduce cognitive effort during stressful moments. These capabilities increase perceived relevance. Yet relevance alone does not guarantee trust.
Research5 shows that passengers evaluate trust on three dimensions. The first is competence, meaning the information must be accurate and consistent. The second is benevolence, which refers to whether the system appears to act in the passenger’s interest. The third is transparency, meaning the reasoning behind recommendations must be understandable. Passengers accept minor errors if the system is generally helpful. They reject systems that appear manipulative, opaque, or inconsistent across channels.
Privacy concerns further complicate trust. In essential services like transport, refusing data sharing often means accepting degraded service. This weakens the idea of fully voluntary consent and increases the importance of responsible governance.
Algorithmic bias also poses a risk. If AI systems are trained primarily on data from peak-hour commuters, underrepresented users, such as night workers or passengers with disabilities, may receive inferior recommendations. In mobility contexts, such failures have tangible consequences.
Executive-Level Governance: Why Trust Cannot Be Delegated to IT
Trust in AI-generated information is not merely a technical issue. It is a governance issue.6 Decisions about personalization logic, data boundaries, nudging mechanisms, and transparency standards directly affect institutional legitimacy. In public transportation, these choices cannot remain at the operational level.
Executive leadership must define clear purpose boundaries, responsible data limits, minimum transparency standards, and accountability structures. In the age of intelligent systems, governance becomes part of the value proposition.
Managerial Takeaways: Building Trust in AI Information Systems
For executives and policymakers, the question is not whether AI should be used, but how trust can be embedded into its deployment.
1. Make Explainability a Design Requirement
Trust increases when passengers understand why recommendations are made.
Action: Integrate simple, user-facing explanations into recommendation interfaces.
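As a minimal sketch of what a user-facing explanation might look like in practice (the route names, delay threshold, and wording below are hypothetical illustrations, not drawn from the study):

```python
# Toy illustration: attach a plain-language explanation to each AI routing
# recommendation. All routes, thresholds, and phrasing are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    route: str
    eta_min: int
    reason: str  # user-facing explanation, not an internal model trace

def recommend(delay_min: int) -> Recommendation:
    # Hypothetical rule: reroute via Line B when Line A's predicted delay
    # exceeds 10 minutes, and state the reason in plain language.
    if delay_min > 10:
        return Recommendation(
            route="Line B via Central",
            eta_min=28,
            reason=f"Line A is predicted to be {delay_min} min late; "
                   "Line B is currently faster.",
        )
    return Recommendation(route="Line A direct", eta_min=22,
                          reason="Line A is running close to schedule.")

rec = recommend(delay_min=15)
print(f"{rec.route} ({rec.eta_min} min). Why: {rec.reason}")
```

The design point is structural rather than algorithmic: the explanation travels with the recommendation as a first-class field, so any interface that displays the route can also display the reasoning behind it.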
2. Prioritize Data Governance Over Data Volume
Collecting more data does not automatically increase relevance and may reduce trust.
Action: Adopt data minimization and privacy-by-design principles as strategic positioning, not merely regulatory compliance.
3. Audit for Fairness and Inclusivity
Bias in routing or prediction models can undermine legitimacy.
Action: Test algorithms across diverse user profiles and incorporate accessibility constraints from the start.
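A fairness audit of this kind can start very simply, for instance by comparing prediction error across rider segments. The segment labels, error values, and review threshold below are purely illustrative:

```python
# Hypothetical fairness audit: compare mean prediction error across
# rider segments. Segment names and error data are illustrative only.
from statistics import mean

# Mean absolute error (minutes) of arrival predictions, by segment
errors_by_segment = {
    "peak_commuter":    [1.2, 0.8, 1.5, 1.1],
    "night_worker":     [4.0, 3.6, 5.1, 4.4],
    "reduced_mobility": [2.9, 3.3, 2.7, 3.1],
}

baseline = mean(errors_by_segment["peak_commuter"])
for segment, errors in errors_by_segment.items():
    gap = mean(errors) / baseline  # how much worse than the baseline segment
    flag = "REVIEW" if gap > 1.5 else "ok"
    print(f"{segment}: mean error {mean(errors):.1f} min, "
          f"{gap:.1f}x baseline [{flag}]")
```

Even a crude ratio like this surfaces the pattern the takeaway warns about: if off-peak or accessibility-constrained riders systematically see worse predictions, the model needs retraining on more representative data.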
4. Align Optimization with User Interest
AI can rebalance flows across a network, but behavioral steering must remain aligned with passenger benefit.
Action: Establish ethical oversight mechanisms to keep flow-steering from sliding into manipulation, and monitor user sentiment continuously.
5. Treat Trust as a Performance Metric
Traditional KPIs focus on punctuality or capacity utilization. In AI-enabled systems, perceived trustworthiness should also be measured.
Action: Integrate trust and user confidence indicators into digital transformation dashboards.
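One hypothetical way to turn survey responses into such an indicator, using the three trust dimensions discussed earlier (competence, benevolence, transparency). The weights and scale mapping are assumptions for illustration, not a validated instrument:

```python
# Hypothetical trust KPI: aggregate passenger survey items on a 1-5 scale
# into a single 0-100 dashboard score. Weights are illustrative.
from statistics import mean

survey_responses = [
    # (competence, benevolence, transparency) per respondent, 1-5 scale
    (4, 3, 2),
    (5, 4, 3),
    (3, 3, 2),
    (4, 4, 3),
]

def trust_index(responses, weights=(0.4, 0.3, 0.3)):
    """Weighted average of the three trust dimensions, rescaled to 0-100."""
    per_dim = [mean(dim) for dim in zip(*responses)]  # mean per dimension
    score = sum(w * d for w, d in zip(weights, per_dim))
    return round(100 * (score - 1) / 4, 1)  # map the 1-5 scale onto 0-100

print(f"Trust index: {trust_index(survey_responses)}")
```

Tracking a number like this alongside punctuality and load factors makes perceived trustworthiness visible to the same executives who already steer by operational KPIs.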
6. Apply the 3C Framework to AI Governance
Transportation planning has long relied on the 3C Planning Process: Continuing, Cooperative, and Comprehensive. This framework provides a powerful model for structuring AI governance in passenger information systems.
Action: Make AI governance continuing (regularly reassess systems for performance, fairness, and evolving societal expectations), cooperative (involve operators, regulators, and passenger representatives), and comprehensive (cover data practices, models, and user-facing communication alike).
Conclusion: Trust Is the Real Innovation
Mass transportation demonstrates that trust in AI is neither automatic nor purely technological. People trust AI-generated information when it is accurate, relevant, transparent, and aligned with their interests. They distrust it when personalization becomes opaque, data practices feel coercive, or recommendations appear manipulative.
In essential public services, trust is not a marketing attribute; it is a structural condition of legitimacy. The future of AI in mobility, and beyond, will depend less on algorithmic sophistication than on the ability to combine competence, transparency, and fairness. The real competitive advantage will belong to organizations that understand a simple truth: in the age of artificial intelligence, trust is the ultimate infrastructure.