By Madeleine Roantree and Adrian Furnham
Although AI-driven monitoring and performance tools are designed to optimise productivity, they may have an adverse effect on how employees perceive fairness, support, and control in the workplace. Attachment theory offers an illuminating perspective on how individuals may react to algorithmic oversight.
Attachment Theory in the Workplace
Attachment is defined as the human propensity to seek out and develop close affectional bonds with others. Attachment theory, originally developed to explain interpersonal relationships, offers a valuable lens for understanding these dynamics. This framework posits that individuals’ attachment styles—secure, anxious, or avoidant—shape how they respond to perceived support or threat in relationships, including those mediated by technology.
This article examines how AI-based monitoring systems interact with employees’ attachment styles, influencing their sense of trust, autonomy, and well-being. It proposes a psychologically informed approach to designing AI systems and provides actionable recommendations for fostering workplace environments that prioritise human needs alongside organisational goals.
Attachment theory suggests that early childhood experiences profoundly shape adult relationships.
Attachment theory suggests that individuals develop internal models of relationships based on early experiences with caregivers, and that these models continue to shape their interactions in adulthood. The resulting attachment styles manifest in the workplace as follows:
- Secure attachment: Individuals are comfortable with interdependence, view support systems positively, and adapt well to feedback.
- Anxious attachment: Individuals seek reassurance, fear rejection, and may perceive monitoring as a sign of distrust or criticism.
- Avoidant attachment: Individuals prioritise independence, may withdraw under scrutiny, and perceive monitoring as intrusive.
Research indicates that attachment styles significantly predict workplace outcomes, including engagement, collaboration, and stress resilience. When applied to AI-based monitoring, attachment theory suggests that employees’ reactions to algorithmic oversight depend on how these systems align with their relational expectations.
Research has demonstrated significant relationships between attachment styles and job performance, job satisfaction, burnout, feedback-seeking and acceptance, and organisational commitment. Attachment styles can therefore help explain a host of organisational attitudes, behaviours, and other outcomes.
Over a decade ago, Harms (2011) showed that attachment theory could also explain how the characteristics of leaders foster positive and negative outcomes in their subordinates. He suggested that organisations could use attachment dimensions when selecting supervisors, and that attachment could inform job design: for example, ensuring closer contact with supervisors for anxiously attached individuals, who may experience a sense of loss when physically separated from their leaders. Performance reviews, too, could be conducted and delivered in a way that is mindful that some followers are particularly sensitive to feedback suggesting their leader holds a negative perception of them. Such reviews should aim to strengthen the relationship rather than simply catalogue past behaviours seen as disruptive or off-putting.
AI Monitoring and Psychological Responses
Build algorithmic systems not just for efficiency, but for psychological safety.
Securely attached individuals may view AI tools as reliable support systems, whilst those with anxious or avoidant tendencies might interpret such monitoring as intrusive or distrustful. These responses can profoundly affect organisational outcomes, from engagement and innovation to burnout and turnover. We suggest a novel psychological framework for understanding algorithmic management, proposing that workplace technologies must be designed with human attachment needs in mind. Integrating insights from occupational psychology and behavioural science, this framework sets out principles for developing transparent, autonomy-supportive AI systems that foster trust rather than fear. We conclude with practical recommendations for leaders, HR professionals, and policymakers across Europe: build algorithmic systems not just for efficiency, but for psychological safety, thereby supporting both individual well-being and long-term organisational resilience.
The integration of artificial intelligence into workplace management has transformed how organisations monitor and evaluate employee performance. From tracking keystrokes to analysing communication patterns, AI-based tools promise enhanced efficiency and data-driven decision-making. However, their psychological implications remain underexplored. As organisations across Europe adopt these technologies, they must consider how AI-mediated oversight influences trust, autonomy, and emotional well-being—core components of a healthy workplace.
AI-driven tools, such as performance analytics and real-time productivity trackers, are often designed to optimise efficiency. However, their implementation can trigger varied psychological responses. Securely attached employees may perceive these tools as neutral or supportive, enhancing their sense of structure and fairness (Neustadt et al., 2011). Conversely, anxiously attached employees may interpret constant monitoring as evidence of mistrust, heightening stress and reducing engagement. Avoidant employees may disengage entirely, viewing AI oversight as an invasion of autonomy.
Studies show that perceived surveillance can reduce intrinsic motivation and increase burnout.
These reactions have tangible consequences. Studies show that perceived surveillance can reduce intrinsic motivation and increase burnout, particularly among employees with insecure attachment styles (Warnock et al., 2023). Furthermore, excessive monitoring may undermine psychological safety—the shared belief that a workplace is safe for interpersonal risk-taking—leading to lower innovation and higher turnover.
A Psychological Framework for Algorithmic Management
To mitigate these risks, organisations must design AI systems that align with human attachment needs. This requires a framework rooted in three principles:
- Transparency: Employees should understand how AI tools collect and use data. Clear communication reduces perceptions of threat, particularly for anxiously attached individuals.
- Autonomy-Support: AI systems should empower rather than control. For example, offering employees access to their own performance data fosters a sense of agency, appealing to both secure and avoidant individuals (the sketch after this list illustrates this principle alongside transparency).
- Psychological Safety: AI tools should be integrated into a broader culture of trust, where employees feel valued beyond their metrics. This is critical for fostering resilience across all attachment styles.
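To make the transparency and autonomy-support principles concrete, here is a minimal sketch of what a monitoring policy might look like in code, assuming a hypothetical in-house tool. The MonitoringPolicy class, its fields, and the disclosure wording are illustrative assumptions, not a description of any real product or API.

```python
# Minimal sketch of a transparency-first monitoring policy for a
# hypothetical in-house tool; all names and fields are illustrative.
from dataclasses import dataclass


@dataclass
class MonitoringPolicy:
    """Declares, in employee-readable form, what is collected and why."""
    metric: str                     # what is measured, in plain language
    purpose: str                    # why the organisation collects it
    retention_days: int             # how long raw data is kept
    employee_can_view: bool = True  # autonomy-support: self-service access

    def disclosure(self) -> str:
        """Plain-language statement shown to the employee (transparency)."""
        access = ("You can view your own data at any time."
                  if self.employee_can_view
                  else "Contact HR to request your data.")
        return (f"We record {self.metric} to {self.purpose}. "
                f"Raw data is deleted after {self.retention_days} days. {access}")


policy = MonitoringPolicy(
    metric="weekly ticket throughput",
    purpose="balance workloads across the team",
    retention_days=90,
)
print(policy.disclosure())
```

Generating the employee-facing disclosure from the same object that configures collection is a deliberate design choice: the statement shown to employees cannot drift out of step with what is actually recorded.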
The European Union’s AI Act (2024) provides a regulatory foundation for such principles, emphasising transparency and accountability in workplace AI. However, organisations must go beyond compliance to address the emotional and relational dimensions of technology use.
Practical Recommendations
To create AI systems that support trust, autonomy, and well-being, leaders, HR professionals, and policymakers should consider the following:
- Co-design with Employees: Involve employees in the development and implementation of AI tools to ensure they meet diverse psychological needs. This collaborative approach can enhance trust and reduce resistance (EU Agency for Fundamental Rights, 2023).
- Tailored Feedback Systems: Use AI to deliver personalised, constructive feedback rather than punitive metrics. For example, dashboards that highlight strengths alongside areas for growth can resonate with securely attached employees whilst reassuring those with anxious tendencies (see the sketch after this list).
- Training for Managers: Equip leaders to mediate between AI systems and employees, fostering open dialogue about monitoring practices. This can mitigate avoidant employees’ withdrawal and support a culture of psychological safety.
- Ethical AI Guidelines: Policymakers should expand the AI Act’s principles to include psychological impact assessments, ensuring that workplace technologies are evaluated for their effects on trust and well-being.
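As an illustration of the Tailored Feedback Systems recommendation, the short sketch below shows one way a dashboard could frame metrics strengths-first. The function name, the metrics, and the team-median threshold are hypothetical assumptions, not a description of any existing tool.

```python
# Illustrative strengths-first feedback framing; metric names and the
# team-median threshold are hypothetical, not a real product API.
from typing import Dict, List, Tuple


def frame_feedback(scores: Dict[str, float],
                   team_median: Dict[str, float]) -> Tuple[List[str], List[str]]:
    """Split metrics into strengths (at or above the team median) and
    growth areas (below it), so strengths are always surfaced first."""
    strengths = [m for m, s in scores.items() if s >= team_median[m]]
    growth = [m for m, s in scores.items() if s < team_median[m]]
    return strengths, growth


strengths, growth = frame_feedback(
    scores={"code review turnaround": 0.9, "sprint completion": 0.6},
    team_median={"code review turnaround": 0.7, "sprint completion": 0.8},
)
print("Strengths:", ", ".join(strengths))
print("Areas for growth:", ", ".join(growth))
```

Surfacing strengths before growth areas is the point of the exercise: it offers anxiously attached employees reassurance before critique, while still providing the concrete data that securely attached employees value.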
Conclusion
As AI reshapes the workplace, its psychological implications cannot be ignored. By applying attachment theory, organisations can better understand how employees respond to algorithmic oversight and design systems that foster trust, autonomy, and well-being. Transparent, autonomy-supportive AI tools, embedded in a culture of psychological safety, can enhance both individual and organisational outcomes. For Europe’s business leaders and policymakers, the challenge is clear: build algorithmic systems that prioritise human connection alongside efficiency. In doing so, they will cultivate workplaces that are not only productive but also resilient and humane.
About the Authors
Dr Madeleine Roantree is a UK-based psychologist and relationships expert. She divides her time between the NHS and private practice, working with individuals and couples.
Professor Adrian Furnham is a professor at the Norwegian Business School. He has long had an interest in the concept of attachment and how it applies to the workplace.