By Jack McGuire, David De Cremer, Leander De Schutter, and Yorck Hesselbarth
In the contemporary digital era, innovations such as artificial intelligence (AI) are profoundly transforming the business landscape (De Cremer, 2020). The buzz surrounding ChatGPT, coupled with recent assertions about the sentience of Google’s LaMDA, a large language model, underscores the prominence of chatbot technology in these advancements (Adamopoulou & Moussiades, 2020; Ryu & Lee, 2018; Tiku, 2022). Customer-oriented chatbots, an emergent application of this technology, offer unparalleled efficiency and cost-effectiveness, operating ceaselessly and responding to client inquiries in real time (Salesforce Research, 2019). Yet amidst these advantages lies an ethical conundrum. Customers cherish genuine human interaction and can quickly become disillusioned when they realise they’re communicating with a bot, not a person (Ciechanowski, Przegalinska, Magnuski & Gloor, 2019). Balancing this desire for authenticity with the allure of operational efficiency poses a challenge, making it tempting for businesses to deceive customers by blurring the line between human and machine.
Specifically, organisations are now confronted with a reality in which chatbots demonstrate remarkably human-like qualities (Collins & Ghahramani, 2021; Leviathan & Matias, 2018). This reality makes the choice to cut costs by adopting human-like chatbots a rational one. However, this choice is not so straightforward for organisations to make. After all, customers prefer the real thing (i.e., interaction with a human) to the artificial one, so acting on the rational choice requires organisations to deceive their customers by not disclosing that chatbots are being used.
But what are the risks when firms use chatbots without disclosure? What happens to the reputation of organisations engaging in these deceptive acts when customers find out what is really going on? And, even more importantly, what happens to the employees working for those organisations? When the deception is discovered, organisations are likely to suffer reputational damage, but will it also tarnish the careers of their employees? Several high-profile tech companies have already faced backlash over the unethical use of emerging technologies.
Consider the fallout from the Theranos fraud and misconduct scandal. While the company suffered legal and reputational damage, its employees faced a backlash too: several reported difficulties in job transitions, with potential employers associating them with the scandal (Lapowsky, 2021). As companies carry responsibility for their employees, accountability demands that they understand any potential effects on their employees’ careers before succumbing to the allure of deploying chatbots under a veil of deception. To test whether employees indeed suffer in their career prospects when the organisation they work for engages in deceptive chatbot practices, we conducted several experimental and field studies (McGuire, De Cremer, De Schutter, Hesselbarth, Mai & Van Hiel, 2023).
The Ripple Effect on Careers
First of all, our research unsurprisingly finds that organisations employing undisclosed chatbots are perceived as less ethical by customers once they are found out. Obviously, if you work for an organisation that is seen as unethical in its use of emerging technologies, this will affect your work identity. If so, how will it shape the judgements and subsequent actions of these employees? The Uber scandal involving the suppression of sexual harassment allegations offers some useful insights into this question. Employees at Uber, even those uninvolved in the misconduct, found that the company’s ethical breaches overshadowed their individual reputations, motivating many of them to resign (Kosoff, 2017).
To validate this idea, we ran a series of experimental studies in which employees of a simulated company were asked to facilitate deceptive chatbot use. Putting employees in this situation made them more likely to perceive their organisation as cultivating a culture of making unethical requests of its workforce. In turn, these perceptions made employees more inclined to quit their jobs. So, organisations that deceive their customers by pretending that humans handle customer enquiries are judged to be unethical by both customers and the employees working for them. As a result, customers show no loyalty to those organisations, and employees want to leave them.

But where can those employees go? Are they contaminated in the eyes of the job market? With today’s rapid transmission of information online, a company’s unethical practices can become widely known and thus shape employees’ professional trajectories. To study this phenomenon, we conducted two more studies in which we assessed how these employees are seen by recruiters. Our results showed that employees who had worked for an organisation known to use chatbots deceptively were perceived by recruiters as less trustworthy, were less likely to be offered a job, and were offered a lower salary when they did receive an offer. The deceptive use of chatbots therefore has widespread repercussions: it harms not only the company, but also the people who work there.
The Responsibility of Tech Professionals: A Call to Action
The case is clear. Tech professionals must champion ethical AI use. The broader societal implications of our creations cannot be ignored, and advocating for transparency and ethical guidelines protects both the company’s reputation and your own professional standing. The findings from our research offer two actionable takeaways:

1. For organisations: be transparent about chatbot use. The short-term efficiency gains of deception are quickly outweighed by reputational damage, customer disloyalty, and employee turnover.
2. For employees: recognise that an employer’s deceptive practices can follow you onto the job market, affecting how trustworthy recruiters judge you to be, your chances of receiving a job offer, and the salary you are offered.

In conclusion, as AI’s role in business grows, its ethical use is critical. It is not merely about company profits; it is about the careers and reputations of those who make up the organisation. Prioritising ethical AI practices isn’t just a business imperative; it’s a career necessity.
About the Authors
Jack McGuire is a Postdoctoral Research Associate at the D’Amore-McKim School of Business at Northeastern University (Boston). He received his PhD in Management & Organization from the National University of Singapore Business School and his MSc from University College London. Prior to this, he was an experimental lab manager and research assistant at Cambridge Judge Business School, University of Cambridge. Jack’s research examines the psychological consequences of artificial intelligence and its increasing application in the workplace. This work has been published in the Journal of Business Ethics, Computers in Human Behavior, the International Journal of Human–Computer Interaction, and Harvard Business Review, among others.

David De Cremer is currently the Dunton Family Dean of the D’Amore-McKim School of Business and professor of management and technology at Northeastern University (Boston), and an honorary fellow at Cambridge Judge Business School and St Edmund’s College, Cambridge University. Before moving to Boston, he was a Provost’s chair and professor in management at the National University of Singapore and the KPMG endowed professor in management studies at Cambridge University. He is the founder and director of the Center on AI Technology for Humankind (AiTH) in Singapore, which was hailed by Times Higher Education as an example of interdisciplinary approaches to AI challenges in society.
He is one of the most prolific behavioural scientists of his generation and is recognised as a global thought leader by Thinkers50. He is a best-selling author whose books include “Leadership by Algorithm: Who Leads and Who Follows in the AI Era?” and, most recently, “The AI-Savvy Leader: 9 Ways to Take Back Control and Make AI Work”, to be published by Harvard Business Review Press in 2024.

Leander De Schutter is an assistant professor at the Vrije Universiteit Amsterdam, the Netherlands. He is interested in leadership and decision-making in the workplace.

Yorck Hesselbarth is building foundation models with European values at Nyonic AI, contributing to digital sovereignty on the continent. Previously, he conducted research in the field of human-computer interaction and led several cutting-edge AI projects for the German Armed Forces.