By Laetitia Cailleteau and Patrick Connolly

As conversational artificial intelligence (AI) advances, it is able to sustain ever more human-like relationships with end users. This can vastly improve customer and employee experiences, but it also creates complex ethical and trust considerations. As EU AI regulation starts to take shape, Accenture’s new research offers a practical approach AI designers can use to identify and mitigate these issues more systematically, as part of a broader responsible AI framework.

By some estimates, the market for conversational artificial intelligence (AI) technologies will reach $13.9 billion by 2025—and it’s easy to see why.1 Using advanced technologies such as affective computing, facial recognition and large transformer models like BERT and GPT-3, companies can vastly improve customer and employee experiences. For example, “digital human” technologies can replicate human emotions, gestures, and visual cues in some customer service touchpoints, as UBS, BMW, Southern Health Society and Noel Leeming’s Stores are discovering.

UBS, for example, created a prototype digital double for its chief economist, Daniel Kalt, with the potential to use that avatar (with full disclosure) in certain “face-to-face” meetings with high-wealth clients. The “digital Kalt,” created in a process using more than 100 DSLR cameras, draws on information provided by the real Kalt to communicate with others; it also makes eye contact and reacts to conversational cues, for example by smiling.

Meanwhile, United States insurer MetLife has applied Cogito’s emotion AI coaching solution to help its agents improve their customer interactions in real time, using prompts including “slow down your speech,” “ask an open question” or “take time to think about how the customer might be feeling.” As a result of these prompts, MetLife has achieved a 14-point improvement in its net promoter score (a loyalty measure). The company has also increased its “perfect call” scores by 5%, achieved 6.3% greater issue resolution, and reduced call handling time by 17%.
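
Cogito’s product is proprietary, but the underlying pattern, mapping live call signals to coaching nudges, can be illustrated with a minimal rule engine. Everything in the sketch below is a hypothetical assumption made for illustration: the metric names, thresholds and prompt wording are not Cogito’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class CallMetrics:
    """Snapshot of live call signals (all fields and values are illustrative)."""
    agent_words_per_minute: float  # agent speaking rate
    agent_talk_ratio: float        # share of airtime held by the agent, 0-1
    customer_sentiment: float      # -1 (negative) .. +1 (positive), from an upstream model

def coaching_prompts(m: CallMetrics) -> list[str]:
    """Map live call metrics to real-time nudges; thresholds are hypothetical."""
    prompts = []
    if m.agent_words_per_minute > 170:
        prompts.append("Slow down your speech.")
    if m.agent_talk_ratio > 0.7:
        prompts.append("Ask an open question.")
    if m.customer_sentiment < -0.4:
        prompts.append("Take time to think about how the customer might be feeling.")
    return prompts

# Example: a fast-talking agent dominating the call with an unhappy customer.
print(coaching_prompts(CallMetrics(185.0, 0.8, -0.6)))
```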

The news isn’t all good, however. There is a sobering set of risks associated with conversational AI: as these technologies advance and take on increasingly human-like characteristics, the ethical risks of using them rise in step. What if the AI learns a bias? What if it engages in stereotyping? Where is the line between supporting and persuading, or even manipulating? What might human users inadvertently disclose to a machine that they wouldn’t want to disclose to another human? Are there instances where users might believe they’re communicating with a human when they’re not? What are the implications of these potential scenarios?

Recent responsible AI initiatives attest that business leaders and governments are both keenly aware of, and concerned about, these potential dangers. The European Commission’s proposed AI Regulation,2 if approved, will take a risk-based approach that, for example, would prohibit the use of systems with significant potential to manipulate human behaviour and actions. It would also place strict requirements on “high risk” conversational AI use cases and subject all conversational AI solutions to a transparency requirement.

These are important actions. Still, in the absence of industry standards and clear regulatory guidance, we have found that business leaders, product owners, designers, developers, and data scientists lack a practical way to identify and address ethical risks. To fill that gap, we have developed a practical approach, one that considers the intricacies of technology development and human rights in tandem, to help conversational AI designers and leaders think through the ethical implications and potential consequences of their decisions as they develop and deploy conversational AI tools. Our approach focuses on a set of framing questions around three critical facets of conversational AI: Looking Human, Understanding Humans, and Behaving in a Human Way. We believe that considering the technologies from these three entry points will help companies lower the associated risk and increase their opportunity for success. See our full-length research report for more detail.

Looking Human

Mimicking human features and characteristics in virtual agents can increase their ability to engage end users. But care must be taken not to unintentionally embed stereotypes and discrimination. For example, research summarized in a recent UNESCO report, “I’d Blush If I Could,” highlighted how female-sounding voice assistants often respond to abusive language with playful evasion at best or flirtation at worst. No decision is entirely neutral, and each choice must be thought through and weighed on its own merits.
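
One way to act on the UNESCO finding is to make the assistant’s abuse-handling policy an explicit, reviewable design artifact rather than a by-product of its training data. The Python sketch below is purely illustrative: a production system would use a trained toxicity classifier rather than a keyword list, and the canned response is a hypothetical example of deliberate, non-flirtatious boundary setting.

```python
# Hypothetical abuse-handling layer for a conversational assistant.
# The keyword list is only a stand-in to keep the example self-contained;
# a real system would use a trained toxicity/abuse classifier.

ABUSIVE_TERMS = {"stupid", "shut up", "idiot"}  # illustrative placeholder only

def is_abusive(utterance: str) -> bool:
    text = utterance.lower()
    return any(term in text for term in ABUSIVE_TERMS)

def handle_normally(utterance: str) -> str:
    # Placeholder for the assistant's usual dialogue handling.
    return f"(normal reply to: {utterance!r})"

def respond(utterance: str) -> str:
    if is_abusive(utterance):
        # A firm, reviewed boundary-setting response, chosen by the design team
        # rather than left to the language model's defaults.
        return ("I won't respond to that language. "
                "I'm happy to continue when we can keep this respectful.")
    return handle_normally(utterance)

print(respond("You're an idiot"))
print(respond("What's my account balance?"))
```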

Key questions to answer:

  • What visual identity and personality are we choosing for our AI assistant and why?
  • How does this identity support the goals for the interaction?
  • What accent, pitch, pace, and tone of voice is appropriate?
  • What unconscious biases might affect decisions about visual appearance?
  • What is inappropriate or abusive language and how should the assistant respond?
  • How well does the assistant align with brand engagement objectives and create user stickiness?

For example, to encourage greater inclusivity and representativeness, Accenture has developed and open-sourced non-gendered voices for digital assistants, including Sam, the world’s first non-binary voice solution. To do this ethically, Accenture surveyed non-binary individuals and used their feedback and audio data to influence not only pitch but also speech patterns, intonation and word choice.

Understanding Humans

The data companies gather, and what they infer about users’ wants, needs and behaviours, informs how they engage with users and the products and services they offer. As “always-on” data-gathering sensors and devices become increasingly prevalent, it is vital that users are fully aware of what is being inferred, that they are in control of what data is gathered, and that safeguards are in place to protect their human rights. For example, Stanford’s open-sourced, privacy-preserving Genie virtual assistant protects privacy by executing all data operations locally. It does not “listen in” on user conversations, and its natural language model is trained primarily on synthesised data. It also allows users to share data privately, with fine-grained control and without disclosing it to third parties: a user can decide who has access to which information and in what situation, for example, “Allow my parent to be notified of motion detected in my house, only when I am not present.”
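
Genie expresses such rules in its own programming language (ThingTalk), but the shape of a fine-grained, locally evaluated sharing policy can be sketched in ordinary Python. The structure and field names below are illustrative assumptions and do not reflect Genie’s actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SharingPolicy:
    """A user-defined rule: who may receive which event, under what condition."""
    recipient: str                     # e.g. "parent"
    event: str                         # e.g. "motion_detected"
    condition: Callable[[dict], bool]  # evaluated locally, before anything leaves the device

def owner_is_away(context: dict) -> bool:
    return not context["owner_present"]

# "Allow my parent to be notified of motion detected in my house,
#  only when I am not present."
policy = SharingPolicy(recipient="parent", event="motion_detected", condition=owner_is_away)

def maybe_notify(policy: SharingPolicy, event: str, context: dict) -> None:
    # Data is shared only when both the event and the user's condition match;
    # the check itself runs on the user's own device.
    if event == policy.event and policy.condition(context):
        print(f"notify {policy.recipient}: {event}")
    else:
        print("event kept local; nothing shared")

maybe_notify(policy, "motion_detected", {"owner_present": False})  # notifies the parent
maybe_notify(policy, "motion_detected", {"owner_present": True})   # nothing leaves the device
```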

Data such as video, voice, text and physiological signals are the foundation of emerging technologies such as emotion AI and affective computing, which attempt to infer a human user’s emotional state. Understanding that state can be hugely beneficial in areas such as helping children with autism, but this mode of learning raises serious considerations relating to accuracy (is the inference scientifically robust?), legality (does it infringe the user’s legal rights?) and ethics (is it the right thing to do?).

Key questions to answer:

  • How much data is being collected? Are we inadvertently collecting more than we need? And if so, how are we using it?
  • Are we being transparent about what we collect and how we use it?
  • Have we established clear user consent?
  • What steps are being taken to mitigate bias?
  • To what extent are we making inferences about a user’s emotional state?
  • How is privacy and data security ensured?
  • What access controls are we implementing?
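
To make the accuracy and consent questions above concrete, here is a minimal sketch of text-based affect inference using the open-source Hugging Face transformers library. Sentiment is used as a crude stand-in for richer emotion inference, and the confidence threshold and consent flag are illustrative assumptions; voice- and video-based inference raises the same questions with even weaker scientific grounding.

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English sentiment model

def infer_emotional_state(utterance: str, user_consented: bool):
    """Return a coarse emotional label, or None when inference should not be used."""
    if not user_consented:
        return None  # no inference at all without explicit, informed consent
    result = classifier(utterance)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    if result["score"] < 0.90:
        return None  # treat low-confidence inferences as unknown rather than acting on them
    return result["label"]

print(infer_emotional_state("I've been on hold for an hour and I'm furious.", user_consented=True))
print(infer_emotional_state("I've been on hold for an hour and I'm furious.", user_consented=False))
```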

Behaving in a Human Way

The simulation and stimulation of emotions and behaviours enable companies to engage end users in more positive ways. For example, chatbots have shown promise in helping people recovering from trauma.

To quell loneliness and improve health and quality of life, for example, Accenture Song worked with Stockholm Exergi to create Memory Lane, a reverse-engineered voice assistant AI that invites elderly people to share their stories. The AI understands the correlations between different answers and uses them to trigger relevant follow-on questions. For example, it might ask: “Can you tell me about your first true love?” and follow up with: “Could you tell me about your first date?”

Every day, Memory Lane analyzes the previous conversation and uses the findings to build out a memory graph – a virtual, structured account of the person’s memories. To ensure privacy, all responses are stored locally on the user’s own smart speaker rather than uploaded to the cloud.
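
Accenture has not published Memory Lane’s internals, but the idea of a locally stored memory graph that drives follow-on questions can be sketched as follows. The node structure, the follow-up mapping and the storage path are all illustrative assumptions.

```python
import json
from pathlib import Path
from typing import Optional

# Illustrative memory graph: nodes hold remembered details, and each node maps
# to the follow-on questions it can trigger. All field names are assumptions.
memory_graph = {
    "nodes": {
        "first_love": {"text": "Met her at a dance hall in 1954."},
    },
    "follow_ups": {
        "first_love": ["Could you tell me about your first date?"],
    },
}

def next_question(last_topic: str) -> Optional[str]:
    """Pick a follow-on question triggered by the most recently shared memory."""
    questions = memory_graph["follow_ups"].get(last_topic, [])
    return questions[0] if questions else None

def save_locally(graph: dict, path: str = "memory_graph.json") -> None:
    # Stored on the user's own device (e.g. the smart speaker), never the cloud.
    Path(path).write_text(json.dumps(graph, indent=2))

print(next_question("first_love"))  # "Could you tell me about your first date?"
save_locally(memory_graph)
```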

Among the incredible stories Memory Lane has captured are the reflections of a World War II nurse and those of an early founder of the PRIDE movement in Sweden.

But any use of this technology must also consider the danger of “hypernudging” and “dark patterns”—situations in which emotional and cognitive biases are exploited at scale through manipulative interfaces. When human-like AI becomes the interface, how we present information, feedback and choices to the user determines whether we are crossing an ethical line.

Key questions to answer:

  • How open and transparent is our use of AI in each interaction?
  • What level of agency does the user have?
  • Are we adequately preserving their freedom of opinion, choice, and thought?
  • What recourse does the user have if they want to make a different choice?
  • Have we cast a wide net when thinking about the ways bad actors could engineer undesirable results?

Radically Human

In the book “Radically Human,” authors Paul Daugherty and H. James Wilson note that trust is set to become a key differentiator for AI companies. Those that fail to adapt, they write, will ultimately be left behind.

To be deserving of trust, companies must consider how trust is manifested in the design decisions that shape how conversational AI looks, understands and behaves, and what the implications are for workers, users and society at large.

While it is not enough on its own to ensure conversational AI is implemented in a responsible and trustworthy manner, this approach can be an invaluable tool for designers in identifying and addressing the ethical implications of their decisions, as part of a broader organizational responsible AI framework.

Conversational AI is on the cusp of profoundly changing the ways in which machines can support and improve human lives. But the technology is also sounding a clarion call for ethical oversight. Let’s get on it. 

About the Authors

Laetitia Cailleteau

Laetitia leads the Data and Artificial Intelligence Europe group at Accenture and the Conversational AI domain globally, driving innovation, sales and delivery for multiple industries and clients around the world.

Patrick Connolly

Patrick is a research manager at The Dock, Accenture’s global research, development and innovation center, located in Dublin, Ireland.

Endnotes

  1. “Conversational AI Market by Component (Platform and Services), Type (IVA and Chatbots), Technology (ML and Deep Learning, NLP, and ASR), Application, Deployment Mode (Cloud and On-premises), Vertical, and Region – Global Forecast to 2025.” MarketsandMarkets, June 2020. Retrieved from: https://www.researchandmarkets.com/reports/5136158/conversational-ai-market-by-component-platform
  2. “Proposal for a Regulation laying down harmonised rules on artificial intelligence.” European Commission. April 21, 2021. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence
