Getting Closer to Machines with Mindful Steps

By Robb Wilson

People have always had a strong tendency to anthropomorphise inanimate objects. Who doesn’t know someone who gave a name to their first car? But perhaps we should give more thought to the relationships we enter into with AI systems.

From rocks to hammers to high-impact drills, for thousands of years, we’ve grown accustomed to holding tools in our hands. Now, more often, we’re holding them in our minds. AI-enabled tools are rapidly growing more powerful and precise, and part of their predictive prowess lies in the ability to communicate through written and spoken words. This puts us on the verge of interacting with machines in much the same way we interact with humans, conversationally. Very soon, we will be having meaningful, ongoing, human-like relationships with machines.

According to Gartner, by 2025, generative AI will be a workforce partner within 90 per cent of companies worldwide (Gartner, Gartner IT Symposium 2023 Presentation, “We Shape AI – AI Shapes Us”, Mary Mesaglio and Don Scheibenreif, 16-19 October 2023). Gartner is calling these partners digital teammates – I call them intelligent digital workers (IDWs) – and if we’re going to populate our daily lives with scores of them, there’s significant work to be done. Organisations need to develop clear strategies and build systems with intention. Designers need to poke lots of holes and develop the iterative chops to quickly plug them – or, better yet, quickly divert flows in safer directions. Users will need to be able to understand what kinds of systems they are interacting with throughout their days.

With the race already well underway to connect powerful generative models to organisations and end users, business leaders will need to move quickly but intentionally. Taking the long view and thinking solutions through to all possible ends will be difficult to balance against incoming waves of disruption, but it will be necessary. These early moments of intimacy with machines will define the very nature of our relationship with these powerful new allies.

Let’s take a look at the ways we can connect with this new class of tool and the careful steps we can take in getting there.

Anthropomorphism, Innate and Powerful

The most memorable cars I’ve owned or ridden in had names and a semblance of personality that emerged from the experience of being inside their cabins. Plus, their headlights looked like eyes and their grilles grinned like mouths. Imagine how the dynamics will shift when we can have useful conversations with our cars. Or, when the IDW we speak with in the car is the same one that we can continue speaking with while walking into the kitchen.

The conversations we have with IDWs can take all kinds of forms, but in productivity settings it’s important to consider how human we want these interactions to seem – how much intimacy we want to create. With their ability to speak and write, to listen and read, large language models (LLMs) are already innately anthropomorphic. They pass the Turing test [1], with humanness to spare. They feel real.

With ChatGPT in particular, we’ve already seen people using conversational AI as an armchair therapist [2]. I’ve also heard people who have intimate knowledge of how LLMs operate express surprise at how intelligent they seem. Their anthropomorphic nature makes them incredibly powerful from a design standpoint – power that is, of course, double-edged. These models can be made to delight or deceive.

It’s becoming much easier to use anthropomorphism to fool an end user into believing that they are interacting with a real person, which can happen by design or by accident. Obviously designs that are intended to fool people into thinking they are human are unethical (perhaps barring designs that are intended for entertainment). In the realm of armchair therapy, these systems have led people down dark and, sadly, deadly paths simply by virtue of being human enough to seem trustworthy [3].

Bottom line, anthropomorphism can happen in designed and unintended ways. As we get our footing in the world of digital teammates and begin creating a new kind of intimacy with our tools, letting machines behave more like machines might be erring on the side of caution.

Getting Even Closer

OpenAI recently unveiled GPTs, customisable versions of ChatGPT that users can train without having to write code. According to their blog [4], “Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images, or analysing data.” Even more recently, Google launched their Gemini model, which can understand text, images, video, and audio. Gemini can also perform impressive tasks relating to maths and physics, and it understands and generates high-quality code.

GPTs are shareable and deployable within organisations, which is a step forward in their evolution. Gemini’s multi-modal capabilities are also game-changing. Still, I wouldn’t classify either of these tools as an IDW or digital teammate on their own. A truly intelligent digital worker has shareable skills and can interact multimodally, but they also have the ability to communicate across an organisation’s departments, systems, and data. Make no mistake, IDWs are being used right now by forward-thinking orgs with full-blooded AI strategy [5]. Companies that are still fiddling with siloed chatbots aren’t even on the same path.

This level of connectivity that IDWs provide, whether inside an enterprise or a household, is fertile soil for what the GPTs of the world are aspiring towards: personalisation. This personalisation offers opportunities to go beyond simply delighting users. Imagine if the voice interface I mentioned earlier could look through your email inbox to find the plumber you used a few years ago while also ordering you a ride to the airport, checking you into your flight, and sending a note to your friend in Chicago that your flight is on time.

That’s a home speaker use case loaded with high-value activity. The value of a truly smart speaker in a business setting is far greater. Well-orchestrated conversational AI can remove so much tedium from our work existence that the quality of our lives will take a dramatic upswing. I bring this up for two reasons.

Obviously there’s untold business value wrapped up in this level of personalisation. There’s also another layer of intimacy that bears consideration. If IDWs are radically improving your life in ways that the humans in your life aren’t capable of, how do we compartmentalise their existence? Are they best friends? Are they like family? Do IDWs become gods? The safe answer seems to be that they should be powerful yet benevolent machines working on behalf of all of us.

Regardless, the temptation to view IDWs as human and to trust them more than we should will be great. The design choices we make in creating IDWs – including, and perhaps especially, how we anthropomorphise them – will dictate the kind of intimacy that emerges between humans and machines.

Value in Closeness

In removing the tedious tasks that take up so much of our time, conversational AI might allow us to have deeper interactions with the humans in our lives. It also lets us interact with machines in a way that’s far less distracting than glowing screens: if we can periodically ask machines questions and receive written or spoken responses, we can spend less time looking at our pocket computers.

At the beginning of this piece, I mentioned that we frequently hold our tools in our minds. You could also say that our most cutting-edge tools now hold our minds in them. The GPT models were trained on nearly all of the internet. That represents thousands of years of accrued human knowledge. So, in an abstract sense, if you’re asking ChatGPT a question on your phone, the tool is in your hand and in your mind. At the same time, your mind – or, more accurately, our collective mind – is in the tool.

I recently had a conversation with Blaise Agüera y Arcas [6], AI researcher, author, and VP and Fellow with Google Research, and he reminded me just how collective human intelligence is. He spoke of the “islanding” effect, where small populations are cut off from larger societies and their level of innovation plummets. “It kind of shows you that people are a little bit like neurons in a bigger brain,” Blaise said. This realisation informs his take on AI, which he called a bit unorthodox.

“I think about [AI] more ecologically,” he said. “Their intelligence is our intelligence. The way we’ve arrived at AI is literally by training it on corpuses of human interaction. Their interaction with us is very, very human-like. I would argue it is human … It may be different from us in its implementation, but it’s not different from us in its culture or its intelligence.”

As Blaise sees it, in the way that there’s collectively more intelligence in a city than on an isolated island, AI can radically boost our capabilities. I agree, and can envision technology taking us to dizzying new heights – actually bringing us closer together as people and enhancing our collective intelligence to the point where even the most daunting problems we face (like climate change, widespread corruption, and inequality) can be solved.

While this might sound lofty or abstract, organisations are making immediate choices right now that are setting trajectories in a moment of massive acceleration. There are pathways that business leaders can begin to follow that will foster a measured and responsible form of intimacy between conversational AI and users. The work won’t be easy, but it’s hard to think of a more critical time to get something right.

About the Author

Robb Wilson is the co-founder and CEO of OneReach.ai and the GSX creator/builder platform, the only platform in the space named a leader by all of the most respected analyst firms (Gartner, Forrester, IDC, Everest, and others). He co-authored Age of Invisible Machines, the first WSJ bestselling book about conversational AI, and co-hosts the Invisible Machines Podcast. Robb has spent more than two decades applying his deep understanding of user-centric design to unlocking hyperautomation. In addition to launching 15 startups and collecting over 130 awards across the fields of design and technology, he has held executive roles at several publicly traded companies. A trusted thought leader in the realm of conversational AI and hyperautomation, Robb has played a part in creating a wide variety of products, apps, and movies that have touched nearly every person on the planet.

References:

  1. Turing test. Wikipedia. https://en.wikipedia.org/wiki/Turing_test
  2. ChatGPT is giving therapy. A mental health revolution may be next. 27 April 2023. Al Jazeera. https://www.aljazeera.com/economy/2023/4/27/could-your-next-therapist-be-ai-tech-raises-hopes-concerns
  3. Man ends his life after an AI chatbot ‘encouraged’ him to sacrifice himself to stop climate change. 31 March 2023. Euronews. https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-
  4. Introducing GPTs. ChatGPT. https://openai.com/blog/introducing-gpts
  5. Case Study: Global Fortune 50 Company achieves 83% CSAT Score By Automating Employee Experience. OneReach.AI. https://onereach.ai/portfolio/case-study-global-fortune-50-company-achieves-83-csat-score/
  6. S2E23 Identity and Collective Intelligence with Blaise Agüera y Arcas, VP at Google Research. December 2023. YouTube. https://www.youtube.com/watch?v=xZ2EQgINEh4
