By Sam Ward
At the beginning of this month, McKinsey released its annual **State of AI** 2023 report, showcasing how workplaces are responding to the generative AI boom. The results show that a third of workplaces are already using generative AI, but only 21% have appropriate governance policies in place.
This news comes at a time of unprecedented GenAI innovation. From Llama to Midjourney, the pace of what is possible with GenAI is changing week by week, with new releases arriving left, right and centre. While it’s remarkable that you can write dazzling product copy, automate repetitive tasks and create advertising creatives in seconds, the tech is moving so fast that government regulators simply can’t keep up. And that is a real cause for concern.
Among the 21% of AI-enabled workplaces that have established policies governing employee use, ‘inaccuracy’ is cited as the leading risk driving governance, with ‘security’ in second place. Most workplaces are simply not addressing AI-related risks at all.
The implications of GenAI are not yet fully known
Distinguishing between the potential of GenAI and effectively integrating it into a business process is crucial. Just as in the early days of GPS, those who relied on it too heavily, too early, often found themselves stuck down a dirt track on a road to nowhere. Similar risks loom with GenAI: its implications are not yet fully known, and its responses shouldn’t be applied to customers indiscriminately or woven seamlessly into processes.
Let’s suppose a company goes all-in on GenAI, making it such a central part of its solution that the business can’t function without it. We don’t have to imagine too much, as this is already happening: 10% of service ops teams are relying on GenAI for customer service, forecasting service trends and creating first drafts of documents. If rules and regulations emerge that limit or forbid the use of GenAI, these teams and the organisation as a whole could be in a tough spot.
Privacy and bias issues could leave businesses in a vulnerable spot
All AI has a little bit of bias because of the data it learns from. For example, if an LLM (large language model) has been built by an American company, it’s likely to have an American perspective, and if you’re using the same tool for, say, drafting emails or responding to complaints, then depending on the region you’re working in, these biases might not match up with your customers’ viewpoints.
Another fear is privacy: businesses that haven’t taken the time to implement proper guidelines and guardrails for handling customer data (e.g. service teams using chatbots for customer service) could find themselves accidentally leaking sensitive information through GenAI tools. It’s the same as any other dependency we rely on – GenAI comes with its own set of risks. Relying too heavily on an external piece of tech for our crucial business goals could leave us vulnerable.
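Guardrails like this can be light-touch. Below is a minimal, illustrative sketch in Python of the kind of redaction step a service team might place in front of a GenAI tool before any customer text leaves the organisation. The `call_llm` client, the regex patterns and the placeholder tags are all assumptions made for the example, not a reference to any particular product or API.

```python
import re

# Hypothetical guardrail: strip obvious PII from customer text before it
# ever reaches an external GenAI API. The patterns are illustrative only;
# a real policy would cover far more (names, addresses, account IDs, etc.).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def draft_reply(customer_message: str, call_llm) -> str:
    """Only the redacted message is passed to the (external) model."""
    safe_message = redact(customer_message)
    return call_llm(f"Draft a polite support reply to: {safe_message}")

if __name__ == "__main__":
    # Stand-in for a real API client, used here purely for demonstration.
    fake_llm = lambda prompt: f"(model output for) {prompt}"
    print(draft_reply("Hi, I'm jo@example.com, call me on +44 7700 900123", fake_llm))
```

The point is the order of operations: sensitive fields are stripped before anything leaves the organisation, and the definition of what counts as sensitive lives in one reviewable place rather than in each employee’s head.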
What’s more, if regulations suddenly tighten their grip, GenAI-powered products could vanish from the market with little warning. Unlike other dependencies, GenAI is particularly likely to attract regulatory attention because of its reliance on extensive public datasets, so it’s best to err on the side of caution.
Employing a Chief AI Officer will soon become business as usual
When considering any GenAI tools for work, always do your due diligence before jumping in with both feet. Tools for creative roles such as graphic design or copywriting are low risk in terms of data safety, and businesses should be adopting them to help employees boost productivity and automate mundane tasks. However, if your job title includes words like ‘delivery’, ‘process’ or ‘customer relationship manager’, you need a way of measuring the outcome you’re expecting, and you need a crystal-clear policy around data: how to manage it, and what the appetite is for your organisation’s data being used to train other models.
I would go as far as to say that you need to appoint someone in your business to be responsible for AI safety. A Chief AI Officer brings a very specific skill set, and appointing one is the natural next step for organisations that want to get serious about AI.
More jobs will be AI-enabled than AI-replaced
When considering the economy, it’s true that specific professions will cease to exist, and certain roles will decline. McKinsey’s report indicates that jobs in service operations are already set to decrease.
Undoubtedly, there will be an effect, and it’s imperative that we provide assistance to individuals navigating through this change. We mustn’t endorse an indifferent stance towards job losses, where people simply shrug off the issue. After all, AI is a collective resource intricately woven by all those who have ever used the internet. It’s important to distribute its benefits widely, ensuring its advantages extend to society at large, rather than being confined to a select group of tech enthusiasts who harnessed its vast information to develop AI-driven products.
Throughout history, technology has consistently displaced and generated employment opportunities. The positive news is that there are currently more jobs available than there are people to fill them. There exists a limit to what machines can do; human presence remains essential. Nevertheless, it is evident that our society requires heightened productivity within the market to enhance economic functioning, and artificial intelligence can play a pivotal role in meeting that demand. Many positions will become AI-augmented rather than fully replaced.
Governments are playing catch-up
As AI innovation continues apace, global government bodies are disjointed in their approaches to plugging the regulation gaps. Last week, China put forward 24 principles for mandating AI use while fostering innovation, while at the time of writing, the US has no dedicated AI-related laws. The EU is taking a hard-line approach with its AI Act; however, the act has come under scrutiny as it risks being defunct before it’s even begun, thanks to GenAI’s rapid expansion. Former UK politician Nick Clegg has called for an autonomous international agency rather than isolated, ‘fragmented laws’.
It’s highly likely that the legislation of the countries or regions you operate within (where your technology is hosted) will impact you. This introduces a challenge, as varying regions might necessitate distinct software builds or only permit certain functionalities.
Looking ahead, I envision large corporations building their own large language models, while open-source LLMs are more likely to be used privately and adopted by smaller enterprises. This would shift expenditure away from relying on multiple AI/cognitive services and towards hosting a single model that can deliver most of the same outcomes.
Time will tell
Sam Altman, CEO of ChatGPT creator OpenAI, has warned that society would be “crazy to not be a little afraid of AI”, and yet the general consensus is: build it and they will come. Only time will tell if we can mitigate the risks of AI in the workplace while welcoming this new dawn of AI innovation. The race is on.
About the Author
Sam Ward is currently working as Enate’s Head of AI Research & Development following almost two decades of engineering experience with a focus on innovation and research, specifically in AI and machine learning. He has a passion for solving complex technical problems and delivering solutions that are heavily augmented with AI.