AI and credibility: Where’s the balance?
2024 is shaping up as a year of massive change as workplaces come to grips with the reality of generative AI in its many forms – and yet, the biggest challenges remain very human ones.
The businesses, organizations and agencies that thrive will be the ones that keep people at the front and centre of everything they do, while harnessing the benefits technology offers.
Underpinning all of that is clear and credible communication. It's the connective tissue between the solutions of today and the opportunities of tomorrow.
It’s often overlooked in the excitement of the latest, greatest shiny object.
One of the biggest challenges of the year for leaders will be working out how to use generative AI, with everything it offers, without making costly mistakes that damage their reputation.
I’m having a ball playing with various generative AI tools – how about you? Some of my favourites are Descript – amazing for video production – and another with its genius feedback on your communication performance.
Once you go down this rabbit hole the possibilities are mind-blowing.
You can now easily create an AI presenter to say anything you want about your business in any accent you choose. You can do voice cloning, create music, even make images of your ancestors walk and talk.
You can record yourself on video reading your script and AI will adjust your eyeballs so it looks like you're looking into the camera. Crazy, right?
There’s not enough space here to itemise everything AI can do.
This is all amazing, but the question is, how far can you go with these incredible tools before they start to have an impact on your credibility?
Data security, inaccuracy, inherent bias, copyright – these are just a few of the risks for those who plunge into using generative AI on a large scale without proper guardrails.
The problem is, the lines are blurry. There’s no playbook. We’re basically making it up as we go along.
When I’ve spoken to audiences of leaders over the last few months, I’ve thrown up a couple of scenarios where an employee has used ChatGPT to perform certain, usually unremarkable, tasks.
I’ve asked them to choose whether they think the action is diligent – a good use of time and resources – or dodgy, even just a little.
Every time, the room has been divided. And there are always leaders who say they don’t have a clue.
People in your organization are using AI tools whether you acknowledge it or not. And there’s a good chance you don’t yet have any guidelines to help ensure the risks don’t become reality.
There are now various forms of government guidance on generative AI, along with policies from around the world. Some of these have useful approaches that you can adapt in your own business or organization.
At the heart of it is the need to keep our focus on human-centred decision-making. That’s a great rule of thumb to define the line between helpful AI uses and reputation risks.
The City of Boston sums it up well in interim guidelines:
Generative AI is a tool. We are responsible for the outcomes of our tools. For example, if autocorrect unintentionally changes a word - changing the meaning of something we wrote, we are still responsible for the text. Technology enables our work, it does not excuse our judgment nor our accountability.
Last year, leaders could put generative AI in the “early days” basket and not give it too much focus in the whirlwind of other issues.
2024 is different. Make it the year you get clear on how your team or organization will use AI in productive ways without killing your reputation.