WHY HUMANS NEED TO STAY AT THE CORE OF DECISIONS IN AN AI WORLD
Governments around the world are scrambling to develop guidelines that catch up with the onslaught of generative AI.
With hundreds of platforms now freely and publicly available, businesses and organizations are seizing the opportunity to fast-track many tasks, from research and report-writing to emails, video production and much more.
But how do you know if you’ve crossed a line with generative AI? Are there areas that should remain strictly human territory?
In Australia, the government has released interim guidance on generative AI. While aimed at public service agencies, it contains advice that could equally apply to any leader or organization.
The guidance recommends publicly available generative AI platforms should only be used when the risk of negative impact is low, adding that unacceptable risk includes situations “where services will be delivered or decisions will be made.”
It also stresses the importance of human-centred decision making – a crucial guiding principle for your organization regardless of your sector.
AI can be invaluable for gathering and sorting information, but it must not call the shots when final decisions are made, especially when those decisions impact human lives.
The City of Boston sums it up well in their interim AI guidelines: “Generative AI is a tool. We are responsible for the outcomes of our tools.”
At this extraordinary time in history, it’s important to remember that communication is an exchange of meanings between people.
Whether you’re communicating through the written word, images, voice or video, AI is useful, but it’s the human touch that builds true connection.
Dr Neryl East is a professional speaker and executive coach who shows leaders how to be heard, stand out and command influence. Connect with Neryl on LinkedIn here: https://www.linkedin.com/in/neryleast/