Is covert AI use a risk to your business? It's time to find out!

As generative AI continues to evolve at a whirlwind pace, many businesses now recognise the need for staff guidelines on the use of this technology - but they’re stumped on how to go about it.

How do you catch something so wild by the tail?

There are a few decent examples online of generative AI policies, but leaders are mostly waiting to see what others do before putting their own AI approach out there.

In fact, a few months ago when I asked an audience of communication professionals to rate their executive team’s interest in developing rules for responsible AI use, most said their leaders would rather have root canal than even think about it. 

On that score, it was interesting to see reports of Nine Entertainment developing principles for AI use – acknowledging both the opportunities and the risks.

I appreciate this is a very thorny issue given Nine is embroiled in industrial action over job losses. We’re yet to fully grapple with the deep impacts of these tech changes on many industries.

But at least on the question of AI usage guidelines, they’re beginning the conversation. One of their principles, “we start and end with humans”, is a solid foundation for your organisation’s AI rules, if you haven’t already developed them.

It’s important to also include guidance on risks like data security and inaccuracy of AI-generated information. For example, your guidelines might include that staff must:

  • Continue to be responsible for their work and comply with all privacy and confidentiality policies, making sure data stays secure and personal information is handled appropriately. 

  • Check any material generated by AI for accuracy, bias/stereotyping and copyright breaches.

  • Be able to explain, justify and take ownership of their advice and decisions. Generative AI is useful for gathering background information and drafting documents, but decisions must remain human-centred and based on verified information.

  • Assume information they input into generative AI tools could become public, so avoid sharing anything that could reveal confidential, personal or sensitive information.

This situation is changing so rapidly that today’s AI guidelines will be out of date before they’re even completed – but we need to start somewhere.

The alternative is that your staff will continue to use AI under the radar, generating content that could find its way into the public domain without proper human intervention. Because make no mistake, they are using it. Your challenge, like that of so many other leaders, is to harness AI’s power and amplify its benefits without damaging your reputation and credibility.

Neryl East
Neryl East is a reputation, communication and media expert who shows businesses and organisations how to stand out - for the right reasons!
http://neryleast.brandyourself.com/