Employees Using AI? Here’s what business leaders should know

Published: 29 August 2025

Another article about AI might feel like old news, but lately we’ve had plenty of questions from businesses asking what guidance they should give their teams.

It’s been over two years since AI tools like ChatGPT and Copilot burst into the mainstream with a whole lot of hype (and some understandable hesitation). At the time, many businesses felt wary, and some asked employees to refrain from using AI tools altogether.

But in 2025, that’s no longer a realistic approach (and you could be missing a whole lot of opportunities). In a recent survey, 46% of Australian workers reported using AI in their jobs, so if your business isn’t on board, it’s likely your team is using it without guidance.

While AI tools continue to bring exciting opportunities to our workplaces, that doesn’t mean those initial concerns were unfounded. It’s still important to understand the potential business risks when your team is using them.

So, how can you guide your team, keep your business data as secure as possible, and reduce the risk of misuse? Here are a few points to consider:

Find the opportunities and get clear on ground rules

What does your business consider appropriate use of AI? Be clear. Is it okay for research? Content writing? Admin tasks? What don’t you want it used for?

Providing structure around how to use AI (and how not to) doesn’t just protect your business from privacy and legal breaches; it could also help your team understand how to use it to their advantage and work more efficiently in ways they hadn’t considered.

You also need to communicate clearly with your team about what’s appropriate.

If you don’t have anything in place to help employees understand when AI is okay to use, developing an AI usage policy is a good place to start.

Here are a few points worth thinking about when deciding what’s appropriate for your team and what a policy might cover.

HOW is it being used?

  • What kind of information is being input into AI? Is this information confidential or already public? What are the risks of uploading it? If it includes names, client details or sensitive business information, you need to be across how this intersects with privacy laws.
  • When is AI being used? During regular internal meetings to take notes, or in external client meetings? Different contexts come with different legal and privacy implications, especially if people aren’t aware it’s being used.
  • What use of AI is completely off limits for your team? Creating anything that violates existing policies, laws, or ethical standards is a no-brainer. But you might also want to set boundaries around things like using AI in place of proper client consultation or in other situations where professional judgment is expected.

WHAT AI platforms are being used, and WHERE are they being accessed from?

Not all AI tools are created equal. Some are more secure than others, and free versions in particular can come with risks. Consider:

  • What AI platforms are your employees using? Are they trusted? Are there better alternatives?
  • Are employees using personal devices, phone accounts, or incognito browsers? If so, your business has no way of being across the information being input.

When people are using AI for work purposes, it’s best if they’re logging in with their work email. That way, any IP or data stays linked to the business account (as much as the AI tool allows), and if needed you can access that account if or when team members leave.

You may also want to invest in a preferred AI tool for your team, so you know what’s being used across the board.

What happens IF something goes wrong?

If someone accidentally shares sensitive info, or an AI tool spits out something dodgy that gets sent publicly or to a client, what’s the process?

Make sure your team knows what steps to take and how to report it. When you’ve created a culture where people feel safe to speak up, you’re in the best position to respond and manage any fallout.

Understand WHY it’s being used

Why are people in your team using it? Is it faster and more efficient? Or is it because they’re stuck or being asked to do something outside their skillset?

If it’s the latter, a training opportunity might be a solution.

AI can be a great support tool, but it’s not 100% reliable, and professional judgement is still needed. Make sure your team is comfortable doing their roles without AI and can recognise errors or outdated and incorrect information in its output.

Want to make sure your team’s clear on how to use AI?

If you’re not sure where to start, we can help you develop an AI usage policy that fits your business. Get in touch to find out more.