AI tools are already in your organisation. Your team is using ChatGPT, Gemini, Copilot, and other AI assistants to draft emails, analyse data, and solve problems. The question isn't whether to use AI, but how to use it responsibly.

Without clear policy, you risk data breaches, compliance violations, and inconsistent quality. With the right framework, you enable confident adoption while protecting what matters most.

This guide helps leaders create a practical AI policy that builds trust with teams, clients, and stakeholders.
Australia doesn’t have specific AI legislation yet, but existing laws apply to AI use in business. The Australian Government is taking a “voluntary framework” approach while monitoring for future regulation.
Privacy Act 1988

Personal information handled by AI must comply with the Australian Privacy Principles (APPs). This includes data collection, storage, and disclosure through AI tools.

Your responsibility: Ensure AI tools don't expose customer or employee data without consent.
Competition and Consumer Act 2010
AI-generated content and decisions must not be misleading or deceptive. Automated advice or recommendations must be accurate and fair.

Your responsibility: Verify AI outputs before using them in customer-facing materials or business decisions.
The Office of the Australian Information Commissioner (OAIC) published guidance on AI and privacy in 2024. Review their recommendations at oaic.gov.au for detailed compliance requirements.
The Australian Government’s AI Ethics Principles provide guidance for responsible AI use. While not mandatory, they represent best practice expectations.
Human-centred values
AI systems should respect human rights, diversity, and individual autonomy. Decisions affecting people require human oversight.
Fairness
AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination.
Privacy and security
AI systems should respect and uphold privacy rights and data protection, and ensure security of data.
Reliability and safety
AI systems should reliably operate in accordance with their intended purpose.
Transparency and explainability
There should be transparency and responsible disclosure about AI systems to ensure people know when they are engaging with them.
Contestability
When an AI system significantly impacts a person, there should be a timely process to allow people to challenge the use or output.
Accountability
Those responsible for AI systems should be identifiable and accountable for the systems, and human oversight should be enabled.
Here’s a starting framework. Customise the language and examples to match your organisation’s reality.
These principles work best when they’re specific enough to guide decisions but flexible enough to evolve with technology.
The Core Principles Template
## [Your Organisation's] AI Principles

### Why we're publishing these principles

AI is changing how we work. These principles guide our approach and hold us accountable. We're sharing them publicly because transparency builds trust with our team, clients, and partners.

Clear principles help everyone make confident decisions about when and how to use AI in their work.

### Our principles

#### 1. AI supports people, doesn't replace judgment

We use AI to enhance human capability, not substitute for it. Decisions affecting people, strategy, or significant resources require human review.

**In practice**: Team members use AI for research, drafting, and analysis. Managers review and approve all AI-assisted work before it reaches clients or informs major decisions.

#### 2. Quality matters more than speed

We deploy AI when it improves outcomes, not just because it's faster. Every AI application must meet our quality standards.

**In practice**: We test AI outputs against our quality benchmarks. If AI-generated work doesn't meet our standards consistently, we don't use it for that purpose.

#### 3. We're transparent about AI use

We tell clients and stakeholders when AI contributed to our work. We're honest about AI capabilities and limitations.

**In practice**: Client deliverables note when AI tools assisted in research, drafting, or analysis. We explain our quality assurance process and human oversight.

#### 4. We protect your information

We safeguard client and personal data rigorously. We don't use confidential information to train AI models. We comply with Australian privacy law and our data protection obligations.

**In practice**: We use enterprise AI tools with data protection agreements. We never input client-identifying information or sensitive business data into public AI systems. We maintain detailed records of data handling.

#### 5. We respect intellectual property

We honour copyright and attribution. AI assists our work but doesn't replace proper citation or licensing.

**In practice**: We attribute sources properly. We verify AI-generated content doesn't plagiarise existing work. We respect copyright in all materials, including training data.

#### 6. We keep learning

AI evolves rapidly. We regularly review tools, update practices, and train our team on responsible use.

**In practice**: Quarterly tool evaluations, monthly team training, and open discussion of successes and failures. We document lessons learned and share them across the organisation.

#### 7. We address bias and ensure fairness

AI can perpetuate bias. We actively monitor for unfair outcomes and maintain diverse review processes.

**In practice**: We review AI recommendations for potential bias before implementation. Decision-making teams include diverse perspectives. We correct problems when identified.

### Accountability

**Owner**: [Leadership role/team name]
**Review frequency**: Quarterly
**Questions**: [email@yourcompany.com]

### Living document

These principles evolve as we learn. Last updated: [Date]
Copy the entire section above and customise it for your organisation. Replace bracketed placeholders with your actual details.
Policy works when people know exactly what’s allowed and what isn’t. The traffic light framework below provides clear visual guidance your team can remember and apply.
The detailed framework below expands on the traffic light system. Use this when teams need specific guidance for edge cases.
The Practical Boundaries Template
## AI Use Categories

### Approved uses (no approval required)

Your team can use AI for these tasks without seeking permission:

- **Research and information gathering**: Using AI to find information, summarise articles, or explore topics
- **First draft generation**: Creating initial versions of emails, reports, or presentations that will be reviewed
- **Brainstorming and ideation**: Generating ideas, exploring options, or working through problems
- **Learning and skill development**: Using AI as a learning tool or to understand new concepts
- **Code assistance**: Getting help with syntax, debugging, or understanding programming concepts
- **Data analysis support**: Using AI to identify patterns, create visualisations, or suggest analytical approaches

**Requirements**: All outputs must be reviewed by a human before use. Never input confidential client data or sensitive business information.

---

### Requires approval (manager sign-off needed)

These uses require written approval from your manager:

- **Client-facing content**: Any material that will be shared directly with clients
- **Financial analysis**: AI-assisted financial modelling, forecasting, or investment recommendations
- **HR decisions**: Using AI to screen candidates, assess performance, or inform employment decisions
- **Legal interpretation**: AI assistance with contracts, compliance, or legal matters
- **Strategic planning**: AI input into business strategy or significant resource allocation
- **Public communications**: Media releases, social media posts, or public statements

**Requirements**: Submit your use case to your manager. Explain the AI tool, intended use, and quality assurance process. Wait for written approval before proceeding.

---

### Prohibited uses (not permitted)

Your team must not use AI for:

- **Processing sensitive personal information**: Health data, financial information, or government identifiers
- **Automated decision-making**: Decisions about people without human review (hiring, firing, promotion, credit assessment)
- **Legal advice to clients**: AI cannot replace qualified legal counsel
- **Financial advice or transactions**: AI cannot make investment decisions or execute trades
- **Bypassing security controls**: Using AI to circumvent data protection or access controls
- **Creating deepfakes or misleading content**: Generating fake images, videos, or audio of real people
- **Replacing required human oversight**: Any regulated activity requiring professional judgment

**Requirements**: Don't do these things. If you're unsure whether your use case falls into this category, ask your manager before proceeding.
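Teams that build an intake form or internal chatbot around these categories can encode the traffic-light triage as a simple lookup. The sketch below is a minimal illustration only: the task keys, return values, and default-to-amber behaviour are assumptions for this example, not part of the policy template.

```python
# Minimal sketch of the traffic-light triage above. The task keywords and
# category names are illustrative placeholders, not an official taxonomy.

APPROVED = "approved"               # green: no approval required
NEEDS_APPROVAL = "needs_approval"   # amber: manager sign-off needed
PROHIBITED = "prohibited"           # red: not permitted

CATEGORY_MAP = {
    "research": APPROVED,
    "first_draft": APPROVED,
    "brainstorming": APPROVED,
    "code_assistance": APPROVED,
    "client_facing_content": NEEDS_APPROVAL,
    "financial_analysis": NEEDS_APPROVAL,
    "hr_decisions": NEEDS_APPROVAL,
    "sensitive_personal_info": PROHIBITED,
    "automated_decision_making": PROHIBITED,
    "deepfakes": PROHIBITED,
}

def triage(task: str) -> str:
    """Return the traffic-light category for a task.

    Unknown tasks fall back to NEEDS_APPROVAL so the safe path is the default.
    """
    return CATEGORY_MAP.get(task, NEEDS_APPROVAL)
```

Defaulting unknown use cases to the amber category mirrors the template's advice: if you're unsure, ask your manager before proceeding.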
Understanding what counts as personal information helps your team make safe decisions quickly.

PII (personally identifiable information) is information that can identify a person on its own or when combined with other data.
**Allowed in approved enterprise tools (minimum necessary)**

- Names (e.g., "Jane Smith"), company, role/title, work email, work phone, meeting details
- Business context such as project names and account IDs that are not regulated identifiers
- Public business information already available through normal channels

**Prohibited PII (never input to any AI)**

- Tax File Numbers and other national IDs (Medicare number, passport, driver's licence)
- Financial numbers (credit card, bank account, BSB, CVV)
- Health or biometric data, medical details, genetic identifiers
- Authentication data (passwords, MFA codes, API keys, secrets)
- Sensitive personal attributes (racial/ethnic origin, religious beliefs, sexual orientation, political opinions)
- Children's personal data
- Home addresses and personal phone numbers for customers or employees
Rule of thumb: Names and work contacts are OK in approved tools. Any government ID, financial, health, or secret data is not.
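A lightweight pre-submission check can catch some of this before a prompt leaves the organisation. The sketch below is illustrative only: the regex patterns are simplified assumptions that will produce both misses and false positives, so treat it as a seatbelt alongside, not instead of, a proper DLP tool with checksum validation.

```python
import re

# Illustrative pre-submission check for the prohibited-PII list above.
# These patterns are deliberately simplified examples, not a validated
# detector; a real deployment would use a DLP tool (e.g. with TFN and
# credit-card check-digit validation).
PROHIBITED_PATTERNS = {
    "tax_file_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key_hint": re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"),
}

def flag_prohibited_pii(prompt: str) -> list[str]:
    """Return the names of any prohibited-PII patterns found in a prompt."""
    return [name for name, rx in PROHIBITED_PATTERNS.items() if rx.search(prompt)]
```

A wrapper around your approved AI tool could refuse to send any prompt for which this returns a non-empty list, and log the event for the AI Policy Owner.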
Someone needs to be responsible for AI governance. Make it official.
The Governance Framework Template
## AI Governance Framework

### Ownership and accountability

**AI Policy Owner**: [Name, title]

**Responsibilities**:

- Maintain and update AI policy quarterly
- Review high-risk AI use cases
- Monitor compliance with policy
- Report AI-related incidents to leadership
- Coordinate training and communication
- Maintain approved AI tooling register

**Executive Sponsor**: [Name, title]

**Responsibilities**:

- Approve policy changes
- Allocate resources for AI governance
- Champion responsible AI use across the organisation
- Escalate significant issues to the board

### Stakeholder roles

**Leaders**:

- Participate in quarterly reviews and education updates
- Actively leverage tools to build AI fluency
- Model responsible use for their teams

**Managers**:

- Ensure team compliance with policy
- Identify safe, high-value AI use cases
- Approve moderate-risk requests from team members

**CTO / IT Security**:

- Maintain approved AI tooling register and configurations
- Review medium/high-risk requests and run quarterly risk reviews
- Provide technical guidance and training support

**All users**:

- Follow policy and complete annual training
- Ask for help if unsure about a scenario
- Report incidents immediately

### Approval process for new AI tools

Before deploying a new AI tool or service:

1. **Submit request** to the AI Policy Owner with:
   - Tool name and vendor
   - Intended business use
   - Data it will access
   - Number of users
   - Cost and contract terms
2. **Risk assessment** completed within 5 business days, covering:
   - Data protection and privacy
   - Information security
   - Compliance requirements
   - Vendor reliability
3. **Decision** communicated with:
   - Approval or rejection
   - Any conditions or restrictions
   - Training requirements
   - Monitoring approach

### Tool approval categories

**Approved (enterprise-managed)**: Use organisation-managed accounts for approved enterprise tools. Follow configurations maintained by CTO/IT Security.

**Conditionally approved**: Specialist tools are acceptable when they run inside the organisation tenant and appear on the Approved AI Tooling Register.

**Not approved**: Do not use consumer or personal versions of any AI tool for work content, even if you hold a paid subscription.

### Incident reporting

Report AI-related incidents to the AI Policy Owner immediately:

- Data breach or privacy violation
- Significant inaccuracy in AI output used for decisions
- Bias or discrimination in AI recommendations
- Regulatory inquiry or complaint
- Vendor security incident
- Unauthorised AI tool usage
- Compromise of authentication credentials

### Review schedule

- **Monthly**: Review incident reports and usage metrics
- **Quarterly**: Update policy based on lessons learned
- **Annually**: Comprehensive review with external benchmarking
Selected team members test new AI tools and workflows in a controlled environment before wider rollout.

**Governance**: Champions operate under additional oversight, with enhanced monitoring and regular review of experiments.

**Knowledge sharing**: Champions document learnings and share wins, failures, and insights with the broader organisation quarterly.
The Champions Group experiments with AI in a secure, controlled manner before wider roll-out and is governed by a separate policy framework. Opportunities to join the Champions programme are offered periodically based on business need and individual interest.
When things go wrong, speed and clarity matter. Document your response process before you need it.
1. **Stop the activity**: Discontinue tool usage immediately and isolate the content involved. Don't delete anything yet.
2. **Notify leadership**: Inform your manager and the AI Policy Owner as soon as possible. For serious incidents (data breach, regulatory concern), escalate to the Executive Sponsor immediately.
3. **Document the details**: Record what was shared, which tool was used, when it happened, and who has access. Be thorough and factual.
4. **Preserve evidence**: Retain screenshots, file versions, and prompt text for investigation. Don't modify or clean up anything.
5. **Wait for direction**: Hold further action until guidance is provided by the response team. Don't attempt to fix it yourself.
Incidents happen even with good policy. How you respond determines whether they become minor corrections or major crises. Train your team on this process during onboarding.
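Teams that want a consistent record for the documentation step can sketch it as a small data structure. The field names below are hypothetical; map them onto whatever incident register or ticketing system you actually use.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the incident record described in step 3 above.
# Field names are illustrative, not a prescribed schema.
@dataclass
class AIIncidentRecord:
    reporter: str                  # who is reporting the incident
    tool: str                      # which AI tool was involved
    data_shared: str               # factual description of what was shared
    who_has_access: str            # people or systems with access to the content
    occurred_at: datetime          # when the incident happened
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    escalated_to_sponsor: bool = False  # True for data breach / regulatory concern
```

Capturing the record at report time, before any clean-up, also supports the evidence-preservation step: the facts are written down while they're fresh and unmodified.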
Technical teams need specific boundaries for code-related AI use.
**Use approved tools for boilerplate**: Tools like GitHub Copilot Business may assist with scaffolding and suggestions to speed up delivery.

**Protect proprietary work**: Never paste proprietary client code, secrets, or prohibited PII into prompts. Assume everything you input could become training data.

**Own licence compliance**: You are responsible for verifying the dependencies, security posture, and intellectual property status of generated code before using it.
Code generation tools can suggest code with security vulnerabilities, incompatible licences, or outdated dependencies. Always review generated code as if a junior developer wrote it.
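To make that review habit concrete, here is a hypothetical before-and-after: an AI-suggested helper that interpolated user input straight into SQL, and the validated, parameterised version a reviewer should insist on. The `find_user` function and `users` table are invented for illustration.

```python
import sqlite3

# Hypothetical review example. The original AI suggestion built the query
# with an f-string, a classic SQL injection vulnerability:
#
#     cursor.execute(f"SELECT * FROM users WHERE email = '{email}'")
#
# The reviewed version below validates input and uses a parameterised query.

def find_user(conn: sqlite3.Connection, email: str):
    """Look up a user by email, rejecting obviously malformed input."""
    if not email or "@" not in email:
        raise ValueError(f"invalid email: {email!r}")
    cursor = conn.execute(
        "SELECT * FROM users WHERE email = ?",  # placeholder, not string formatting
        (email,),
    )
    return cursor.fetchone()
```

The fix is one line of validation plus a query placeholder, which is exactly the kind of small, high-value correction the "review it like a junior developer wrote it" habit is meant to catch.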
Don’t try to implement everything at once. Follow this 90-day rollout plan.
1. **Weeks 1-2: Leadership alignment**: Get executive agreement on principles and priorities. Assign the AI Policy Owner and Executive Sponsor. Set a budget for tools and training.
2. **Weeks 3-4: Draft and circulate**: Customise the policy template. Share it with department heads for feedback. Refine based on practical concerns and edge cases.
3. **Weeks 5-6: Tooling and infrastructure**: Select and implement approved AI tools with proper data protection. Set up monitoring and reporting systems. Prepare training materials.
4. **Weeks 7-8: Team training**: Run mandatory training sessions for all staff. Cover principles, boundaries, and practical examples. Provide hands-on practice with approved tools.
5. **Weeks 9-12: Launch and monitor**: Officially launch the policy. Monitor usage and incidents closely. Hold weekly check-ins to address questions. Document lessons learned.
6. **Month 4+: Refine and scale**: Review incident reports and feedback. Update the policy based on real-world use. Share success stories. Plan advanced training and Champions Group expansion.
“Draft a polite follow-up to [Name, Company, Role, Work Email] about the action items from our 18 Sept meeting. Keep it under 150 words and propose two time slots next week.”
“Review this function for security vulnerabilities and suggest improvements. Focus on input validation and error handling.”
AI access request form
Use this template when team members request access to new AI tools or expanded permissions.

Tool requested: [Tool name and version]
Use case (1-2 lines): [Specific business need]
Data types: [What data will be processed? Any PII? If yes, justify and confirm it excludes all prohibited PII]
Risk level: Low / Medium / High
Owner / Reviewer: [Who will oversee this use?]
Pilot duration & budget (if applicable): [Timeline and cost]
Success metric: [How will you measure if this works?]
Approval required from: [Manager / AI Policy Owner / Executive Sponsor]
PII reference table
| Category | Examples | Allowed in approved enterprise AI? |
| --- | --- | --- |
| Work identity | Name, company, title, work email/phone | Yes (minimum necessary) |
| Government IDs | TFN, Medicare, passport, driver's licence | No |
| Financial identifiers | Bank account/BSB, credit card numbers/CVV | No |
| Health/biometric | Medical conditions, genetic/biometric data | No |
| Secrets & security | Passwords, tokens, API keys, one-time codes | No |
| Sensitive attributes | Race/ethnicity, religion, sexual orientation, political opinions | No |