
Why your organisation needs AI principles

Your team is already using AI. Whether it is drafting emails, summarising reports, or building spreadsheets, someone in your organisation has opened ChatGPT, Copilot, or Claude in the last week. The question is not whether AI is being used. The question is whether it is being used with clear expectations. AI principles give your people a shared language for making decisions. They set boundaries without slowing teams down, and they signal to clients and regulators that you take responsible use seriously.

This guide covers four steps:

1. Understand the landscape: know where government guidance is heading.
2. Draft your principles: create guidelines your team can actually follow.
3. Assign ownership: make someone responsible for keeping principles alive.
4. Publish and review: share widely and commit to regular updates.

The regulatory landscape is moving

Most governments have not introduced standalone AI legislation yet. Instead, the direction globally is a principles-based, voluntary approach, with existing laws (privacy, consumer protection, workplace safety) already applying to how organisations use AI. In October 2025, the National AI Centre (NAIC) released the Guidance for AI Adoption, which outlines six essential practices known as AI6. This is the current primary government reference for responsible AI use, replacing the earlier Voluntary AI Safety Standard.

AI6: Guidance for AI Adoption

Released October 2025. Consolidates the previous 10 voluntary guardrails into 6 essential practices. Comes in two versions: Foundations (for organisations starting out) and Implementation Practices (for scaling AI). Includes templates, a screening tool, and a policy guide.

Why it matters: This framework is voluntary today, but is expected to become the baseline for any future mandatory requirements.

Existing laws already apply

Privacy law, consumer protection, anti-discrimination, and workplace safety legislation already govern how you collect data, make decisions, and communicate with customers. AI does not create an exemption from these obligations.

Why it matters: You do not need to wait for new AI-specific laws to act. Your current legal obligations already shape how AI should be used.

The full AI6 guidance, templates, and screening tools are available at industry.gov.au/publications/guidance-for-ai-adoption.

The six essential practices (AI6)

Your AI principles should address each of these areas. The template in the next section maps directly to them.
1. Decide who is accountable for AI decisions in your organisation. This does not need to be a new hire. It can be an existing senior leader who owns AI governance alongside their current role.
2. Understand what your AI tools are doing and who they affect. Before rolling out a new tool, assess how it impacts employees, clients, and data.
3. Measure and manage risks specific to AI, including inaccuracy, bias, data leakage, and over-reliance.
4. Be open about how AI is used in your products, services, and internal processes. Tell clients when AI contributed to a deliverable.
5. Keep humans in the loop for decisions that affect people, finances, or strategy. AI assists. Humans decide.
6. Review your AI tools and practices regularly. Update your principles as the technology and regulatory landscape evolve.
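As a small illustration of practices 2 and 3 above, a pre-rollout screening check can start as a few lines of code rather than a new system. This is a hypothetical sketch, not the NAIC screening tool; the questions and the "all must pass" rule are assumptions you would replace with your own.

```python
# Hypothetical pre-rollout screening checklist for a new AI tool.
# The questions and pass rule are illustrative assumptions, not the
# official NAIC screening tool; adapt them to your organisation.

SCREENING_QUESTIONS = [
    "Has the AI Lead approved this tool?",
    "Is there a data protection agreement covering our use?",
    "Have we assessed the impact on employees and clients?",
    "Is a human reviewing outputs before they reach anyone?",
    "Do we know how to report problems with this tool?",
]

def screen_tool(name: str, answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (approved, unresolved questions) for a proposed tool."""
    unresolved = [q for q in SCREENING_QUESTIONS if not answers.get(q, False)]
    return (len(unresolved) == 0, unresolved)

# Example: one unanswered question blocks the rollout.
approved, gaps = screen_tool(
    "Acme Summariser",
    {q: True for q in SCREENING_QUESTIONS[:-1]},
)
print(approved)   # False
print(gaps)       # ["Do we know how to report problems with this tool?"]
```

Even a checklist this simple forces the conversation the AI6 practices ask for: someone has to answer each question before a tool goes live.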

Building your AI principles

Your principles do not need to be perfect on day one. Start with a clear framework, test it with your leadership team, and refine based on real use.

Before you start

Answer these questions with your leadership team before drafting.

What is our stance?

Are you encouraging AI adoption, proceeding cautiously, or somewhere between? Be honest about where you sit.

What matters most?

Is it speed, quality, compliance, or innovation? You cannot optimise for everything. Pick your priority.

What is non-negotiable?

Identify absolute boundaries. Client data? Financial decisions? Legal advice? List what AI cannot touch.

Who is accountable?

Name one person responsible for AI oversight. Without ownership, principles become shelf-ware.

Draft your core principles

The template below gives you seven principles you can customise to your organisation. Each one maps back to the AI6 practices and common legal obligations around privacy, consumer protection, and intellectual property. The principles follow a consistent structure: a clear statement, a short explanation, and “What this looks like here” examples you replace with your own real actions. Scan the headings first, then work through the ones you need to customise.
## [Your Organisation's] AI Principles

AI is part of how we work. These principles guide our approach and hold us
accountable. We share them publicly because transparency builds trust with
our team, our clients, and our partners.

These principles align with the Guidance for AI Adoption (AI6) and our
obligations under existing law, including privacy, consumer protection, and
intellectual property legislation.

---

### 1. People lead, AI supports

We use AI to enhance human judgment, creativity, and expertise. AI helps our
team work faster and smarter, but final decisions, especially those affecting
people, clients, or strategy, always involve a person.

**What this looks like here:**
- [Example: Our team uses AI for research and first drafts. A team member
  reviews, edits, and approves every output before it reaches a client.]
- [Example: No automated decision affecting an employee or client outcome
  is made without human sign-off.]

---

### 2. Quality over speed

We use AI when it improves outcomes, not just because it is faster. Every
AI-assisted output meets the same quality standard as work produced without
AI.

**What this looks like here:**
- [Example: AI-generated content goes through our standard review process
  before delivery.]
- [Example: We test new AI tools against our quality benchmarks before
  rolling them out to the wider team.]

---

### 3. Transparency by default

We tell people when AI played a role in our work. We are clear about what AI
can and cannot do. We do not overstate AI capabilities or hide its
involvement.

**What this looks like here:**
- [Example: Client deliverables note when AI tools assisted in research,
  drafting, or analysis.]
- [Example: We maintain an internal register of AI tools we use, what they
  are used for, and who approved them.]

---

### 4. Your data stays protected

We protect client and employee data rigorously. We only use enterprise-grade
AI tools with appropriate data protection agreements. We never input
confidential or personally identifiable information into public AI tools.

**What this looks like here:**
- [Example: All AI tools are approved by our IT lead and operate under
  enterprise data protection agreements.]
- [Example: We maintain a list of approved AI tools. Any tool not on the
  list requires approval before use.]
- [Example: We comply with applicable privacy law and data protection
  principles in all AI-related data handling.]

---

### 5. Respect intellectual property

We respect copyright and intellectual property in everything we create. AI
assists our creative process. It does not replace proper attribution,
licensing, or original thinking.

**What this looks like here:**
- [Example: We verify that AI-generated content does not reproduce
  copyrighted material before publishing.]
- [Example: We attribute sources and do not present AI-generated text as
  original research.]

---

### 6. Check for bias and fairness

AI can reflect and amplify biases present in its training data. We actively
monitor for unfair outcomes, seek diverse perspectives in our review
processes, and correct issues when we find them.

**What this looks like here:**
- [Example: We review AI-generated recommendations and outputs for
  potential bias before acting on them.]
- [Example: Recruitment, performance, and client-facing decisions that
  involve AI are reviewed by more than one person.]

---

### 7. Keep learning, keep improving

AI changes fast. We commit to ongoing learning, regular tool reviews, and
updating our practices as the technology and regulatory environment evolve.

**What this looks like here:**
- [Example: We review our AI tools and principles quarterly.]
- [Example: Team members complete AI training as part of onboarding and
  receive updates when tools or policies change.]
- [Example: We track government guidance and industry developments to stay
  aligned with emerging best practice.]

Copy the template above and replace the bracketed examples with actions your organisation actually takes or plans to take. Remove the brackets when you are done.
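Principle 3's "internal register of AI tools" and principle 4's approved-tools list can start as one lightweight record per tool. Here is a minimal sketch in Python; the field names and the example entry are illustrative assumptions, not a prescribed schema.

```python
# Minimal AI tool register: one record per tool, covering principle 3
# (transparency: what is used, for what, approved by whom) and
# principle 4 (only approved, covered tools may be used).
# Field names and the example entry are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class RegisteredTool:
    name: str
    used_for: str
    approved_by: str
    enterprise_agreement: bool  # data protection agreement in place?

REGISTER = [
    RegisteredTool("Example Copilot", "drafting and summarising", "IT Lead", True),
]

def is_approved(tool_name: str) -> bool:
    """Any tool not on the register requires approval before use."""
    return any(t.name == tool_name and t.enterprise_agreement for t in REGISTER)

print(is_approved("Example Copilot"))   # True
print(is_approved("Random free tool"))  # False
```

A spreadsheet with the same four columns works just as well; the point is that the register exists, is current, and names an approver.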

Add accountability and ownership

Principles without an owner become shelf-ware. This section names who is responsible, how often you review, and how people raise concerns. The AI lead does not need to be a dedicated hire. Assign it to a senior leader who already touches operations, compliance, or IT.
## Accountability

| Role               | Person                    | Responsibility                                                |
| ------------------ | ------------------------- | ------------------------------------------------------------- |
| AI Lead            | [Name, Title]             | Maintains principles, reviews use cases, coordinates training |
| Executive Sponsor  | [Name, Title]             | Approves changes, allocates resources, escalates to board     |
| Review schedule    | [Quarterly / Six-monthly] | Full review of principles, tools, and incident log            |
| Questions          | [email@yourcompany.com]   | Single point of contact for AI-related concerns               |

## Living document

These principles are not set and forget. We update them as we learn, as our
tools change, and as regulatory guidance evolves.

Last updated: [Date]
Version: [X.X]

Publish and communicate

Writing principles is half the work. The other half is making sure your team knows they exist and understands how to apply them.
1. Share with leadership first

Circulate the draft to your leadership team. Give them one week to flag practical concerns or missing edge cases.

2. Run a team briefing

Walk through the principles in a 30-minute session. Use real examples from your organisation. Answer questions live.

3. Post where people work

Add principles to your intranet, onboarding materials, and client proposal templates. If they are hard to find, they will not be followed.

4. Set the first review date

Put a quarterly review in the calendar now. Assign the AI Lead to prepare a short update on what is working, what is not, and what needs to change.
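If it helps to put the review reminder in the calendar programmatically, the scheduling is a one-liner. A minimal sketch, assuming a quarterly cadence of roughly 91 days (swap in whatever cadence you chose above):

```python
# Compute the next principles-review date from the last one.
# The 91-day default is an assumed "quarterly" cadence; adjust to taste.
from datetime import date, timedelta

def next_review(last_review: date, cadence_days: int = 91) -> date:
    """Return the next scheduled review date (quarterly by default)."""
    return last_review + timedelta(days=cadence_days)

print(next_review(date(2025, 10, 1)))  # 2025-12-31
```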
Start with what you are already doing. You do not need perfect processes to publish principles. The act of writing them down creates accountability.

How this connects to AI policy

AI principles are the “why” and the “what”. AI policy is the “how” and the “who”. If you have already built your principles, the next step is turning them into operational policy with practical boundaries, approval processes, and incident response.

Principles

What you believe and commit to publicly. Guides decision-making at every level.

Policy

Operational rules, boundaries, and processes. Tells people exactly what to do and not do.

Training

How you build capability across the organisation. Turns policy into daily practice.

Key references

Guidance for AI Adoption (AI6)

The primary government framework for responsible AI governance. Includes templates, screening tools, and a policy guide.

Voluntary AI Safety Standard

The original 10 guardrails (2024), now fully integrated into AI6. Useful for more granular control statements.

OAIC AI and Privacy Guidance

Privacy compliance requirements for commercially available AI products.

AI Ethics Principles

The eight voluntary ethics principles that underpin government AI guidance.

What to do next

Step 1

Copy the principles template and replace the placeholders with your details

Step 2

Fill in the “What this looks like here” examples with real actions from your team

Step 3

Add the accountability section and name your AI Lead

Step 4

Share with leadership, brief your team, and set your first review date