Downloadable templates to support your AI rollout: opportunity tracker, champions charter, operating rhythm planner, tool evaluation scorecard, 90-day rollout plan, and the adoption health analysis prompt.
These templates give you a starting point for each part of your AI rollout. Copy them, customise them for your organisation, and iterate as you learn what works.
- Governance: Champions charter and AI policy templates
- Operations: Operating rhythm planner and opportunity tracker
- Evaluation: Tool evaluation scorecard and adoption health prompt
Use this to formalise your Champions group. Share it with the group on day one so everyone knows the purpose, expectations, and rhythm.
Copy the Champions Group Charter template
Champions Group Charter — [Your Organisation]

Purpose
Test AI tools and use cases in the context of our business, evaluate what works, and make recommendations for broader rollout.

Members
- Driver: [Name, Role]
- Champion: [Name, Role / Department]
- Champion: [Name, Role / Department]
- Champion: [Name, Role / Department]
- Champion: [Name, Role / Department]

Responsibilities
- Test use cases in your own workflows each month
- Compare tools against agreed criteria
- Log wins and learnings in the Opportunity Tracker
- Attend monthly huddles and share progress
- Flag risks, limitations, and governance gaps
- Help onboard the broader team when scaling

Meeting rhythm
- Monthly huddle: [Day, Time] — 30 to 45 minutes
- Async channel: [Teams/Slack channel name]
- Quarterly reset: [Month] — governance review and tool evaluation

Decision rights
- Champions recommend. Leadership approves.
- Tool trials under [dollar amount] can be approved by the Driver.
- Tool trials above [dollar amount] require leadership sign-off.

Review date
This charter will be reviewed every [3/6] months.
Map out your cadence for the first 12 weeks. Adjust the frequency based on what your team can sustain.
Copy the Operating Rhythm Planner
Operating Rhythm Planner — [Your Organisation]

Weekly
- Quick check: did each Champion use AI on at least one task? (Yes/No)
- Driver posts a prompt in the async channel

Fortnightly
- Show-and-tell: one Champion screen-shares a real workflow (30 mins)
- Open to Champions group only in the first 4 weeks, then broader team

Monthly
- Champion Huddle: wins, blockers, new use cases (30–45 mins)
- Update the Opportunity Tracker
- Count recurring AI workflows (your leading indicator)

Quarterly
- Run the confidence pulse survey (1–5 score)
- Reassess where the team sits on the maturity scale
- Review governance: retire what is not working, evaluate new tools
- Run the "what if" refresh: what would you build if AI did 80%?

Key dates
- Week 1: [Date] — Champions kickoff
- Week 4: [Date] — First showcase to broader team
- Week 6: [Date] — First sentiment pulse survey
- Week 12: [Date] — Second sentiment pulse survey and quarterly reset
Opportunity Tracker — Column Structure
1. Use Case — What is the task or workflow?
2. Tool — Which AI tool was used?
3. Team / Department — Who tested it?
4. Owner — Who is the Champion responsible?
5. Size — Small (saves minutes), Medium (saves hours), Large (changes a workflow)
6. Status — Testing / Working / Scaled / Retired
7. Outcome — What happened? Time saved, quality improved, or new capability?
8. Notes — Anything else worth capturing (limitations, risks, next steps)

Instructions
- Add a new row for every use case tested, even if it did not work
- Update the status column monthly
- The Driver reviews this tracker before every Champion Huddle
- Use this as the basis for quarterly scaling decisions
Set this up in SharePoint, Notion, or whatever tool your team already uses. Do not create a new system. Put it where people already work.
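If your tracker lives in a tool that can export to CSV, a few lines of script can give the Driver a status summary before each Champion Huddle. This is an optional sketch, not part of the template: the column names assume the structure above, and `summarise_tracker` is an illustrative name.

```python
import csv
from collections import Counter

def summarise_tracker(path):
    """Count use cases by Status for the Driver's pre-huddle review."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    status_counts = Counter(row["Status"] for row in rows)
    # Working + Scaled rows approximate your recurring-workflow leading indicator
    recurring = status_counts["Working"] + status_counts["Scaled"]
    return status_counts, recurring
```

A spreadsheet pivot table does the same job; the point is to review the counts monthly, not to build tooling.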
Use this when your Champions are comparing AI tools against specific business criteria.
Copy the Tool Evaluation Scorecard
Tool Evaluation Scorecard — [Tool Name]

Evaluation criteria (rate each 1–5)
1. Research and analysis
   How well does the tool handle research tasks, summarisation, and data analysis?
   Score: [ ] / 5
2. Content creation
   How well does the tool draft, edit, and format written content?
   Score: [ ] / 5
3. Workflow integration
   How easily does the tool fit into existing tools and processes?
   Score: [ ] / 5
4. Enterprise security
   Does the tool meet your data handling and compliance requirements?
   Score: [ ] / 5
5. Ease of use
   How quickly can a non-technical team member get productive?
   Score: [ ] / 5

Total score: [ ] / 25

Strengths:
Limitations:
Recommendation (keep / trial further / retire):
Evaluated by: [Name]
Date: [Date]
Use this to map out your first 90 days after the cohort ends.
Copy the 90-Day Rollout Plan
90-Day AI Rollout Plan — [Your Organisation]

Month 1: Foundation
- [ ] Finalise Champions group (Driver + 3–5 Champions)
- [ ] Share the Champions Charter with all members
- [ ] Set up async channel in Teams or Slack
- [ ] Run first Champion Huddle
- [ ] Each Champion identifies one use case to test
- [ ] Run sentiment pulse survey (baseline)

Month 2: Momentum
- [ ] Run first showcase for broader team
- [ ] Update Opportunity Tracker with all use cases tested
- [ ] Complete tool evaluation scorecards for primary tools
- [ ] Each Champion tests a second use case
- [ ] Driver reviews and updates governance if needed

Month 3: Scale
- [ ] Run second sentiment pulse survey
- [ ] Place team on maturity scale
- [ ] Run the adoption health analysis prompt
- [ ] Make scaling decisions: which use cases roll out to broader team?
- [ ] Quarterly reset: review governance, tools, and rhythm
- [ ] Present adoption health update to leadership

Success measures
- Number of recurring AI workflows: [target]
- Confidence pulse average: [target]
- Use cases in Opportunity Tracker: [target]
- Team members actively using AI weekly: [target]
This is the prompt from the Measuring Adoption page. Keep it here for easy reference.
Copy the analysis prompt
You are an AI adoption health analyst for a programme lead. I have uploaded my team's adoption data from three sources: our sentiment pulse survey results, AI platform usage data, and AI win/opportunity tracking.

Analyse this data and produce a one-page leadership update covering four dimensions:
- Confidence and Skill — Rate as Strong, Developing, or Needs Attention.
- Impact and Habit — Rate as Strong, Developing, or Needs Attention.
- Culture — Rate as Strong, Developing, or Needs Attention.
- Business Outcomes — Rate as Strong, Developing, or Needs Attention.

Then provide:
- Overall maturity placement: Curious, Exploring, Embedded, or Integral
- Overall health summary in one sentence
- Top three wins this period with evidence
- Top five AI use cases ranked by frequency or impact
- Top risk that leadership needs to be aware of
- One recommendation that requires senior support or a decision

Rules: plain language, no jargon, one page maximum, evidence-based only.