Understanding AI agents

What you will learn

Most people using AI today are still in chatbot mode. They open a tool, type a question, get an answer, and close the tab. That works, but it caps how much value you can extract. This section shows you the three levels of working with AI and what it actually takes to move into agent territory.

Three levels of AI

Distinguish between chatbot, copilot, and agent modes of working

What qualifies as an agent

Identify the four qualities that separate agents from simpler AI setups

Agent capabilities

See how the Thinker, Assistant, and Creator roles apply when agents run autonomously

Right tool, right job

Know when an agent is the right solution and when a simpler approach is better

The three levels of working with AI

Think of these as three different ways to work with a colleague, each with a different level of independence.

Level 1: Chatbot

You control every interaction. You ask a question. You get a response. You ask another question. The AI has no memory of your preferences, no access to your tools, and no ability to act without you prompting it. This is where most people start, and it is genuinely useful for brainstorming, quick research, and one-off tasks.
But you are the engine. Nothing happens without you in the chair.

Level 2: Copilot

You are still in the driving seat, but the AI knows more about you. You have given it context through a custom GPT, a Project in Claude, or a set of uploaded documents. It has preferences, tone guidelines, and background knowledge baked in. This is where a setup like a contract review assistant or a communications helper sits. You have built something reusable. But you still trigger every action.
If you have set up custom GPTs, Projects, or Gems in Weeks 1 and 2, you are operating at Level 2. That is a strong foundation. Agents build directly on that same skill (writing clear instructions, providing context, specifying outputs) but add automation on top.

Level 3: Agent

The AI works without you prompting it. It triggers itself based on conditions you set. It connects to your tools: email, calendar, documents, web data. It does this repeatedly, following the same process each time, improving as you give it feedback. You shift from doing the work to reviewing the work. From being the operator to being the manager.
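The Level 3 pattern described above can be reduced to a small loop: a trigger you define, a repeatable process, and output that arrives for your review rather than being prompted out one turn at a time. Here is a minimal sketch of that shape; all function names and the trigger condition are hypothetical placeholders, not any particular agent platform's API.

```python
# A minimal sketch of the Level 3 pattern: trigger -> process -> review.
# Every name here is a hypothetical placeholder for illustration.

def check_trigger(state: dict) -> bool:
    """Return True when a condition you set has fired (e.g. a schedule)."""
    return state["runs"] < 3  # placeholder condition for this sketch

def do_work(state: dict) -> str:
    """The repeatable process: the same steps on every run."""
    state["runs"] += 1
    return f"Draft #{state['runs']} ready for review"

state = {"runs": 0}
outputs = []
while check_trigger(state):          # the agent starts itself
    outputs.append(do_work(state))   # it acts; you review the results later

print(outputs)
```

The key shift is visible in the loop: you appear nowhere inside it. Your role moved to defining `check_trigger` and `do_work` up front and reviewing `outputs` afterwards.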

Copilot

You drive. AI assists. Every action requires your input.

Agent

You set the rules. AI drives. You review the results.

The four qualities of an agent

Not everything that calls itself an agent actually is one. The market is noisy, and plenty of tools use the word β€œagent” as a label for what is really a chatbot with extra steps. Here is how to tell the difference.

Proactive

It initiates work on its own based on triggers you define. You do not have to prompt every time. A schedule fires, a file lands in a folder, an email arrives, and the agent starts working.

Integrated

It connects to your actual tools and data. Email, calendar, documents, CRM, project management. The more integrated it is, the more it can do on your behalf.

Repeatable

It follows the same process each time, producing consistent and predictable outputs. You can give it feedback and improve its performance over time.

Verifiable

You can see what it is going to do before it does it. Flowcharts, logs, and human-in-the-loop review give you visibility and control over every action.
If something you are using requires you to prompt it every time, it is not proactive. If it cannot connect to your tools, it is not integrated. If it gives you different quality results each time with no way to refine it, it is not repeatable. If you cannot see what it will do before it acts, it is not verifiable.
Those are copilots, and they are valuable. But they are not agents.
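The "verifiable" quality is the easiest to sketch in code: the agent proposes each action to a log before anything runs, and a human-in-the-loop gate decides what actually executes. The function and the action format below are hypothetical, shown only to make the pattern concrete.

```python
# A sketch of "verifiable": actions are logged before execution,
# and a review function approves or blocks each one.
# run_with_review and the action strings are hypothetical.

def run_with_review(actions, approve_fn):
    log, executed = [], []
    for action in actions:
        log.append(f"PROPOSED: {action}")   # visible before anything happens
        if approve_fn(action):
            executed.append(action)
            log.append(f"EXECUTED: {action}")
        else:
            log.append(f"SKIPPED: {action}")
    return log, executed

log, done = run_with_review(
    ["send follow-up email", "delete old files"],
    approve_fn=lambda a: "delete" not in a,  # reviewer blocks deletions
)
print(done)
```

Real agent platforms implement this gate differently (flowchart previews, approval steps, audit logs), but the principle is the same: every action is inspectable before and after it runs.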

What agents can do

Agents draw on the same three roles you have already practised (Thinker, Assistant, and Creator) but they apply those roles autonomously.
An agent in Thinker mode researches, analyses, and surfaces insights without you asking. A competitive analysis agent scans competitor LinkedIn profiles every Monday and sends you a briefing. A market research agent tracks industry publications and flags relevant developments. The thinking happens on schedule, not on demand.
An agent in Assistant mode handles recurring operational tasks. It triages your inbox, sends meeting briefings 24 hours before each calendar event, follows up with attendees who have not responded, or files completed documents into the correct folder in your Drive. The admin work runs in the background.
An agent in Creator mode generates content on a schedule or in response to triggers. It drafts weekly reports from your data, writes social media posts based on company updates, or creates first-draft blog content from research it has already gathered. The creative output arrives ready for your review.

When an agent is not the right answer

Not every task needs an agent. If you are doing something once, a chatbot is fine. If you need flexibility and judgement each time, a copilot with strong context is often better. Agents work best when the task is repetitive, the process is definable, and the triggers are clear. If you cannot describe the steps, the trigger, and the expected output, you are not ready to build an agent for that task yet.
A common mistake is jumping straight into building an agent before you understand the work it needs to do. The most valuable step is often the least exciting: mapping out what you currently do, step by step, before you automate any of it.
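The decision rule above can be written as a short checklist: if the task is repetitive, the process definable, and the trigger clear, an agent fits; if only the process is definable, a copilot with reusable context is the better tool; otherwise a chatbot is enough. The function below is a hypothetical illustration of that rule, not a formal taxonomy.

```python
# Hypothetical readiness check mirroring the rule in the text:
# agent = repetitive + definable process + clear trigger.

def best_fit(repetitive: bool, definable: bool, clear_trigger: bool) -> str:
    if repetitive and definable and clear_trigger:
        return "agent"    # automate it end to end
    if definable:
        return "copilot"  # reusable context, but human-triggered
    return "chatbot"      # one-off or judgement-heavy work

print(best_fit(True, True, True))
print(best_fit(False, True, False))
```

A weekly competitor briefing passes all three tests; a one-off contract negotiation fails the first and third, so a copilot or chatbot is the better match.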

Quick checkpoint

You are done with this section when you can:

Name the three levels

Explain the difference between chatbot, copilot, and agent to a colleague

Spot the qualities

Identify whether a tool is proactive, integrated, repeatable, and verifiable, or not

Match the role

Describe how Thinker, Assistant, and Creator roles work in an agent context

Choose wisely

Decide whether a task is better suited to a chatbot, copilot, or agent

Next: Designing Your AI Agent

Learn how to map your workflows, identify where agents fit, and write an agent job description