Your post-session crib sheet · The AI Guys

AI, demystified.
You're sorted.

Everything you need from Mat's session, turned into something you'll actually use. The secret sauce, the tools, the prompts, the traps to dodge. One page. Your cheat code.

◉ ~2hr session
◉ Small business audience
◉ Practical, no fluff
Stay curious · One chat per topic · Always fact-check · Role · Task · Context · Output · Small models exist · Verify the maths
01 — The Secret Sauce

One word. That's it.

If you take one thing away from the session, take this. It's cheesy, it's true, and it's the single biggest predictor of whether you'll get good results from AI.

Curiosity.

The best AI users aren't the most technical people in the room. They're the most curious. The ones who keep asking "can you help me with this?" until something useful falls out.

The old world: bash your head against a spreadsheet for an hour, watch three YouTube videos, try a formula, break it. The new world: the tool talks back. Ask it how to do the thing. Ask it to write the formula for you. Ask it why its last answer was wrong.

If you're good at what you do, you'll get better results than someone who isn't — because you'll know the right questions to ask and you'll spot when it's fibbing.

"The tools will change. The problems stay. Get curious about the new tools and you stay relevant."
02 — What AI Actually Is

Super duper predictive text.

Strip away the hype and the buzzwords and this is what you're really looking at. Four things to know, then you'll understand why it behaves how it does.

01

Trained on nearly the whole internet

Large Language Models (LLMs) — your ChatGPT, Claude, Copilot, Gemini — sit in big data centres and have consumed almost all publicly available text.

02

Very advanced pattern recognition

It spots patterns in how words combine across languages and contexts. That's why it can write across domains, and why it's often strong on translation.

03

It's just predicting the next word

Given what's come before, it guesses what likely comes next — word by word, paragraph by paragraph. That's the engine.

04

Humans taught it what "good" looks like

Reinforcement learning from human feedback (RLHF) — thousands of people scored responses as good or bad. That's the leap that made it genuinely useful.

Why it's rubbish at maths

Because it isn't calculating anything. When you ask it for 500 × 72 − 45 × 100, it's predicting what the next characters look like based on similar maths it's seen before. Use a spreadsheet for arithmetic. Use AI for pattern-spotting, categorising, drafting, and explaining. Different jobs.
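The arithmetic above makes the point: a calculator or a single line of code gets it right every time, because it computes rather than predicts. A minimal sketch:

```python
# Deterministic arithmetic: computed, not predicted.
# An LLM guessing the answer digit by digit can drift; code cannot.
result = 500 * 72 - 45 * 100
print(result)  # 31500
```

This is also why "ask the AI to write the formula, then run it in the spreadsheet" beats "ask the AI for the answer".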

03 — Chats vs Agents

Temp staff or permanent hire?

Every platform calls them something different. ChatGPT calls them GPTs. Gemini calls them Gems. Copilot calls them Agents. Claude calls them Projects. Same idea in all cases.

Standard chat

The temp

  • Ad hoc, one-off tasks
  • You brief them from scratch every time
  • No memory of previous work (mostly)
  • Perfect for drafting, summarising, quick ideas
  • Flexible, available on demand
VS
GPT / Gem / Agent / Project

The permanent hire

  • Recurring, structured task
  • Instructions built in once, then it just runs
  • Connected to your data sources
  • Best for regular reports, monitoring, structured Q&A
  • Remembers its role every time
1:1

One chat, one topic. Don't ask your marketing chat for a beef stew recipe. Context bleeds, answers get worse. Keep conversations specialised and you'll get consistently better results — same as you would with a human you'd hired for a specific role.

04 — Prompting

The basic recipe.

AI can't read your mind. Yet. Until it can, build most prompts from these four ingredients and watch your results jump.

R
Role

Who should it be? "Act as an experienced office manager…" sets the vocabulary and depth.

T
Task

Be specific. Not "help with my email" — "rewrite this under 150 words, keep the ask in line one."

C
Context

Who you are, who reads this, what it's for, any constraints. AI doesn't know your world unless you tell it.

O
Output

How do you want it? Table? Bullet list? Three options? 200 words max? Examples help more than you'd think.

Weak
Write a weekly health and safety checklist for my office.
Strong
You are an experienced office manager. Create a weekly H&S checklist for a 60-person office across two floors. Format as a table with columns: Item, Responsibility, Frequency, Notes. Keep each item actionable and under 15 words. UK context.
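If you write prompts often, the recipe is easy to templatise. A minimal sketch — the function name and example text are ours for illustration, not from the session:

```python
def build_prompt(role: str, task: str, context: str, output: str) -> str:
    """Assemble a prompt from the four ingredients: Role, Task, Context, Output."""
    return (
        f"You are {role}. "
        f"{task} "
        f"Context: {context} "
        f"Output: {output}"
    )

prompt = build_prompt(
    role="an experienced office manager",
    task="Create a weekly H&S checklist for a 60-person office across two floors.",
    context="UK context; mixed desk and meeting space.",
    output="A table with columns: Item, Responsibility, Frequency, Notes. "
           "Each item actionable and under 15 words.",
)
print(prompt)
```

Keep a template like this in a notes file and you'll stop writing weak one-line prompts out of habit.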

Three tricks to get less vanilla answers

RLHF leans the model toward safe, familiar responses. These three moves pull it out of vanilla mode.

Trick 1 · Ask for three with probabilities
Give me three genuinely different answers to this, each with a one-line reason and a rough probability of being the best fit. Don't hedge — commit to the differences.
Trick 2 · Ask the same question twice in one message
[Your question] [The exact same question again, word for word]
Trick 3 · Get AI to write the prompt for you
I want to [describe the outcome you're after]. Ask me every question you need answered to write a really strong prompt for this — then, once I've answered, give me the final prompt I can paste into a fresh chat.
Bonus · The "I'm not sure how AI can help" prompt
I'm trying to [the task you want to achieve]. I'm not sure exactly how AI can help me with this. Please: ask any questions you need to understand my situation, outline the ways AI could help, suggest the best tools for the job, and give me a clear plan forward.
05 — The Toolbox

What Mat actually uses.

Covered in the session, in order of how often they came up. Start with one or two — don't try to learn all of them at once.

Mat's go-to

Claude

Anthropic · Chat, documents, design

Consistently excellent writing and reasoning. Handles long documents and templates beautifully. Claude's design and artifact features can generate whole websites and visual assets from a prompt.

Best for: Working documents, deep writing tasks, design, code, reports.
Search-first

Perplexity

Search + AI combined

Built to search the live web before answering. Lets you pick which underlying model to use (Sonar, GPT, Claude, etc.) and has an academic mode for research papers.

Best for: Research, current info, comparing sources, academic work.

ChatGPT

OpenAI · The household name

Agent Mode (paid tier) opens its own browser and takes action — great for pulling structured research into a spreadsheet. Significantly fewer hallucinations than a normal chat because it works from live sources.

Best for: Agent Mode research, voice mode, scheduled tasks.

Microsoft Copilot

Inside M365

Lives in Word, Excel, Outlook, Teams. Data stays in your Microsoft tenancy. Now lets you pick underlying models (including Claude). Build Agents scoped to a specific SharePoint folder — don't let them search everything.

Best for: Any business-sensitive task, M365 power users, team Agents.

NotebookLM

Google · Research workspace

Upload big documents (PDFs, papers, transcripts). Grounded to your sources — much lower hallucination rate. Generates infographics, quizzes, and audio overviews. Brilliant for turning old lecture handouts into living study material.

Best for: Grounded research, study, training materials.

Comet

Perplexity · Agentic browser

A browser that takes control on your behalf. Tell it "research construction firms in the South East and put them in a sheet" and it does it. Powerful, but keep it away from anything sensitive.

Best for: Hands-free research, repeatable workflows.

Kimi K2

Chinese model · Agent Swarm

Has a mode that spins up multiple specialised agents — each does its piece, then they combine. A glimpse of where orchestrated AI is heading: many narrow specialists beat one generalist.

Best for: Complex multi-step research tasks.

LocKR Labs

UK-hosted LLM

British language model with UK data residency. Relevant if your contracts — public sector, critical infrastructure — forbid US data centres. Keeps your data on-shore.

Best for: Public sector, regulated industries, UK-only constraints.

=AI() in spreadsheets

Copilot (Excel) · Gemini (Sheets)

Call AI directly from a cell. Classify sentiment on 500 reviews, generate summaries, enrich a company list with research — then drag the formula down. Hours turn into seconds.

Best for: Bulk classification, enrichment, analysis on tabular data.
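The "drag the formula down" idea is just a loop applied to every row. A sketch of the same pattern in Python — `classify_sentiment` here is a stand-in for whatever AI call your platform provides, with a crude keyword rule so the sketch runs offline:

```python
def classify_sentiment(review: str) -> str:
    # Stand-in for an AI call (the =AI() cell in a spreadsheet).
    # A real version would send the review text to a model.
    negative = {"slow", "broken", "refund", "terrible"}
    words = set(review.lower().split())
    return "Negative" if words & negative else "Positive"

reviews = [
    "Great service, arrived next day",
    "Terrible packaging and slow delivery",
]

# The "drag down" step: apply the same formula to every row.
labels = [classify_sentiment(r) for r in reviews]
print(labels)  # ['Positive', 'Negative']
```

Whether it's 2 rows or 500, the per-row cost is the same — that's where the hours-into-seconds claim comes from.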

Scheduled tasks

Copilot, ChatGPT, Gemini

Ask once, have it run every Monday at 9am. News briefings, inbox summaries, repeated research. Three dots under any reply in Copilot, or just tell ChatGPT "do this every weekday at 9am."

Best for: Recurring briefings, weekly digests, anything repeatable.

Small Language Models

Gemma, local models

Run on your phone or laptop. No data leaves your device, costs nothing, privacy by default. Mat estimates 60–80% of everyday queries could be handled by one of these.

Best for: Privacy-sensitive work, offline use, reducing energy footprint.

Netlify

Free static hosting

Pair with a site built by Claude or similar — drag-drop the file, you've got a live website for free. This is what "AI killing SaaS" looks like in practice: cheap, fast, yours.

Best for: Hosting anything you've built with AI.
06 — Watch Outs

The bits that bite.

Knowing these keeps you out of the traps everyone else falls into. None of them are reasons not to use AI — they're reasons to use it properly.

Hallucinations

It'll lie to you. Confidently. With justification.

Because it's predicting the next word, if it gets one bit wrong it'll just keep going — and then defend the wrong answer if you push back, before flipping to a different wrong answer. The fix: ground it in source material. Upload the PDF. Point it at a specific webpage. This is called Retrieval-Augmented Generation (RAG) and it dramatically cuts hallucinations.

Mat's take: he's glad it hallucinates. It keeps the people who fact-check ahead of the people who don't.
Data leaks

Free and paid consumer chats train on your data by default.

On free or standard paid ChatGPT, Claude, Gemini — your conversations are used to improve the model unless you go into settings and turn it off. Enterprise tiers and M365 Copilot generally don't train on your data by default. Never paste anything sensitive into a consumer chat. Shared conversation links can end up publicly indexed on Google.

Best defence: train your team on why, not just what. Banning tools just moves the usage to people's phones — and then you've got shadow AI.
Prompt injection

Dodgy websites can hijack your agentic browser.

When you use an agent that browses the web, it reads the HTML. A malicious page can hide instructions in white-on-white text that tell your agent to do something else entirely. Sam Altman has admitted this may never be fully solvable. The rule: don't give agentic tools access to anything you'd mind losing — no banking apps, no sensitive systems.

Public chatbots

A US Chevrolet dealership "sold" a $76,000 car for $1.

Someone convinced their customer-facing chatbot that the deal was legally binding. The internet had a field day and the bot came down fast. Moral: test chatbots internally for weeks before you put them in front of customers, and never give a public bot access to sensitive data or decisions it could be talked into making.

Context rot

Long chats get dumber.

Once a conversation gets big — roughly past 40% of the model's context window — quality drops. Models remember the start and the end better than the middle, same as humans. When a chat you've loved starts giving worse answers, that's your cue to either start fresh or upgrade it to a proper Agent/GPT/Project with a fixed set of instructions.
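The 40% figure is a rule of thumb from the session, but you can sanity-check roughly where a chat sits. A rough sketch, assuming the common ~4 characters per token heuristic (the window size is an illustrative default, not any specific model's):

```python
def context_health(chat_text: str, window_tokens: int = 128_000) -> str:
    # Rough heuristic: English text averages ~4 characters per token.
    est_tokens = len(chat_text) / 4
    used = est_tokens / window_tokens
    if used > 0.4:
        return "start a fresh chat or promote this to an Agent/GPT/Project"
    return "fine to keep going"

print(context_health("hello " * 1000))  # fine to keep going
```

The exact numbers matter less than the habit: when a long chat starts drifting, don't argue with it — restart with a tight brief.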

Sycophancy

It agrees with you too much.

Most models are trained to be agreeable. Say something wrong and they'll often nod along. Tell it up front: "push back on me, disagree if I'm wrong, don't just validate" — you'll get more honest answers. Especially useful for strategy work, where you're looking for holes, not hugs.

07 — This Week

Your to-do list.

Tick them off as you go — progress saves in your browser. Small steps, in the right order. This is the sequence that builds a real AI habit.

08 — From The Room

The questions you asked.

I tried AI a while back and it was rubbish. Is it worth going back?
Yes — and not just because the models have improved. You've probably improved too. When most of us first tried these tools, we were treating them like Google and our prompts were weak. Come back with a clearer prompt (Role, Task, Context, Output) and you'll get a different experience, even from the same model. Try Claude or Perplexity for a genuinely different feel to what you remember.
Will AI take my job?
The tools will change, the problems won't. Writers still write, designers still design, the tools they use evolve. The further back you look, the further forward you can see: handwriting → printing press → typewriter → Word → AI. Each jump broadened who could reach an audience. The people who adapt fastest tend to win — because knowing your domain well makes you better at using AI, not worse.
Is there a safe way to let my team use ChatGPT?
Three things. One: pick your approved tool (Copilot inside M365 is low-friction for most; enterprise ChatGPT or Claude if you prefer). Two: write a short AI usage policy and have everyone sign it — not legalese, just "these are the tools, here's what not to paste into them, here's why." Three: train people. Blanket bans don't work — people just switch to personal phones and you've lost all oversight.
Is AI bad for the environment?
Real but proportionate. AI's current share is around 1.5%, projected toward 2% over the next few years. For comparison, fashion sits around 8%. The biggest levers you have: use small language models for routine tasks (run locally, nearly zero cost), get your prompt right first try rather than regenerating images five times, and pick tools that run on cleaner infrastructure. The direction of travel — undersea cooling, colder climates, even space-based data centres — is toward lower impact, not higher.
Can AI actually help me win business?
Yes, genuinely. The clearest win is B2B research: give an agentic tool (ChatGPT Agent Mode, Comet, Kimi K2) a prompt like "find me construction firms in Essex & Suffolk, dig up reasons they'd benefit from AI services, pull contact details into a spreadsheet" — it does in minutes what used to take a day of Googling. Used well, it can compete with expensive SaaS data tools like Glenigan for a fraction of the cost.
My website's traffic has dropped. Is AI to blame?
Partly, yes. Industry-wide, traffic from AI-enabled search is down 40–50% in some sectors — people get the answer in-line and never click. Two things help. First: AEO (Answer Engine Optimisation) — FAQs and comparison tables are gold, AI models love them. Second: remember a chunk of the traffic you lost wouldn't have converted anyway. Focus on being the cited answer to a specific question, not just a page of content.
Why do the answers sound so... vanilla?
Because of how the models were trained — human raters tended to reward familiar, neutral answers (RLHF). You have to actively pull the model out of vanilla mode. Ask for three options with probabilities. Paste the same question twice in one message. Tell it to push back and disagree. Tell it not to hedge. Given permission to be sharper, most models will be.
Should I pay for it?
Try free versions first — you'll learn fast and save yourself from subscriptions you won't use. If you find yourself using it every day, pay for one good one rather than spreading thin. Mat's personal stack: Claude (work), Perplexity (research), Copilot (inside M365). Most people get huge value from just one paid tier, around £20/month.