Cyber Talks: Phishing in the Age of Agentic AI: Craig Taylor on Culture, Literacy, and the New Human Firewall

Welcome back to CyberTalks from Bare Metal Cyber.

I'm Dr. Jason Edwards,

and today we're diving into one of the

toughest puzzles in security,

phishing in the age of agentic AI.

My guest is Craig Taylor,

a fellow CISSP and a thirty-year

cybersecurity veteran.

Craig co-founded CyberHoot back in twenty

fourteen to help small and mid-sized

businesses and managed service providers

raise their cyber literacy.

Over the years,

he's led cybersecurity organizations in

web hosting at CSE,

in finance at JPMorgan Chase,

and in manufacturing at Vistaprint.

He also runs a consultancy that has

provided virtual CISO services to more

than fifty companies across many

industries.

Craig is also a seasoned communicator and

community builder, a Toastmaster,

a Rotarian up in Portsmouth,

New Hampshire,

and a dedicated fundraiser for cancer

research,

having raised more than a hundred and

fifty thousand dollars riding in the Pan

Mass Challenge over the past eleven years.

Craig, welcome to CyberTalks.

Well, it's great to be here.

Thank you, Jason.

I'm really looking forward to our

conversation today.

Awesome.

So agentic AI.

So, you know,

tell us a little bit about what that

is and what, you know,

and what we're dealing with right now and

the threats we have.

I know in the pre-session we talked about

some of the recent issues that had come

up with it.

And of course, AI is, you know,

the topic everywhere.

And there is, of course,

a huge fear about AI being used as

a weapon,

which I think we're starting to see,

right?

Yes, that's exactly right. I'm sure everyone listening to this knows what AI is and has been using it to help craft difficult conversations, emails, things of that nature. We certainly use it at CyberHoot to produce videos that are appealing to the end-user population we're trying to teach cyber literacy skills. But agentic AI is really the holy grail of all these different products and platforms. It's a step removed from artificial general intelligence: it's basically the ability for AI to go and carry out a series of steps, instructions, research, actions, and activities, doing so autonomously without direct human oversight.

And that is really what we saw happen

recently in the last ten days.

Anthropic published an article,

which we'll be going through some slides

on today,

that they interrupted or thwarted

a series of attacks that were going on

with a jailbroken version of their Claude Code AI platform. When I say jailbroken, that needs to be explained. We've all probably heard of jailbreaking your phone, right? You can install apps from the official App Store on your iPhone, but if you want to install your own apps, you kind of have to jailbreak it, or get them registered through the official channels of Apple, and that can be difficult.

On a computer system,

if you break into the root account or

the administrative account,

you're sort of getting a little bit closer

to jailbreaking,

but it's not really the same thing.

It's basically having carte blanche to do

whatever you want with the tool.

And a jailbroken AI turned into an agentic

attack methodology is what we're gonna

cover today.

It's very scary actually.

Well, I think, you know,

AI can do so much allowing it to

go out and do its own thing.

Sounds a whole bunch of like, well,

are we getting to Skynet?

Is that the next step, right?

Just go out and take over the world.

Let me know when you're done.

Like, you know, okay, cool.

So yeah.

So how would you like to get started?

Do you want to go ahead and you

want to bring up the slides?

Yeah, let's do that.

Thanks, Jason.

Thanks.

All right,

so phishing in the age of AI was

the original presentation I was putting

together.

And then this article dropped from

Anthropic, specifically this one.

"Disrupting the first reported AI-orchestrated cyber espionage campaign."

Espionage sounds a little bit softer than

what it was.

They think they tied it back to Chinese nation-state actors.

Attribution is often difficult,

but in this particular case,

I think they were pretty accurate.

And they said, hey,

the agentic AI was jailbroken and orchestrated multiple steps of an attack methodology framework. You know, when you do pen testing as a white-hat hacker, Jason, you do reconnaissance, then you do the breach, then you land and expand, and then you maintain your access over time. All of these different steps, they turned the agentic AI bot to doing, right?

Going out and mapping and scanning all of

the ports and protocols that were open

from the internet on these companies.

Scanning social media to identify who

works at the company so you can send

them very specific spear phishing attacks

for them to click on to breach the

network.

and get them to run something or to

steal credentials to get you in,

et cetera,

or to even steal the session credentials

for email and so on and so forth.

So this then got picked up, because if you're watching this industry of AI and you think about artificial general intelligence, agentic agents are the step before that.

Now everyone's convinced there's a tool out there that really orchestrates a lot of steps in sequence.

But this was then picked up on a lot of different fronts.

I'm sorry,

did I step out of that presentation?

One second.

It's okay.

We'll go back into slideshow here.

Okay, so I can just do it here.

So this was picked up: world's first large-scale cyber attack executed by AI. They think Chinese threat hackers were behind it. Here's Fortune covering it the next day: Anthropic says it disrupted it. And Palo Alto, a big cybersecurity firewall vendor, said AI agents are here, and so are the threats.

So this is a company that's hoping to

build AI into their defensive tools.

And that's one of the takeaways from the

article on Anthropic.

If hackers are going to be using this

to attack us,

we need to start using agentic AI agents

to defend us.

This came across my desk today, Jason.

I used ChatGPT-5 to create a phishing

simulation,

including the exact wording of an email.

And it generated a perfect example for us.

And it's insane.

Just tell Chatty it's a simulation and it

will play along.

So much for the big guardrails there,

right?

If you just say it's a simulation.

So the point here is agentic AI is

about to make all of our cybersecurity

jobs a lot more difficult.

but it could also make them easier and

more effective as well.

Did you have a question?

Yeah, Craig,

let me go back to one thing just

real quick that you said too.

Why would the Chinese use Anthropic?

Do you know?

Like,

why aren't they using their own homegrown models or services?

Well, probably two reasons.

There are geofencing security protective

measures that might not allow a deployment

outside of the US from even talking to

the networks they were targeting.

That's possible.

Anthropic is a local US company,

so communications from it to different

networks would probably be allowed, rather than traffic coming through the Great Firewall of China.

I'm not sure exactly. I'm thinking maybe it was an exercise in research, too: what can we get the other side's tooling to do if we're able to jailbreak it? And they were able to. So I don't know that we have a really good answer to that question yet. I think time will tell why they used Anthropic. Probably because they could.

And there's probably a little bit of,

you know,

let's smear a company while we're doing

it, right?

So, I mean,

Anthropic is absolutely not getting good

PR this week, so.

No,

but they were very good about releasing

their report.

That's a publicly facing report and they

released it even though it was egg on

their face.

They said, listen,

this has happened and you need to know

about it.

And it wasn't just one company targeted.

I think there were like thirty companies targeted, based on what we could tell.

So what can we learn from this article about Anthropic's agentic AI attack report?

There were really three key takeaways that

came from this that cybersecurity

professionals need to be aware of.

Agentic AI in this particular instance ran

a multi-step attack campaign just about

autonomously, right?

Eighty to ninety percent of its decision

making was handled by the Claude agents

without decision points by the hacker.

There were about five or six decision points where the hackers were called upon.

The first couple were,

who are we attacking and how badly do

you want us to attack them?

The rest was done internally, and then the hackers would check in and say, continue, and do this or do that.

But largely it was without human

oversight.

So think about that.

The AI agent at AI speeds was doing

research, recon, targeting,

content generation, and execution.

That's really fast.

Not a single-threaded human hacker doing one thing at a time, but thousands of things going on all in parallel.

It changes the game a little bit.

Then you- It shortens the kill chain,

right?

It shortens that kill chain, yeah.

Yeah, I suppose so.

If you have agentic AI agents watching for this stuff and you see things happening at inhuman speeds, then you might be able to more easily identify it and disrupt it, right?

There's certainly hallmarks of that kind

of thing going on.

Think of CAPTCHA for attacks, right?

This is not possible for a human attacker.

We're gonna shut it down immediately.
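The "inhuman speed" signal being described here can be sketched with a toy detector: count actions in a sliding time window and flag any session that acts faster than a person plausibly could. The window size and threshold below are invented for illustration, not taken from any real product.

```python
# Toy "CAPTCHA for attacks" heuristic: flag sessions whose action
# rate exceeds plausible human speed. Thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Session:
    actions: list  # event timestamps, in seconds

def looks_automated(session: Session,
                    window: float = 10.0,
                    max_human_actions: int = 5) -> bool:
    """Return True if any sliding window of `window` seconds
    contains more actions than a human plausibly could perform."""
    ts = sorted(session.actions)
    start = 0
    for end in range(len(ts)):
        # Shrink the window from the left until it spans <= `window` seconds.
        while ts[end] - ts[start] > window:
            start += 1
        if end - start + 1 > max_human_actions:
            return True
    return False

human = Session(actions=[0, 4, 9, 15, 22, 31])          # a few clicks a minute
bot = Session(actions=[0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2])  # machine speed

print(looks_automated(human))  # False
print(looks_automated(bot))    # True
```

A real detector would weigh many more signals, but the principle is the same: machine-speed behavior is itself an indicator of compromise.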

But you're right,

because the other part of the report said

AI, agentic AI was increasing the speed,

the scale,

and the precision of the attacks.

The agentic agents were scanning public

networks and then went inside internal

sources to find unpatched systems, which,

God forbid,

we all have those in our networks, right?

Then it personalized email outreach and crafted perfect communications.

And then what was really scary is adapting

mid-campaign as it learned.

Remember back in the nineties when Garry Kasparov played IBM's Deep Blue in a chess match?

And I think they split the first two

and then Garry said,

there's no way I'm going to beat it

ever again because it's learning from my

best secrets of chess and I can't beat

it anymore.

Well, adapting mid-campaign based on which kinds of communications got clicked on and which didn't, and then reattacking, is kind of a scary thought, right?

What it basically told us in this report

was that an entire team of attackers

tuning things can be replaced by a single

attacker with agentic AI bots, right?

So it's a force multiplier in the extreme

in the offensive attack capabilities.

The third thing that the report from Anthropic taught us was that AI enables more strategic, self-improving, and coordinated attacks. If you're able to jailbreak an AI and create these agents to work autonomously, and you can create the scripts for them to, you know, take step one, reconnaissance, and pass it to step two, attack, and step three, break in, and step four, land and expand, and step five, evade detection, it's going to have the ability to do that at speeds we've never seen before, speeds that humans can't keep up with unless we have, perhaps, agentic AI defensive agents, right? So it's basically a Star Wars scenario. Remember in the eighties and nineties, it was like, why are we going to spend millions and billions of dollars arming space? Everyone has to do it if someone does it, so we should not do it at all. Or ballistic missiles or nuclear missiles, that sort of thing.

So what once required a multi-stage

process executed over weeks and months by

individual or groups of hackers can now be

executed by agentic AI autonomously in

timings that are measured in hours, right?

Possibly minutes,

but hours is what we felt safe saying.
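The structure the report describes, mostly autonomous stages with only a handful of operator check-ins, can be sketched abstractly. This is a harmless skeleton for reasoning about the pattern, not a working tool: the stage names come from the discussion above, and which stages require human approval is a hypothetical choice.

```python
# Abstract model of the reported pattern: a kill chain where most
# stages run autonomously and only a few pause for a human decision.

from enum import Enum

class Stage(Enum):
    RECON = "reconnaissance"
    BREACH = "initial access"
    EXPAND = "land and expand"
    PERSIST = "maintain access"
    EVADE = "evade detection"

# Per the report, only a few points required operator sign-off;
# exactly which ones is a hypothetical choice here.
NEEDS_HUMAN_APPROVAL = {Stage.RECON, Stage.BREACH}

def run_campaign(approve) -> list:
    """Walk the stages in order, invoking the `approve` callback only
    at the human decision points. Returns the stages that executed."""
    executed = []
    for stage in Stage:  # Enum iterates in definition order
        if stage in NEEDS_HUMAN_APPROVAL and not approve(stage):
            break  # operator declined; the campaign halts here
        executed.append(stage)  # otherwise the agent proceeds on its own
    return executed

# With a rubber-stamp operator, all five stages run autonomously.
print([s.value for s in run_campaign(lambda s: True)])
```

The defensive takeaway is the ratio: five stages, two check-ins, everything else at machine speed.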

So this is what we've learned from this

report.

And any security professional listening to

this is going to wake up at night

going, oh my God,

what are we going to do?

This is a bad thing.

So I'm going to pivot now from these

attacks, right?

The goal of this attack is to break into a company and then land and expand.

But how you break in remains the same.

And if you will,

we'll go to Verizon's data breach report

on this next slide and give you some

statistics from them.

Verizon has a twenty-five-year history of releasing attack summaries; they do an annual report. I'm sure most of you have listened to or read these at least once. In twenty twenty three, they did a twenty-year retrospective, and they said, listen,

over the last twenty years,

the attacks have remained largely the

same.

Phishing is number one because humans are

the weakest link in any cybersecurity

program,

typically followed closely by credential theft, and then, a distant third in the past, zero-days and vulnerability exploitation.

Now, the last report from Verizon, in twenty twenty five, was interesting. The one covering twenty twenty four, released this year, said that vulnerability exploitation went up to about twenty percent.

That's the highest it's been in a long

time.

And it's tied back to a bunch of

things.

There were a lot of flaws found in

SSL VPNs and those were exploited to get

in the front door, right?

Because most SSL VPNs are providing

internet enabled services into an internal

network.

Hot topic,

very lucrative if you can break into them.

And almost all the vendors have had

problems there to the point where I don't

recommend using them anymore.

I would say IPSec or even an internet

based VPN tunneling service is better.

But phishing is still the number one, followed by credential theft and vulnerability exploitation as two and three.

So what happens then?

Social engineering accounts for the

largest share of breaches,

which are tied back to, in most cases,

phishing emails,

but there's also smishing and quishing, and vishing, which is voice-based, for financial breaches.

Voice-based phishing is really big,

and we should talk at some future date

about deep fakes and how to protect

yourself with safe words and things of

that nature.

But as much as sixty percent of the

breaches,

according to the Verizon data breach

report, are tied to human elements.

Now, you can go look online and find reports from IBM and others that say closer to eighty or ninety percent of breaches are tied back to human error.

But then you're starting to get into,

well,

they forgot to patch and that's human

error and other things.

The point is, it's a big number.

And this is the area we need to focus on: preparing our staff for social engineering attacks via email, right? Social engineering via email is still number one.

Why?

Because it works and it's very easy to

conduct.

So AI sharply increases the speed,

targeting, and success rate of phishing.

And we get into this.

Oh, here's a statistic from the report.

Doubling of synthetically generated

malicious email content over the last two

years.

Doubling.

But with AI, I'm sure that could double some more.

And we're just going to begin getting

flooded with very personalized,

very enticing

you know, specific to you. We were talking before the call, Jason, about how we both like hockey. What if you got a very interesting hockey email, because of this conversation, generated by AI, that you wanted to click on? You're going to have to ask yourself, why would this just show up out of the blue in my mailbox, highly tailored to my interests? That's going to be a real hallmark of some of the attacks we're facing.

Well, you can see a future where we just don't click anything.

Yeah, that's probably true.

All emails.

Yeah, I see it.

I don't see that happening, Jason, because I think email, communication, is a fundamental human trait.

We all want to be connected.

We all want to share ideas and we

all want to consume ideas.

So clicking and email communications

aren't going away.

But I see a future where AI gets involved and can more easily identify and spot, you know, typosquatted domain names better than anyone else.

Think about your password manager.

It would never put your password into the

wrong website.

So why couldn't we take that and extend

it to an AI defense tool that protects

us from phishing emails?

It's just a matter of days, weeks,

months before that happens, in my opinion.
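The password-manager analogy, exact domain matching extended with a look-alike check, could be sketched like this. The trusted list and edit-distance threshold are hypothetical examples, not any vendor's implementation.

```python
# Minimal typosquat check: trust exact domain matches, flag domains
# within a small edit distance of a trusted one. List and threshold
# are invented for illustration.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED = {"paypal.com", "microsoft.com", "cyberhoot.com"}

def classify_domain(domain: str) -> str:
    d = domain.lower()
    if d in TRUSTED:
        return "trusted"
    # A near-miss of a trusted domain is a likely typosquat.
    for t in TRUSTED:
        if edit_distance(d, t) <= 2:
            return "suspicious"
    return "unknown"

print(classify_domain("paypal.com"))   # trusted
print(classify_domain("paypa1.com"))   # suspicious
print(classify_domain("example.org"))  # unknown
```

A password manager effectively does the "trusted" branch; the "suspicious" branch is the kind of extension an AI-assisted mail filter could make.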

Let's go down a little history lesson,

right?

How many people remember the old phishing from twenty years ago, with spelling mistakes, generic greetings, and,

you know,

I am a prince from some foreign country

trying to get money out of my country.

Will you help me?

Right?

This is that little picture in the old

IBM, you know, keyboard and monitor here,

right?

Please help me.

Well,

those years are almost nostalgic for me

now.

I wish we had those because those were

so easy to identify.

Today we have AI-crafted messages that are personalized and automated, and attacks at scale. They can target every person in your company all at once, with things that each of them individually really likes, with perfect grammar and tone.

Even the translation tools that take a message written in any foreign language and turn it into English turn it into regionally specific language, right?

The AI tools,

they know how people talk all over the

world, especially English,

because it has the most training materials

to say, oh,

an English speaker would never say that

French term,

so let's turn it into the English

equivalent of it,

and its grammar and tone is perfect.

personalized context,

sophisticated iterations on social

engineering.

This is not just the old stuff on steroids; it's like a different world entirely.

So what do we do?

What are we going to do?

If we know that AI has transformed

phishing from obvious scams into highly

convincing personalized attacks that only

need to fool one employee in our company,

I bet everyone working at Jaguar, or not working at Jaguar, because they can't produce any vehicles right now since they're down from a massive attack, is wondering about this one employee. Who let the hackers in that encrypted our entire flat network?

Shame on you for having a flat network,

but that's a different story.

So if the threat landscape has

fundamentally changed, and I would argue,

Jason, we've had a watershed moment.

That watershed moment means a tipping

point where things went from before this

point to after this point.

You look at historical events like landing on the moon, a tipping point for space travel, or the nine-eleven attacks, or the advent of AI and ChatGPT.

Well, with agentic AI,

we've had a watershed moment because the

AI and computers can execute multiple

steps in a row targeting people.

We need to up our game on user

education to spot and avoid these things

until we have the agentic AI defensive

mechanism on our inbox.

So let's talk about the human factor and

what we know empirically then pivoting to

phishing, right?

The topic here was phishing in the age

of agentic AI.

Sticks for clicks doesn't work, right?

Using fake attacks, punishing failures, and shaming people.

Let's start with the statistics, though.

That Verizon data breach report,

I already covered it,

but sixty percent of incidents tied to a

human element as of last year.

And you could argue more or less,

but it's still a huge, huge number.

And the human element is always the first foray in breaching a network.

But here's what we know from three studies over the last, say, four or five years. The first one, in twenty twenty one, presented findings from a large-scale, long-term study at the University of Zurich.

And they concluded that fake email phish

testing had an unexpected effect of making

end users more likely to click rather than

less likely to click.

The next one here, "We've Been Doing Phishing All Wrong," from Dark Reading. This was presented at the Black Hat conference in twenty twenty five.

It's a very recent study.

Dark Reading is a commercial website; the report, or overview, shared there was crafted with the help of the researchers themselves.

But they have a formal article in a peer-reviewed journal about this.

They concluded that all the fake email

phish testing we do in the embedded

training that comes after it doesn't work.

They only saw a one point seven percent

difference between a control group who got

no training and all the different groups

that got various kinds of training with

fake email phish testing.

One of their conclusions was that the end users who failed phishing tests were assigned videos of fifteen or twenty minutes, whatever. They spent an average of ten seconds watching the video assigned to them, and then they left. Their eyes were not paying attention, they weren't listening; they were just frustrated, unengaged, apathetic about learning how not to be tricked by these fake email messages. So it just doesn't work. The last study was published really just a few weeks ago, saying that when you do training that is tied to failure, meaning if all you do to train your users is fake email phishing tests and the only users that get trained are the ones that fail, then the ones that pass will eventually fail too, because they're not getting any training on this. The only time they got trained was if they failed, right? So there are just so many failure points in this. And what we know from psychology and education

and dog training and parenting,

I would add,

is that punishment doesn't build or change

behaviors, right?

If anybody's a parent,

just imagine your child having a temper

tantrum and you deciding, well,

the way to stop this is to give

a spanking, right?

Maybe that's a hundred years ago approach,

but today that just wouldn't work.

It's not gonna help the child in any

way, shape or form.

Or let's say you're a dog trainer and

you put a shock collar on your dog

to keep them in your yard.

And anytime they get too close to the

electric fence, they get zapped, right?

Well, that might be okay on a dog,

but what if there's a rabbit or a

squirrel across the yard,

across the electric fence, or God forbid,

a skunk, right?

You know your dog's gonna go through that

little zap and say, the hell with it,

I don't care about the zap.

That's really attractive to me.

Well, punishment doesn't change behaviors.

If you use positive reinforcement, though, that's what changes behaviors, because people engage more.

They learn better.

and they adopt better defensive behaviors

when the training they receive positively

reinforces the good things they're

supposed to do more of.

And when it's paired with encouragement

and gamification,

all of those things combine to get

engagement over apathy,

to get interest and the ability and the

belief that people are smart enough to

learn this stuff.

Because think about dog training for a

moment.

If you shock a dog over and over

when you're at the dog park to try

and teach them things,

the dog just basically will give up and

say,

I'm just going to sit on the ground.

I'm not going to do anything because every

time I move, I get zapped.

That sucks.

Yeah, it's...

The term is called learned helplessness.

There you go.

And that's exactly what happens, right?

And another great interjection here,

I'm sorry, just real quick,

is something called Schofield's definition

of discipline,

if you ever get a chance to look

at it.

Schofield, S-C-H-O-F-I-E-L-D.

Yeah, Schofield, General Schofield,

like Schofield Barracks in Hawaii.

It's too many consonants.

Don't ask me to spell it.

But the great thing about it,

he wrote a treatise once about compliance

in the military, right?

And all officers at Officer Candidate

School have to memorize his definition of

discipline.

But basically, it says that fear and punishment gain immediate compliance but breed long-term resentment.

And then they will work against you as

time goes on, right?

So exactly what you're saying has been

proven by theorists for hundreds of years.

Even in the military, we know this, right?

And why some companies still do it this

way, I have no idea.

Right.

Well, here's the thing.

The entire industry of cybersecurity for

twenty five years,

I've been in this thirty years,

but for twenty five years has been pushing

bigger sticks to stop the clicks, right?

That's an old saying you can sort of memorize,

but the idea is let's punish and stop

these really,

really dangerous behaviors of clicking on

links you shouldn't.

And you just gave me yet another analogy

of why immediate compliance, people think,

oh, that's good.

Thank God the temper tantrum has stopped.

The dog has not crossed the yard.

Oh, God.

Oh, OK.

We can relax now.

No,

because there's been no long term behavior

change.

Psychologists know this, and we'll see it on the next slide, actually, as we go forward here.

Let's talk for a moment on positive

reinforcement.

Some seventy-five years ago, B.F. Skinner, the godfather of operant conditioning in behavioral psychology, basically said, and I'm paraphrasing, but the shortest version of it is: reinforced behaviors are repeated.

right?

Even when I was in college,

I did a rat study where the rat

pressed the lever, got a pellet of food,

and so he would press the lever and

get a pellet of food.

And that's a one for one.

And that's pretty rewarding,

and that builds some behavior consistency.

But if you really, really,

really want to get the rat to press

that button a lot,

stop giving it a pellet every single time

they press it.

Give it an intermittent reward schedule.

And what have we just created?

A slot machine.

for rats, and they will press that thing all day long, never knowing when the next pellet's coming, but eventually they get one and they're rewarded.

But that is positive reinforcement.
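Skinner's intermittent-reward point can be illustrated with a tiny simulation: a "rat" that quits after a run of unrewarded presses keeps pressing far longer under an occasional-payout schedule than when rewards stop entirely. All numbers here are invented for illustration, not drawn from any study.

```python
# Toy operant-conditioning sketch: continuous extinction (no rewards)
# versus an intermittent "slot machine" schedule. Parameters invented.

import random

def presses_until_quit(reward_prob: float, patience: int = 20,
                       seed: int = 42) -> int:
    """Simulate a rat that quits after `patience` consecutive
    unrewarded presses. Returns total presses before quitting."""
    rng = random.Random(seed)  # seeded for reproducibility
    presses = dry_streak = 0
    while dry_streak < patience:
        presses += 1
        if rng.random() < reward_prob:
            dry_streak = 0   # a payout resets the frustration counter
        else:
            dry_streak += 1
    return presses

extinction = presses_until_quit(reward_prob=0.0)    # rewards removed entirely
intermittent = presses_until_quit(reward_prob=0.2)  # occasional payout

print(extinction, intermittent)
```

With no rewards at all, the simulated rat gives up after exactly `patience` presses; under the intermittent schedule, each occasional payout resets its persistence, so the pressing goes on much longer. That persistence is the behavior a reward loop for phish reporting is trying to build.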

So tell me,

let's talk about some of the

characteristics of positive reinforcement.

It's basically rewarding good behaviors,

recognizing and encouraging the behaviors

you want to see more of,

especially around security best practices.

That's very helpful.

Teaching with positive reinforcement and supportive education builds lasting habits.

And this is important: deeper understanding, right?

There's Bloom's taxonomy, for those that are interested.

Go look that up.

It says, well, you can learn something,

right?

Like I used to rote memorize the anatomy

of a frog to pass that biology exam.

But I didn't understand it; I only memorized it so I knew the answers. The next level up is understanding it, right?

But if you understood it,

could you teach it?

You're a teacher, Dr. Edwards,

I'll use your formal title.

You have to know a material much deeper

to teach it.

There are a couple of books I've read in my lifetime that say, if you want to really, really understand a topic, go try and teach it.

Because then you have to break it down

into its component parts,

and you have to understand how they all

interrelate.

And you have to then synthesize that in

a way that you can communicate it to

others.

And that gets you to the deepest level

of understanding,

kind of the sensei of a topic.

So that's what teaching with positive reinforcement in a supportive environment can build. Now, I'm not saying you'll teach people how to teach it, but you'll give them a deeper understanding and knowledge of it when you use positive reinforcement. And then there's another area to keep in mind, and this is one that's often forgotten:

is to celebrate and recognize publicly

within our businesses,

within our organizations,

if you're a CISO,

build a public positive-reinforcement loop that says, when people report a phish and they're accurate, let everyone else know: hey, look, John reported this phish. It's really a good one. Look at how devious it is, but he wasn't fooled. We don't want anybody else to be fooled, so we're sharing it with you.

But boy,

does John walk around the office that day

like the hero because he is a hero.

He found something that could have

devastated the company and reported it.

So celebrate progress, create momentum,

employee engagement,

confidence that they're doing the right

thing.

You'll build your culture very,

very well if you follow these things.

Questions or any comments?

No.

Going back to one thing too, and this is another Army-ism: we teach junior NCOs this way. For example, many, many years ago when I was a junior NCO, I got given a class, and it was like, I know nothing about this. And it's like, that's why you're going to teach it. A lot of times it's that stress of, oh my God, I've got to teach something, and I'm becoming really good at it. But it was an amazing learning experience, because suddenly I became the expert on whatever that topic was at the time, which I think was the MT for that machine gun. I don't remember it that far back, but this is exactly tracking with what some of the best educational systems in the world are doing.

Right.

You should be doing.

If we were to pause this for a

moment and sort of summarize what has

happened.

And by the way, my company, CyberHoot,

I'm the CEO of it.

We've taken the psychology, the educational best practices, and the cybersecurity best practices, and we've combined them into the platform; we have created a SaaS platform that leverages all these things.

And we produce materials to help remind

you go reward your employees for reporting

things.

But if you look at the Venn diagram, the center of what we've produced at CyberHoot is the best practices from psychology, education, and cybersecurity, shared with the employees in a positive, gamified way.

All of this is designed really to meet the threat we're about to face, with these agentic AI agents creating these one-to-one phishing emails to your employees.

So I think we're well positioned to help.

So let's talk about the next phase of

this.

Right.

We know that phishing training and positive reinforcement need to be combined.

But what are all the best practices from

a cybersecurity perspective?

We've talked about the best practices from

the military.

You brought those up from psychology,

from education to get knowledge,

understanding and behavior change in our

employees.

Well,

these are some of the cybersecurity best

practices around awareness.

Don't do a once a year,

four hour afternoon of cybersecurity

training.

Let me put that into real context for

you.

If you went to the gym in January because your New Year's resolution was to get fit, and you spent four hours working every part of your body out on every machine there because you were committed to getting fit, you would give up the next day, because every ounce of your body, every inch, would hurt, and you'd possibly be injured, right? So you wouldn't go back. So what do experts in physical fitness suggest? They say something called HIIT:

High intensity interval training,

three to five minutes, you know,

once a day.

Or, I know the military does this, right? Start your day with fifty push-ups and fifty sit-ups, or whatever it is. And that's HIIT.

But we need to do the same thing in cybersecurity: bite-sized lessons, three to five minutes, once a month, on a video topic or a phishing simulation.

And believe it or not, that's sufficient to create awareness and a culture. That, plus rewarding it and doing some other things you're about to see, is so much better than a forty-five-minute video.

I don't know anyone who can sit still for a forty-five-minute video that isn't a Hollywood-studio nail-biter with a lot of action.

And that creates habits. If you read books on breaking bad habits or creating new ones, you have to practice consistently over a series of repetitions, and then all of a sudden it's a habit, and you've got a new good habit to replace the bad one.

But track your progress to build those

habits.

Gamify the experiences to get engagement.

We have in our platform, Jason, we have

a leaderboard and friends that you can

compete with on your cyber literacy

journey.

And as you complete assignments, if you get them done on time and you get perfect answers, you get more points, your avatar grows, and you can see where you sit on the company leaderboard. Now, the full leaderboard is anonymous, but it tells you: okay, Jason, you're at spot three out of fifty, because you've done everything on time, nearly perfectly, and you've finished all your assignments. You know that. And so it's a way to create this engagement.
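To make that gamified scoring concrete, here is a minimal sketch in Python. The point values and ranking logic are invented for illustration; they are not CyberHoot's actual formula.

```python
# Hypothetical gamified-training scorer: points for finishing on time
# plus points for quiz accuracy. Values are illustrative only.

def score_assignment(completed_on_time: bool, accuracy: float) -> int:
    """Return points for one training assignment (accuracy in 0.0-1.0)."""
    points = 50 if completed_on_time else 0   # on-time bonus
    points += int(accuracy * 50)              # up to 50 points for accuracy
    return points

def leaderboard(users: dict) -> list:
    """Rank users by total points across their assignments, highest first."""
    totals = {name: sum(score_assignment(t, a) for t, a in results)
              for name, results in users.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    users = {
        "jason": [(True, 1.0), (True, 0.9)],    # on time, near-perfect
        "ceo":   [(False, 1.0), (False, 1.0)],  # late but perfect answers
    }
    # The late-but-accurate CEO climbs to mid-pack, as in Craig's story.
    print(leaderboard(users))
```

Note how a late-but-accurate participant still earns a respectable total, which matches the CEO anecdote above: completion and accuracy both move you up the board.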

I heard one CEO say: I was at the bottom of the leaderboard and I decided to do something about it. I climbed up to about halfway; I was late on my assignments, but I got everything done with a hundred percent accurate answers, and I was able to move myself up. And it was the gamification that encouraged that behavior, because he didn't want to be seen as the laggard.

And then celebrate the safe actions you're trying to get more of. When people report a suspicious email, call them out on all-hands calls, call them out in team meetings, build success in this area. It builds confidence and encourages a culture where everyone's responsible for cybersecurity.

You know, when I got into this field twenty-five years ago, I'd talk to leaders and I'd say, you need a CISO in this organization to do this. And they'd go, no, no, no, my IT guy's in charge of all the security. If something breaks, it's his fault or her fault. Right? Not so much anymore. Everyone has to be responsible for this.

Now, the last bullet here,

this one is a little bit nuanced.

And I think those listening to this

podcast or watching it will really

understand this when I explain it.

In traditional fake-email, gotcha phishing attacks, you're sending a message that's been crafted in some company's tool, let's say CyberHoot or KnowBe4 or any of the vendors out there. And you're sending it through the standard email delivery mechanism, SMTP on port twenty-five, or a secure port, four sixty-five, I think.

And it's delivered to someone else's inbox. Now, the person receiving it can take one action that dismantles the whole delivery mechanism: report it as spam. And if it's reported as spam, there's a problem with the domain name you delivered it from. It was probably typosquatted in some fashion, and it can't be too closely aligned to the vendor you're impersonating.

And I'll tell you what happened to us

at CyberHoot.

Three times we got cease-and-desist legal letters that said, stop impersonating us: from the IRS, from Facebook/Meta, and from Zoom. We had created typosquatted domain names in our attack phishing emails that were just too close to the vendor's name. One person, out of the hundreds of thousands of people that we train each month, reported it as spam.

That report got sent to the vendor, and their legal team has an automated process that says: hey, CyberHoot, we don't care if you're a security awareness training vendor, you can't impersonate us. Stop doing it.

So what happened? If we go back to that early study from twenty twenty-one, from the University of Zurich, it said that when you test people with super dumbed-down domain names, they actually click more on real attacks, because the real attacks use typosquatted domain names.

An M in Microsoft or Amazon will be turned into an R and an N by the hackers. And they'll be able to deliver phishing emails from rnicrosoft dot com, or something similar to that for Amazon dot com.
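The "rn for m" swap described here can be caught mechanically. A minimal sketch follows; the brand watch list and domains are invented for illustration and this is not any vendor's actual detection logic.

```python
# Detect the classic "rn"-for-"m" typosquat described above:
# "rnicrosoft.com" renders almost identically to "microsoft.com".

KNOWN_BRANDS = {"microsoft.com", "amazon.com"}  # illustrative watch list

def normalize(domain: str) -> str:
    """Collapse the 'rn' homoglyph pair back to 'm' for comparison."""
    return domain.lower().replace("rn", "m")

def is_rn_typosquat(sender_domain: str) -> bool:
    """True if the domain matches a known brand only after un-swapping rn->m."""
    d = sender_domain.lower()
    return d not in KNOWN_BRANDS and normalize(d) in KNOWN_BRANDS

print(is_rn_typosquat("rnicrosoft.com"))  # True: impersonation attempt
print(is_rn_typosquat("microsoft.com"))   # False: the genuine domain
```

Real typosquat detection covers many more substitutions (vv for w, 0 for o, added letters), but the principle is the same: normalize suspicious look-alikes and compare against the brands you protect.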

Meanwhile, individuals have been trained to look for domain names like account-resets-r-us dot com on a Microsoft MFA reset email. Right?

It's so obviously not tied to the vendor that individuals become overconfident in their abilities. Dumbing down the domain names, to avoid the legal issues of sending email over the internet from a domain too close to the vendor's, has made the test itself inaccurate. It doesn't work. It doesn't teach you what you need to know, right?

So you have to have realistic practice. And what do you do, how do you get realistic practice? You take the phishing simulations out of email entirely and you put them in a browser. That's what we do. You visit a site branded for your MSP or your company, and it walks you through six or seven questions on a fictitious email: okay, here's a realistic email that you might receive from a hacker, and your job is to label these six or seven components of the email, think of them as puzzle pieces, as safe or suspicious.

And we start with the sender. And we've got rnicrosoft.com on an MFA reset email. And the user will often fail it the first time. And then we spell out the mistake. And we say, look at the letters carefully, because the M has been turned into an R and an N.

We get feedback.

You can give us a thumbs up or down, and they get this feedback like,

oh my God,

I never knew that was possible.

Oh my goodness.

So realistic practice is a very important

cybersecurity best practice.

So let's talk about AI, agentic AI,

human knowledge,

and who's going to win this battle, right?

Because we know AI makes phishing smarter

and agentic AI makes it smarter, faster,

stronger, more specific to individuals.

So sophisticated attacks with artificial

intelligence are going to get more

personalized.

They're going to come with greater

frequency.

And the impact, if they succeed,

is going to be even worse, right?

We've heard about double, what do you call it? Double extortion, when a ransomware event happens and they encrypt files. Thank you. Single extortion is that the data has been encrypted and you need to pay for the key to decrypt it.

Double extortion is an advancement on that, around for five or ten years now, where they exfiltrate your sensitive data and threaten to release it online if you don't pay the ransom, because you have good backups, right?

Everyone listening to this, I know, has great backups following the 3-2-1 backup methodology: three copies, on two different mediums, one offline and immutable, right? So you can restore and get back up and running.

But then if your data is shared with the public, oh my God. If you're an accounting firm and the tax files for all your clients have been shared online, you're dead. You're dead in the water. Or a law firm whose, you know, legal briefs and all that are exposed. Oh my God, it's terrible.

Well, there's triple extortion now.

Did you know that?

In fact, there's quadruple extortion.

What's that?

The Swedish company that did child health

records had a triple extortion attack a

couple of years ago.

The one that I'm familiar with was in

the US where an FTC regulated financial

firm didn't report their breach to the

FTC.

And the hackers said: well, listen, you haven't paid us to decrypt your data. You haven't budged on the release of sensitive information online. So we're going to tell the FTC, and you're going to pay fines, unless you give us the ransom and we keep quiet. It's your choice, right?

And then the latest one, the fourth one,

I have an ethical reason why I can't

share it with you.

But you and I will talk afterwards.

But there's a fourth way hackers are

really deviously trying to extort things.

And I'll tell you offline.

But it's really a bad situation.

Can humans outthink AI?

Well,

humans have this really interesting thing

called intuition and contextual awareness,

right?

And I mentioned to you earlier,

you and I both like hockey.

If we got a hockey email out of the blue, and we know we've been talking about it on podcasts and things like that, maybe we would go: hmm, that seems a little unusual. I just got this perfectly crafted email about something I spoke about recently.

Well,

we have the intuition and contextual

awareness that AI cannot replicate.

So we could probably look at that and

say, hmm, something's not right here.

Let's take a closer look.

And then the muscle memory from high-intensity interval training on phishing simulations, practiced so many times, would make us go look at the sender. Oh, look at that. What a devious little domain name. How interesting.

So together,

these things can provide the best

protection,

but this is more than just teaching humans how to outthink phishing attacks.

We're actually talking about the article

from Anthropic.

So let me bring it full circle.

If hackers are going to use agentic AI

agents to attack us, Anthropic said,

we also need to start developing agentic

AI defensive mechanisms.

So building agentic AI into our endpoint

detection and response to tease out that

one needle in the haystack.

And into our SOC analyst solutions, so they can keep up as the amount of traffic goes through the roof. How are we going to pick out that needle in the haystack?

We need an agentic AI agent to look

at the logs from our firewalls,

to look at the unpatched systems,

to look at the XDR alerts, and say: oh my God, there's a common theme here. One thing shows up in all three of these sources. Create a red alert: we've got an attacker on our hands, right?
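The cross-source correlation described here, one indicator surfacing in firewall logs, the unpatched-systems list, and XDR alerts at once, reduces to a simple set intersection. A toy sketch with invented host names, not a real SOC tool:

```python
# Toy correlation of the kind described above: flag any host that shows up
# in firewall logs, the unpatched-systems list, AND XDR alerts at once.
# Host names and data below are invented for illustration.

def correlate(firewall_hits: set, unpatched: set, xdr_alerts: set) -> set:
    """Return hosts that appear in all three telemetry sources."""
    return firewall_hits & unpatched & xdr_alerts

firewall_hits = {"host-a", "host-b", "host-c"}
unpatched     = {"host-b", "host-d"}
xdr_alerts    = {"host-b", "host-e"}

# Only host-b appears in all three sources: the "common theme" red alert.
print(correlate(firewall_hits, unpatched, xdr_alerts))
```

A real agentic system would do fuzzier matching (time windows, related IPs, user accounts), but the intersection captures the core idea: one entity, three independent signals, one red alert.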

So we need to do that plus the

human training to prevent the initial

breach and go from there.

So my last slide is key leadership

takeaways for people watching this,

or I could say key takeaways for

cybersecurity professionals, for CISOs,

CIOs, CTOs.

Agentic AI has changed our landscape.

We've had this you-can't-go-back moment. If they jailbroke it once, they're going to jailbreak it again. And they're going to use, what is it, DeepSeek, I think, the Chinese equivalent. They'll be using that instead if they can't jailbreak our own.

And agentic AI has created and run multistage attacks that begin with advanced, customized phishing emails.

So the landscape has changed.

We need to do something.

We need to up our game, essentially.

And positive reinforcement: we need to change the cybersecurity industry's default of more sticks for clicks and move towards positive reinforcement, rewarding positive behaviors, educating and engaging with our employees in ways that don't alienate them or, to your point, make people give up, shut down, and grow resentful in the long run.

That's what fake email gotcha phishing

does.

It creates this disengagement and

disincentive to participate.

And then finally,

we need to push our vendors of

cybersecurity tools to up their game to

use AI and agentic AIs to help identify

these attacks as they're happening,

whether it's an automated tool doing port

scanning and pen testing and running

vulnerability exploits or password hacking

and breaching and collection or phishing

attacks that have gotten really,

really good.

So that's phishing in the age of agentic AI. I'm happy to take questions in our forum as we go forward; you can ask.

You can go to cyberhoot.com for more

information or to book a demo or download

our white papers.

We have a very advanced white paper that

talks about all the psychology and

educational best practices and the

problems that we've had as an industry in

cybersecurity of fake email phishing,

what we call attack phishing,

gotcha phishing.

What's interesting to me, Dr. Edwards, is this: is there a place in anyone's cybersecurity program for attack phishing? Yes, there is. But not on the first day of a semester-long class on, say, I'm using an analogy here, genetics. Let's say you're a teacher, and on day one, all the students come in and you say: put your books away.

Here's a test. I'm going to give you your final exam so I can see how little you all know, and you're going to fail it. That's what an attack phish on day one does, right? It gets you this high baseline of failures.

And then it shows improvement over time: hey, look, everyone clicked at this rate, now they're down to this rate. But it's a very negative experience that breeds resentment.

Instead,

you should follow the educational best

practices of saying,

let's teach you the materials,

but it's okay to have a final exam

or a midterm and a final exam.

So run that attack phish, or that gotcha phish, once a year, twice a year max,

to measure if your employees are learning

what they need to know to defend the

entity, to defend the organization.

So yes,

there is a possibility there for that.

Yeah.

That's it.

That's my chat.

I hope that was helpful.

No, it was awesome. I really liked it, Craig. I think it reinforces a lot of themes that good leaders know, right? Which is: don't build resentment; build education, build confidence, especially in an area where people know they're going to make mistakes.

Right.

I had an incident once as a CISO a long time ago where an accounts payable clerk clicked an ADP invoice that was ransomware. And this is way back, like twenty seventeen maybe, back in the day. Ransomware was relatively simple compared to today's stuff; you could reverse it pretty quickly back then.

But the CEO, you know,

he called me and he's like,

should we fire this person?

I'm like,

you have not approved any training for

these people.

I was like, I'm surprised it hasn't happened a lot more before now.

I was like, you know, if you remember the conversation we had a while back, I said, we need to train them, we need to do this, or this stuff's going to happen.

And the finance department lost about four hours of work while we restored that file server and went through cleaning up some other stuff.

But, you know, it was one of those where the question was: is there going to be any punishment? No. No, we're not going to punish anybody over this, because then you breed this culture of fear in everybody.

And to be fair, we looked at the situation, right? Every day she got ADP invoices by email. That's what she did every day. Every day it was a PDF, every day it was the same name, from different people. And she would open it, look at the invoice, make a decision. You know, she was an accounts payable clerk.

Right.

And so all the attackers did was, you know, take advantage of that learned behavior, like they do at other places. She did what she did, right?

Her job was accounts payable clerk,

not cybersecurity.

You've got to break into that somehow and

get them to learn outside of it.

But yeah, you can't do it with fear.

I detest any organization that does it

through fear.

Well, I'll go one further and say: show me an organization that fires people over clicks on phishing emails, and I'll show you a company that will go out of business at some future date, because the resentment will build and build and build.

It's just not the way to do it.

And psychology knows this.

Education knows this.

The military knows this.

When will cybersecurity professionals

listening to this stop with the punishment

and sticks?

Use carrots. Use public positive recognition and gamification, and you'll be so much more successful.

I promise.

You know, I worked at a corporation years ago, and they sent out a phishing test: which band would you like for the Christmas party, right? One of the options was a specific band, and everybody fell for it. Like, oh, they're asking about the Christmas party, right? It was kind of a funny thing.

Well, what turned it around, which was even better, was that the next year that was the band for the Christmas party. Yeah. They were like, well, obviously we got some good insight. Marketing was like, we're not giving this up. This is data. You know what I mean?

That's right.

And what was funny, though,

is the following year when they sent out

the email to get your ticket,

it was an external email and nobody

clicked it.

So the CISO had to go out and put out an email: hey, this is an okay email, you can click this one, right? Because there was like zero participation across the company in signing up for the Christmas party.

But you know what's interesting about that story is that maybe the company stopped sending those out, or maybe it didn't, I don't know. Maybe back then there wasn't a better answer. But what it tells me is that people were not learning how to identify a phishing email properly, right?

If that ticket came in on a Ticketmaster link, and it was actually Ticketmaster, and it was a legitimate thing, and you did one iota of research to confirm it was all legitimate, then you should be able to click on that. But if it was Ticketmaster with an at sign in place of the A, well, that wouldn't work, right? Or Ticketmasters, with an S on the end, that's a common attack, right?

Then you should have been taught how to spot that and say: oh, this could really be a phishing email, and you should report it.

And then that should be rewarded and that

should be called out to the rest of

the team.

Yeah.

So we see this over and over again. People swing too far on the pendulum, right? Like: we're doing too many attack phishes, so we do something else, and they're still missing the boat, which is teaching people the rubric by which they can identify harmful emails. And that's what we really excel at.

And the other thing too, right, Craig, is you're not just teaching them to save their business. You need to make it a "hey, this is good for the rest of your life" thing. Right? You know, because this has happened to my family.

This has happened to people I know, where they've gotten the typical scammer stuff: hey, we're from LifeLock and you need to call us now because your account's been compromised, you know. So teach them that at work. For example, one of the things I do when I teach cyber safety is I'll talk about, hey, how to protect your personal checking account. Throw that into your work training.

Right.

And then they learn something they can

take home with them, not just, oh,

it's another boring cybersecurity class I

got to go to,

which is only going to be about not

putting my password on post-it notes on my

monitor at work.

Well, no, let's talk about personal password managers that can help you and your family out for life.

That breeds a whole bunch of just good

feelings towards the company when you're

like, hey, it's not just here,

but we're trying to help you outside the

company as well.

Right.

You're absolutely right.

I have a little phrase I use.

It helps you personally and professionally

when you learn these skill sets, right?

Cyber literacy applies to our families as

much as it applies to our workplace.

And companies that talk about the cost of delivering cyber awareness training and phishing simulations fail to realize it's a benefit to your employees when it's couched in the right terms and with the right messaging: that this helps you all personally and professionally, right?

I had someone stop me in the streets

here in Portsmouth, New Hampshire,

where I have many, many clients.

They said: hey, Craig, I promise you, I would have clicked on this email in my Yahoo inbox a month ago. But I did your HootPhish, and I saw the urgency of the email, it got my hair standing up on end, and I looked a little closer and identified it was a phish, and I didn't click it.

But I think I would have clicked on

it a month ago.

Thank you for giving me the personal

knowledge I needed to be successful and

confident online.

That's the other thing.

Efficiency is improved too.

In a business I was supporting not too long ago, I had people stress over an email for forty-five minutes, not knowing how to tell if it was safe or unsafe, before they escalated and asked the question. And that's an inefficiency you can't afford as a business.

Right.

You need people to be able to make the decision like that: this is unsafe. Look, there's typosquatting in the name, there's a period in the wrong place, there's an S on the end of this domain. And then you have much more efficient, confident, secure employees.

And you just don't realize, you know, how many businesses out there still don't mark external email.

You know,

it just kills me to this day that

that's not a basic thing.

Like, hey,

this is from outside the company.

Pay attention.
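Marking external mail is often just a gateway rule that prepends a banner when the sender's domain isn't the company's own. A minimal sketch of the idea; the internal domain and banner text here are invented for illustration, not a specific product's behavior.

```python
# Minimal external-mail tagger of the kind discussed above: prepend a
# warning to the subject when the sender is outside the company domain.
# INTERNAL_DOMAIN and BANNER are invented for illustration.

INTERNAL_DOMAIN = "example.com"
BANNER = "[EXTERNAL] "

def tag_external(sender: str, subject: str) -> str:
    """Prefix the subject with a banner for senders outside INTERNAL_DOMAIN."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != INTERNAL_DOMAIN and not subject.startswith(BANNER):
        return BANNER + subject
    return subject

print(tag_external("billing@rnicrosoft.com", "MFA reset required"))
# [EXTERNAL] MFA reset required
print(tag_external("hr@example.com", "Holiday party"))  # left unchanged
```

In practice this lives in the mail gateway (e.g. a transport rule) rather than in code you write yourself, but the logic is exactly this simple, which is why it's frustrating when businesses skip it.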

You know, it's, yeah.

So awesome, Craig.

Look, I really thank you for coming today.

I appreciate it.

And everybody,

the links for Craig's business will be

here in the chat as well.

And Craig and I will be able to

answer questions.

This will also come out in tomorrow's

newsletter.

We'll be able to answer questions there

too.

And again, for any and all of these needs, I would totally recommend getting with Craig and CyberHoot.

Thank you very much.

And one last thing, Jason: we give CyberHoot away free to individuals. We're not going to charge any individual a registration fee. You can sign up for free and get our patent-pending phishing simulation, which is the positive, in-browser approach that doesn't trick you or anything.

It just explains the rubric you need to

use to apply to your personal and

professional email.

Just go to cyberhoot.com slash

individuals.

And if you really like it,

bring it to your IT guy or your

MSP and have them sign up for it.

And we'll bring it to everyone.

That's cool.

That's a great service, Craig.

Thank you for that.

My pleasure.

Awesome.

Thanks again, Craig.

Have a great day.

So everybody,

that wraps up this CyberTalk on agentic AI

and phishing,

which was an amazing talk with Craig,

a person very,

very knowledgeable about it.

And just a huge thank you to Craig for sharing his three decades of hard-won experience.

If you've enjoyed this conversation,

please like, subscribe,

and share it with someone else in the

industry.

Also, remember CyberHoot. You can sign up individually, which is a great little bonus for anybody in your professional sphere, or for friends and family.

I would totally recommend it.

And if you'd like to be featured on

a future Cyber Talk, you or your business,

please reach out to me through the page, at baremetalcyber.com, or here on LinkedIn.

So thanks for watching,

and you guys have a great evening,

great afternoon.
