#22: AI Empowerment: Enhancing Skills, Not Replacing Them
E23


Welcome back to another episode of the Junction.

I am joined by my co-host Chase.

Nice to have you back in studio.

You've been on the road? Yeah, quite a bit.

Flying quite a bit.

It's one thing that we haven't been able to automate.

Yeah, it's my status.

Getting upgraded.

Did you just say you got upgraded

on all your flights this past weekend?

I think it helps when you

fly into really small regional airports.

You get bumped a little bit higher in that stack.

Yeah, my name always shows up, like number 26 of 38,

you know, last in line on the upgrade list.

Yeah, I did get upgraded one time. It was really fun.

It was so awesome.

Everybody else got canceled or moved off of the flight,

so I was the only one on the upgrade list. Wow.

And even then, the nice flight attendant lady,

she was like, yeah, I don't, it's looking

pretty good and you might get upgraded.

I'm like, I'm the only one.

I'm gonna get upgraded. Right.

She really left you hanging. Yeah.

If we could automate that, that would be awesome.

The upgrade part, the upgrade

and achieving status, you know.

Well, now all you need to do is just get

the credit card and pretty much just spend more than

the next 27 people on the list, apparently.

I'm looking at you, Scott, small spender.

You gotta spend more than you fly, apparently. Yeah.

That's one way to do it.

Well, I'm excited to be back in the studio.

We have a lot to talk about.

I really wanted to focus today on, or at least

a good portion of it, on an update around some

of the stuff that we're actually building internally.

So we've talked a lot about different use cases,

how other companies are using AI. Early on, actually,

when we first started the podcast, we focused in

on what are finance professionals doing?

What are marketers doing, how

are salespeople leveraging these tools?

Since then, we have actually built out some proof

of concepts internally, being as we have a ton

of really smart people that work for Venn Technology,

that know how automation and different applications can be

applied to make things that we do more efficient.

Well, we can start with the error database is the one

that I think has at least the most potential for us.

The platforms that we work with, there's

all types of different error messages.

Some of them are very convoluted, and

some of them are heavy on syntax.

Some of them have big lofty words that if

you're not an accountant or you're not a database

administrator, you may not know what they mean.

And so the idea behind this is to take

in the platform name like Salesforce and the error

message itself and then see if we can find

places in our database where this has popped up

before and suggest what the solution might be.

Because sometimes the error message is, hey,

you need to update your so doc.

And for the layperson, they're like, what is that?

And so, these error messages aren't cataloged

somewhere by the app publisher?

No, no, they're not.

Maybe that's a business opportunity for somebody.

But the error messages are generally like an

API validating the payload on the fly prior

to it committing the record to the database.

And we send that message back to the source

platform and then the user in theory should be

able to read the error message and be like,

oh yeah, this is what happened.

Let me do something.

Generally the error message is very convoluted and

doesn't even actually specify the real error.

So what we're trying to do is take that error

message, pipe it in, find a suggested solution to then

send to that user immediately, right away and not necessarily

wait for them to send in a case or call

us up or text or whatever.

But when they encounter that error, look at

that database and then have a suggested resolution

that we send out on the fly.

Right as that error happens, we're piping that solution,

that suggested solution straight back into their source platform

because ultimately that client or that person is going

to be like, I don't know what this means.

Let me email Venn.
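The lookup flow described here could be sketched roughly like this. Everything below is illustrative, a minimal sketch: the catalog entries, field names, and the simple string-similarity matching are my assumptions, not Venn's actual database or heuristic.

```python
from difflib import SequenceMatcher

# Hypothetical catalog of previously resolved errors; the schema and
# entries are made up for illustration.
ERROR_DB = [
    {
        "platform": "Salesforce",
        "message": "FIELD_INTEGRITY_EXCEPTION: invalid currency format",
        "resolution": "Remove thousands separators (commas) from currency "
                      "fields and resync the record.",
    },
    {
        "platform": "Intacct",
        "message": "BL01001973: required GL entry field missing",
        "resolution": "Populate the missing GL entry field before resyncing.",
    },
]

def suggest_resolution(platform, error_message, threshold=0.4):
    """Return the catalogued resolution whose message best matches the
    incoming error, or None if nothing is similar enough."""
    best_score, best_entry = 0.0, None
    for entry in ERROR_DB:
        # Match on platform first, then rank past errors by similarity.
        if entry["platform"].lower() != platform.lower():
            continue
        score = SequenceMatcher(
            None, entry["message"].lower(), error_message.lower()
        ).ratio()
        if score > best_score:
            best_score, best_entry = score, entry
    return best_entry["resolution"] if best_score >= threshold else None
```

A production version would presumably use embeddings or an LLM for fuzzier matching, but the shape is the same: match on platform, rank past errors by similarity, and send back the stored fix on the fly.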

Is this only if you're using these systems in tandem

so if you have an integration in place or could

this be something that other, a business user could get

value out of if they only work in Salesforce?

Potentially.

I think for most of our use cases we're

getting error messages that happen asynchronously so the user

is not involved in the exchange of information.

Therefore they don't see the error message right away.

Potentially if you were just operating in Salesforce, generally

you're going to see an error message pop up.

I think about, like, the Apex error

emails that we get all the time.

And even I am, you know,

challenged by some of those emails.

I think there's potential. I mean, I just get

the contact your administrator message.

Yeah, I know what that means.

I don't necessarily like having to

reach out all the time. Feels annoying.

But I guess I'm trying to set the

table for, you know, someone's listening and trying

to understand what the value of that is.

And within the context of integrations, so often

we talk to people, whether they're in a

finance seat or marketing or sales, looking to

connect two or more systems, because maybe that

standard connector, that standard out of the box

integration doesn't fulfill their unique use case.

So we help build connections between

those systems using the APIs.

And sometimes there's this perception or idea that someone

says, well, you can't just build it and go

away, like, just let my integration run.

Well, the reality is these

publishers are updating their APIs.

Your business process might change, or there

could be edge case things, maybe an

incorrect format of your data or something. Right.

There's something happening that

requires ongoing maintenance. Sure.

Well, you always, I mean, you think about maybe

when you call up like a tech support and

they're like, oh, yeah, you didn't do this.

And you're like, I don't.

What did you just say? Right.

It's kind of like an interpreter.

You don't even understand it sometimes.

Can I get the manager on the phone?

The AI bot is just on a loop. Yeah.

Gets escalated.

Same, same bot, different tone.

Yeah, I'm the manager who apparently

has a sweet southern accent.

Yeah, I feel like I can do

that because I'm Texan or something. I don't know.

You get that rite of passage. Totally.

I'll just sit over here and keep saying baggage

claim baggage on your executive platinum gold diamond status.

Hey, don't hate me. Could you hate me?

Sorry, I'm going to automate that next.

All right, so this error handling, the biggest benefit,

it's not just for us internally to be able

to quickly identify, but you're suggesting that we could

catch it systematically and send it or translate it

or tell that business user, here's how to go

solve it before the next line of support. Right?

I mean, it's giving the solution to the problem in

advance, knowing that we can't do something about it.

Maybe the data is wrong and the error message

is very convoluted, but we do have the payload.

We know what we're trying to accomplish.

We could, once we understand that the AI says,

looks at all these other similar error messages, we

could send back and say, hey, it looks like

you had a comma when you were trying to

put in $1,333, and Intacct doesn't like commas.

I'm making this up, remove the comma and resync

it, and then it goes back and you're golden.

Well, now we prevented that person

from having to email us.

They got to solve the problem on their own.

They maybe learned something, and we've just kind of reduced

the friction from the people, the process and the technology,

simply by suggesting what they should do and letting them

try it before they reach out to us.

It's a big thing in case management is trying to prevent

people from getting to the actual people, you know, like give

them the content they can self serve on or.

Exactly, yeah. Have you tried this thing?

Here's our tutorial.

Generally, those are pretty annoying. Right.

But the idea here is that we'd actually offer

up like a real potential solution that they could

try prior to then sending in an email.

The use case we also haven't talked about yet

is the upside for training additional support team members.

Right.

So someone comes in and is familiarizing

themselves with the many various systems that

we work in the different API languages.

Do you think that that's kind of an ancillary

benefit of being able to train up or get

familiar with some of these errors more quickly because,

hey, we've solved this problem or we've seen this

17 ways and this is the solution? Oh, totally. You have?

There's a, there's a significant, there's probably the most

opportunity in the training world as it relates to

like institutional knowledge databases, if you will.

I don't know if that's a thing.

I just made that up.

But for knowledge that is inherent to the company,

we can give people access to that much faster

than it would take for them to think about

it or like onboard over the next twelve months. Right.

And they pick up those onesies and twosies.

Giving them access, even to a chatbot

that has direct access to all that

information will speed things up very quickly.

There was an article actually sometime last year, I think

it was IBM, some big company found that when they

were onboarding call center agents, where AI performed the best

was with the new hires because they could quickly ask

questions and get up to speed.

But where they saw the least amount

of effectiveness was for the veterans.

I mean, they know all the ins

and outs, they know all the answers.

They don't need to ask the chatbot anything.

The vets are the ones that trained it.

Yeah, they trained the chatbot. Right.

So I think there's like, there's a

cross-section there where we've got people

that are well versed in particular industries

like accounting or maybe Salesforce, for instance.

Right.

But they're not well versed in the other side. Right.

And that's where we're gonna be able to provide a

lot of value in speeding up error handling, speeding up

training internally on folks that don't have the accounting side,

but they've got the Salesforce or they've got the CRM

knowledge, but they don't have the ERP knowledge.

Sure, that's a good point.

The other half of that, yeah.

Will be super helpful for them.

Have we encountered any challenges as we've started to

go down this path and build this out?

I think the challenge is you have to understand all

of the things that you need to put this together.

So you need a database of information, you need

the ability to interact with it, and then you

need people that understand the answers to the questions.

Because it's one thing just to log

a bunch of errors, for instance.

It's a whole other thing to say.

Well, this particular error message doesn't

mean anything because what it's actually

referring to is this other thing.

Well, that means you need somebody that knows the

answer to that question and you need to have

a scenario where this has happened before and it

needs to be between, for instance, Salesforce and Intacct.

Anyway, it's a unique combination

of that people, process, technology.

Where, in that one instance,

somebody figured it out.

And what we don't want to do is that person

moves on or they forget or it's like that one.

It's like the needle in the haystack.

We found it, and then what happens most of the

time is like peace, you know, lost that needle.

We'll go, we'll find, we'll go find it

again and spend another 4 hours trying to

figure out this problem all over again, which

is what ultimately we're trying to avoid.

So just again, to set some context,

we developed an AI committee internally, and

we, that's who's been working on this.

So when we talk about "we," um, there's a group

of, you know, folks who have raised their hand and

offered their expertise in various parts of this process.

So this initial idea was documented in a

proposal, if you will, of here's the business

case, here's the perceived benefit, here's the level

of effort or estimate of what it's going

to take, here's the dependencies or the risks.

And I don't know if there was a

timeline component to it because these are considered

kind of ancillary projects to what we do.

Yeah, it's like Skunk Works. Just iterate on it over

time until we figure out how it should work.

But that's the other thing that we've done with the

other project is we're starting to analyze all the transcripts,

record those, then from that, build out sections of the

SOW of what the requirements are or should be.

We're getting there.

The thing that is probably the most challenging is

combining not only your transcripts from maybe three different

meetings, but also maybe ten emails worth of correspondence

and then maybe also a couple of PDF's.

It's combining all of that information to

build out what the problem statement is

and what our potential solution is.

We're all doing that in our heads

and then writing it down on paper. Right.
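That combining step, pulling transcripts, emails, and PDFs into one problem statement, might be sketched like this. The function name and section labels are hypothetical, and the actual model call is deliberately left out, since how you invoke it (OpenAI, a local model, etc.) is an implementation choice.

```python
def build_sow_prompt(transcripts, emails, pdf_texts):
    """Combine every source into one prompt asking a language model to
    draft the problem-statement and proposed-solution sections of a SOW."""
    sections = []
    # Label each source so the model can tell them apart.
    for i, t in enumerate(transcripts, 1):
        sections.append(f"--- Meeting transcript {i} ---\n{t}")
    for i, e in enumerate(emails, 1):
        sections.append(f"--- Email {i} ---\n{e}")
    for i, p in enumerate(pdf_texts, 1):
        sections.append(f"--- Attached document {i} ---\n{p}")
    context = "\n\n".join(sections)
    return (
        "Using only the material below, draft the 'Problem Statement' and "
        "'Proposed Solution' sections of a statement of work.\n\n" + context
    )
```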

So we're trying to replicate that with,

I wouldn't say it's like extremely successful,

but there's a glimmer of opportunity.

That means we will continue to pursue it because

if we can do it barely now, right, and

then we give it all of the data, it'll

probably do a little bit better, right?

And we just keep iterating on.

It reminds me of the intern discussion we keep having.

The intern is going to get better

because we're going to keep training it,

giving it access to more information.

At some point it'll be good enough

to do a draft of the SOW. Well, that's perfect.

Now we can have that human in the loop to

review it, put it through some approval processes, maybe even

have OpenAI revise it, just like we do in our,

you know, in our chats, like, hey, that sounds great.

What about this?

But both of those are areas where we can increase

the level of speed that we, that we have without

having to, you know, vastly expand our resources.

So it also, I think, provides coverage for just, you

know, I mean, there's human error in that recall.

Sometimes.

Especially, we've talked about being

back-to-back on calls.

If you're, you know, talking to

various, different prospects across different systems,

there's that level of checking.

Because I could see one argument that says, well, if

you're just going to go build it out with AI,

like, what am I even paying you to do?

Like, why we're sitting on this call?

But it's like, no, actually this is protecting our conversation

around, like, so that I don't maybe forget, and

I'm not suggesting that our team is, but I know

that for me, when I'm applying this to maybe I'm

having a call with a client around their experience, kind

of like a case study testimonial, if you will.

I like to, I used to just stress myself out

over taking notes, like, you know, to the t, and

part of it was committing it to memory.

I found myself, I've challenged myself in the last

several calls with clients talking about their experience to

not take any notes, unless it's just simply like,

I just, I don't know, there's kind of that

habit of, like, taking hands off the keyboard.

But I find that I am so much more engaged

with them versus trying to get down every, like, nook

and cranny, every single word, because I need that quote.

And now I can just continue to, like, pull insights

and spend my time there and lean on AI. The transcript.

Yeah.

It was such a, so I think we've

talked about this on the show, right?

But we've set up the ability to basically take a

meeting's transcript and now ask questions of that transcript at

any point in time now, right after the meeting's over

or, you know, three weeks from now, and it's just

flipped my mindset on being fully engaged and actively listening

on the phone call rather than being like, okay, hold

on, let me, wait, can I write that down? Hold on. Okay.

Now where were we?

You know, like, it just makes things a whole lot

more fluid for us to have natural conversations knowing that

everything that is we're talking about is now in a

transcript that we can ask questions of. Yeah.
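A minimal sketch of that ask-questions-of-a-transcript setup, assuming a chat-style LLM API. The system-prompt wording and the placeholder note about the model client are my assumptions, not the actual build.

```python
def ask_transcript(transcript, question):
    """Build a chat-completion request that grounds the answer in the
    stored transcript. The returned messages list would be passed to
    whatever LLM client you use, e.g. an OpenAI-style
    client.chat.completions.create(model=..., messages=...)."""
    messages = [
        {
            "role": "system",
            "content": "Answer only from the meeting transcript below. "
                       "If the transcript doesn't cover it, say so.\n\n"
                       + transcript,
        },
        {"role": "user", "content": question},
    ]
    return messages
```

The same stored transcript can be queried right after the meeting or three weeks later; only the `question` changes ("Summarize the call", "What were the action items?", "What objections came up?").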

And I even find myself right before a call starts,

I'll go find the transcript from the last call.

And in our meeting, we don't know if

we really have a name for it, right.

But in our meeting notes, I think

that's the name that I've named it. Right.

I'll ask, hey, can you summarize the call?

What were the action items from

the last, from when we chatted?

Just a quick, like, three questions. Slack.

Or you're using the Venn chatbot interface?

No, this is in Salesforce.

Oh, in Salesforce? In Salesforce.

Okay, I'll ask these three questions.

I'll go find the meeting like we met last week,

and I'll ask those three questions real quick just to

give me a reminder of where we're at.

Um, I also do it when, when questions pop up

of like, hey, did we, did we mention any ballpark

numbers or, you know, um, what was that one thing

that stuck out that I can't remember what it was. Right.

And I'll give it context clues.

It'd be like, oh, yeah, this

is what you're talking about.

Um, it just totally changes the way

that you, uh, navigate a meeting, especially

as somebody that is, uh, leading it. Right.

You can just fully actively engage, don't have to take

notes because your notes are being taken for you. Right.

Um, and then you can go back and

refer to those at any point in time.

Super handy.

And we've talked about how there's technology out

there that exists, that does this today.

But it's pretty neat, I think, that

we've built out on our own platform.

It's not one of those subscriptions, I don't think, that

we can drop, if you will, because we built it.

Like, there's always that do build or buy. Right.

Like, you could go purchase a subscription to gong

and, you know, get some of these insights.

But because we've built it, we can

iterate or customize it, right?

Yeah, well, I think where you're going is a

lot of the gongs out there are tackling a

specific problem and then going beyond that and saying,

well, we can also do this. Right.

I think Gong was built out

so you could watch call recordings.

If you've got a team of people and you want to

watch this call and you want to watch that call. Right.

We can do that. I mean, we do.

We can do that with ours.

The way that we go about it.

The intent behind it, though, is to have access

to that transcript that we can ask questions of.

What have been the biggest challenges

in building this out so far?

Um, the biggest challenge that we've had with, um, with

that in the early days was the context length.

Every 15-minute video was bombing out

because it was just too long.

Now, we're well past context limitations, so

we get a whole lot fewer errors.
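One common workaround for those early context-length failures is chunking: splitting a long transcript into overlapping pieces that each fit the model's window. A rough sketch, using character counts as a stand-in for tokens (a real pipeline would count tokens, and the limits here are arbitrary):

```python
def chunk_transcript(text, max_chars=8000, overlap=500):
    """Split a long transcript into overlapping chunks so each piece
    fits a fixed context budget. Overlap keeps sentences that straddle
    a boundary visible in both neighboring chunks."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` so no boundary sentence is lost.
        start = end - overlap
    return chunks
```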

But I think right now, where we

struggle is asking the right questions.

It's really easy to just summarize the call.

That's cool. Right?

But beyond that, what else can

you do with a transcript?

We, um.

One of the questions that I have

it ask on every single call is: what

were some of the objections from the call?

And that's super insightful.

In fact, there was a call that, uh, came

in today that I'm on the meeting, but I

don't attend, and that popped in, and it.

It, um, made me realize, hey, we need

to get in there and do something about

this, because they're raising objections that we can

solve for that we're not picking up on.

So it was just kind of like a last second.

Like, oh, man.

Like, these are things.

Are you talking about from a managerial perspective?

You can push on that.

Like, you can coach. Exactly. Yeah.

Like, it gives you insight even

if you didn't attend the call. Yeah.

Well, so from seeing that, it made me wonder, like,

maybe we should be asking, you know, were there.

Sounds Big Brother-esque, but I think it's for the sake

of, you know, doing better as a company. Right.

Were there times when, um, when somebody didn't

sound confident in what they were saying? Right.

And, um, as consultants, right.

We want to be the best at what we do. The expert.

The expert.

And if we don't sound like the

expert, it's probably something we should tackle.

Um, but it does analyze tone

and sentiment really well, too.

And, I mean, there's just a whole world of things

that you can, can gain from asking the right questions.

Absolutely.

I've noticed I pinged you about it earlier today

because I haven't been getting these into my slack,

but another member of our team does, and she

will share the funny ones with me.

So, for example, in a recent one on

one, I don't recall who was, I think

we were talking about the temperature outside, right.

She was like, oh, it's cold in

this place I'm at right now.

I gotta make sure I have a blanket.

And the headline was, members of the call

were distressed about the temperature in the room.

And it's like, from a sentiment perspective, like, this is not

a sales call, but I mean, it just cracks me up

when it calls out some specific things like that.

Or Mel expressed distress for joining

the meeting three minutes late.

Which leads to. It did recommend.

It says, this could lead to or point

to a time mismanagement thing, which.

Cause I'm back to back to back and I'm

starting to actually realize, yeah, I probably need to

add a little buffer between my meetings.

But, you know, the transcript and the questions

that it was asked called that out.

It's like being a potential risk.

So you translate that into, like, a prospect call.

That could be some very small thing, but insight.

That helps. Oh, totally.

Make sure that you allocate enough time, you know,

for that next call to advance the sale.

I can almost envision this is

beyond Big Brother, but, like, different

types of agents analyzing your transcript.

Like, it's like in those videos where the political guy is

trying to get the thing and he's like, well, should I

do the green tie or should I do the red tie? Right?

And, well, this focus group said this, and

this focus group. These different agents would be

like, well, if you want to appease this

type of person, you need to do this.

And if you want to talk about this.

And like, well, you sounded, you didn't

sound so hot on the HubSpot, though.

You should refresh your.

I can just envision all these different bots.

You need to comb your hair.

You need to flip your hair this way.

Now you need to wear this and you

need to freshen up on the API documentation.

Just people pinging left and right, trying to

make you better at what you do.

I do think it's an interesting.

So there was a connection to this.

One of our team members recently

shared an article about that. This is.

I think of it as a hot

take, why great AI produces lazy humans.

And it went into a couple studies.

One in particular that caught my attention

was around the recruiting use case.

And so they equipped, you know, people

with the job title of recruiter.

That's their job responsibility on the daily.

And it was basically claiming that they became lazy,

careless, less skilled because the AI is so good.

And we do.

I have to.

I mean, I feel pretty like I.

I'm not using it so much in my

day to day that I've become complacent.

But I was talking with a team member today

about, you know, hey, there was some concern around.

I want to make sure that it's

in line with what we're actually saying.

Well, we're using a transcript, right.

So really what we have to look

out for in that context is hallucination. Sure.

But this recruiting case study is interesting.

I actually have utilized this to analyze batches of,

like, here's a resume, here is a phone screen.

And utilize that data against another data set

to call out gaps in various applicants.

Like, almost like a one to one.

So this.

Can you please compare?

What's the plus and minus?

Let's talk about the pros and cons.

It's not that it's driving your, it shouldn't be

driving the hiring decision, but I just thought it

was interesting that it's essentially saying it's so good

that it's making the person in the loop bad.

I can see, at least in my own day

to day things where that would come into play.

And it would be because I got so used to

how quickly I can get answers that I would wait

until the last second to ask it whatever question or

get whatever recommendation, and then I'd only have 30 seconds

to look at it and be like, yeah, that's good.

Or, it's good enough, right?

And then I'm on the phone call, and somebody

much smarter than me has been like, yeah, two

plus two is actually four, not five.

And they'd call out something really simple.

That's where I see, I don't know if I'd call it laziness.

For me, it's probably almost time management, not prioritizing

my time well enough because I've got this.

Not an overwhelming sense.

I've over normalized the use of AI.

I think thinking about the intern, right.

It's real easy to be like, hey,

intern, go build a sales deck.

And only reviewing it for 30 seconds,

you're probably gonna have the same problem. Right.

It also pointed out that

they didn't get better over time. Oh.

So it's almost like you'd rather work with an AI

intern that you do have to continue to coach, right.

In a way or correct.

Versus it being.

You're looking at it as like,

this thing's almost smarter than me.

It's calling out stuff that I didn't pick up on. Right.

Because then you start to become the.

Maybe the roles flipped, you start to

take on the marketing intern approach.

But I thought that was interesting, too.

So if you just continue to operate under that, like,

you know what, it's done pretty good so far.

Really good.

I'm going to keep trusting it.

I'm only giving myself 30 seconds in

between tasks to then review it.

And now you're not, you know, we're not using

that critical thinking component that I think is so

important to the people out there that say, you

know, we're gonna lose jobs to AI if we

continue to essentially fall asleep at the wheel with

this plugged into our different parts of our jobs.

Yeah, you could risk that.

I think the way that these things work, I

don't think anybody is legitimately losing their job.

If.

If they are, it's probably because it

was already on the way out.

Um, but from what I've been able to play with and.

And truly figure out, it.

It's not good enough to do things, uh,

that take up a significant amount of, um,

maybe I'll say brain space, right?

Like, if you ask it to answer two plus two,

well, of course it's going to know the answer, right?

But if you ask it to do this really

complex math and prove it. So in math, I know

the math heads are going to be like, oh,

man, he doesn't know anything he's talking about. Right.

But you can prove through math

that this answer is true.

If you try to get the large language model to prove

that it should use this, this sentence over this sentence.

And now we're like multiple

tasks into the problem statement.

It's going to take a long time for

it to get really good at that.

But if you want to ask simple, just straightforward.

Hey, does the sun rise in the east or the west?

You know, like, very simple minded tasks,

it'll be very good at that.

And if you've got a simple minded job or your tasks

are very simple minded, of course it's probably going to do

that and it does it already, you know really well.

It's where you have to pair all those

tasks together to, to build a bigger product

or a bigger service or a bigger whatever. Right.

I also think it points to the, what are you

doing with, how are you measuring the output, the outcome?

So with anything, even if you do get a model

trained up so that it's so good that it writes

killer social posts or blog posts based off of what

you're feeding it, are you still measuring the effectiveness of

how that content is performing on the web?

That's where I see this whole, or in

this case of the recruiters, if they're just

allowing the AI to score stuff and they're

going, that's pretty good, that's good enough.

And are you actually looking at the effectiveness of those

hires over time and going, okay, we've applied AI to

this particular job and group of candidates over three months

and look at how they perform in their first

30, 60, 90, and you're seeing a total disconnect from

how you were doing it the old school manual way,

then that's a good indicator.

So even if we are kind of like allowing

AI to take over some of these jobs, then

the data is going to find you. It's going to catch up.

If you're not paying attention to that or whatever

that normal is, is now exceeded by somebody.

That's even better.

I think about that email that

you forwarded me was so good.

I was like, that was a person.

They did their research.

So Mel got an email from somebody.

It was a cold outreach, right.

And mentioned her new title, mentioned a couple of

other, all three of my last roles mentioned two

new hires and the two new hires. Yeah.

And my first thought was, man, that

would be really difficult to like AI.

Can I use it as a verb?

I don't know, I just did. You AI'd it.

AI'd it.

You've tried to make this a thing in previous episodes.

AI'd it.

It's gonna be a thing. I'm gonna be famous.

I'm gonna be in Webster dictionary, word of the year.

No, he's on record.

That'll never happen.

Is that your April fools joke? Bet.

Oh no, stop.

So your first thought was, how can I take

this and replicate this as our own outreach?

Like you were sitting there analyzing it from a.

Was that a person or was that AI?

I was pretty certain that it was a person.

I was thinking through, how could I use AI? Correct.

Which is my original assumption with you.

Yeah, that's what I.

I mean, how great is that email? It's really good.

Very personable.

You're likely to respond, but I go

back to actually what you just mentioned.

How effective is that?

Sure, it stands out to us, but

does it stand out to everybody?

And does that person get a high reply rate?

Because if not, I mean, to build that

out, I think I could do it.

But if the click through rate is 0.001%, then

I just wasted a whole lot of time. Right.

But I think that goes back to, sure, there are some things

that you don't actually know what it's going to be till you

build it, but we talk about this all the time.

People processing technology, take the

technology out of it.

Is it effective? Like, take the template.

Has that been an effective outreach before the tool

or the AI existed? Then you can go replicate.

Okay, that's right. Like.

Well, so did you, did you respond? Not yet.

Wait, are you going to.

I mean, I was trying to, like,

do the whole "I'm on PTO" thing. Be on PTO.

Oh, it was a holiday.

I got it on a Good Friday. All right, cool.

Which they didn't account for, but

it still totally got my attention.

I wonder, do people track how long

it takes to respond to emails? Absolutely. Yeah. Yeah.

I mean, you're killing his KPI right now by not.

Oh, no, I'll apologize for that if I take the meeting.

No, I was super impressed by that.

I don't know exactly where you're going with it

other than just to build that out would probably

be a, you know, I was like automated.

Automated.

You like draft emails, right?

If there's a tool, if someone's listening and

you have a tool that you would recommend

for that or an approach, let us know.

I know there's a lot of companies out

there saying that they are doing this AI.

Oh, yeah, there's lots.

But like, literally doing it and doing it well,

like to that email, you should paste that copy

somewhere so people can see what we're talking about.

Yeah, I will. Okay.

I can do that. All right.

So speaking of holidays.

Cause we just talked about how

we were off a Good Friday.

April fools.

Are there any.

Yeah, like, I was expecting way more AI buzz.

Unless it just hasn't hit yet.

Like, there's just something that hasn't quite made

it to our feed, but pretty lackluster.

Yeah, I'm not.

I didn't see anything that stood out.

7-Eleven had a hot dog flavored drink. All these, like.

Yeah, these food and beverage companies

coming out with, like, weird flavors.

There was like some.

What was the Sriracha toothpaste?

Oh, Sriracha toothpaste.

Queso slush from Sonic. I don't know.

I actually might try one of those eventually.

You do that, you know, the one that's probably the

most famous, maybe most folks won't remember, is on April

1, like, a bajillion years ago, Google launched Gmail.

They launched Gmail on April Fools', and everybody thought it

was a joke because you couldn't get free email.

There wasn't.

There wasn't a thing.

And, I mean, look at where you are now, right?

So I feel like April Fools is probably.

This is the poor marketer in me, but, like,

one of the best days to launch something.

Cause you could get a lot of.

We did launch Chad GPT. Oh, that's right.

On April Fools' last year, we had

some pretty big names come through, too.

I don't remember that.

And I saw, like, a brand this

year tried to do something like that.

I'm like, you're so late to the game.

You know, it helps, though, when

your Chad is actually pretty snarky. He's funny. Yeah.

Yeah.

We do have a real live Chad at Ben technology,

who was, in fact, the GPT behind our technology.

We pay him a dollar per million tokens.

Oh, it's good. Yeah, I don't know.

I'm just kind of, like, not very impressed. But they.

There are.

There are a lot of articles or headlines out

there just saying to be wary that the.

You should expect the AI scams to increase on

April Fool's Day, which I just pretty much don't.

I put my.

You already know, I kind of live under

a rock on some of this stuff.

Like, all of Dallas Fort Worth could be burning, and

someone would have to come tap me on the shoulder

and say, Mel, I need you to evacuate the building.

DFW's on fire.

Did you know that next week the sun

will be blotted out by this giant rock? Yeah.

Isn't that crazy?

Yeah, I think I might just, like, not

plan to even leave work, because I've heard

there's, like, this emergency, state of emergency or

travel advisory, and it's already a journey.

Send me your timesheet. It's gonna be 24 hours.

Yeah. So.

Okay, so from April Fools', coming into the eclipse, though.

Like, is there anything, as you mentioned, that.

Is there anything that you think that we're

going to see, in the way of AI.

Yeah, I don't know.

I think we mentioned this on the

last one we recorded, but Facebook is.

I'm sure it was for real for a hot

minute, but it was taken over by the African

child building out a monkey with plastic bottles.

And then I'm sure it kind of

got a little bit worse from there.

And then hopefully, at least by now, people

are like, okay, the kid didn't build Noah's

ark out of water bottles, right?

But it doesn't look too promising to know that.

That probably started out with the original post, whatever it

was, and then it got a ton of likes.

But I'm thinking about Sora.

I'm thinking about OpenAI's unreleased voice to or, sorry,

text to voice that they've been talking about.

And that's actually not new.

Those have been around for a while.

But I think the access to that,

the easy access to it is going.

You're gonna have more of those, I'm going

to say, Indian call centers, right, where the

scam guys are, like, imitating Trump or Biden

to, hey, you need to donate now, right?

To get you to do things that you shouldn't have done.

The other thing that I thought was

interesting was, I think it was OpenAI, they

had some recommendations to banks that they

start pulling, like, voice security questions, right,

where they're analyzing Mel's voice. That's pretty scary.

This sounds like Mel.

Okay, yeah, Mel does want me to

send $10,000 to a Nigerian prince, right?

But what has me more concerned is the phone

calls where, for instance, somebody's replicated my voice from

the podcast or any of the other YouTube stars.

I mean, we're practically there, right?

Like anybody, any other voice, they use the

voice to then call somebody from their family.

And hopefully probably giving people

ideas now, which is terrible.

But those are things that we got to

watch out for, because you can make that

voice say anything like, hey, I'm in trouble.

Or, hey, I've got a really great idea.

Or, hey, you really owe me a lot of money.

And I think we're going to run into a lot

of fraud-type activities. Well, if you're thinking about

stocks, I'm thinking about who has fraud-type tools.

That will be.

It's like Zoom.

Zoom got really big during COVID, right?

Cause everybody needed to hop on the phone.

I think we're on the precipice

of everybody needs fraud tools.

We need to spend.

I committed to this last episode, and

we are going to do it.

We will dedicate an episode to just that.

And I want to get really tactical on.

Yeah, I can't even follow that. Real tools. Real tools.

I think it's great to talk

about real strategies, real tools. Companies that are.

I know there's companies that are out there doing this.

I mean, kind of going back to, like, everything that

we're reading about April Fool's Day or, like, even the

eclipse, all the headlines right now are like, just be

aware of fraud, be aware of hallucinations.

Like, the biggest threat that they're saying

at the moment is like, don't trust.

If you ask ChatGPT, why shouldn't we,

like, can I look at the sun?

You know, it's like, gonna come up with

these things and it's gonna tell you.

10,000 people blinded. Yeah.

Permanent, you know, vision loss and.

Which is not true for the total period

of the solar eclipse, but, like, ask follow

up questions is what they're saying.

Look for reputable sources.

So it goes back to the justification.

What was the source for. For this?

If it's coming from some mywiki spammy

solar eclipse blog, then it's probably not

the source you should think on.

There's like, what's, like, the thing where it's

like, you always have to keep questioning.

I'm pretty sure this is in the Bible, right.

It says, keep on knocking.

I think about, you just gotta keep questioning

or always question the validity of anything.

almost, these days. Over the weekend, AT&T

had 70 million people's Social Security numbers leaked.

Mine was one of them, I'm sure.

70 million.

There's only 300 million people in America.

Basically everybody at this point, we're all in it.

Your Social Security number is

probably out there somewhere. That's another one.

I'm an AT&T customer.

I didn't get that email. Didn't catch my attention.

That's right.

You'll get a $3 check in about three years.

Perfect.

Yeah, well, okay.

On that, that's where we're going to end it.

We will commit to going away and doing our due diligence

on preparing for that episode so that we can get a

little bit more tactical about the things that you can be

doing in your day to day to kind of be aware.

It kind of goes back to, like, all the trainings

that you do around, like, phishing attempts and security.

But, like, I really do think that

there's probably some which goes back to.

These tools are being used, can be used for good

and for bad, and how can we apply them?

We'll definitely do that.

So in the meantime, I hope

everybody stays safe out there.

Don't burn your eyes.

Yeah, don't do it.

Just wait.

Wait for it to come and go.

And don't get stuck in traffic, either.

All right, well, keep it automated.


Creators and Guests

Chase Friedman (Host): I'm obsessed with all things automation & AI
Mel Bell (Host): Marketing is my super power