#23: AI Scams & How to Protect Yourself
E24


Welcome back to the Junction, a podcast by Venn Technology

about AI and automation, without the jargon, hosted by me,

Mel Bell, and my co-host, Chase Friedman.

We're thought leaders, industry experts, and visionaries

here to unpack the latest trends in

AI and how we're using it.

Listen for practical advice and a little bit of

banter on how to improve your business and career

by being at the junction of it all.

We're so excited you're here.

Let's jump in.

Welcome back to another episode of The Junction, y'all.

We actually did what we said we were

gonna do, and we did some homework. We committed.

If you did not listen to the last episode, please

go back before you hit play on this one.

Take it back to the last episode number.

Episode 23.

Yeah, or like episode like season two, episode three,

something like that, you know, take it back to

the last episode and listen all the way to

the end to where I say we are committed

to doing our homework, and we're going to come

back next episode with real life practical tips about

how to protect yourself from AI security threats.

And we did it.

At least we scratched it.

We're going to scratch the surface today a little bit.

Well, by the time this publishes, I'm sure

there'll be like 20 more things, right?

Oh, probably, that you'll have to worry about, but also,

like, 20 more ways you can secure your business,

your life, you know, all those things.

I find 20 new things to worry about in a given hour.

So, like, that is, you know, just keep it going.

Let's go.

Yeah, I mean, you just pop open your Twitter

feed and, you know, there's something new to worry

about or something new to get excited about.

Well, that's your first problem.

Are you still on Twitter?

Yeah, actually I'm on X.

Hard pass. Not there anymore. I tried it.

It stresses me out.

I just look for recommendations from Clippy these days.

I'm telling you, we're going to make Clippy a thing.

Microsoft is going to rebrand Copilot to Clippy AI.

Yeah.

I wonder if they still have the trademark.

You know, maybe we could take it over, like Venn.

Venn Technology. Clippy.

Oh, you want to take on Microsoft?

Yeah, not really.

I want to partner.

We're going to partner with Microsoft. Yeah.

We're going to be co-pilots together.

No, that failed.

But actually, I was thinking about.

So you had some banter with somebody on LinkedIn

and one of your buddies, and I was thinking

about Clippy, and I was trying to remember, like,

what it actually did, and I don't remember it

actually doing anything.

You brought joy.

He bounced on your screen.

He brought joy to the wonderful

world of the Microsoft office suite.

Back in the day when things were way.

He just wanted to know, do you want a tour?

Do you want a tour?

Do you know where the save button is?

You want to start a new document?

Need a word count?

I don't even know.

We'd probably have to pull up a.

We could probably ask ChatGPT. Yeah.

Or we could just YouTube it or get

on X, formerly Twitter. If anyone's been listening along

the last couple seasons, you know,

I've had this, like, internal struggle.

I feel conflicted that I use AI at work and I don't

knee jerk use it at home or, like, in my personal life.

And I've really been challenging myself to

solicit the responses that I would normally

Google, just on my app.

Yeah, I'm not using it nearly to

the extent, like, at work, I'm uploading

all these transcripts and things like that.

But what I would normally Google, like, the other day,

I made burgers and I wanted to know, like, the

ideal, like, what is your pro tip for the best?

And it kind of, it, like, put

me in my place a little bit.

It was like, well, don't, don't pack.

Don't pack them too much because

that'll result in a tough burger. Interesting.

So I was like, chase would be so proud of me

right now because, you know, I'm always kind of like, I'm

just gonna Google it while they go on Pinterest.

You know what's interesting?

I think I've even, like, pressed harder into this,

and it's not because I want to go on.

I don't want to go on Google and

search and find, like, the best thing.

It's, hey, I need, like, a 30-second, like, a ten-second.

Tell me the answer right now.

Like, and I have a pretty

educated guess at whatever I'm asking.

And if it's not good enough, then I go

to Google, because half the time I go to

Google and I'm like, how do I do this?

Da da da da. Reddit.

And I'm looking for Reddit threads.

Well, so thinking, like, through

the run of show today, right.

We committed to talk about security related things.

Now you are doing more, maybe more personally, right?

Like, after doing all this research, are you more

worried, less worried, like, that somebody's gonna steal your

voice or they're gonna hack into your stuff?

Or do you feel like, hey,

there's actually a decent amount of

security minded tools, ventures, startups, right?

That, like, kind of level out the world?

Let's put it this way: when I started doing

research, the tools that hackers are using

outweigh the ones that exist to protect you.

That scares me.

We also have, let's use this podcast, for example.

It's not like we have, like, filtered our voice and

our images through some of these tools, or at

least ones that have been designed to essentially spit out

a file that can't be compromised.

So let's just call the last

23 episodes, you know, material that's

a vulnerability in the system.

Um, I am encouraged to see that there are starting

to be more, um, tools and awareness. There

are, unfortunately, feeds of people with not-so-great stories, and

we're learning from the people who have

fallen victim to these things.

But the other common theme I noticed

was lots of universities are picking this

up as, like, thesis study projects.

They're kind of like these collaborative efforts

from one group of students that gets

passed to another group of students.

So I think where we should start is, let's talk

about some of the most common areas where we're seeing

threats and where people are becoming victims to this stuff.

So, I mean, the first thing that

comes to mind is voice and image.

And that could be of you, it could be of someone

you know, or it could be of art or some, like,

a graphic that you've created, and infringement upon those things.

Sure.

Did I miss anything? That, to me, is so top of mind.

What else comes to mind for you?

Um, those are the.

Probably the best ones with the

most vulnerabilities at this point.

Because historically, it's been very

difficult to replicate somebody's likeness, whether

it's voice or image or tone. Right.

Or maybe even to an extent, the way they

write, maybe if you go back very far, but

I think you've got those covered for sure. Yeah.

So one example, a real-life example: if you do

a quick Google search, you'll find a Reddit thread

of someone that said, my voice was stolen.

It was cloned in AI.

It's now being used in someone

else's channel for their videos.

And there are tons of messages in this thread. Right.

And among some of the advice is,

you know, lawyer up, cease and desist. Right.

If it's a YouTube channel, one of the users

is recommending that they make the claim through YouTube,

which is probably what you need to do.

Let's apply that across platforms.

So I guess if you see something on

Instagram or Facebook that is using your.

Your image, your likeness without your consent.

Uh, that's probably the best route.

Now that starts to become very cumbersome.

Think about that as the generator of the content.

You're basically on a hunt and peck mission to

go find all of these where the files live.

And now you're just like, individually reporting

this out and you're dealing with.

I mean, you might as well also invest in

a ticketing system while you're at it, because you're

going to need one, or somebody to help manage your inbox.

Um, but, you know, I mean, that's.

That's scary, right?

Like, how do we, how does someone, especially if they're

making a living, like I think I've mentioned, I've got

a really good friend that's a voice actor.

He makes a living.

He's been in large commercials, but he also

does radio and voiceover and things like that.

I mean, that's putting, you know, food on the table.

Yeah.

The other one, the other article that we

pulled up was this ABC News article about the rise

of scammers using AI to mimic voices.

I think, honestly, some of

this stuff isn't relatively new.

What is relatively new is how quickly you can do it.

Right.

I can replicate my voice.

I can replicate your voice, and I can do

all of that in less than 2 hours. Right.

So I think what we will find is that

the volume of these things expands very quickly.

The publicly.

I don't know the best way to say this, right.

But, like, if somebody mimics your voice and then creates

a YouTube video, I feel like that's a public, like,

I'm trying to profit off of Justin Timberlake's voice.

Right.

Why'd you pick him?

It's the first thing that came to my mind. I don't know.

I don't remember.

But if, but if I'm trying to.

I love Justin Timberlake. Oh, yeah.

I mean, he's got good music. He's cool.

But if I'm trying to, like, attack an

individual to make use of their voice, I'm

probably trying to take money from them.

Unless you're doing something very different.

I think a majority of these are money related.

They're trying to get money.

And so we're going to face this thing where, well,

most people don't fall victim to that, but you've got

quite a few older folks that might be in trouble.

But I actually just thought about this.

If you're trying to mimic a voice, you

have to have a recording of them.

So unless you're like, YouTube

stars like us, you laughed.

You're supposed to be like, oh, yeah.

Unless you have, like, hours of audio, right.

You're probably not really that vulnerable. It's.

Yeah, I mean, it's probably speaking more

into people who are creating content.

Lots of the generations that have grown up

with devices in their hands, and they've been

filming themselves for years, and they're conditioned to

put out content in the form of video.

I mean, I think that's really where, in that ABC article

that you cited, the 15-year-old daughter, I think this

is the mom speaking, saying, hey, she called me.

It sounded very realistic.

It sounded like she was kidnapped.

They cited that, you know, she had a.

I think it was a public instagram, you

know, that they could go find it.

A lot of the younger folks that

I follow that are friends, family friends.

Their profiles are completely accessible.

I don't know.

I mean, that's one of those things where maybe we need.

Certainly there's pros and cons, but, like, that's

how it takes a very small clip.

I mean, it could take a 10- or 15-second

clip, and now it knows how to basically. Yeah.

So, I mean, you say that people who aren't

creating content aren't vulnerable, but they could just

be casually posting, thinking, like, those are social.

Those are where people hang out. Yeah, right.

On TikTok and Instagram.

And that especially through the

pandemic, can you imagine?

Do you know how much content was generated

in those couple of years where people were

at home and we weren't thinking about.

When I put this out publicly, I'm trying to

connect with someone across the world, around the world.

Now, here we are in 2024.

That can be used by a hacker.

Well, you're kind of saying this, but

maybe just a slightly different take.

I think my mom is awesome.

She's amazing.

Does she listen to the show?

I think she's listened to a couple.

Mom, I love you.

I feel like you're supposed to do

that when you talk about your mom. I actually.

And your wife.

My wife is beautiful and wonderful and all the things,

and I think they wouldn't fall victim to this.

But what I actually just thought of is

that, well, we have hours and hours of

our voices on the Internet, right?

The attack vector isn't them

using our voice on ourselves. Right.

It's using our voice on our relatives. Right?

So I can envision somebody calling my mom using my

voice, saying, hey, mom, can you actually just send me

a $100 gift card to this Venmo address?

I don't know, whatever.

Like, can you send $100 to this Venmo address?

Yeah, I'm just trying to pay. Da da da.

I'll pay you back when I get home.

Like, that would be super effective.

The problem is, I never asked my

mom to send anybody Venmo money.

In fact, she may not know what Venmo is. True.

Yeah, I don't know the details behind.

Like, was this an unknown phone number that the mom got?

And before checking on her daughter, calling her daughter's

phone number? Like, hey, mom.

Or, my phone broke.

I'm calling from a weird number.

Did you ever have a code word when you were growing up?

We used to walk to and from school, and my mom

always said, hey, if someone tries to pick you up, that's

not myself or your dad, then here's the code word.

Oh, I don't know if I would have ever.

I was never in the situation, thankfully.

But, like, what was the code word?

It was Orange Julius.

Orange Julius, do you remember that place? They made, like, smoothies.

No. No. Maybe.

Might be a Pacific Northwest thing.

I was a smoothie king guy. Okay.

I'm getting a bunch of no's.

Okay, well, that was our word.

It was very unique. Right? So.

But think about it.

Maybe we need to start setting up

code words again with our families. Yeah. Hey, it's.

It's not dissimilar from when any of

our executives travel out of town.

Typically, we'll make an announcement in Slack.

And during our all hands, you know, say, hey,

if you get a text message from Chase or

Scott that says, go buy ten gift cards and

send them to this address, you know, don't.

It's totally scary. We don't need any. Yeah.

So it might be as simple as just starting by,

you know, setting up a code word or something.

We can't use Orange Julius anymore because now it's on.

It's out.

Is that, like, your Bank of America code? Because I'm.

No.

I mean, I would never. Thankfully, it's.

No, it's not. I would never.

But somebody else might.

No, I mean, so that's.

That's not even, like, a technology tool.

That's probably just something that we could,

like, talk about, of, I don't know,

figure out those verbal cues, right?

Make sure you check the phone number.

Like, are they calling from, you know, are they.

See, I have heard a story.

Like, literally, the daughter

was upstairs or something.

Like, go check their room, make sure they're cool.

But you're right for more, like, long distance setups.

Or if they're, like your mom, like, what is Venmo?

Like, if you don't recognize the tool or if it's a

very obscure request, start by calling their cell phone, checking their

location, and then of course elevating it to law enforcement if

you do think there is a real threat. Oh, for sure.

Well, so thinking about, I mean we committed to

like calling out specific technologies that you can use.

Yep.

So if you are trying to do something

right, to prevent audio from being copied. Right.

There are some projects out there.

The university, sorry, Washington University, has something called the

AntiFake project, and I really like this

diagram that they have on the website.

I knew you were going to like that diagram.

I love diagrams.

Screens, Chase.

But it tells you exactly what it does, right.

And it's like, oh yeah, I get it now.

Now we'll link it out in the show notes. Totally.

Well basically it like takes your voice,

bifurcates it into multiple things, right.

And then puts it back together.

And I.

This is connected to a GitHub project that you can use.

Imagine if you are our target audience, right,

as a business owner or you know, just

more so in the business rather than the

tech side of things, this might be difficult.

What I imagine will actually end up happening is technology like

this will be licensed or created by the YouTubes of the

world, and when you upload your audio or your content, like,

it will automatically apply this. Probably a couple years away, if

I had to take a guess, right?

But I imagine that this type of technology will be

needed and I don't know if it'll be regulated to

the extent where it's required, but it'll just be good

to have, just like how HTTPS has become pretty, like

everybody's got it and if you don't, then something's wrong.

If I don't see that S in the URL, I'm out.

Yeah, I thought you would get kind of

a kick out of the diagrams they put in here, but

it also is just interesting, you know, hey, you upload

the file, we will go ahead and apply some processing.

That won't be something that you can hear, but

once the hacker tries to basically rip your audio

and use it for some kind of scamming device,

it won't sound like you at all.

Yeah, so that's neat.
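As a rough sketch of the idea (not AntiFake's actual algorithm, which optimizes its perturbation adversarially against real voice-cloning models), the trick is to add a change to the waveform that is far too small to hear but still shifts the numbers a cloning model ingests. The `protect_audio` function and its parameters below are invented purely for illustration:

```python
import numpy as np

def protect_audio(samples: np.ndarray, epsilon: float = 0.002,
                  seed: int = 42) -> np.ndarray:
    """Add a tiny perturbation to a waveform (toy illustration).

    Real protection tools compute the perturbation against actual
    voice-cloning models; here we just add low-level noise to show
    the shape of the idea: inaudible change, different numbers.
    """
    rng = np.random.default_rng(seed)
    perturbation = epsilon * rng.standard_normal(samples.shape)
    # Keep the result a valid waveform in [-1, 1].
    return np.clip(samples + perturbation, -1.0, 1.0)

# One second of a stand-in "voice": a 440 Hz tone at 16 kHz.
t = np.linspace(0, 1, 16000, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 440 * t)
protected = protect_audio(voice)

# The change is a fraction of a percent of full scale -- inaudible.
print(float(np.max(np.abs(protected - voice))))
```

The research couples this with a model of what cloning systems listen for, which is why the processed file "won't sound like you at all" once a scammer runs it through a cloner.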

Some of the drawbacks of the tool,

at least in its current state.

So there's great research here, but it can't support

really long files, and it can't protect,

you know, I think it's not there yet.

That was at least as far as I

got in the article. Like, 45-minute-long

episodes of content that show up on YouTube?

Probably not. Yeah.

So to your point, I think it's a really great call out.

That's a prediction,

I think, here: that, you know, platforms like

YouTube are probably looking to apply some level of this

to help combat all of this crazy AI scam stuff

that's going on, or even just copyright infringement.

Right. So if it's.

It's not being used from a.

Maybe it's not nefarious, but someone likes what

you have to say and they're putting together

some compilation and, you know, there was already

some level of that before AI. Yeah.

What about, you had something about the University

of Chicago? I'm actually headed there next week to

hang out with some of our partners.

Oh, the image one, yeah.

So was it Glaze?

Glaze, yeah, yeah. Very similar.

So in the article, I found it, it was

positioned as an existing tool, but when I actually

Googled it, it came up as a Chicago.edu study.

And I think it, you know, basically replaces

the voice use case and applies it to imagery, artwork.

This one really focuses on artwork

and how it will essentially apply.

Similar to the voice tool, it will apply these

features or attributes to the image, the original image

that are not visible to the naked eye.

And then when a tool tries, or someone

tries to copyright-infringe or suck it into

an AI tool, it'll essentially kick it out.

Or I don't actually know if it bars you

from using it or if it alters the image

in the way that the voiceover tool does. You know what?

I was just thinking that this actually, these technologies

won't, for better or worse, right, wrong and indifferent,

won't be employed to protect the individual.

It's actually going to be

employed to protect the content.

There was an article I saw this week, I

think it may or may not be true. Who knows?

But that a lot of the GPT-4

training material was manually transcribed content off of YouTube.

And if you go back even further, Reddit basically killed

their API and now you have to pay for it

if you want to scrape content legally anyway.

So I think I can see

these technologies, I think of Spotify.

A lot of these technologies will just be

employed to prevent people not from scamming you,

but from stealing the data because the data

in mass is very valuable, right? Absolutely.

I mean, I guess I'd take that over

them not doing it at all, but they're

ultimately protecting their assets beyond worrying about Mel's

voice being transcribed or replicated to then try

to steal money from a bank account. Sure.

So we've talked a lot about a couple of

the use cases and I think these will hit

with or resonate with people on a personal level.

You also mentioned the business owner.

So what are we doing to

create some protections around Venn?

I know we've worked with an attorney on some

light, you know, what should we know about our

use of OpenAI. Do those policies kind of

cover things like our technology and our IP?

Have we gotten to that, or.

Yeah, I mean, you can have an infinite

number of legal resources and paper, right.

But generally that hacker isn't going

to care a whole lot.

That said, having privacy policies, DPAs, right.

Data processing agreements that communicate the types of

technology that you're using are good in general. Right.

A lot of people have these things if they're

in the technology world, but it's also just good

business practice, right, to document what you're doing.

So people are aware. Obviously, within the

agreements that we're putting up on our website, right,

and having people sign,

we're not necessarily giving away the secret sauce,

but, you know, legal agreements are good.

It communicates what we're doing, how we're doing it,

and maybe to an extent, some whys, but those

are probably the ones that I would focus on

the most are the terms of service, the master

subscription agreement, the data processing agreements.

There are some laws that are coming out in Europe

that bar you from using AI to some extent, which

might be problematic if you're building your business around AI.

You might now be excluding maybe

an entire region from being customers.

Anyway, everything's so new that I'm certain that

a lot of folks are going to run

into trouble in some form or fashion.

Yeah, I mean, I've heard a lot of

business owners talking about how they would like

to use it or coexist or apply it

to, especially if they're a services-based business.

So we've talked a lot about marketing agencies and

what they need to be talking about or how

they need to be thinking about AI.

And we've talked about how applying AI might

increase the speed at which you're able to

deliver something and, you know, therefore your billable

rates, you've basically gotten a lot more efficient

in the work you're able to deliver. Yeah.

But also, maybe be thinking about, if you are

looking at and evaluating services like this, maybe cycle that

question into your evaluation of the service provider:

are you using any AI to, you know, do

your services? And that doesn't necessarily qualify them out.

I don't think that's necessarily a bad question.

We've, I've asked that before.

I've heard that asked of other software vendors.

I think it kind of helps you

understand how they're using your information.

It kind of keeps them honest too.

Yeah, well, another place where you can protect yourself is

when you're writing code, have some of these products that

are coming to the market in the loop.

So there's one that one of my buddies is a part of.

It's called Snyk.

S-N-Y-K, snyk.io.

But the general idea is in product development or

when you're coding something, if you use today's best

practices, you've got some sort of code repository where

I can work on it and you can work

on it and everybody else can work on it.

We can all make changes to the code.

It's like Word documents when we do versions.

Version control. It's a lot like that.

Um, but this review process, um,

in GitHub, in Azure DevOps, right?

They all have some sort of like main version and

Mel has proposed to insert this piece of code.

Well that um, then can be discussed amongst everybody.

Like hey, maybe you should, did you consider this?

Did you know, da da da.

What some folks are doing and what Snyk is

doing is they're inserting themselves into that process, reviewing

the code that you want to add in and

then analyzing that against all of the known vulnerabilities,

the issues that you might run into.

They're basically assessing that one piece of code and then

commenting and saying, hey, if you deploy this da da

da, you're going to run into these problems.

Or hey, there's a known vulnerability

with this package, don't do that.
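The dependency-checking half of that can be sketched in a few lines: compare the packages you depend on against a database of versions with known issues. Everything below (package names, versions, advisories) is made up for illustration; real scanners like Snyk pull from continuously updated vulnerability databases and analyze the code itself as well.

```python
# Fictional advisory database: package -> versions with a known issue.
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
    "fakecrypto": {"2.3.0"},
}

def scan(requirements):
    """Return a warning for each pinned dependency with a known issue."""
    findings = []
    for package, version in requirements.items():
        if version in ADVISORIES.get(package, set()):
            findings.append(
                f"{package}=={version} has a known vulnerability; upgrade."
            )
    return findings

pinned = {"examplelib": "1.0.1", "safetool": "0.9.2"}
for warning in scan(pinned):
    print(warning)  # flags examplelib==1.0.1
```

The value of the real tools is exactly that database: it is updated daily, which is what no individual developer can keep track of on their own.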

Um, I think these tools are going to be

uh, wildly successful because there's just so much out

there that you can't keep track of, right?

Like every day there's a

new jailbreak, something happens.

I've got an email from AT&T.

Did we talk about this last episode?

I got that email from AT&T.

Yeah, I'm still waiting for my $2 check.

Um, but these, I do believe these will become

maybe not like to protect the individual, but something

you'll want to use to protect your development process.

Now if you don't have anything where you're

coding, right, you could use it to do

the same thing with your legal documents.

You're probably not going to find a large language

model that commits to being a lawyer, but at

the very least having another set of proverbial eyes

on it isn't going to hurt.

But, yeah.

So Snyk, check it out.

That's one way that you can do that.

They also just came out with something where they're

using AI to analyze the code above and beyond

just, well, these are known security vulnerabilities.

The AI is actually digesting the code, running the

code and then saying, hey, this doesn't work.

Um, don't, like, don't deploy this

because it, it doesn't work.

I don't know what you did, right.

But for us, we can't get it to run correctly.

They've got partnerships with Google, with Gemini.

Um, anyway, long story short, like, these

are interesting times because a lot of

these platforms are adding these in.

And then what I'm actually more curious about is

when in these scenarios, you are the one coding. Right.

But what happens when the story flips?

And now Snyk is the one

coding and suggesting changes, right.

And now you're the one reviewing.

Like, I think that's the path.

Like, everybody's worried that

AI is going to take over, right?

Yeah, I think it's actually going to be, well,

this copilot, Clippy, you call it, right?

They're going to review what you did and then at

some point you're going to start reviewing what they did.

And there's not going to be like this

giant, well, everybody's out of a job.

I agree wholeheartedly.

We're shifting from managing our own work and

people to us managing the AI. And additionally, like,

there's always going to need to be that oversight,

especially because in the last episode, too, we talked

about really good AI making for lazy humans.

That was one hot take.

And I do tend to agree that, you know, we

can risk becoming complacent if we don't, if we're not

careful to continue to spot those vulnerabilities, if you will.

But I.

So let's try to summarize this in a bow. Yeah.

So we know there are risks.

We know that everyone is vulnerable.

Especially, probably, the more you are active on

social media, or the more content, maybe if you're a

content generator, or if you are active, like, if

you own a business and you've got videos out there

on YouTube, it's already out there.

It's gone.

You can't run it through a tool.

But to be aware of what that means and

that these tools are out there and to start

understanding maybe how you can better use or wrap

protection around them, I don't know when these tools

are going to be available for us.

Like if, if all of a

sudden we start running stuff through.

I love your idea around the YouTube, you know,

already kind of like it's probably on the roadmap.

Did you look that up somewhere?

I know some guys.

I wouldn't be surprised.

So, I mean, those things are happening.

Something to be aware of.

It's not going to slow my role on

creating content, but we know it's a risk.

And there's.

For every tool that can be used for good,

you've got the bad actors. And I don't know. Yeah.

I mean, the best thing that you can

do right now, if you're a business owner,

I'll just be very direct, is, A, be

ultra-aware, but, B, utilize multi-factor authentication.

Like, all of these are single-vector

attacks. Replicating the voice,

That's just one. Right.

If you implement multi-factor authentication, what that means

is that there are two forms of verification.

Well, Mel's voice and her passcode, Orange Julius.

Right.

And those two combined make it

much less likely that somebody

is going to take your money or whatever.

That is great practical advice. Yeah.

So don't ignore the warning when it

says set up two-factor authentication. You got to do it.

You got to do it.

Get the authentication app on your phone, give them

your phone number, give them all the things they

want, because ultimately it is for your own protection

and the protection of your business.
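For the curious, the rotating six-digit codes an authenticator app shows are usually TOTP (RFC 6238): an HMAC of the current 30-second time window, keyed by a secret shared when you scan the QR code. A minimal standard-library sketch of the SHA-1 variant, checked against the RFC's published test vector:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    now = time.time() if for_time is None else for_time
    counter = int(now // step)  # which 30-second window we're in
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at t=59s yields the 8-digit code 94287082.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # prints 94287082
```

Because the code depends on both the shared secret and the clock, a scammer who clones your voice still doesn't have the second factor.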

I mean, let's look at, we talked about

AT&T last week. Like, one

of the biggest companies on earth, 70

million customers' pieces of data are gone.

Like, if the big boys can't do

it, then maybe the small guys can't either. I don't know.

If everybody's like AT&T, if all their data is

out on the Internet, it might already be too late.

Right.

But it also goes to show that, well, they're

at and t, that doesn't mean they're not vulnerable.

Right?

Same thing for Venn Technology. Right? We're small.

Doesn't mean we're not vulnerable.

Everybody's vulnerable in some sense or fashion.

But I wanted to get your hot take.

I want a hot take real quick.

So I think this was yesterday.

Elon said something to the tune of, okay, actually,

I'm going to read it word for word.

My guess is we'll have AI smarter than

any one human around the end of next year. Hot take.

Ready? Go.

Define smarter.

Um, I don't know if I can define smarter.

Somebody might send me an email about,

you know, you didn't think about this.

What I think is likely to be

true is that it will be smarter.

But just because you're smart doesn't mean that you

can do a whole lot with those smarts, right?

Like, I'm sure ChatGPT can solve a crazy quantum deal.

Oh, I mean, there's several headlines right now where

they are testing vaccines or medical combinations

that now the FDA is about to approve

that were generated by AI.

Well, I don't know anything about that, so I would

almost say that that already exists to some extent.

It's a question of how it will be implemented.

I think we also said this in the last episode

that Elon's bots, they implemented OpenAI, and now

you can, like, talk to the bot and the bot

will do something and interpret the question on the fly.

I don't think we'll have robots that are smart

enough to do, you know, like super complex things

that are smarter or better than humans in the

next year, but maybe in like five years.

Yeah, your point around

implementation is really interesting.

I don't know.

If we're still here in five years,

we'll dig this one out of the archives.

What is orange Julius and what does

that have to do with AI?

Yeah, something about smoothie king, who knows?

But yeah.

Well, we will drop the resources in the episode

show notes. Would love to know your thoughts.

What did we miss?

We merely scratched the surface on a few

use cases, but hopefully encouraged you to think about

some things and how you might respond if you

are, um, in a situation that feels like there

could be an AI attack behind it. Yeah.

What's your, what's your new code word for next week?

I can't tell anyone.

Oh, man.

What about, like, Claude?

Claude is cool. No, Clippy.

Something about Clippy.

No, I know.

The whole point of code word is

only like, your inner circle knows. Oh, you're.

Wait, I can't believe you never had a code word.

Am I not?

You're saying that I'm not in the inner circle, though?

You and the rest of the listeners of the podcast?

Probably not.

We're gonna unpack that on the next episode.

I'm not committing to that.

All right, thank you, everyone, for joining us.

We would love to hear your take.

Email us at thejunction@venntechnology.com.

In the meantime, y'all know what to do. Keep it.

Creators and Guests

Chase Friedman
Host
I'm obsessed with all things automation & AI

Mel Bell
Host
Marketing is my super power