#6: AI Manners: The Etiquette of Artificial Intelligence Explained

Welcome back to another episode of The Junction.

We're going to be talking about AI manners today.

So, the etiquette of artificial intelligence... sorry, excuse me, I just had to cough.

Well, for starters, etiquette: cover your mouth when you cough.

Yes. And don't interrupt when someone else is speaking.

I'm covering my mouth right now.

All right, I think we're adequately teed up.

Let's jump into it.

So there's a lot of "how do I use it?" out there. But some people aren't really thinking about the best use, especially in the context of work. Right. It's one thing to go out on ChatGPT and type in "make me a meal plan." But if you are in a role where you're interacting with or touching customer data, right? You don't want to be pumping an export from your CRM database into a ChatGPT window.

Right, right. I mean, I just don't see a ton of legitimate use cases where you're typing or pasting in people's Social Security numbers right now. You might do it by accident, but that's unintentional.

Anything that can be used for good can be used for bad. Right. So you start to wonder: what are those areas where I should be careful? And it's probably mostly aligned with a lot of the current-day conversation, right, around what Europe's doing as far as privacy rules and things like that. You know, if those are things you would be concerned about without AI involved, then AI doesn't necessarily change the way you should handle them.

That's a great point.

Yeah.

We don't go out handing out all of the information about our customers, so I probably wouldn't do that with AI.

One of the things that comes to mind for me is, well, there are two: bias, and then hallucination, right. So I think we should talk about those, because that's where I think some of these companies are coming out with AI ethics or principles, if you will, the "here's our AI use policy" at the company. Those are some of the things that are top of mind for me.

You first have to think about, of course, what kind of data do you have? Is it something you should even be worried about? And I think for the most part, if you're in business and you're not worried about liability, then maybe you're in the wrong business, or you need a chief compliance officer. Right?

But as far as privacy and policies and things you need to be thinking about: again, take out the AI piece and look at the data. Determine what you should be concerned about. Maybe this even goes for your CRM, right? If you need to cover up sensitive info that your internal people shouldn't be seeing, then you should really take a look and start generating policies that ensure your staff are aware of what they should and should not do.

You see a lot of this in the CRM world and in the ERP space, right. People want to keep track of Social Security numbers, credit card numbers, all these things that effectively identify somebody in some form or fashion. Those are the things you should probably include in those policies.

Sure.

So that's utilizing your own database, your own data. But again, let's say I go out to ChatGPT and type in some prompts, and then I'm going to take the answer it gives me and use it for doing my day job, right? Aren't there some guidelines we should be exercising there? Hey, always have some editorial review. If you type up an email and shoot it out to a customer without doing any kind of review, right, you could be putting your company in a bad spot.

Oh, for sure. You do, like you mentioned, have to worry about hallucinations. You do have to worry when you're relying on it to provide you factual statements. Right. I'm thinking about that lawyer that pulled up case law and was citing cases that didn't exist, or the professor down south at Texas A&M, right. He failed his entire class because he relied on ChatGPT saying that all of the students' work was AI-generated. When you put that level of discernment on the AI, you're naturally going to run into issues.

So I would first start off with the things that you're typing into these large language models, or that you're expecting them to do. You should almost treat it, and I think I said this on a previous episode, like an intern: they're going to do some pretty weird, stupid, potentially unethical things, but not because they're designed to do it like that. Right.

The hallucinations are simply cases where it's just predicting what it thinks is the next best thing to say. Well, I can try to predict the lottery numbers, but I'm still going to be wrong. In terms of what people are typing in and things you should be worried about, it's more about copying and pasting data in, that's one area you should be worrying about, and then relying on what it spits out.

It can hallucinate answers to math questions.

So if you can land somewhere in between, right, and not wholly rely on it for factual statements, but potentially just have it write an email that you revise multiple times, that's going to be a sweet spot as far as ethical concerns go.

From a business standpoint, do some of the tools, let's say the paid versions of the tools, help mitigate some of the risk, at least as it relates to OpenAI?

GPT-3.5 does tend to hallucinate more than GPT-4. And these are all things that they've said in their blogs and their press releases, and things that I've seen on my end: when I type the same exact question into different models, they do respond differently.

And on the back end, if you're doing any kind of developer work, you do have the ability to try to limit the hallucinations. There are actually methods in the way that you ask questions programmatically that reduce the hallucination rate.
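To make "programmatically" concrete, here is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name and prompt, of two common levers: turning the temperature down and giving the model explicit permission to say it doesn't know. Neither guarantees accuracy; they just bias the model toward declining rather than inventing.

```python
# Minimal sketch: reducing (not eliminating) hallucinations via the API.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in
# the OPENAI_API_KEY environment variable; model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",   # the better models tend to hallucinate less
    temperature=0,   # less randomness, fewer creative guesses
    messages=[
        {
            "role": "system",
            # Explicitly give the model an out instead of forcing an answer.
            "content": (
                "Answer only from the provided context. "
                "If you are not sure, reply exactly: I don't know."
            ),
        },
        {"role": "user", "content": "What role does Bjorn have at Ven Technology?"},
    ],
)

print(response.choices[0].message.content)
```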

And let's remind those listening: hallucination, what's your quick one-sentence definition?

They don't know. Well, I'm going to go back to the lottery example. Like, "Mel, the numbers for Powerball tomorrow are these five numbers," or six numbers or whatever it is.

Sure.

Well, nobody can predict what the numbers are going to be. So that would be called a hallucination. Right. It's telling you, "The five numbers you're going to win with tomorrow are these," because it's designed to give you an answer. It wants to give you an answer with high confidence. Unless it says, "Yeah, well, I'm a bot, I can't answer that."

I've seen that a few times.

The hallucination is simply mimicking what we do in real life. And we talked about this on another episode, right. It's confidently answering your question, predicting what it thinks the answer is, not necessarily knowing that it's wrong. It's just thinking, well, the first number is eleven. Right. It has no idea. It's just predicting, based on the things it has learned from.

Hey, well, in a couple of places here are the Powerball results. I'm sure it picked that up while gathering all the data on the internet. It sees that the number eleven pops up in some of the results. Well, I'm going to use number eleven. "Mel, the first ball is eleven." Right? Right. And it confidently says that. But if you took the time to look at that, and you know the context of the question you're asking, you would know that it's wrong.

So as with all of these things, there's a spectrum: there's harmless, and there's very harmful. One harmless example of hallucination that I've seen in the last week using OpenAI: I've been pumping in some transcripts, trying to get some language to put on the front end of some recruiting emails.

And if anyone knows anything about Ven Technology, we have a beloved Yeti mascot. If you don't know, go out to our website, ventechnology.com. He's adorable. So Bjorn is our mascot.

At some point in these various transcripts and blogs I've been uploading, again trying to put together some language based on our existing data, in three different spots within the same session, I'm working in the OpenAI Playground and asking it to summarize this thing, OpenAI said that Bjorn was a different person or thing. So in the first version, Bjorn was the chief executive officer of Ven Technology.

Nice. Okay.

In the second version, Bjorn was the office dog. The office pet. Okay. Yeah. And in the third version, I think he may have actually been another team member, a contributor. But it was just interesting, and I never corrected it. I wasn't like, "Hey, Bjorn's not a dog, right? Bjorn's our mascot." Actually, I do think by the fourth iteration I did. It was slightly entertaining just to see what it would come back with each time.

But why, from version to version? Even though I didn't ask it to change Bjorn, I was just asking it to, hey, make it more conversational or witty. Why did it all of a sudden decide that Bjorn was no longer the CEO and was the office dog instead? That's a harmless example of what it can do.

So then you start to think about some of the more personally identifiable information, or customer information, things that could actually cost the business lost revenue. That's where my head goes.

Yeah, you go back to this intern idea, right? The intern just stepped in today, and he or she doesn't know who Bjorn is. And if you said, "I need a statement right now, give it to me," well, that person's going to say, "Well, I don't know Bjorn. Sounds important, let's go with CEO. You are forcing me to write something right now."

I think where it tries to iterate, or just gives you a different answer every time, depends on the prompt that you're giving it. And in some cases, I think it is playfully just switching out what it doesn't know with something else. And that goes back to the idea of this one-shot mentality, where within one prompt you ask it a question and expect it to return the correct answer. What tends to work a whole lot better is an iterative approach, where you start to weed out some of the, I'll call them, ethical concerns. Right.

You can ask, "Hey, where did you get that information from?" Bard does this pretty well; it actually will refer to the website where it found the information. ChatGPT doesn't do that yet, though there are some plugins where it can, quote unquote, connect to the internet.
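A rough sketch of that iterative approach, assuming the OpenAI Python SDK (the question, wording, and model name are made up for illustration): keep the running message list and ask the follow-up in the same conversation. This won't make the model browse the web; it just forces it to justify, or walk back, its own previous answer.

```python
# Sketch: iterative prompting instead of one-shot. Keep the conversation
# history and push back on the first answer. Assumes the OpenAI Python SDK
# and OPENAI_API_KEY; the question and model name are illustrative.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "When was Ven Technology founded?"}]

first = client.chat.completions.create(model="gpt-4", messages=messages)
answer = first.choices[0].message.content

# Feed the model's own answer back and ask it to justify itself.
messages.append({"role": "assistant", "content": answer})
messages.append({
    "role": "user",
    "content": "Where did you get that information? "
               "If you cannot point to a source, say so plainly.",
})

second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```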

But you do have to be careful, right? If you don't know the answer, and you're expecting the AI to give you the answer, and you don't have any way to check it, well, you've got a problem, right? Right. So you probably shouldn't ask it, you know, theoretical physics questions, right, or how quantum computing works. You should ask it about things that you already know, or that you're already an expert in, so you can be the one that verifies them, or maybe you have a team that can verify them.

It's like the example of the CFO that used it to check some GAAP compliance or something, right. That's his domain. Right. But you start thinking about how many of those rules and compliance requirements you have to keep up with in a domain like that. It's kind of like a gut check, you know? I could go find it. Yeah.

Well, I mean, here's a great example. As a leader in our business, Mel, if you won the lottery, we'd be up poop creek. And what I would first do is ask ChatGPT, "What is a great marketing plan for an ERP integration company?" And it's going to spit something out, right? And I'm not the best marketer, but I'm going to take it at face value and be like, sounds like you've done this before, maybe we should do that.

You have a backup plan for me?

Just waiting for when this podcast blows up, because you're so awesome. But then we'll both go.

But those are some of the things that you probably should be thinking about from an ethical perspective. I'm asking this new tool, I'm asking this intern, to provide me answers that I'm potentially going to make really big decisions off of. That's literally like the intern walking in the door and me saying, "Hey, here's the reins to my business. What should we do?" So if you can avoid that, that would probably be good. You don't want to run your business into the ground.

Let's move on to headlines. So: "Samsung bans staff's AI use after spotting ChatGPT data leak." Let's talk about that. High-level summary, what happened?

Yeah, these guys were using it, and I mean, this is probably not just Samsung, right? People across the globe are utilizing ChatGPT to paste in data, to collect insights, to ask it questions. And it's super easy just to paste something in. Well, you could paste in a CSV file, right? Grab some data out of Salesforce or NetSuite or Intacct or, you know, whatever platform, and boom, you don't even know it, but you just pasted some Social Security numbers onto some external server or database somewhere. And boom, it's out there and you can't delete it. It's already gone.

The moment that you press Enter, there's no deleting, right. So you have to be careful about these things, because this is what ended up happening, right? Samsung staff went in and pasted something in that was not good. I think I've seen a couple of cases where people were pasting in proprietary code, and now that's been transferred over. It's an instantaneous "hey, here's the keys to the kingdom," right? And the moment that you press Enter, it's over. So they ended up banning ChatGPT, and now supposedly Samsung staff can't access it anymore.
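One practical guardrail the Samsung story points at, offered purely as a sketch of the idea and not anything Samsung or OpenAI actually uses: scan text for obvious identifiers before it ever leaves your machine. The two patterns below (US-style SSNs and 13-to-16-digit card numbers) are illustrative, not exhaustive; real redaction calls for a proper data-loss-prevention tool.

```python
# Illustrative sketch: mask obvious PII before text goes to any LLM.
# These two regexes only catch common US SSN and card-number shapes;
# real redaction needs a dedicated DLP tool, not two patterns.
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")  # 13-16 digits, optional separators

def scrub(text: str) -> str:
    """Replace SSN- and card-shaped strings with placeholders."""
    text = SSN_RE.sub("[SSN REDACTED]", text)
    text = CARD_RE.sub("[CARD REDACTED]", text)
    return text

prompt = "Customer 123-45-6789 paid with 4111 1111 1111 1111 yesterday."
print(scrub(prompt))
# -> Customer [SSN REDACTED] paid with [CARD REDACTED] yesterday.
```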

Do you agree, or no?

We have spent some time with our lawyers to determine, based on the policy that's posted all over their website, what are they doing with the data? And we don't know exactly what they're doing, but they do specifically say that they will take that data and train their models off of it.

So say I paste in some Social Security numbers. Well, OpenAI hopefully has some safeguards, right, to strip that out at some point. But now, let's remove any safety measures that they may have. Now the large language model is training off of Chase and Mel's Social Security numbers. Okay? So the next version comes out, and Randall says, "Hey, what is Chase's Social Security number?" Well, it knows it, right? And it types it out and puts it in there. And now anybody that asks, "Hey, do you have any Social Security numbers?" It's going to type in Mel and Chase's Social Security numbers, because it trained off of that. It knows that information.

Thankfully, OpenAI has a number of safeguards and rules. And you've probably seen these answers that pop up. It says, "I'm sorry, but as an AI, I don't have access to..." Well, for the record, I don't know about you, but I haven't been asking it for a bunch of Social Security numbers.

Can't speak for Chase. But "just give me all the Social Security numbers so I can open a bunch of credit accounts." Again, anything that can be used for good can be used for bad.

Oh, absolutely. Yeah. I'm wondering, too. I'd like to just kind of open this up around the free version, right? You go back to: if you're not paying for it, you're the product. Somebody somewhere said that. Oh, absolutely. A long line of somebodies.

So would part of your recommendation, for anyone out there listening, be that an additional safeguard is: you know what, I'm going to look into what a paid version of this would cost? If I'm going to embrace it and put some policies around it at my company, do you think it's better to go ahead and just pony up and pay the subscription fee?

Here's the direct answer to that. We paid our own money to go figure out the answer before actually paying for anything. The lawyer on our end said that, based on the way OpenAI's policies are written for paid accounts, now, this isn't for ChatGPT, and this isn't for Anthropic or any of the other organizations out there, this is specific to OpenAI, at least as of today: if you use the API to access these models, that data is yours and only yours. Anything you put through there is not stored by OpenAI.

And we don't have to worry, to an extent, right, that we're pasting in Social Security numbers. Now, you probably still shouldn't do that, but based on the way the policy is written, they are not storing that. So if you're interested in utilizing these things to paste in potentially proprietary info or trade secrets or things like that, or really just, in general, having not a lack of worry exactly, but some comfort, some peace of mind...

Peace of mind. Yeah.

Right. Do it through the API. And if you don't know how to access the API, well, I happen to know some people.

You should send us an email. Yeah.
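For anyone wondering what "do it through the API" looks like in practice, here's a minimal sketch using the OpenAI Python SDK, with a transcript-summarizing flavor borrowed from Mel's workflow; the file name and model are placeholders. The point is simply that the request goes through the API endpoint, which is where the data-use terms discussed above apply, rather than through the ChatGPT web app.

```python
# Sketch of the transcript-summarizing workflow through the API instead of
# the ChatGPT web UI. Assumes the OpenAI Python SDK and OPENAI_API_KEY;
# "transcript.txt" and the model name are stand-ins.
from openai import OpenAI

client = OpenAI()

with open("transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",  # pick the model that fits your budget
    messages=[{
        "role": "user",
        "content": "Summarize this call transcript in three bullet points:\n\n"
                   + transcript,
    }],
)

print(completion.choices[0].message.content)
```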

So before we get off that topic: if you're interested and thinking, okay, that sounds great, what's that going to cost me? I was actually surprised to learn how minimal it is, at least in our initial stages, right. We're still just testing. Yeah.

I asked Chase today: hey, I've been blowing up the OpenAI API with all my transcripts. How much usage did I rack up?

Big bucks, Mel. We're going to have to talk about that in the budget meeting.

Was it like a dollar? I thought it was $1.92 in one month.

Mel, we're really going to have to talk about that.

And I used many, probably 20 or more, transcripts and generated the equivalent of three blogs, for a dollar. And I think you spent more than that just driving to work today.

Oh, 1000%. Yeah. That's what's great about these things, though. And that's where some of the ethical concerns come in, right? From: well, they're paying me to write content, right, and I just did it on ChatGPT and it cost me a dollar, but they paid me $1,000. Right. That's where I feel like some of those ethical concerns we were talking about come in.

But from a cost perspective, using the API is, in theory, relatively inexpensive, depending on what you want to do and how much data you want to pipe into it. It does work on this idea of a token model, where basically you take a four-letter word and that ends up being, say, three tokens. It's a little more complex than that. Right. Math was not always my forte, but that math doesn't seem to add up. No? Four letters, three tokens. And they charge you based off of tokens, depending on the model that you're using. The more complex or the better the model, the more expensive it is. Right.

So when we move some of our stuff over to the GPT-4 model, things will naturally get more expensive. If you use the cheaper, faster models, things will remain relatively cheap. Where you have to be careful is where you're typing in an incredible amount of context. Right. Like the summaries of your calls: if they're 15-minute calls, probably a little bit cheaper. If they're eight-hour training sessions, that's probably going to be a lot more money. And then you do it 100 times in a month, and now you're starting to spend some big bucks. Sure.
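To put rough numbers on the token math (the usual rule of thumb is about four characters, or three quarters of a word, per token, rather than three tokens per four-letter word): a sketch using OpenAI's tiktoken library to count tokens and estimate a bill before you send anything. The per-token price below is a placeholder, not a real rate; check OpenAI's current pricing page.

```python
# Sketch: estimate what a prompt will cost before you send it.
# Assumes `pip install tiktoken`. PRICE_PER_1K_TOKENS is a placeholder,
# NOT a real rate; look up current prices on OpenAI's pricing page.
import tiktoken

PRICE_PER_1K_TOKENS = 0.002  # illustrative only

def estimate_cost(text: str, model: str = "gpt-3.5-turbo") -> float:
    enc = tiktoken.encoding_for_model(model)
    n_tokens = len(enc.encode(text))
    print(f"{len(text):,} characters -> {n_tokens:,} tokens")
    return n_tokens / 1000 * PRICE_PER_1K_TOKENS

transcript = "word " * 2000  # stand-in for a short call transcript
print(f"~${estimate_cost(transcript):.4f}")
```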

But then you look at, again, the cost of maybe a full-time resource to develop that. And again, we are never talking in the context of ultimately eliminating the staff person, right. As in saying, well, no, I pay you to write blogs, right, and you just went and did it in however many minutes. What are you doing with your time?

Well, I think that's where a lot of the other headlines we've been looking at come from, right. That's where there's a lot of concern: well, this thing can do it for a dollar, but this person is asking me for $25 an hour. To the average business person, it's a same-ish level of content or quality, potentially, depending on who we're talking about. Right. There are a lot of levers there, but I can pay a dollar or I can pay $25.

It's like when my wife and I went to the websites for two different grocery stores and put in the same stuff for delivery, and Tom Thumb was literally $12 more for the same exact stuff. I mean, it's pretty obvious which direction we're going to go.

And for this AI-versus-human deal, depending on the scenario you're talking about, and I know I might get thrown under the bus here, AI is going to be way cheaper, because the AI never sleeps. Right. It just does exactly what you want it to do. Whereas we human resources, we want to eat, we want to drink, right. We need to go home and sleep. We want raises, we want promotions. And there's a whole wealth of other individuals influencing us, potentially, to leave or check out or go to the next thing.

On the flip side, you have someone now that can do more with the time given. So you're not saying, well, come work for me for 4 hours and I'll let the AI do the rest. I do think that we're all under pressure to do more, faster, better, and to deliver on tight timelines. And sometimes you've got people out there working on very lean teams. So I do think you can make the argument you're making now: well, I can now go do it cheaper with a tool, and with the time that saves me, I'm able to do more of it. Yeah.

And I think the hope, and the direction most people are thinking, at least this is what I'm thinking, is: well, I want to work 40 hours a week, or maybe less than that, let's call it 30, whatever, I'll save 10 hours. Right. But I want to continue to work. I don't think anybody out there is saying, I just want to sit around and literally do nothing all day. Maybe there are some folks, right. But what I think we're going to end up seeing is that people are going to come in at that 30-to-40-hour mark and just naturally increase their productivity.

I don't see these tools coming in and making your job so easy that you can do 40 hours of work within 1 hour. Right. I think what's going to end up happening is you're going to work 30 or 40 hours, and this tool is going to make it seem like you did 80 hours of work. Think of it on the happy side, right, rather than somebody trying to overutilize this to make it look like they're doing 40 hours of work while they're sitting at home drinking some beer. Sure.

All right. Hot take.

So do you think businesses should bear legal liability for the actions of AI systems, or does the responsibility sit with the developers, the manufacturers?

This goes to the idea of Tesla, or any of the other car manufacturers developing these self-driving vehicles, right. Are they the ones liable, or is the person driving the car liable? I don't know. You tell me, Mr. Chase, who owns a Tesla.

That's a really tough question, right. Because it's like, well, let's say it did something wrong and all I was doing was sitting in the seat, right, and it decided to run into a wall. Well, I didn't drive it into the wall, you know, in this perfect scenario. And in that case, I think the people that wrote the software are still liable. They're the ones that programmed it, unintentionally, to run into a wall. If I was the one that put it in that direction, right, or I steered it into the wall, well, now I am responsible. And at this point, there are audit trails or audit logs of those things. Yeah.

Oh, absolutely. Right. And it's the same for the AI piece, right? It's like, well, are you responsible, Mel, for what the people underneath you do? Right. If you're a business owner, are you responsible for your employees and what they do? Well, absolutely. That's why we have liability insurance. Right. And I don't think this is any different from the self-driving-versus-person-driving question. I just don't see a ton of issues where this large language model is going to connect to the internet and spill the beans. I mean, sure, it's possible, but if you're using the API, you or your team are the ones coding it to do something very specific. And so I think that liability still lies on you and your team for doing whatever it is they do. OpenAI isn't the one copying and pasting Social Security numbers in; that's your team doing it.

So, kind of wrapping up this topic on AI ethics and responsibility. Some of the things that we've done here internally: initially we had a "hey, until we know how to best utilize it professionally within the work setting, let's put a hold on use." Since then, we have opened up the conversation. You actually led a really awesome lunch-and-learn with the team. There was a lot of great feedback, and you could tell that we have members of the team who are really excited about this kind of technology. And one of the things we've been talking about is actually putting together a committee. And I think that, depending on the size of your company and where you're at, I'm a big fan of opening it up to people who are excited about it, who are kind of on the bleeding edge, versus just mandating it.

"Well, you're a senior leader. It's now your job." Instead, I would recommend: open the conversation, consider putting together a committee, and then figure out what is ethical today, AI aside, in your industry, in the best interest of your customers, your partners, and your team members, and start there. Start putting that stuff, that structure, together.

Do your due diligence, right? Take the time to figure it out. You don't want to throw your business, your process, your department, your team under the bus simply because you didn't take enough time to figure out: hey, how should I do this? What is the best way to do this? You don't go out and buy a car on a whim: let me spend 60 grand, boom, and I haven't even test-driven the car, or I never got in it, or I don't even know what it looks like. If you do your due diligence, you're going to be in a good spot.

All right, well, as always, we are interested in hearing from our listeners. What is your take? Have you implemented an AI regulation or policy at your company? If not, what concerns do you have? What other questions do you have that we didn't talk about today? Or did you mess up? Right? Did your team mess up? Tell us what went wrong, and tell us how we should be thinking about these things.

Absolutely.

Email us your take at thejunction@ventechnology.com. Until then, keep it automated.



Creators and Guests

Chase Friedman, Host. I'm obsessed with all things automation & AI.
Mel Bell, Host. Marketing is my superpower.