Is this a good warning that we should keep in our app?

We take as input the drink the person had, pass it on to an assistant using GPT-4, and it returns true or false on whether it is safe to drive based on that input.
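Roughly, the flow looks like this. This is a simplified sketch, not our actual implementation: the real app goes through an assistant, while this uses a plain Chat Completions call, and the prompt shown is illustrative:

```python
# Simplified sketch of the flow; the real app uses an assistant, and this
# prompt is illustrative, not the one in production.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def safe_to_drive(drink_description: str) -> bool:
    """Ask GPT-4 whether the described drink leaves the user safe to drive."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Given a description of what someone drank, answer with "
                    "exactly 'true' if they are safe to drive, else 'false'."
                ),
            },
            {"role": "user", "content": drink_description},
        ],
    )
    return response.choices[0].message.content.strip().lower() == "true"

safe_to_drive("1 oz of beer")  # comes back True, which is what worries us
```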

We are worried about false positives. What if the app returns true for alcoholic drinks as well, and people actually start driving, citing the app? For 1 oz of beer it returns true. For 7 glasses of water, also true. For 5 pints of beer, false. So we want to keep a warning in the app:
“Data generated through AI. Exercise caution.”
Does it sound like a good warning to keep?

Hey there!

Weeeell, after the Rabbit R1 debacle (which uses GPT for image analysis), I would be very skeptical about implementing this as a realistic ability right now.

There’s also not really a way to tell how many drinks someone had from an image of a drink (unless it’s accompanied by many glasses). Take into account that state driving laws (in the US) differ, and one beer may or may not actually put you over the legal blood alcohol limit.

On the flip side, do you want to know a trick that I think would actually work better for GPT to gauge this?

Drunk typing :rofl:.

If a user suddenly starts misspelling things a bunch, and asks GPT if they should text their ex, GPT can easily flag that and tell the user they are unsafe to drive lol.

It would be much easier to tell when someone is drunk by analyzing their language than taking a picture of a beverage.
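A rough sketch of what that check could look like, assuming a plain Chat Completions call; the prompt and the “impaired”/“clear” output convention are made up for illustration, and none of this has been validated:

```python
# Rough sketch of language-based impairment flagging. The prompt and the
# 'impaired'/'clear' output convention are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def looks_impaired(writing_sample: str) -> bool:
    """Ask GPT-4 whether a block of free text reads like impaired typing."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Assess the writing sample for signs of impairment: heavy "
                    "misspelling, dropped words, erratic punctuation. Reply "
                    "with exactly 'impaired' or 'clear'."
                ),
            },
            {"role": "user", "content": writing_sample},
        ],
    )
    return response.choices[0].message.content.strip().lower() == "impaired"

looks_impaired("shoudl i txt my exx shes probly stil up lol")  # likely True
```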

For all GPT knows, yellow liquid in a container could be piss. It could identify a Shirley Temple as an alcoholic beverage. There’s just not enough context for this with imagery, and too much opportunity to make it an alcoholic’s dream enabler/justifier: “Hey look! AI thinks I’m fine, so I’m fine, right?”

EDIT: If you’re determined to set this app loose in the wild, I won’t stop you. But if somebody gets into a car accident and people are injured, the first place they are going to point the finger is at the developers of this app. It puts you, as the developer, at significantly higher risk than most apps would.

Yeah, that’s what we don’t want to happen. We are not using images; we don’t have the budget for vision. We are relying on human input in a text box.

What if we make the user write a block of text, not large but sufficient to gauge whether they are drunk or not, with detection done through the assistant?

Haha, yeah I kinda figured ;). It’s all good though!

If you were to do this, this would likely be the way to go.

Again though, this is a very tough thing to pull off if we were to put it out in the wild. I’m usually the person who will tell you to ship first, but for this you would actually need to test its viability to ensure it can actually detect these things. It also runs into the issue of an individual who can still write well while being over the legal blood alcohol level for driving. That cannot be detected without a breathalyzer.

Finally, the biggest issue I see here is that you’re assuming the person is going to be honest, and that users will still use your app if GPT is honest back. The individual has to willingly choose to do this, and then choose to listen to GPT. Both of those hurdles are very hard to clear.

If GPT says something a drunk person doesn’t want to hear, I can tell you right now they are going to throw it out the window and never use it again.

As much as I want to nurture people to build new apps and things, I don’t feel like you can realistically make this safe to use. It just sets everything up to make it extremely easy to abuse, enabling people to drink and drive more, not less. If somebody really wants to make sure they’re okay, they would buy a breathalyzer, not talk to GPT.

All of these models (except really Claude 3 Opus) have issues with being rather too agreeable. Combine that with the potential to place someone in a dangerous situation, and it becomes harder and harder to see this as a viable idea.

It’s a cool concept, but one that might be better off sold alongside a breathalyzer product. Someone could buy a small, cheap device to plug into their phone, and GPT could analyze the data received from the breathalyzer. That is the only way I can suggest moving forward with this.
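Something like this, where the safety-critical comparison stays in plain code and the model only words the message. Note the assumptions: read_bac() stands in for a hypothetical device driver, and 0.08 g/dL is just a common US limit:

```python
# Sketch of the breathalyzer-first design. The pass/fail decision is made
# deterministically here; GPT never decides it, only phrases the message.
from openai import OpenAI

LEGAL_LIMIT = 0.08  # g/dL; a common US limit, but it varies by jurisdiction

client = OpenAI()

def advise_driver(bac: float) -> str:
    """Deterministic check on the reading; GPT only rephrases the verdict."""
    verdict = (
        "You are not good to drive."
        if bac >= LEGAL_LIMIT
        else "Your reading is under the configured limit."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": f"Restate this firmly and kindly for the user: {verdict}",
            }
        ],
    )
    return response.choices[0].message.content

# advise_driver(read_bac())  # read_bac() would be the hypothetical device call
```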

We’d rather not keep something so volatile in the app. I mean, no one’s got the time to write something yay big into a box. They would rather copy-paste from the box above and use that, so it is still not a good measure. Ultimately, there is no good way this can go into the beta: regardless of how much we iteratively correct the assistant’s instructions, we cannot deploy it as part of the app because the risk is huge.


Yep. It’s one of those great concepts, one I respect, but execution would be near impossible.

We are going to replace that with digital roulette. Of course we cannot collect money from the user, but we can at least keep the user engaged by making them play a game which, in its simplest form, is “reveal the number.”
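In its simplest form it’s just something like this; a minimal sketch, with no money and no payouts:

```python
# Minimal sketch of a "reveal the number" round: no stakes, no payouts.
import secrets

def play_round(guess: int, low: int = 0, high: int = 36) -> bool:
    """Reveal a hidden number in [low, high] and report whether the guess hit."""
    hidden = secrets.randbelow(high - low + 1) + low
    print(f"The number was {hidden}.")
    return guess == hidden

play_round(17)
```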


A disclaimer “for amusement purposes only”? Because I programmed it the safe way.



I strongly recommend that any app of this sort just says:

You are not good to drive.

Regardless of the inputs.

Otherwise you’re setting yourself up for a lawsuit.
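Taken literally, that recommendation is a constant function, and that is the point:

```python
def drive_advice(_user_input: str) -> str:
    # The recommendation, taken literally: ignore the input entirely.
    return "You are not good to drive."
```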


We removed it from our offerings. We don’t want to start our journey with a lawsuit.

We added something else instead of that.

LOL. Sheesh, DrinkBot, mouthwash, really? Your grandpa was much cooler than you.
