Restrict chat APIs to allow only certain functionalities

I’m using the chat completions API below for email content generation:

https://api.openai.com/v1/chat/completions

I’m wondering if it is possible to restrict this API to provide only email content, so that it never produces any code. Is this provision available in any of the OpenAI APIs or models?

If restricting the API itself is not possible, is there a way to use relevant prompts to restrict it? If so, could someone please help me with an example prompt to achieve this?

Hi there!

You can’t restrict the API per se. In general, though, there should be no problem achieving the desired output via prompt engineering. The model should normally not produce code unless you ask it to or your prompt is ambiguous in this regard.
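For example, here’s a minimal sketch of that approach, assuming Python with the `requests` library (the model name and prompt wording are just placeholders, not an official recipe):

```python
import os
import requests

# Sketch only: pin the task down with a system message so the model
# has no reason to produce code. Model and wording are illustrative.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "system",
                "content": (
                    "You write plain-prose marketing emails. Respond only with "
                    "a subject line and an email body. Never include source "
                    "code, markup, or code blocks in your response."
                ),
            },
            {"role": "user", "content": "Write an email announcing our new product."},
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```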

Did you face any challenges with a particular prompt you were testing, or what caused the concern?


Yes, I had added an instruction at the end of my question telling it not to generate any code. When I use this prompt in ChatGPT it doesn’t generate code, but when I use the chat completions API it generates code irrespective of the prompt.

Note: my goal is to restrict the model from giving out code. Irrespective of whatever prompt the user sends, I should be able to modify the prompt in such a way that it does not generate any code.

Your prompt doesn’t make a lot of sense.

The same user is asking to generate code and then not to generate code. That’s surely going to confuse the LLM?

Perhaps you would be better off performing a separate, later step to “remove any code from this text”.
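A rough sketch of such a second pass, again assuming Python and `requests` (the function name, model choice, and wording are mine, purely for illustration):

```python
import os
import requests

def strip_code(draft: str) -> str:
    """Hypothetical cleanup pass: ask the model to remove any code
    from a draft reply before it is shown to the user."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # illustrative choice
            "messages": [
                {"role": "system",
                 "content": "Remove any code, markup, or code blocks from "
                            "the following text. Return only the prose."},
                {"role": "user", "content": draft},
            ],
        },
    )
    return resp.json()["choices"][0]["message"]["content"]
```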

In general, you are going to have to design your architecture to prevent issues arising from abusive user input; relying on the LLM alone is probably not a good strategy.

There are a couple of issues I see here:

  1. The user prompt you provided essentially contains a contradiction: you are asking the model to provide code and then saying it should not provide code. That’s causing confusion to begin with. So you need to start by rephrasing your first sentence and being more explicit about what it is that you want the model to produce (a description?).

  2. Sometimes asking a model not to do something can have the opposite effect, so after rephrasing your first sentence I would also try leaving out the second sentence.

If you want ultimate control over the output, you’d need to create a validation mechanism whereby you run outputs created by the model through a validation prompt that ensures no code is transmitted back to the users.
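One possible shape for that validation step, as a sketch (everything here is an assumption on my part, not a documented pattern):

```python
import os
import requests

def contains_code(candidate: str) -> bool:
    """Hypothetical validation prompt: a second model call that answers
    YES or NO, so the application can block replies containing code."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # illustrative choice
            "messages": [
                {"role": "system",
                 "content": "Answer with exactly YES or NO: does the "
                            "following text contain source code or markup?"},
                {"role": "user", "content": candidate},
            ],
        },
    )
    answer = resp.json()["choices"][0]["message"]["content"]
    return answer.strip().upper().startswith("YES")
```

If the check returns True, you can regenerate or fall back to a canned reply instead of passing the output through.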

Perhaps others have alternative suggestions but this is my take.


This is a good opportunity to experiment with different prompts.

You can add a section called Restrictions and say the following:

  • Don’t give any form of code in your response.
  • Don’t give any HTML or Java code ….
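Put together, the prompt might look something like this (the wording is just an example):

```
Write a marketing email based on the notes below.

Restrictions:
- Don't give any form of code in your response.
- Don't give any HTML or Java code.
- Respond with plain prose only.
```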

Thanks for the suggestion, but this didn’t work.

I tried rephrasing the prompt, but it still generates code.

I cannot have a follow-up prompt or question. Whatever text the user enters, I must be able to rephrase the prompt in such a way that the model is completely restricted from producing code. Just wondering if this is achievable using any combination of prompts?

This is not “ChatGPT”; it’s the OpenAI API, which is based on the same technology but is a different product.

I’m not sure your architecture is clear here, but IMHO it is not safe to use outputs of the LLM or the user directly to perform tasks. You must protect your system with layers, e.g. a function that runs the tasks and can itself perform some pre-checks.
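For instance, a minimal classical pre-check that doesn’t involve the LLM at all might look like this (the patterns are illustrative and by no means exhaustive):

```python
import re

# Hypothetical pre-check: flag obviously code-like replies before
# they ever reach the user. Patterns are examples, not a complete list.
CODE_PATTERNS = [
    re.compile(r"`{3}"),                                     # fenced code blocks
    re.compile(r"<\s*/?\s*(html|script|div|span)\b", re.I),  # markup tags
    re.compile(r"\b(def |class |public static|function\s*\()"),  # common keywords
]

def looks_like_code(text: str) -> bool:
    return any(p.search(text) for p in CODE_PATTERNS)
```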

If you remember the days of SQL injection, this is a trap that many have fallen into before.

IMHO, take a step back and reconsider your approach and architecture.

It’s absolutely valid to use the LLM to act as a natural language interface, but make sure you also protect your system from abuse in more classical ways.

Hi Jasmine, your prompt still exhibits the same characteristics as before. Start your prompt by stating what it is that you want the model to produce. Be as specific as possible. Avoid contradictions in the way you phrase the prompt.

Importantly, use an iterative approach in building out your prompt. Start, as per the guidance above, with a clear and specific description of what you want the model to produce, i.e. initially leave out the restrictions. Then test it to see what responses you get. If you still get undesired responses, you can iteratively integrate restrictions into your prompt. However, they should not dominate your prompt. The focus should always be on what you want the model to produce, not on what you don’t.
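For instance, a first iteration might be as simple as the following system message, with restrictions added later only if they prove necessary (wording purely illustrative):

```
You write plain-prose marketing emails. Given the user's notes, produce a
subject line and a short, friendly email body.
```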