I have used the OpenAI createChatCompletion API in Next.js and got the response as long paragraphs, but I want it in bullet-point format like ChatGPT gives on chat.openai.com. Please help me. This is my code:
import { Configuration, OpenAIApi } from 'openai-edge';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Create an OpenAI API client (that's edge friendly!)
const config = new Configuration({
  apiKey: process.env.CHATGPT_API_KEY,
});
const openai = new OpenAIApi(config);

// Set the runtime to edge for best performance
export const runtime = 'edge';
export default async function handler(req, context) {
  const { prompt } = await req.json();

  // Ask OpenAI for a streaming completion given the prompt
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    n: 1,
    temperature: 0.3,
    stream: true,
    messages: [
      {
        role: 'user',
        content: prompt,
      },
    ],
  });

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);
  // Respond with the stream
  return new StreamingTextResponse(stream);
}
Your request is a bit unclear. If you are viewing the response in an HTML rendering environment, the linefeed characters are treated as ordinary whitespace and collapsed, so the paragraphs run together.
I asked a chatbot to add a bit more to your code:
You can modify the code as follows to convert linefeeds into HTML line breaks before the response is returned:
// ...
function convertLinefeedsToHTML(text) {
  return text.replace(/\n/g, '<br>');
}
// ...
export default async function handler(req, context) {
  const { prompt } = await req.json();

  // Ask OpenAI for a streaming completion given the prompt
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    n: 1,
    temperature: 0.3,
    stream: true,
    messages: [
      {
        role: 'user',
        content: prompt,
      },
    ],
  });
  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);

  // Apply line break conversion to the stream content as it passes through
  const decoder = new TextDecoder();
  const encoder = new TextEncoder();
  const modifiedStream = stream.pipeThrough(
    new TransformStream({
      transform(chunk, controller) {
        const text = decoder.decode(chunk, { stream: true });
        controller.enqueue(encoder.encode(convertLinefeedsToHTML(text)));
      },
    })
  );

  // Respond with the modified stream
  return new StreamingTextResponse(modifiedStream);
}
With this modification, the linefeeds in the generated text responses will be converted into HTML line breaks (<br>) before being displayed.
For example, if the OpenAI model generates a response with linefeeds like this:
This is the first line.
This is the second line.
The modified code will convert it to:
This is the first line.<br>This is the second line.
With this change, the output is displayed with HTML line breaks instead of raw linefeeds, which is useful when the response is rendered as HTML or in any other context where line breaks must be represented explicitly.
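As a quick usage check of the convertLinefeedsToHTML helper defined above, using the example strings from this answer:

// Quick usage check for the convertLinefeedsToHTML helper defined above
const sample = 'This is the first line.\nThis is the second line.';
console.log(convertLinefeedsToHTML(sample));
// => 'This is the first line.<br>This is the second line.'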
The most satisfying way to show the AI response is to run it through a Markdown renderer and display the resulting HTML; a sketch of that approach follows.
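As an illustration only (not part of the original answer), here is a minimal sketch that assumes the react-markdown package is installed and that the streamed response has already been collected into a string:

// Minimal sketch: render a Markdown string as HTML in a React component.
// Assumes the react-markdown package is installed; `responseText` is the
// collected model output (how you obtain it is up to your app).
import ReactMarkdown from 'react-markdown';

export function AssistantReply({ responseText }) {
  // ReactMarkdown turns Markdown bullets ("- item") into a real <ul>/<li> list
  return <ReactMarkdown>{responseText}</ReactMarkdown>;
}

On the request side, you can also steer the model toward bullet points directly in the prompt, as in the helper below: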
async function gpt(input) {
  const prompt = `The user has asked the following question: ${input}. Please respond in bullet points instead of a long paragraph.`;
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      prompt,
    }),
  });
  if (!response.ok) {
    throw new Error(response.statusText);
  }
  return response;
}
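The helper above returns the raw Response; a purely illustrative way to consume the streamed body on the client (assuming the /api/chat route returns a StreamingTextResponse as shown earlier) is:

// Illustrative only: read the streamed text from the /api/chat route above.
// Assumes gpt() returns the raw fetch Response with a readable body stream.
async function readGptStream(input, onChunk) {
  const response = await gpt(input);
  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // Hand each decoded chunk to the caller (e.g. to append to React state)
    onChunk(decoder.decode(value, { stream: true }));
  }
}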
The prompt "Can you list the top 5 benefits of investing in passive ETFs? Please format your output as an HTML list (for example <li>Benefit 1 <li>Benefit two). Please ONLY respond with the HTML." resulted in this output:
<ul>
<li>Diversification across a broad range of assets and sectors.</li>
<li>Lower costs due to minimal active management fees.</li>
<li>High liquidity, allowing for easy buying and selling on the stock market.</li>
<li>Transparency with holdings reflecting a specific index or sector.</li>
<li>Tax efficiency with potentially lower capital gains distributions.</li>
</ul>
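If you take the HTML-output route, one way to display that string in a Next.js component is sketched below; this is only an illustration, and `htmlFromModel` is a hypothetical prop holding the model's reply.

// Illustrative sketch: inject model-produced HTML into a React component.
// `htmlFromModel` is the string returned by the prompt above; sanitize it
// first (for example with a library such as DOMPurify) if it is untrusted.
export function BenefitsList({ htmlFromModel }) {
  return <div dangerouslySetInnerHTML={{ __html: htmlFromModel }} />;
}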