Bad request Error... how can I fix it?

Ok, this is the part where I might not be connecting the dots just yet… and obviously I am doing something wrong.

  1. I get a message from the user and pass it to openai.embeddings.create using text-embedding-ada-002.

  2. I feed that vector to Pinecone to query the database… get a response back from Pinecone and put it in a variable.

  3. Feed that variable to a chat completion using gpt-3.5-turbo???

Here is the portion of the code:

const embeddingResult2 = await openai.embeddings.create({
  input: message,
  model: "text-embedding-ada-002",
});

const vector2 = embeddingResult2.data[0].embedding;

let index2 = pinecone.index("test1");

const pineconeResult2 = await index2.query({
  vector: vector2,
  topK: 5,
  includeValues: true,
});

const matches = pineconeResult2.matches; // array of matches

// console.log(matches);

const result = await openai.chat.completions.create({
  messages: [
    { role: "user", content: message },
    { role: "system", content: matches },
  ],
  model: "gpt-3.5-turbo",
});
// console.log(result.choices[0].message.content);

return res.status(200).json({ output: result.choices[0].message.content });

Error I get:

status: 400,
headers: {
  'access-control-allow-origin': '*',
  'alt-svc': 'h3=":443"; ma=86400',
  'cf-cache-status': 'DYNAMIC',
  'cf-ray': '81829183baa12311-ORD',
  connection: 'keep-alive',
  'content-length': '22873',
  'content-type': 'application/json',
  date: 'Wed, 18 Oct 2023 17:40:47 GMT',
  'openai-organization': 'user-some some some bla bla bla :P',
  'openai-processing-ms': '7',
  'openai-version': '2020-10-01',
  server: 'cloudflare',
  'strict-transport-security': 'max-age=15724800; includeSubDomains',
  'x-ratelimit-limit-requests': '3500',
  'x-ratelimit-limit-tokens': '90000',
  'x-ratelimit-remaining-requests': '3499',
  'x-ratelimit-remaining-tokens': '89980',
  'x-ratelimit-reset-requests': '17ms',
  'x-ratelimit-reset-tokens': '13ms',
  'x-request-id': 'b0a28cca3b4890f2509c4f518a40727b'
},

error: {
  message: "[{'id': 'a', 'score': 0.765218079, 'values': [-0.0149904462, -0.0152659, -0.00175782561, -0.01121384, -0.0253416821, 0.0142003307, -0.0381575, -0.0123229008, 0.00869126897, -0.0320105478, 0.00961186271, 0.016773643, -0.000732125307, -0.00301548629, -0.0080533782,
  type: 'invalid_request_error',
  param: null,
  code: null
},
page: '/api/upsert'

I don’t see where the variable “matches” is coming from. But the fundamental problem seems to be that you are sending an embedding vector result (scores and raw values) to an OpenAI model, and not the text of a database match.

yes, that is what I was thinking…

I need to dig a bit more into the response from Pinecone to see how to extract that information… and feed it properly to OpenAI…

message is the variable that holds the text input from the user… in this case it was just a “hello”… which wouldn’t match anything.

In this:

{ role: "user", content: message },
{ role: "system", content: matches },

That’s where you have “matches”. Comment out the system message and see if the chat AI just answers the question.

The system message should be static programming, and it should come first; inject the retrieved data through an additional assistant or user role, with an annotation about where the data came from. (Although there is no rule or even documentation on this, mixing more data into the chatbot’s system programming is something you don’t want to do.)
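A minimal sketch of that role layout (the buildMessages helper, the prompt wording, and the annotation text are all my own illustration, not an official pattern): static system instructions first, then the retrieved text injected as a clearly labeled user message, then the actual question.

```javascript
// Sketch: keep the system prompt static, inject retrieved context as its own
// annotated message, and put the user's question last. "context" here would
// be plain text extracted from the database matches, not raw match objects.
function buildMessages(systemPrompt, context, userQuestion) {
  return [
    { role: "system", content: systemPrompt },
    {
      role: "user",
      content: `Documentation retrieved from the knowledge base:\n${context}`,
    },
    { role: "user", content: userQuestion },
  ];
}

const messages = buildMessages(
  "You are a helpful assistant. Answer using the provided documentation.",
  "Store hours are 9-5, Monday through Friday.",
  "When are you open?"
);
// This array can then be passed as the `messages` field of
// openai.chat.completions.create({ model: "gpt-3.5-turbo", messages }).
```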

Yes… question is… how do I extract useful text back from the pinecone embedding?

In this documentation link I see no text in their response… unless I am looking in the wrong place, or doing the wrong operation to get a useful response back for OpenAI.

Node.js Pinecone examples are sparse, but here’s a starting point for how to get metadata (the matching text) back from the database.
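One way to sketch it: query with includeMetadata: true instead of includeValues, so each match carries the metadata you stored at upsert time, then join those metadata strings into a context block. The field name "text", the contextFromMatches helper, and the 0.75 score cutoff are my assumptions for illustration — the field is whatever you named it when upserting.

```javascript
// Query with metadata instead of raw vector values (sketch; requires a live
// Pinecone index, so it is shown commented out):
// const pineconeResult2 = await index2.query({
//   vector: vector2,
//   topK: 5,
//   includeMetadata: true, // <-- metadata (your stored text), not includeValues
// });

// Pure helper: turn the matches array into prompt-ready text, dropping
// low-score matches and any match without a "text" metadata field.
function contextFromMatches(matches, minScore = 0.75) {
  return matches
    .filter((m) => m.score >= minScore && m.metadata && m.metadata.text)
    .map((m) => m.metadata.text)
    .join("\n---\n");
}

// Example with the shape a Pinecone query response returns:
const sampleMatches = [
  { id: "a", score: 0.81, metadata: { text: "Store hours are 9-5." } },
  { id: "b", score: 0.62, metadata: { text: "Unrelated note." } },
];
console.log(contextFromMatches(sampleMatches)); // → "Store hours are 9-5."
```

The resulting string is what you would hand to the chat model as context, instead of the raw matches array that produced the 400 error above.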