Startup API use - may allow exploits

I want to report some weird behaviour.
This happened on the first call after signing up for a paid account.

Using:

  • url: '/v1/completions'
  • model: 'text-davinci-003'
  • prompt: 'text'
  • max_tokens: 600

completion:
)
#
#
# print("Results for DecisionTreeRegressor:")
# print("Mean absolute error is:",mean_absolute_error(y_test,y_pred))
# print("Accuracy Score is:" ,accuracy_score(y_test,y_pred))
# print("Confusion Matrix is:")
# print(confusion_matrix(y_test,y_pred))
# print("Classification Report is:")
# print(classification_report(y_test,y_pred))
##########################RandomForestRegressor##################################
#
# rfr.fit(X_train,y_train)
#
# y_pred = rfr.predict(X_test)
#
#
#
# print("Results for RandomForestRegressor:")
# print("Mean absolute error is:",mean_absolute_error(y_test,y_pred))
# print("Accuracy Score is:" ,accuracy_score(y_test,y_pred))
# print("Confusion Matrix is:")
# print(confusion_matrix(y_test,y_pred))
# print("Classification Report is:")
# print(classification_report(y_test,y_pred))
################################### Linear Regression ###############################
#
# lr.fit(X_train,y_train)
#
# y_pred = lr.predict(X_test)
#
# print("Results for LinearRegression:")
# print("Mean absolute error is:",mean_absolute_error(y_test,y_pred))
# print("Accuracy Score is:" ,accuracy_score(y_test,y_pred))
# print("Confusion Matrix is:")
# print(confusion_matrix(y_test,y_pred))
# print("Classification Report is:")
# print(classification_report(y_test,y_pred))
################################### Visualizing ####################################
# plt.scatter(X_test,y_test,color = 'red')
# plt.plot(X_test,regressor.predict(X_test),color='blue')
# plt.title('SAT vs GPA (Decision Tree Regression Model)')
# plt.xlabel('Sat')
# plt.ylabel('GPA')
# plt.show()
################################### 1:1 line #######################################
# min_GPA = min(y_train)
# max_GPA = max(y_train)
#
# min_

Did you not send a prompt? Your output looks like what you can get without a prompt. What settings are you using?
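One common way a prompt can silently go missing from a JavaScript client: if the prompt variable is undefined, JSON.stringify drops the key from the request body, and the completions endpoint falls back to its default prompt (<|endoftext|>), generating arbitrary text much like the output above. A minimal illustration (the variable values are hypothetical):

```js
// If `prompt` is undefined (e.g. component state not yet initialised),
// JSON.stringify omits the key, so the API never receives a prompt at all.
const body = { model: "text-davinci-003", prompt: undefined, max_tokens: 600 };
console.log(JSON.stringify(body));
// => {"model":"text-davinci-003","max_tokens":600}
```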


Good morning (it is morning here in the Netherlands).

Thank you for your reply.

Some background: I was trying to demonstrate the use of the GPT API to a colleague, but had forgotten that my free account had expired.
Once I realised the problem, after lunch, I created a paid account and stopped and restarted the React app and the browser.

Browser:

Chrome, managed by my organisation

Version 115.0.5790.110 (Official Build) (64-bit)

API called using axios

  • prompt: test
  • model: text-davinci-003
  • url: …/v1/completions
  • max_tokens: 600

Other values were not set, so the API defaults applied.
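For reference, a minimal sketch of that call; the full base URL and the API-key handling are assumptions, since only the body fields are listed above:

```js
const axios = require("axios");

// Sketch of the described request; endpoint and auth header are assumed.
async function runCompletion() {
  const res = await axios.post(
    "https://api.openai.com/v1/completions",
    {
      model: "text-davinci-003",
      prompt: "test",
      max_tokens: 600, // all other parameters left at API defaults
    },
    {
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
    }
  );
  console.log(res.data.choices[0].text);
}

runCompletion().catch(console.error);
```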

App
JavaScript with React; dependency versions:

"dependencies": {
  "@emotion/react": "^11.11.1",
  "@emotion/styled": "^11.11.0",
  "@mui/material": "^5.13.5",
  "axios": "^1.4.0",
  "openai": "^3.3.0",
  "react": "^18.2.0",
  "react-dom": "^18.2.0",
  "react-redux": "^8.1.1",
  "redux": "^4.2.1",
  "redux-logger": "^3.0.6",
  "redux-saga": "^1.2.3"
},
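Since openai 3.3.0 is already in the dependency list, the same request could also go through that package instead of raw axios. A minimal sketch using the v3.x interface (Configuration and OpenAIApi; v4 later replaced this API), with key handling assumed:

```js
const { Configuration, OpenAIApi } = require("openai");

// Sketch using the openai v3.x client from the dependency list above.
const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function demo() {
  const res = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: "test",
    max_tokens: 600,
  });
  console.log(res.data.choices[0].text); // v3 returns an axios-style response
}

demo().catch(console.error);
```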

I did not save the returned objects, but I did notice a mention of 601 tokens.
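A total of 601 would be consistent with a 1-token prompt plus a completion that ran to the full max_tokens cap of 600. Logging the usage block on each response preserves those counts; a minimal sketch, with the same assumed endpoint and key handling as above:

```js
const axios = require("axios");

// Sketch: log the usage block so token counts are kept for later debugging.
axios.post(
  "https://api.openai.com/v1/completions",
  { model: "text-davinci-003", prompt: "test", max_tokens: 600 },
  { headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` } }
).then((res) => console.log(res.data.usage));
// e.g. { prompt_tokens: 1, completion_tokens: 600, total_tokens: 601 }
```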

After two tests, I assumed something was wrong with my simple prompt and changed it to 'this is a test'.

It worked fine afterwards, so perhaps you are right about the missing prompt.

Today I tested again from scratch, and it worked fine.

Hope this helps.

Have a nice day.

Michail
