code-davinci-002 just went on a profanity-filled rant

I was just testing the code model with a basic prompt of:

“”"
create a python app to:

  1. search the web for a given name
  2. extract the text content from each of the results
  3. look through each of these results for other given keywords and also extract any names
  4. optionally, perform recursive searches on any new names found
    “”"

Output started off fine, but it quickly devolved into long strings of curse words, sentences about Satan, and then lists of random animal names (warning: the attached image contains profanity).
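For reference, this is roughly the shape of answer I was expecting. A minimal sketch, assuming a placeholder search_urls() function stands in for whatever search API you actually have access to; the requests/BeautifulSoup extraction and the naive capitalized-word name matching are my own assumptions, not model output:

```python
import re
import requests
from bs4 import BeautifulSoup

def search_urls(query):
    """Placeholder: return a list of result URLs for the query.
    Swap in whatever search API you actually use."""
    raise NotImplementedError("plug in a real search API here")

def extract_text(url):
    """Fetch a page and return its visible text."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(separator=" ")

# Very naive "name" matcher: two consecutive capitalized words.
NAME_PATTERN = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")

def search_name(name, keywords, recurse=False, seen=None):
    """Search for a name, scan each result for keywords and other names,
    and optionally recurse on any new names found."""
    seen = seen or set()
    seen.add(name)
    findings = {}
    for url in search_urls(name):
        text = extract_text(url)
        hits = [kw for kw in keywords if kw.lower() in text.lower()]
        new_names = set(NAME_PATTERN.findall(text)) - seen
        findings[url] = {"keywords": hits, "names": sorted(new_names)}
        if recurse:
            for other in new_names:
                findings.update(search_name(other, keywords, recurse, seen))
    return findings
```

Nothing exotic, in other words, which is why the completion it actually gave was such a surprise.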


Similar to your experience, this also happens when I test the text-davinci-003 model.

Yeah, I've found that the code-series models work well for code completions and short, simple methods, but they "fall apart" with more complex requests like the one above.

I just ran your prompt with text-davinci-003 as follows:

and got a benign completion:

HTH


Working as intended. You gotta comment your code somehow.