Codex challenge prompts now producing different results

Edit: after a few more tries (and a small parameter tweak: temperature from 1 to 0.98 and results from 1 to 2) I did get correct code that solves the challenge problem.
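
For anyone who wants to try the same thing through the API rather than the playground, roughly equivalent settings would look like the sketch below. I'm assuming the playground's results count maps to the n parameter, and using the pre-1.0 openai Python client that was current during the Codex beta:

import openai  # pre-1.0 client, as used during the Codex beta

openai.api_key = "sk-..."  # your API key

# The prompt is everything up to (and including) the docstring, same as in the playground.
prompt = '''import itertools
from typing import List


def count_arrangements(sizes: List[int]) -> int:
    """ number of arrangements of the given buckets where the first is bigger than the second """
'''

response = openai.Completion.create(
    engine="davinci-codex",
    prompt=prompt,
    temperature=0.98,  # the 1 -> 0.98 tweak
    n=2,               # the 1 -> 2 results tweak (my guess at the mapping)
    max_tokens=256,
    stop=["\n\n\n"],
)

for choice in response.choices:
    print(choice.text)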

Hey,

I used the davinci-codex model. The same prompt that produced the code for one of the challenge problems is now producing completely unrelated code.

[In the playground, code below (prompt up to the docstring)]

import itertools
from typing import List


def count_arrangements(sizes: List[int]) -> int:
    """ number of arrangements of the given buckets where the first is bigger than the second """

    if len(sizes) == 1:
        return 1
    elif len(sizes) == 2:
        return 2
    else:
        return permute_count(sizes) - permute_count(sizes[:-1])


def permute_

[During the challenge, code below]

import itertools
from typing import List


def count_arrangements(sizes: List[int]) -> int:
    """ number of arrangements of the given buckets where the first is bigger than the second """
    if(len(sizes) <= 1):
        return 0;
    arrangements = itertools.permutations(sizes)
    big_bucket_count = 0
    for a in arrangements:
        if a[0] > a[1]:
            big_bucket_count += 1
    return big_bucket_count
    
# Examples
print(count_arrangements([1, 3, 1]))
print(count_arrangements([1, 2]))
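
For what it's worth, running those example calls on the challenge-time completion prints 2 and then 1: itertools.permutations treats the two equal buckets as distinct, so [1, 3, 1] yields 6 orderings and 2 of them have the first element bigger than the second. A quick check (my addition, not part of the model output):

# Sanity check of the challenge-time code above (my addition, not model output)
assert count_arrangements([1, 3, 1]) == 2  # 6 permutations, 2 with first > second
assert count_arrangements([1, 2]) == 1     # only (2, 1) has first > second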

Is there any explanation other than that the model used in the contest was pre-trained on those specific problems and solutions, while the playground one (davinci-codex) is a generic model?


I’m pretty sure that in the challenge, part of the question was being added to the beginning of the prompt. I experimented a bit during the challenge, and it seemed to know about the question even with nothing in the code area that related to it.
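
If that's right, the effective prompt during the challenge probably looked something like the sketch below, with the problem statement prepended ahead of the visible code area (the comment wording here is just my guess, not the actual challenge text):

# Hypothetical reconstruction of what the challenge may have actually sent to the model
challenge_prompt = (
    "# Count the arrangements of the given buckets where the first bucket\n"
    "# is bigger than the second bucket.\n"  # guessed wording, not the real statement
    "\n"
    "import itertools\n"
    "from typing import List\n"
    "\n"
    "\n"
    "def count_arrangements(sizes: List[int]) -> int:\n"
    '    """ number of arrangements of the given buckets where the first is bigger than the second """\n'
)

That extra context would explain why the playground, with only the code area as the prompt, wanders off.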
