Make answers align more closely with dataset completions in fine-tuned models?

Almost there. Does this look right to you?

This is what I got for code to do a dot product calculation in PHP. ChatGPT was the only place I could find it:

function dotProduct($a, $b) {
  $result = 0;
  for ($i = 0; $i < count($a); $i++) {
    $result += $a[$i] * $b[$i];
  }
  return $result;
}
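
In case it helps anyone reproduce this, here is a rough sketch of how the full dot_product.php script could be wired up around that function. The getEmbedding() helper is hypothetical and assumes the vectors come from OpenAI's /v1/embeddings endpoint with the text-embedding-ada-002 model and an OPENAI_API_KEY environment variable; adjust to however you actually fetch your embeddings:

<?php
// dot_product.php (sketch): fetch an embedding for each of the two words
// passed on the command line, then print their dot product using the
// dotProduct() function above.

function getEmbedding($text) {
  // Hypothetical helper: one possible way to request an embedding from OpenAI.
  $payload = json_encode([
    'model' => 'text-embedding-ada-002',
    'input' => $text,
  ]);

  $ch = curl_init('https://api.openai.com/v1/embeddings');
  curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_HTTPHEADER     => [
      'Content-Type: application/json',
      'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
    ],
  ]);
  $response = curl_exec($ch);
  curl_close($ch);

  // The embeddings endpoint returns the vector under data[0].embedding.
  $data = json_decode($response, true);
  return $data['data'][0]['embedding'];
}

echo "Results " . dotProduct(getEmbedding($argv[1]), getEmbedding($argv[2])) . "\n";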

Results:

php dot_product.php cat sushi
Results 0.77132901214151

php dot_product.php happy glad
Results 0.87059063038805

php dot_product.php paris france
Results 0.89271812225079

php dot_product.php cat kitten
Results 0.85994720919874

php dot_product.php dog cat
Results 0.86354635106737

php dot_product.php cat dog
Results 0.86384264229314

php dot_product.php cat kitten
Results 0.85994720919874

php dot_product.php dog puppy
Results 0.87417191528759

I was unable to find anything on Google that I could understand to compare this with (specific to PHP).

Thanks!

Here is the Ruby code I use for this:

def self.dot_product(a, b)
  a.zip(b).map { |x, y| x * y }.reduce(:+)
end

According to ChatGPT, my Ruby code above converts to PHP as follows, which is a nearly word-for-word match with what you got, @SomebodySysop :wink:

function dotProduct($a, $b) {
    $sum = 0;
    for ($i = 0; $i < count($a); $i++) {
        $sum += $a[$i] * $b[$i];
    }
    return $sum;
}
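
As a quick sanity check with tiny hand-made vectors (independent of any embeddings), the PHP function should give:

$a = [1, 2, 3];
$b = [4, 5, 6];
// (1 * 4) + (2 * 5) + (3 * 6) = 32
echo dotProduct($a, $b); // prints 32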