ChatGPT-4 August 3 Code Interpreter making errors and refusing to give info about its environment

Hi all,

After working with a chat LLM or plugin for a while, you get a sense of what works and what doesn't, e.g. message length, where to put instructions, how terse you can be with your instructions, etc.
I think the chat below is an example of a gap between my expectations of GPT-4 (Code Interpreter) — its abilities, its responses, and its requirements for clear instructions — and the actual model. It's like we're talking 'past' each other.

Making errors and false claims
Scroll down to the message stating 'same error' (a long way into the chat) in this chat: CSP Backtracking Algorithm. Long story short, there are inconsistencies in the answers (e.g. whether AllDifferentConstraint is part of python-constraint or not), cosmetic changes offered instead of a solution to the problem, and the false claim that the error was caused by the program having no solution (wrong on many levels).
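For context on what the disputed constraint does: I can't reproduce the exact code from the chat here, but as a rough illustration, an all-different constraint over a set of CSP variables (what python-constraint's AllDifferentConstraint provides) can be sketched with a minimal pure-Python backtracking solver. The variable names and domains below are made up for the example:

```python
# Minimal backtracking CSP solver enforcing an all-different constraint,
# sketching the behavior that AllDifferentConstraint provides.
def solve(variables, domains, assignment=None):
    """Yield every assignment in which all variables take distinct values."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        yield dict(assignment)
        return
    var = variables[len(assignment)]
    for value in domains[var]:
        if value not in assignment.values():  # the all-different check
            assignment[var] = value
            yield from solve(variables, domains, assignment)
            del assignment[var]  # backtrack

variables = ["x", "y", "z"]
domains = {v: [1, 2, 3] for v in variables}
solutions = list(solve(variables, domains))
print(len(solutions))  # 3! = 6 permutations of (1, 2, 3)
```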

Since all code was executed in the Code Interpreter version, I wondered why it didn't execute its own generated code — that might have helped clear up the problems. Is python-constraint even installed?
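That question is easy to answer from inside the sandbox itself; a minimal check using only the standard library (the package name "constraint" is what python-constraint installs under):

```python
import importlib.util

# Check whether a third-party package is importable, without importing it.
for name in ["constraint", "numpy"]:
    spec = importlib.util.find_spec(name)
    print(name, "available" if spec is not None else "missing")
```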

Refusing to provide package list
However, the Aug 3 GPT-4 refuses to give info about the installed packages. E.g.

The net effect is wasted time. I know for sure that in the July 20 ChatGPT Code Interpreter, getting knowledge about its environment was not an issue. I fail to understand why security by obscurity has been deemed a solution for people using the Code Interpreter. As this chat shows, it results in generated programs that don't work (because of missing libraries), and in the end the list of packages is revealed anyway: the security by obscurity is gone, but the reduced ease of use remains.
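For anyone hitting the same wall: instead of asking the model, the installed distributions can be listed directly with the standard library (Python 3.8+), which is one way the list ends up being shown anyway:

```python
import importlib.metadata

# Enumerate installed distributions ourselves, no model cooperation needed.
pkgs = sorted(
    (dist.metadata["Name"] or "", dist.version)
    for dist in importlib.metadata.distributions()
)
for name, version in pkgs[:10]:  # print the first few alphabetically
    print(f"{name}=={version}")
```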

Note: This is all about the August 3 Code Interpreter. I don't know whether it is the same model as GPT-4 with some additions/fine-tuning or not. In the reply to this (edited) message, I share one GPT-4 result where I compare the translation of a Python program to Rust. There is no regression there: both GPT-4s provided an error-free translation.

TL;DR: The August 3 version is better at translating Python to Rust, while the July 20 version was already impressive.

One thing that impressed me in the July 20 version of GPT-4 was that, in one shot, it gave a working Rust version of the provided Python code.
See the 1/5 chat of Python to Rust Conversion
(the rest of the chat is trying to get the rust compiler installed on an old mac)

Today I tried the same with Aug 3 gpt-4: Python to Rust Conversion

This differs from the previous translation, most notably in its use of the ndarray crate.

Once I had the crate installed, this program also compiled with no errors, and ran better (30 times faster) than the version without the ndarray crate, and about 3 times faster than my previous fastest code (with Numba JIT).