ChatGPT recommends the use of the OpenAI-internal library ace_tools

Yesterday, ChatGPT made an interesting mistake from an IT-security perspective. The task was to analyse and process data with Python. Towards the end of the Python script, ChatGPT wrote the following lines:

import ace_tools as tools
tools.display_dataframe_to_user(name="result", dataframe=result)
print(result)

That seemed strange to me: I had never heard of ace_tools, and the library is not necessary to display a dataframe. A quick search for ace_tools led me to a GitHub repo that installs more than 300 tools from the fields of phishing, brute force, penetration testing, hacking and spying. Fortunately, you have to clone the GitHub repo to install those tools; it is not done the usual Python way via pip install.

When asked, ChatGPT itself explained: “The ace_tools library is a custom library used in the OpenAI environment. It is specifically designed to interact with OpenAI’s internal tools and is therefore not available via public package managers such as pip.”

So it is used by ChatGPT itself to display dataframes, but the code does not work on the user’s machine. ChatGPT would have been better off omitting that last line of its code and replacing it with a standard command such as result.head().
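For illustration, a minimal sketch of what a portable final step could look like; the small result dataframe here is just a stand-in for whatever the generated script actually produced:

import pandas as pd

# Stand-in for the dataframe produced by the generated analysis script
result = pd.DataFrame({"kpi": ["revenue", "margin"], "value": [1200, 0.35]})

# Portable alternatives to the OpenAI-internal ace_tools helper
print(result.head())        # first rows as a quick sanity check
print(result.to_string())   # the full dataframe as plain text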

The example shows that this opens up new possibilities for hackers. Some users will simply copy and execute the code suggested by ChatGPT. A message then appears that ace_tools is not installed, so a pip install ace_tools is executed. The installation succeeds if a hacker has recognised this problem and published an ace_tools library on PyPI in time. Calling tools.display_dataframe_to_user could then display the dataframe on the user’s machine as normal while also executing malicious code in the background.
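One defensive pattern, sketched here as a suggestion rather than an official recommendation: guard the import instead of pip-installing an unknown package to satisfy it, and fall back to standard pandas.

import pandas as pd

result = pd.DataFrame({"value": [1, 2, 3]})  # stand-in dataframe

try:
    # This import only succeeds inside OpenAI's own sandbox.
    import ace_tools as tools
    tools.display_dataframe_to_user(name="result", dataframe=result)
except ImportError:
    # Do NOT reflexively run `pip install ace_tools` here;
    # fall back to a standard pandas display instead.
    print(result.head())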

Proposed solution: do not recommend OpenAI-internal Python libraries to ChatGPT users.


The same thing just happened to me:

import ace_tools as tools
tools.display_dataframe_to_user(name="Merged DataFrame", dataframe=merged_df)

This could be really dangerous.


It happens with ChatGPT 4o but not with ChatGPT 4. This could lead to serious security issues.

This literally just happened to me, but when pressed it did not indicate it was an internal tool. I switched to ChatGPT 4 and asked follow-up questions, but it claims the tool is made up and apologises for the mistake, with no further information.


This potential security issue is related to ChatGPT 4o, not to ChatGPT 4; ChatGPT 4 does not know this OpenAI-internal library.

I doubt that this information from ChatGPT is correct at all, because the “library” is actually an installer tool (mainly a shell script) for very unusual tools that are often used for black-hat purposes.

One can observe this by viewing the list of dubious tools that this installer offers; the list is included in the tool’s shell script, which is, as of today, available in the GitHub repo that provides the tool.

Edit: Then again, it might also be a naming coincidence, with the same name used both for the alleged library and for that dubious installer tool.

Just happened to me too, hence I found this thread. What concerns me most is that someone might blindly install a package without understanding it. It was my first time seeing a reference to the ace_tools package, and a quick search found nothing of relevance outside of this thread.


The same happened to me. This could be a serious security flaw if malicious packages are injected into client code.

There seems to be a newly registered placeholder package since yesterday (July 7th) when you search for ace_tools on PyPI: Search results · PyPI


Thank you for this really interesting information.

Hello there, just curious: in which context did you receive this answer from ChatGPT? (For me it was an AI/ML security challenge.)
Thanks

The task was to provide Python code for a certain data analysis problem (data engineering, KPI calculation, analysis and visualization). The final command was simply supposed to display the results.

What are the implications of something like this? It is a bit worrying that this is possible…

Hackers could exploit the faulty package recommendation by publishing packages with the same name that contain dangerous scripts.
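For anyone who wants to vet such a package before installing it, a minimal sketch using PyPI’s public JSON API (the URL scheme and keys are PyPI’s documented ones; ace_tools is the suspicious name from this thread):

import json
from urllib.request import urlopen

# Inspect a package's PyPI metadata *before* installing it
with urlopen("https://pypi.org/pypi/ace_tools/json") as resp:
    meta = json.load(resp)

print(meta["info"]["summary"])   # e.g. "A placeholder empty package"
print(meta["info"]["author"])
print(sorted(meta["releases"]))  # a brand-new upload history is a red flag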

I wonder whether ChatGPT is now cleaning the generated Python script to escape the usage of ace_tools, because I noticed that when tools.display_dataframe_to_user(...) appears in the output, it no longer displays the result in the chat like it did a week or so ago.

I just got a suggestion to use ace_tools as well.

See the attached snippet. It is a bit strange that all the other imports are located at the top of the script; only the ace_tools import appears inside the code block.


So what should we do if we have already installed the ace_tools package?

I found it really weird when Python GPT suggested this to me. Anyway, I installed it but removed it quickly afterwards. This is what pops up when you type pip show ace_tools:
Name: ace_tools
Version: 0.0
Summary: A placeholder empty package
Home-page:
Author: Paul McMillan
Author-email: [my email]
License:
Location: c:\users.…
Requires:
Required-by:
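For anyone wanting to run the same check from Python rather than pip, a minimal sketch using the standard library’s importlib.metadata (Python 3.8+):

from importlib import metadata

# Check whether ace_tools is installed and which files it ships
try:
    dist = metadata.distribution("ace_tools")
    print(dist.metadata["Summary"])              # "A placeholder empty package"
    print([str(f) for f in (dist.files or [])])  # files the package installed
except metadata.PackageNotFoundError:
    print("ace_tools is not installed")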

Thanks, everyone, for the warnings. From now on, ChatGPT’s references need a double or even triple check.

import pandas as pd
import ace_tools as tools

# questions_and_alternatives is defined earlier in the generated script
df = pd.DataFrame(questions_and_alternatives)
tools.display_dataframe_to_user(name="Testing and performance", dataframe=df)

There is currently no risk if you install the ace_tools library. At the moment, this is an empty project on PyPI. In general, however, there is a risk that hackers will take advantage of ChatGPT suggesting the wrong libraries by quickly distributing libraries with the same names that contain malicious code.
