Irresponsible claims are undermining AI safety efforts

There are a number of claims made recently which I believe to be wildly exaggerated. AI safety is a serious issue, and when you make unsubstantiated claims, people take it much less seriously.

There needs to be a greater effort to only make claims which are transparent, truthful, and have been carefully verified.

One particular claim I found deeply unsettling, because it is very far from the truth:


This is absolutely false. Very significant human intervention and assistance is required for these experiments to take place. What’s more, the ‘real world lab’ is the actual culprit in this scenario, since it is what allows chemistry experiments to be automated.

It would be trivial to create software which can ‘autonomously’ run these experiments. It would not require an LLM, or even much in the way of ‘AI’, just access to a good knowledge base.
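To illustrate the point, here is a minimal sketch of what such non-AI ‘autonomy’ could look like: a plain lookup table of stored procedures plus a loop that forwards each step to a lab automation interface. Every name here (`KNOWLEDGE_BASE`, `plan_experiment`, `send_command`) is invented for illustration; the point is that simple lookup and sequencing suffice once a curated knowledge base and an automated lab exist.

```python
# Hypothetical sketch: an 'autonomous' experiment runner with no AI at all.
# A curated knowledge base maps a target to a fixed list of instrument steps.

KNOWLEDGE_BASE = {
    "aspirin": [
        "dispense salicylic_acid 2.0g",
        "dispense acetic_anhydride 5.0mL",
        "heat 85C 15min",
        "recrystallize water",
    ],
}

def plan_experiment(target: str) -> list[str]:
    """Look up the stored procedure for a target; no reasoning involved."""
    steps = KNOWLEDGE_BASE.get(target)
    if steps is None:
        raise KeyError(f"no stored procedure for {target!r}")
    return steps

def run_experiment(target: str, send_command) -> None:
    """Forward each stored step to the lab's automation interface."""
    for step in plan_experiment(target):
        send_command(step)
```

All the ‘intelligence’ lives in the knowledge base and the lab hardware; the software itself is a trivial dispatcher.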

A lot of people making claims about AI risk use ‘biosecurity’ as their go-to example. Agreed, biosecurity is a serious issue, but limiting AI will not solve the problem; only removing biosecurity information from the web and controlling access to chemicals will.

People can still google for this information, and they can still do these experiments. AI is not the problem; access to information and chemicals is the problem.

If people were actually serious about reducing harm, they would solve the actual problem rather than conflating it with their agenda.


Negative news gets more clicks unfortunately.


I agree with everything said in this conversation so far. Access to chemicals is already heavily restricted, and there are many chemicals that you cannot purchase without very specific authorization and a business account at ligma-aldrich.

Robo-labs like the one mentioned aren’t entirely staffed by machines; there are always multiple humans (and usually ML systems as well) ensuring that everything is as safe as possible. It’s not possible to just create an account and run experiments.
