A number of claims made recently are, I believe, wildly exaggerated. AI safety is a serious issue, and when people make unsubstantiated claims, it gets taken much less seriously.
There needs to be a greater effort to make only claims that are transparent, truthful, and have withstood careful attempts at falsification.
One particular claim I found deeply unsettling, because it is very far from the truth:
This is absolutely false. Very significant human intervention and assistance is required for these experiments to take place. What's more, the 'real world lab' is https://www.emeraldcloudlab.com/, which is the actual enabler in this scenario, since it is the service that automates the chemistry experiments.
It would be trivial to create software that 'autonomously' runs these experiments. It would not require an LLM, or even much in the way of 'AI', just access to a good knowledge base.
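To make the point concrete, here is a minimal sketch of what such 'autonomous' software could look like. Everything in it is hypothetical: `KNOWLEDGE_BASE`, `submit_protocol`, and `run_autonomously` are stand-ins for a curated procedure database and a cloud-lab job API, not the actual Emerald Cloud Lab interface. The point is that the control flow is a plain lookup-and-dispatch, with no language model anywhere.

```python
# Hypothetical sketch: 'autonomous' experiment dispatch without any LLM.
# KNOWLEDGE_BASE and submit_protocol() are invented stand-ins, not a real
# cloud-lab API; the point is that a lookup plus a job submission suffices.

KNOWLEDGE_BASE = {
    "aspirin_synthesis": [
        "dispense salicylic acid (2.0 g)",
        "add acetic anhydride (5 mL)",
        "add acid catalyst, heat to 85 C for 15 min",
        "cool, filter, and dry product",
    ],
}

def submit_protocol(steps):
    """Stand-in for a cloud-lab client: queue the steps, return a job record."""
    return {"status": "queued", "steps": list(steps)}

def run_autonomously(experiment_name):
    """Look up a known procedure and dispatch it -- no 'AI' involved."""
    steps = KNOWLEDGE_BASE[experiment_name]
    return submit_protocol(steps)

job = run_autonomously("aspirin_synthesis")
print(job["status"], len(job["steps"]))
```

The 'autonomy' here is entirely in the lookup table and the lab-automation service; swapping an LLM in as the lookup step changes nothing essential about the risk.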
A lot of people making claims about AI risk use 'biosecurity' as their go-to example. Agreed, biosecurity is a serious issue, but limiting AI will not solve the problem; only removing biosecurity information from the web and controlling access to chemicals will.
People can still google this information, and they can still run these experiments. AI is not the problem. Access to information and chemicals is the problem.
If people were actually serious about reducing harm, they would solve the actual problem rather than conflating it with their agenda.