Teaser: An experiment was conducted with 100 participants working on biological threat creation tasks, with 50 of the participants having access to GPT-4.
OpenAI included a call for participation to the community, so I decided to share it here.
I think it takes some creativity to envision how this would work in practice, and it is definitely an interesting read. Especially considering that without guardrails, which often get in the way of hands-on development, there could be a crackdown on public access to the best models.
Really interesting article. I think every company needs to do such red teaming to protect its data when giving LLMs access to private data.
I believe that to some degree this is unwanted, in the open-source community for example.
I agree. Thanks for sharing.
I’ll follow your posts, as this one was valuable.
I worry that at some point somebody will do something very stupid and dangerous with the help of a powerful AI tool, and then we, as in everybody else working responsibly, will have to deal with the public fallout.
I don’t see much marketing value in these activities; they are rather a proactive approach to pre-emptively managing something that is bound to happen.