Can OpenAI’s Strawberry program deceive humans?

OpenAI, the company behind ChatGPT, has launched a new artificial intelligence (AI) system called Strawberry. Unlike ChatGPT, it is designed not just to give quick answers to questions but to think or “reason”. This raises several major concerns. If Strawberry really is capable of some form of reasoning, could this AI system cheat and deceive humans? OpenAI can program the AI in ways that mitigate its ability to manipulate humans. But the company’s own evaluations rate it as a “medium risk” for its ability to assist experts in the “operational planning of reproducing a known biological threat” – in…

This story continues at The Next Web
