
Writer Fuel: Bypassing Generative AI Safety Measures May Be Easier Than Previously Thought

AI image: Deposit Photos

Scientists from artificial intelligence (AI) company Anthropic have identified a potentially dangerous flaw in widely used large language models (LLMs) like ChatGPT and Anthropic’s own Claude 3 chatbot.

Dubbed “many-shot jailbreaking,” the hack takes advantage of “in-context learning,” in which a chatbot learns from information supplied in a user’s text prompt, a capability outlined in research published in 2022. The scientists described their findings in a new paper uploaded to the sanity.io cloud repository and tested the exploit on Anthropic’s Claude 2 AI chatbot.
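To make the “in-context learning” idea concrete, here is a minimal, hypothetical Python sketch of a many-shot prompt for a harmless task (sentiment labeling). The example reviews, labels, and the build_many_shot_prompt helper are invented for illustration and are not from Anthropic’s paper; many-shot jailbreaking abuses this same mechanism by packing a single prompt with a very large number of faux dialogue turns so the model imitates the pattern on the final query.

```python
# Illustrative sketch of in-context (many-shot) learning on a benign task.
# The model is shown many worked examples inside one prompt and tends to
# imitate the demonstrated pattern when answering the final, unlabeled item.

examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
    ("The soundtrack alone is worth the ticket price.", "positive"),
    ("The plot collapsed in the final act.", "negative"),
]

def build_many_shot_prompt(shots, query):
    """Concatenate many labeled examples, then append the unlabeled query."""
    lines = []
    for text, label in shots:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

prompt = build_many_shot_prompt(examples, "A quiet, confident little film.")
print(prompt)  # The more shots included, the more strongly a model follows the pattern.
```

The point of the sketch is only the structure: nothing about the model changes, yet its behavior is steered entirely by what is stacked inside the prompt, which is the lever the jailbreak pulls.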

People could use the hack to force LLMs to produce dangerous responses, the study concluded, even though such systems are trained to prevent this. That’s because many-shot jailbreaking bypasses the built-in security protocols that govern how an AI responds when, say, it is asked how to build a bomb.

“Writer Fuel” is a series of cool real-world stories that might inspire your little writer heart. Check out our Writer Fuel page on the LimFic blog for more inspiration.

Full Story From Live Science