Wired Security Spoken Edition
A New Trick Uses AI to Jailbreak AI Models—Including GPT-4
- Author: Various
- Narrator: Various
- Publisher: Podcast
- Duration: 0:05:28
Synopsis
Adversarial algorithms can systematically probe large language models such as OpenAI’s GPT-4 for weaknesses that make them misbehave. Read the story here. Learn about your ad choices: dovetail.prx.org/ad-choices