Enlarge / A tin toy robot lying on its side. On Thursday, a few Twitter users discovered how to hijack an automated tweet bot, dedicated to remote jobs, running on the GPT-3 language model by OpenAI. Using a newly discovered technique called a “prompt injection attack,” they redirected the bot […]
Injection
What’s Old Is New Again: GPT-3 Prompt Injection Attack Affects AI
What do SQL injection attacks have in common with the nuances of GPT-3 prompting? More than one might think, it turns out. Many security exploits hinge on getting user-supplied data incorrectly treated as instruction. With that in mind, read on to see [Simon Willison] explain how GPT-3 — a natural-language […]