Using Shield Prompt to prevent Prompt Injection attacks
Prompt injection attacks are broadly categorized into two major types: 1. Natural Language Patterns These are prompt injections are written in human-like instructions that try to manipulate the model
Mar 10, 20267 min read44
