
Designing AI agents to resist prompt injection

$ npx -y @buildinternet/releases show rel_vpjlskr0hVbbX6M5c77rf

How ChatGPT defends against prompt injection and social engineering by constraining risky actions and protecting sensitive data in agent workflows.
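To illustrate the "constraining risky actions" idea in concrete terms, the sketch below shows a minimal gate that lets low-risk agent actions proceed automatically while blocking risky ones until the user explicitly confirms. The names (`RISKY_ACTIONS`, `guard_action`) and the risk list are invented for this example and are not drawn from OpenAI's actual implementation.

```python
# Hypothetical sketch: gate risky agent actions behind user confirmation.
# The action names and risk categories here are illustrative assumptions,
# not OpenAI's real policy.

RISKY_ACTIONS = {"send_email", "make_purchase", "delete_file"}

def guard_action(action: str, confirmed: bool = False) -> bool:
    """Allow low-risk actions; require explicit user confirmation for risky ones."""
    if action in RISKY_ACTIONS and not confirmed:
        return False  # blocked until the user confirms
    return True

print(guard_action("read_page"))                   # low-risk: allowed
print(guard_action("send_email"))                  # risky, unconfirmed: blocked
print(guard_action("send_email", confirmed=True))  # risky, confirmed: allowed
```

The key design choice is that confirmation state comes from the user, not from model output, so text injected into the agent's context cannot flip a risky action to "confirmed" on its own.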

Fetched April 7, 2026