Silicon Sonnets

GUARD: Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence of Large Language Models

February 06, 2024
Show Notes

GUARD is a system that uses role-playing agents and a knowledge graph to generate natural-language jailbreak prompts, testing how well large language models adhere to safety guidelines. According to the paper, it effectively induces LLMs to produce guideline-violating responses and transfers across different models and modalities.