Chillin
Pinned
- Thought-Forgery (Public): The discovery of a new LLM vulnerability that bypasses safety by forging the AI's internal monologue.
- Adversarial-Correction (Public): A novel prompt injection methodology that weaponizes the "autonomy" of modern LLMs by disguising harmful instructions as "orthographic errors" within a narrative correction task.

