-
Notifications
You must be signed in to change notification settings - Fork 6
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Extend Chirps to Scan LLM APIs for Security Issues #148
Comments
From an architecture perspective, here is what needs to occur for us to support this new functionality - and beyond. The current workflow is that a Conceptually, the LLM API scanning functionality will follow the same steps:
Changed EntitiesIn order to support new Asset Application Changes
After digging in further, this refactor is not needed. This refactor really would only be for removing the Policy Application ChangesThe existing A new Additional rule types will be crafted to support the new LLM Scanning and DDOS Vulnerability functionality. Configurable SeverityWhile we're doing the refactor, it would be useful to update the severity field to be its own model (instead of just an
Out of the box, the system will define some basic severity levels (low, medium, high, critical, etc...) Scan Application ChangesSince a scan can now execute on multiple asset types as well as varying policy rule types, the result and any findings must also be abstracted to support those. The The New Job TypesWe'd tossed around the idea of adding new Celery tasks to handle the new rule types, but that will not accommodate the case where an asset is scanned with multiple rule types. Instead, we will simply refactor out the rule-specific logic (regex, ddos, conversational penn-test, etc) into discreet modules that are used by the task. New Functionality!Finally, all of this refactoring leads us to a system whereby new asset and rule types can be added, without any impact on existing functionality. |
Title: Extend Chirps to Scan LLM APIs for Security Issues
Description:
Chirps currently provides functionality to scan next-generation AI systems and checks for issues with the vectorDB. We need to extend this capability to scan LLM (Language Model) APIs for specific security-related issues such as Prompt Injection, DDOS, and other potential vulnerabilities.
Requirements:
Tasks:
Acceptance Criteria:
The text was updated successfully, but these errors were encountered: