cannot access editable source of v2 diagram - www-project-top-10-for-large-language-model-applications/2_0_vulns/artifacts/v2.0 - OWASP Top 10 for LLM Applications and Generative AI - LLM Application HLD - Presentation DLD.jpeg #388
Comments
👋 Thanks for reporting! Please ensure labels are applied appropriately to the issue so that the workflow automation can triage this to the correct member of the core team
Thanks for raising this @idj3. We do not allow an editable version of the document, to prevent unintentional changes entering the pipeline. Can you please list concerns, errors, or improvements for the document here, and I'll collaborate with you?
Hi @GangGreenTemperTatum, my main comment is that we should consider adding an 'orchestrator' component between the client application and the LLM service - that is where many security safeguards are often concentrated (incl. content moderation, masking, throttling, authentication, etc.), as well as grounding (e.g. RAG callouts), and it can span trust boundaries.
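To make the suggestion concrete, here is a minimal sketch of what such an orchestrator layer might look like. This is purely illustrative: every class, method, and parameter name below is hypothetical and not part of the OWASP document or diagram; it just shows where controls like authentication, throttling, and masking could sit between the client app and the LLM service.

```python
# Hypothetical sketch of an "orchestrator" between the client application
# and the LLM service. All names here are illustrative, not from the
# OWASP document.
import re
import time
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Orchestrator:
    llm_call: Callable[[str], str]          # injected backend LLM client
    max_requests_per_minute: int = 60
    _timestamps: List[float] = field(default_factory=list)

    def _authenticate(self, token: str) -> bool:
        # Placeholder check; a real system would verify a signed token.
        return bool(token)

    def _throttle(self) -> bool:
        # Simple sliding-window rate limit over the last 60 seconds.
        now = time.monotonic()
        self._timestamps = [t for t in self._timestamps if now - t < 60]
        if len(self._timestamps) >= self.max_requests_per_minute:
            return False
        self._timestamps.append(now)
        return True

    def _mask_pii(self, text: str) -> str:
        # Naive masking example: redact email-like strings.
        return re.sub(r"\S+@\S+\.\S+", "[REDACTED]", text)

    def handle(self, token: str, prompt: str) -> str:
        if not self._authenticate(token):
            return "error: unauthenticated"
        if not self._throttle():
            return "error: rate limited"
        safe_prompt = self._mask_pii(prompt)
        # Grounding (e.g. RAG retrieval) and content moderation could
        # also be applied here, before and after the model call.
        return self._mask_pii(self.llm_call(safe_prompt))
```

The point is only that these controls live in one choke point spanning the trust boundary, rather than being scattered across the client app or the model host.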
Adding comments from our Slack thread @idj3, let's continue the discussion here for visibility of the group and community :) thanks @Ivan!
> Some of the main features that this capability can include:
I like your points, but I'm worried this could stray away from the "high-level abstraction" the document serves as; it's not a full-blown threat modeling exercise.
With that said, do you have any suggestions, and are you in agreement here? If we list remediations, where does it end? Is it not sufficient to link the OWASP Top 10 entries, which sub-bullet attack scenarios and remediations? That is the point we are trying to emphasize; the diagram is meant to elaborate "where" this can occur in a typical LLM app and its trust boundary(ies).
> Also, we should provide general (brief) description for all the components to include in the final v2 document, I thought that was one thing missing in v1.
A callout box, or do you have an alternative suggestion? Or should we refer to the glossary in the wiki?
I agree that too much detail can be counterproductive, but since the LLM Top 10 document aims to provide both a vulnerability overview and mitigation strategies, IMHO we should have some standardised architecture that guides where/how those controls can be implemented. Re: descriptions, IMHO a simple list below the diagram is better than callouts, as it declutters the picture.
Sorry, I'm slightly confused here, as this is the template for all vulnerabilities; is there something specific to this vulnerability, or do you mean in general?
I agree. Can you annotate an example on top of the current architecture diagram so I can follow your thought logic here? Async may be best with a hectic DEF CON schedule next week, mainly so I can understand what you envision this looking like; I'm open to collaborating on ideas.
I do have to be honest and think we should stick to the default Definitions in the wiki to maintain a single source of truth and avoid drift. Would it help if I added a hyperlink reference here? WDYT? I really appreciate the feedback and you sharing recent experiences from the CSA as well.
@GangGreenTemperTatum: 2 - I like the new diagram; I think of the "orchestrator" as being part of 'application services', or perhaps sitting between the app services and the LLM. 3 - I can't access the Definitions page; the link sends me to the diagram.