
Security Threat model analysis for ASWF projects #615

Open
jmertic opened this issue Mar 5, 2024 · 10 comments
Labels
4-tac-meeting-short Short agenda item for the TAC meeting ( 5 minutes or less )


@jmertic
Contributor

jmertic commented Mar 5, 2024

Please share any additional details on this topic

To help address some of the questions on security for projects and to help prepare them for a future security audit, we'd like to have some of our projects go through a security threat model analysis. An example of the output of this work can be seen at the link below...

https://ostif.org/wp-content/uploads/2023/11/Eclipse-Mosquitto-Threat-Model.pdf

Detail what actions or feedback you would like from the TAC

Interest from a few projects to do this.

How much time do you need for this topic?

5 minutes or less

@jmertic jmertic added the 4-tac-meeting-short Short agenda item for the TAC meeting ( 5 minutes or less ) label Mar 5, 2024
@jfpanisset
Contributor

I believe Trail of Bits is well regarded in the industry, and having them review some of our projects would be very beneficial, with the following caveats:

  • these would be "point in time" audits, but if the project adopts recommendations provided by Trail of Bits, hopefully that will help avoid introducing new issues in future development
  • such engagements can't be cheap: it would be useful to get at least a ballpark number for the cost per project so we can determine whether this is a viable approach for ASWF projects

My impression is that any of our projects would be well served by such a process, assuming we can afford it.

@jmertic
Contributor Author

jmertic commented Mar 6, 2024

Hey @jfpanisset - thanks for the comments. A few notes...

  • We work with OSTIF, who then engages several vendors for competitive bidding. Trail of Bits is one of the vendors that bids, and OSTIF thoroughly vets the others.
  • These would be "point in time" analyses, and they would be less invasive than a full security audit; instead, they give the project a better idea of its security threats so that, down the road, it is better prepared for an actual security audit.
  • I will be working with our Budget Committee to ensure any costs are covered.

@jmertic
Contributor Author

jmertic commented Mar 7, 2024

Ask for @cary-ilm @carolalynn22 @kmuseth @bcipriano @lgritz @fpsunflower @jstone-lucasfilm @reinecke and others - please review the report referenced above and consider whether this could be valuable for your project

@lgritz
Contributor

lgritz commented Mar 9, 2024

Potentially, yeah, though I'd love to see a sample from a project that is more like ours.

Eclipse Mosquitto is some kind of message protocol, so they're approaching the whole analysis from a fairly standard perspective, I think. The same properties of many of our projects that made it hard for us to understand the OpenSSF questions also make it hard to analogize with this report -- we have no logins, no passwords, no encryption, no database (let alone sensitive information in a database), no client/server, no network communications of any kind. Are they going to know what to make of us?

Pages 20-21 were the most intriguing to me. I could imagine that sort of set of recommendations for our projects. "If you're going to use a memory unsafe language, at least do A, B, and C." "Fuzz these inputs, look for this kind of buffer overrun, they can cause real problems, but don't worry about this other thing over here." Etc.

I'd be really interested in seeing the equivalent report for another project that is primarily a library and doesn't do any of that communications stuff, but that might be used downstream as a component of a product that does do those things: what considerations do we need to make on our side to ensure we're not the weak link in their security? And of course, without knowing what that app might be or what it does, because anybody can use our library anywhere.

@jmertic
Contributor Author

jmertic commented Mar 9, 2024

Hey @lgritz, I completely agree, which is why I think a good approach is starting with one project and seeing what that looks like. Generally, projects I work with take this same approach in some form or another.

The groups they work with will understand the nature of the projects and get key insights from them in the early stages to identify places to focus. Any review covers the typical areas (buffer overruns, input validation, etc.), but beyond that they work with the project to learn its typical usage patterns and identify the likely attack vectors.

You can check out the full list of engagements they've done and the reports at https://github.com/ostif-org/OSTIF/blob/main/Completed-Engagements.md to see if there are projects of similar usage.

@cary-ilm
Member

The simplejson report at OSTIF looks closer to home, since it’s a library written in Python/C that just encodes/decodes data: https://ostif.org/wp-content/uploads/2023/04/X41-OSTIF-simplejson-audit-2023.pdf. They ran a bunch of analyzers, did some fuzzing, and found a few garden-variety bugs. The report also identifies miscellaneous lint (unused imports, unused function arguments), and it calls out the practice of not signing commits and tags, although it rates the severity of that as “NONE”, which seems odd.

In my opinion, this sort of thing is of minimal value. My guess is an analysis of OpenEXR would likely give us just another flavor of what we get out of SonarCloud, yet another iteration of linting/analyzing/fuzzing. I seriously doubt an audit would identify any significant vulnerability in OpenEXR, so it’s not likely to significantly reduce the already-low risk of a serious vulnerability.

We would benefit from having a genuine security expert look over our project, review our processes, and certify that we’re doing things right. We’d like a certification that our project is security-mature. These audits are snapshots of things wrong at a moment in time, but we’d really like a statement from a reputable source of what’s right, that our project can be trusted. Or to confirm that typically-risky things aren’t present. I’m not sure the audit provides that.

The OpenSSF badge serves that purpose, although I don’t know how rigorous the badge review process is. It seems to rely on self-reporting. As it stands, every project needs an audit to reach Gold status. This is the only badge requirement that requires writing a check, so I suggest we ask the governing board if they feel that this issue is important enough to allocate budget for.

I once got a mild hand-slap from a package maintainer for not key-signing release artifacts (which we’ve now resumed doing), but that’s the only time I’ve heard anyone comment on our security status, aside from the trickle of public CVE’s, which we’ve dealt with promptly. Personally, I think the security processes dictated by the other OpenSSF badge requirements are sufficient, so I would vote for striking the audit from the badge requirements; but if the governing board thinks paying for an external analysis is beneficial to the ASWF, I’ll go along.

@lgritz
Contributor

lgritz commented Mar 11, 2024

It's not the specific "there's a bug here and a bug there" kind of audit that interests me, for the reasons Cary outlined. Mainly, it's a snapshot in time, and as bugs are fixed and new features added, those specifics will be obsolete in no time.

I am interested in one level above that, such as: "You're fuzzing input X, but did you consider that you should also be fuzzing input Y?"

But most of all, I'm interested in what I assume is meant by the "threat model" (if I'm making correct assumptions about what these terms of art mean), which might tell us:

  • A concise description of what assumptions we should have about the nature of our adversaries and what kinds of attacks we should be hardened against.
  • What use cases we consider in bounds versus what we are explicitly not going to worry about (much like gcc and llvm both recently said "it's inherently unsafe to give untrusted source code to a compiler, so don't ever run the compiler with escalated privileges, and we won't treat bugs resulting from that as security problems").
  • With those in mind, what kinds of bugs are garden variety and which kinds should we actually treat as important security bugs. In other words, when somebody says "I found this problem, make a CVE", when do we drop everything to fix it right away and when do we say "sorry, that's not an actual security bug."
  • Again, with those in mind, what processes should we be employing that we are currently not, or what are we being too cavalier about.

As a concrete example of what I think might be encompassed by "threat model":

For OpenImageIO, let's say we don't want to be the weak link if a back-end part of a web service uses OIIO for image processing.

One kind of input is the image files themselves -- could a carefully crafted malicious image file put the software in a state that would compromise the web service?

Another kind of input is the commands you give -- like using oiiotool to do operations x, y, and z. You could imagine a set of commands that, even on a valid image, could cause a crash or something else bad. Is that within the threat model? Can/should we say "the image may be untrusted, but the commands themselves are assumed to be trusted?"

Yet another kind of input is that the library can dynamically load plugins to handle image formats that aren't among the built-in set. That could run arbitrary code. Is that bad? Is that within the threat model, or should we say "you may get untrusted image files from the web, but we assume there are no malicious actors on your SYSTEM that could place a malicious .so/.dll where it will be picked up and executed by OIIO?" Or is that indeed part of the threat, and by default we should not support dynamic plugins unless the library was built with a non-default setting that allows it?

@jmertic
Contributor Author

jmertic commented Mar 11, 2024

Hey, it was a good discussion! This is just a reminder that the proposal here focuses primarily on the security threat analysis rather than a full audit. The goal is to answer many of the questions about what each project should be paying attention to regarding security. I say this because the one Cary referenced is more of a full audit; the earlier one I referenced is more in line with the work and deliverable of a threat model analysis.

Thanks all for the input - it seems like there could be potential value here.

@cary-ilm
Member

The Jackson-dataformats project looks even better aligned, then: a collection of data serialization libraries implemented in Java. In addition to fuzzing and analysis that found a bunch of out-of-bounds and out-of-memory errors, the report has a section on the threat model, which is closer to what we're interested in.

The results are rather obvious and predictable:

  • Attackers could abuse methods with invalid or malicious data on the jackson-datatype* library and affect process execution or steal information from the applications or the executing environment
  • Users that are using the applications which have adopted the library could pass in some invalid data accidentally or be affected by malicious crashing or attack redirection from attackers
  • Users that can affect, manage or control the classpath and environment of the applications that adopt the library.
  • Other users that can access resources or other process execution of the running environment of applications that adopt the library.

The two "attacker objectives" are "injection and remote code execution" and "denial-of-service". The "attack vectors" are input that's too long, or maliciously constructed input. The report jumps straight from general statements about the vulnerabilities a hacker would exploit into the code analysis and the issues they uncovered. I didn't see much in the way of strategy for hardening. The projects use OSS-Fuzz (like OpenEXR), and it wasn't clear whether the fuzzers were created during the audit or existed beforehand.

My guess is a report would say something similar about our projects. There's a risk we're paying someone to tell us what we already know, but there's benefit in confirming that there aren't entire classes of issues we're ignorant of. In the Jackson report, I was hoping to see guidance about general strategy, or recommendations for systems to deploy, something beyond the list of bugs to fix.

An example of something they might say about OpenEXR: we measure code coverage of our test suite, but I don't think we know much about code coverage of our fuzz testing, and it's possible that fuzz testing doesn't go very deep.

The Jackson report collectively reviews 5 separate projects, in which I counted a total of 170,000 lines of Java. That raises the question: could we pool several related projects into a single audit? OpenEXR, OpenImageIO, and OpenColorIO are related projects that read, write, and process images and related data.

@jmertic
Contributor Author

jmertic commented Mar 12, 2024

Hey, @cary-ilm. Starting from the end, yes, I could very well see us pooling similar projects together into one analysis.

I think your guess on the report's value is reasonably accurate. On the one hand, it's always validating when a security expert looks at the project's code and agrees with the project's threat model and attack vectors. It could also give the project additional insights and strategies about areas to focus on (and maybe areas to focus on less). Even more importantly, it shows that the project takes security seriously by leveraging a third-party firm to ensure it is taking the right precautions and steps.

6 participants