
Ability to disable the directive #25

Open
ghermeto opened this issue Jul 24, 2019 · 17 comments

@ghermeto
Member

Can we give application owners the ability to disable the directives from any library that uses them?

If I own the application, I can always lint for libraries that have these directives and make sure they are never included, but it would be nice if I could keep using these libraries and make the conscious decision to disable the directives.

This would allow people to keep using tools like Sentry and other error tracking tools.
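For reference, the lint approach mentioned above is easy to sketch. Below is a minimal, hypothetical ESLint rule; the "sensitive" directive name is taken from the examples later in this thread, and the rule itself is illustrative, not an existing plugin:

// Hypothetical ESLint rule: flag implementation-hiding directive prologues.
module.exports = {
  meta: { type: "problem" },
  create(context) {
    return {
      ExpressionStatement(node) {
        // ESTree marks directive prologue entries with a `directive` field.
        if (node.directive === "sensitive") {
          context.report({ node, message: "Implementation-hiding directive found." });
        }
      },
    };
  },
};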

@michaelficarra
Member

In the same way that you do not want to write your users' passwords to your HTTP logs, you also would not want to write stack traces from functions specifically marked as "sensitive" to your error aggregator. The reason they're marked as "sensitive" is that they expose information about the values they're working with through their calling behaviour.

Changing the Function.prototype.toString result of a function (from including [native code] to not including it) could also change the behaviour of your application given the popularity of inspecting its output today. A switch for that would very likely be unpopular among vendors.
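The kind of inspection referred to above is common in feature-detection code. A simplified version of the well-known pattern (similar to lodash's native-function checks; the function name here is illustrative):

// Simplified "is this function native?" check of the sort found in the wild.
// If hiding a function's source made its toString report "[native code]",
// user functions would start passing checks like this one.
function looksNative(fn) {
  return typeof fn === "function" &&
    /\[native code\]/.test(Function.prototype.toString.call(fn));
}

looksNative(Array.prototype.map); // true
looksNative(() => {});            // false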

Tools like Sentry can still be used in the presence of these directives. In fact, they will most often be improved by allowing the user to better separate their code from library/shim code.

@kamilogorek
Member

kamilogorek commented Nov 7, 2019

What happens when the error originates within a function that, like most of its internals, is marked as sensitive? How will it show up inside the stack trace?

Consider someone incorrectly implementing a polyfill (a trivial example):

function internalFunction() {
  "sensitive";
  throw new Error('whoops');
}

function somePolyfill() {
  "sensitive";
  if ('some' in window) return;

  window.some = function some() {
    // every frame between this call and the top-level caller is hidden
    return internalFunction(...Array.from(arguments));
  };
}

Without being able to disable it, it'd be very hard, if not impossible in some scenarios, to find the root source of the problem.
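To illustrate, here is what the example above might report, assuming the engine simply omits frames from "sensitive" functions; the exact rendering is engine-defined, so the right-hand column is an assumption, not specified output:

somePolyfill(); // installs window.some (from the example above)

try {
  window.some();
} catch (e) {
  console.log(e.stack);
  // Without the directives:    With them (hypothetical rendering):
  // Error: whoops              Error: whoops
  //   at internalFunction        at <anonymous>
  //   at some
  //   at <anonymous>
}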

Also, tools like Sentry can rely on a specific order of frames (we don't now, as we can use function names, but we used to in old versions, so I'm sure there's code in the wild that still does), and that can produce incorrect results if there's no way to determine whether some of the frames are hidden.

Function based:
https://github.com/getsentry/sentry-javascript/blob/22a2a4e42f59fa83052d71ad53c1786f5596d526/packages/browser/src/parsers.ts#L80-L112

Frames order based:
https://github.com/getsentry/sentry-javascript/blob/22a2a4e42f59fa83052d71ad53c1786f5596d526/packages/raven-js/src/raven.js#L1668-L1688
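The fragility of the order-based approach is easy to see in the abstract. In the sketch below, parseStack is a hypothetical stand-in for the linked raven-js parsing logic, and the fixed index mirrors its positional assumptions:

// `parseStack` is hypothetical; it stands in for the linked raven-js code.
function callerOf(error) {
  const frames = parseStack(error.stack);
  return frames[2]; // assumes a fixed frame depth
  // If the engine silently hid a frame above index 2, this now points
  // at the wrong caller, with no way to detect the shift.
}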

@bathos

bathos commented Nov 7, 2019

The option to remove the directives exists for authors who serve the code: even if present in third-party libraries, they can be stripped if desired. But it's rather important that if your code doesn't control the sensitive code in question, you can't choose to disable its security features.

There will be bugs in the kinds of very rigorous polyfills that would be interested in this behavior, just as there are bugs in browser internals that those polyfills are emulating. One doesn’t get a JS-exposed trace for C layer errors either, though, and whether e.g. a platform-defined built-in module has had its functionality implemented in JS or C or Rust should not be observable.

@kamilogorek
Member

One doesn’t get a JS-exposed trace for C layer errors either, though, and whether e.g. a platform-defined built-in module has had its functionality implemented in JS or C or Rust should not be observable.

True; however, you, as a developer, can control one and not the other.

@bathos

bathos commented Nov 7, 2019

You can’t, in that example (platform-defined built-in modules). They’re part of the browser, they just might be implemented in JS even if not falling back on a ‘polyfill’ (the term gets blurry here, since the polyfill can be an entirely legit implementation itself).

There are other scenarios where this can occur too, like when your code runs in a sandbox within another application. In any case where you can control it, though, you can also strip it.

Possibly notable that minifiers today would strip it, since it would appear to be an unused expression statement — and perhaps this will continue to make sense as the default handling in such tools?

@michaelficarra
Member

@bathos

Possibly notable that minifiers today would strip it, since it would appear to be an unused expression statement

That shouldn't be the case. I just tested on UglifyJS and it properly preserves the whole directive prologue. It was never safe to strip directive prologue entries (post-ES5).
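Concretely, the post-ES5 rule exists because directive prologue entries can change semantics, so a minifier cannot treat them as dead expression statements. The classic case is "use strict"; the same reasoning is why unknown directives must be preserved too, since the language reserves them for future meaning:

function assignToFrozen(obj) {
  "use strict"; // stripping this line changes behavior, not just size
  Object.freeze(obj);
  obj.x = 1; // TypeError in strict mode; a silent no-op without it
}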

@bathos

bathos commented Nov 8, 2019

Yeah, it shouldn’t be. I’m surprised Uglify keeps unknown ones, though, since Uglify does a number of unsafe transformations by default.

Terser does strip them:

[screenshot: Terser output with the directive stripped]

@misterdjules

@michaelficarra

In the same way that you do not want to write your users' passwords to your HTTP logs, you also would not want to write stack traces from functions specifically marked as "sensitive" to your error aggregator.

I think it depends on the type of aggregator (e.g. storing on permanent storage vs. storing in transient memory for near real-time inspection). In general, it seems to me that the consumer of those stack traces is in a better position, at least in some cases (e.g. Node.js applications that have some level of trust in the code they run), to determine what information from those sensitive functions they want to have access to.

Changing the Function.prototype.toString result of a function (from including [native code] to not including it) could also change the behaviour of your application given the popularity of inspecting its output today. A switch for that would very likely be unpopular among vendors.

This seems like a consequence of having those directives impact two different use cases. I opened an issue to discuss this topic specifically at #38. Currently those two use cases seem to me to be unnecessarily coupled together.

Tools like Sentry can still be used in the presence of these directives. In fact, they will most often be improved by allowing the user to better separate their code from library/shim code.

Filtering out parts of stack traces already seems to be supported by Sentry. I'm not sure they support differentiating between shim/library code and application code, but loggers or loggers' consumers seem to be a more natural place to implement this type of filtering. It would be difficult for library/shim authors to determine in advance what insights into stack traces operators of applications/services would need for their specific use cases.
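For what it's worth, consumer-side filtering of the kind described above can be sketched with Sentry's documented beforeSend hook. The event shape follows Sentry's SDK docs; the DSN is the docs placeholder, and the path pattern is an assumed example:

import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // docs placeholder
  beforeSend(event) {
    for (const value of event.exception?.values ?? []) {
      if (value.stacktrace) {
        // Drop frames from known shim paths; the pattern is an assumption.
        value.stacktrace.frames = (value.stacktrace.frames ?? []).filter(
          (frame) => !/node_modules[\\/](core-js|regenerator-runtime)/.test(frame.filename ?? "")
        );
      }
    }
    return event;
  },
});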

@michaelficarra
Member

@misterdjules It is true that there is a design trade-off here between the ability to write secure code and the ability to introspect arbitrary aspects of the running program from unprivileged code. Of course I see value in the latter, but it is important that we not lose the ability to write secure code on a platform like the web. We made this choice when designing and including private fields and closures. Unprivileged runtime introspection tools similarly have no access to that private state.

@ghermeto
Member Author

ghermeto commented Dec 3, 2019

@michaelficarra what if it provided a private API that can be accessed only by engines? This way Node.js could expose that as a parameter allowing it to be disabled, but you, as a user, wouldn't be able to disable it in a browser...

Would that be acceptable?

Just to be clear, we are not trying to disable the function implementation hiding from Function.prototype.toString, only the hiding of the stack.

@ljharb
Member

ljharb commented Dec 3, 2019

Engines don't need a private API; they can already access whatever they like: private fields, source text, stack frames all included. For example, try new Promise(r => setTimeout(r, 100)) in the browser repl, which will show you the internal state of the promise, something that's not synchronously available to the runtime.

@michaelficarra
Member

Spoke to @ghermeto in person about this. Just needed to re-hash "How does this affect devtools or other methods of inspecting a function's implementation details?" with him. To be extremely clear, this proposal does not restrict programs, processes, or APIs that remain external to the running program in any way. This means that engine monitoring hooks, the console, the web inspector API, etc. will all be able to observe hidden stack frames. The only use cases that are affected by this proposal are observations made from unprivileged JavaScript code running alongside the code marked as "sensitive".
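A minimal illustration of that distinction, assuming the "sensitive" directive hides frames as proposed (the rendering is an assumption):

function helper() {
  "sensitive";
  // Unprivileged observation: code in the page reading this stack string
  // would not see helper's frame (assumed behavior per the proposal).
  return new Error().stack;
}
// Privileged observation: pausing inside helper() in the DevTools debugger,
// or using engine monitoring hooks, still shows the real frames.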

@kamilogorek
Member

I'm not sure they support differentiating between shims/library code and application code

We do, but right now it's based only on the stack trace frame's path, nothing else.

@ghermeto
Member Author

ghermeto commented Dec 3, 2019

@michaelficarra thanks! 👍

I got feedback that it would be important to have very clear definitions and examples of what constitutes unprivileged JavaScript code and what would be considered privileged code.

cc/ @mmarchini

@nektro

nektro commented Apr 26, 2024

My first impression is that I very much do not support this proposal; I see it mainly as a new vector for malware to hide itself and a debugging nightmare. There might be environments where this is beneficial, but I hope that, should this become generally available, browsers and other engines on the web make it very easy to disable outright. The open observability of code on the web is a hallmark feature of the platform, imo. The problem statement described in the readme does not seem like something that is the language's to fix.

@ljharb
Member

ljharb commented Apr 27, 2024

@nektro from dev tools, or as a user, you would always be able to view any code you wanted; this is just about the ability to view it from JavaScript.

@myndzi

myndzi commented Nov 15, 2024

I randomly stumbled into this proposal and do understand both ends of this argument. It feels like there's a debate here along one axis for a problem space that has more dimensions than that.

The thing that stands out to me is that there is a role for code that is neither "actively debugging in dev tools" nor "untrusted and unprivileged", and that is observability tooling and libraries (Sentry being one example). This is generally software running in the runtime environment on behalf of privileged users (from the perspective of an application / website) to gain information about things that happened in the past. That "in the past" is important: something breaks, you get a notification (or a user reports a bug), now you have to try and figure out why it broke. If library code can literally root-kit its way out of even being seen as part of the stack trace, that creates an extremely unpleasant incident response experience.

There may be a compromise that preserves after-the-fact observability while avoiding the (frankly quite frustrating) suggestion of "just transpile all your dependencies and rehost them yourself":

If there is a mechanism by which the author/owner of some application can grant privilege to a specific resource, that inverts the level of effort required to gain "full and complete" observability of stack traces. For example, if I (as the owner/operator of some website) can say "the script at <cdn url>/sentry.js is allowed to observe the full and complete stack trace", then I can regain the ability to see all relevant information without providing arbitrary (and potentially malicious or untrusted) code the ability to bypass the security properties of this proposal.

I'm not certain of what the best mechanism to accomplish this would be. CSP might be a good fit?
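To make the idea concrete, here is a purely hypothetical header syntax. No such CSP directive exists; the directive name and semantics below are invented for illustration:

Content-Security-Policy: trusted-stack-introspection https://cdn.example.com/sentry.js

Scripts matching the allowlist would observe unredacted error.stack contents; all other code would see the redacted form.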

Edit:

There are a couple of other things worth mentioning about stack trace redaction that I didn't see addressed explicitly.

  1. This proposal essentially moves from a world where we can "trust" the contents of a stack trace (as presented) to a world where there is always some level of doubt (or, possibly, complete unawareness) of this behavior.

    Even if the specific cases where it turns out to matter wind up being few, the cases where it might occur is "always", so it adds a cognitive overhead to debugging where there's no actual way to be certain the problem isn't in a location that was hidden away from what you're looking at. I believe this is not an insignificant impact: bugs are often non-obvious, and trust in the accuracy of the information you're using to debug is vital.

    It's possible that this situation could be improved by marking (in some way) that a stack trace has been modified -- or at least, that it involved code for which stack frames are redacted (see the sketch after this list). I don't think this poses a problem for the security concern of "different behavior observable from the stack trace in a secure library depending on some inputs", since the marker would always be present in all code paths involving that library.

  2. Besides observability, test frameworks are another example of runtime code that has a good reason to observe the full and complete stack trace contents, whether that is for generating reports (which may not be suited to relying directly on "console.log magically provides different contents than error.stack") or formatting stack traces for easier consumption.
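As referenced in point 1 above, a hypothetical sketch of the marker idea. No engine implements this; the bracketed line is invented for illustration:

try {
  riskyOperation(); // calls through a "sensitive" library internally
} catch (e) {
  console.log(e.stack);
  // Error: boom
  //   at riskyOperation (app.js:10)
  //   [frames hidden]              <- hypothetical redaction marker
  //   at main (app.js:42)
}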
