Ability to disable the directive #25
In the same way that you do not want to write your users' passwords to your HTTP logs, you also would not want to write stack traces from functions specifically marked as "sensitive". Tools like Sentry can still be used in the presence of these directives. In fact, they will most often be improved by allowing the user to better separate their code from library/shim code. |
What happens when the error originates within a function that itself, as well as most of its internals, is marked as sensitive? How will it show up inside the stack trace? Consider someone incorrectly implementing a polyfill (with this trivial example):

```js
function internalFunction() {
  "sensitive"
  throw new Error('whoops')
}

function somePolyfill() {
  "sensitive"
  if ('some' in window) return;
  window.some = function some() {
    return internalFunction(...Array.from(arguments))
  }
}
```

Without being able to disable it, it'd be very hard, if not impossible in some scenarios, to find the root source of the problem. Also, for tools like Sentry that can rely on a specific order of frames (we don't know for sure, as we can use function names, but we used to do this in old versions, so I'm sure there's code in the wild that still does that), it can create incorrect results if there's no way to determine whether some of the frames are hidden. Function based: (screenshot) Frames order based: (screenshot) |
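To make the concern concrete, here is a runnable variant of the polyfill example (with the `window` check dropped so it runs anywhere). Since the directive is currently inert in shipping engines, both frames still appear in the trace today; these are exactly the frames the proposal would redact:

```javascript
// Runnable variant of the example above. Today the "sensitive"
// directive has no effect, so every frame is visible; under the
// proposal, the two marked frames would be hidden from JS code.
function internalFunction() {
  "sensitive";
  throw new Error('whoops');
}

function somePolyfill() {
  "sensitive";
  return internalFunction();
}

try {
  somePolyfill();
} catch (e) {
  // In current engines the full trace names both functions:
  console.log(e.stack.includes('internalFunction')); // true
  console.log(e.stack.includes('somePolyfill'));     // true
}
```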
The option to remove the directives exists for authors who serve the code — even if present in third party libraries, they can be stripped if desired. But it’s rather important that if your code doesn’t control the sensitive code in question, you can’t choose to disable its security features. There will be bugs in the kinds of very rigorous polyfills that would be interested in this behavior, just as there are bugs in the browser internals that those polyfills are emulating. One doesn’t get a JS-exposed trace for C layer errors either, though, and whether e.g. a platform-defined built-in module has had its functionality implemented in JS or C or Rust should not be observable. |
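As a sketch of what "stripping" could mean for an author who controls the code they serve: the build step below removes `"sensitive"` lines from source text. This is a deliberately naive, line-based filter for illustration only; a real pipeline would parse the code (e.g. with a Babel transform) and remove only genuine directive-prologue entries.

```javascript
// Naive build-step sketch: drop lines that consist solely of a
// "sensitive" directive. A robust tool would operate on the AST
// instead of raw lines.
function stripSensitiveDirective(source) {
  return source
    .split('\n')
    .filter(line => !/^\s*(['"])sensitive\1\s*;?\s*$/.test(line))
    .join('\n');
}

const input = [
  'function f() {',
  '  "sensitive";',
  '  return 1;',
  '}',
].join('\n');

console.log(stripSensitiveDirective(input).includes('sensitive')); // false
```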
True; however, you, as a developer, can control one and cannot control the other. |
You can’t, in that example (platform-defined built-in modules). They’re part of the browser, they just might be implemented in JS even if not falling back on a ‘polyfill’ (the term gets blurry here, since the polyfill can be an entirely legit implementation itself). There are other scenarios where this can occur too, like when your code runs in a sandbox within another application. In any case where you can control it, though, you can also strip it. Possibly notable that minifiers today would strip it, since it would appear to be an unused expression statement — and perhaps this will continue to make sense as the default handling in such tools? |
That shouldn't be the case. I just tested on UglifyJS and it properly preserves the whole directive prologue. It was never safe to strip directive prologue entries (post-ES5). |
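This follows from directive semantics: a prologue entry can change program behavior (most visibly `"use strict"`), so a minifier cannot safely drop prologue strings it does not recognize. A small self-contained demonstration of a directive altering behavior:

```javascript
// Stripping a directive-prologue entry can change behavior, which
// is why minifiers preserve the prologue. For example, "use strict"
// makes `this` undefined in a plain function call:
function strictFn() {
  "use strict";
  return this; // undefined when called without a receiver
}

console.log(strictFn() === undefined); // true
```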
I think it depends on the type of aggregator (e.g. storing on permanent storage vs storing in transient memory for near real-time inspection). In general it seems to me that the consumer of those stack traces is in a better position, at least in some cases (e.g. Node.js applications that have some level of trust in the code they run), to determine what information from those sensitive functions they want to have access to.
This seems like a consequence of having those directives impact two different use cases. I opened an issue to discuss this topic specifically at #38. Currently those two use cases seem to me to be unnecessarily coupled together.
Filtering out parts of stack traces seems to already be supported by Sentry. I'm not sure they support differentiating between shim/library code and application code, but loggers or loggers' consumers seem to be a more natural place to implement this type of filtering. It seems it would be difficult for library/shim code authors to determine in advance what insights into stack traces would be needed by operators of applications/services for their specific use cases. |
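The kind of consumer-side filtering described above can be sketched as follows. The shape of the `frames` objects here is invented for illustration (real error trackers each have their own frame representation); the point is that the consumer, not the library author, decides what to drop:

```javascript
// Sketch of path-based frame filtering, the kind of logic a logger
// or its consumer might apply. The frame shape is hypothetical.
function dropVendorFrames(frames) {
  return frames.filter(f => !/node_modules|polyfill/.test(f.path));
}

const frames = [
  { fn: 'handler',  path: '/app/src/index.js' },
  { fn: 'shimCall', path: '/app/node_modules/shim/index.js' },
  { fn: 'helper',   path: '/app/src/util.js' },
];

console.log(dropVendorFrames(frames).map(f => f.fn)); // [ 'handler', 'helper' ]
```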
@misterdjules It is true that there is a design trade-off here between the ability to write secure code and the ability to introspect arbitrary aspects of the running program from unprivileged code. Of course I see value in the latter, but it is important that we not lose the ability to write secure code on a platform like the web. We made this choice when designing and including private fields and closures. Unprivileged runtime introspection tools similarly have no access to that private state. |
@michaelficarra what if it provides a private API that can be accessed only by engines? This way Node.js can expose that as a parameter allowing it to be disabled, but you, as a user, wouldn't be able to disable it in a browser... Would that be acceptable? Just to be clear, we are not trying to disable the function implementation hiding from the
Engines don't need a private API; they can already access whatever they like - private fields, source text, stack frames all included. For example, try |
Spoke to @ghermeto in person about this. Just needed to re-hash "How does this affect devtools or other methods of inspecting a function's implementation details?" with him. To be extremely clear, this proposal does not restrict programs, processes, or APIs that remain external to the running program in any way. This means that engine monitoring hooks, the console, the web inspector API, etc. will all be able to observe hidden stack frames. The only use cases that are affected by this proposal are observations made from unprivileged JavaScript code running alongside the code marked as "sensitive". |
We do, but right now it's only based on the stack trace frame's path, nothing else. |
@michaelficarra thanks! 👍 I got feedback that it would be important to have very clear definitions and examples of what constitutes unprivileged JavaScript code and what would be considered privileged code. cc/ @mmarchini
|
@nektro from dev tools, or as a user, you would always be able to view any code you wanted - this is just about the ability to view it from JavaScript. |
I randomly stumbled into this proposal and do understand both ends of this argument. It feels like there's a debate here along one axis for a problem space that has more dimensions than that.

The thing that stands out to me is that there is a role for code that is neither "actively debugging in dev tools" nor "untrusted and unprivileged", and that is observability tooling and libraries (Sentry being one example). This is generally software running in the runtime environment on behalf of privileged users (from the perspective of an application / website) to gain information about things that happened in the past. That "in the past" is important: something breaks, you get a notification (or a user reports a bug), now you have to try and figure out why it broke. If library code can literally root-kit its way out of even being seen as part of the stack trace, that creates an extremely unpleasant incident response experience.

There may be a compromise that helps to preserve after-the-fact observability that avoids the (frankly quite frustrating) suggestion of "just transpile all your dependencies and rehost them yourself": if there is a mechanism by which the author/owner of some application can grant privilege to a specific resource, that inverts the level of effort required to gain "full and complete" observability of stack traces. For example, if I (as the owner/operator of some website) can say "the script at <cdn url>/sentry.js is allowed to observe the full and complete stack trace", then I can regain the ability to see all relevant information without providing arbitrary (and potentially malicious or untrusted) code the ability to bypass the security properties of this proposal. I'm not certain of what the best mechanism to accomplish this would be. CSP might be a good fit?

Edit: There are a couple other things worth mentioning about stack trace redaction that I didn't see explicitly.
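Purely as a hypothetical illustration of the grant mechanism sketched above (no such CSP directive exists, and the directive name and URL here are invented), it might look something like:

```
# Hypothetical header, not part of any specification:
Content-Security-Policy: trusted-stack-observers https://cdn.example/sentry.js
```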
|
Can we give application owners the ability to disable the directives from any library that uses them?
If I own the application, I can always lint for libraries that have these directives and make sure they are never included, but it would be nice if I could use these libraries and make the conscious decision to disable the directive.
This would allow people to keep using tools like Sentry and other error tracking tools.