I am interested in using vm2 to sandbox a large/complex application. It is a generic Node web service where I want to isolate user requests from each other and from the "host," and where untrusted user code is evaluated; I imagine this is a common use case. Unfortunately the surface area is fairly large, so it will take some effort to define the right set of mocks, and that set will likely be large.
One strategy is to keep running the application on representative workloads and manually add a mock for each standard library function/method that fails. Ideally each mock allows only the specific invocations expected, and likewise mocks every value it returns, so that methods and property getters cannot expose unexpected surface area to the untrusted code.
The hard part of this approach is that whenever a mock returns an object, that object's methods and property getters must be mocked as well. This is easy to miss, since some of these object trees can be large. (It's also tedious.)
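To make that concrete, here is a rough sketch of the kind of strict mock I have in mind. The `readConfig` function, the single argument it accepts, and its return value are all invented for illustration; they are not part of vm2:

```js
const { VM } = require('vm2');

// Hypothetical host-side mock: accept only the one invocation we expect.
function readConfig(name) {
  if (name !== 'app.json') {
    throw new Error(`readConfig: unexpected invocation with ${String(name)}`);
  }
  // The returned object is itself fully mocked and frozen, so the sandboxed
  // code only sees the properties we deliberately expose.
  return Object.freeze({
    version: '1.0.0',
    toString() { return '[config app.json]'; },
  });
}

const vm = new VM({
  timeout: 100,
  sandbox: { readConfig },
});

console.log(vm.run(`readConfig('app.json').version`)); // "1.0.0"
// vm.run(`readConfig('secrets.json')`);                // throws: unexpected invocation
```

Multiply that by every standard library function the application touches, plus the object trees they return, and keeping the set complete by hand is the part I'm worried about.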
Are there any generally recommended approaches to creating the right sandbox environment, or am I already thinking about this the right way?
Is there perhaps a tool to "strace" all such (remaining) entry points that still need to be mocked?
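For context, this is the kind of thing I'm imagining; it's a homemade sketch rather than an existing tool. The `trace` helper and its log format are invented here, and I haven't verified how such a Proxy interacts with vm2's own proxying of host objects:

```js
const { VM } = require('vm2');

// Wrap a host object in a Proxy that logs every property access and call,
// recursing into returned objects so nothing slips through unnoticed.
function trace(target, path = 'sandbox') {
  const wrap = (value, p) =>
    value !== null && (typeof value === 'object' || typeof value === 'function')
      ? trace(value, p)
      : value;

  return new Proxy(target, {
    get(obj, prop, receiver) {
      console.log(`[trace] get  ${path}.${String(prop)}`);
      return wrap(Reflect.get(obj, prop, receiver), `${path}.${String(prop)}`);
    },
    apply(fn, thisArg, args) {
      console.log(`[trace] call ${path}(${args.join(', ')})`);
      return wrap(Reflect.apply(fn, thisArg, args), `${path}()`);
    },
  });
}

// Hypothetical host API standing in for whatever the mocks expose.
const hostApi = {
  readConfig(name) {
    return { name, version: '1.0.0' };
  },
};

const vm = new VM({ sandbox: { api: trace(hostApi, 'api') } });
vm.run(`api.readConfig('app.json').version`);
// Logs something like:
//   [trace] get  api.readConfig
//   [trace] call api.readConfig(app.json)
//   [trace] get  api.readConfig().version
```

Running representative workloads under something like that, and diffing the logged paths against the mocks that already exist, would be my fallback if no such tool exists, but I'd rather not reinvent it.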