P=NP #2

Open
theoreticalphysicsftw opened this issue Jan 20, 2021 · 1 comment

Comments


theoreticalphysicsftw commented Jan 20, 2021

"It seems that the code executing on VM can be actually much faster than the native one thanks to technologies like HotSpot"

Execute a VM inside a VM, then repeat arbitrarily many times => unlimited speedup! Checkmate, atheists!

It's either that or you're just writing very inefficient native code... I wonder which one is more likely...

In your case, for example, you're calling malloc()/free() just to allocate 24 bytes of data. Internally those general-purpose allocators are very complex library functions in the particular libc implementation. They maintain data structures to track allocations of various sizes, and when they need more memory than the fixed pools they already have, they call mmap()/sbrk() (Linux) or VirtualAlloc (Windows) to ask the OS to map more physical memory into their virtual address space (which is not trivial in terms of cost either).

With all that said, none of the libc implementations I've seen implements those functions in a way that is fast for very small and frequent allocations, not only because that pattern is very uncommon in good C programs, but also because you could do a much better job manually by building your own allocator, knowing the exact small-size limits and allocation frequency.

On the other hand, due to the nature of Java, such small allocations are more common (sometimes unavoidable), so I suspect the JVM folks have designed an allocation strategy that does a decent job on very small sizes. Internally they probably allocate memory in decently sized chunks and then hand out small pieces of them to your nodes (your nodes also probably end up much closer to each other in memory, improving cache locality a lot).
Writing a good custom allocator in C could speed up your program by an order of magnitude. Even the most basic pool allocator implementation will probably easily destroy your Java code...
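To make the suggestion concrete, here is a minimal sketch of that "most basic pool allocator" for fixed-size nodes. The `Node` layout (24 bytes on a typical 64-bit platform) and all names are illustrative, not taken from any actual codebase: one upfront block of slots, a bump pointer, and a free list of recycled slots, so each allocation is a couple of pointer operations instead of a general-purpose malloc() call.

```c
#include <stddef.h>

/* Hypothetical 24-byte node (8-byte pointer + two 8-byte longs on
 * a typical 64-bit platform), standing in for the structure being
 * allocated in the code under discussion. */
typedef struct Node {
    struct Node *next;
    long key;
    long value;
} Node;

/* Fixed-capacity pool: all slots live in one contiguous array,
 * which also keeps the nodes close together for cache locality. */
#define POOL_CAP 1024

typedef struct Pool {
    Node slots[POOL_CAP];
    Node *free_list; /* singly linked list of slots returned by pool_free() */
    size_t used;     /* high-water mark: slots[0..used) have been handed out */
} Pool;

static void pool_init(Pool *p) {
    p->free_list = NULL;
    p->used = 0;
}

static Node *pool_alloc(Pool *p) {
    if (p->free_list) {            /* reuse a freed slot first */
        Node *n = p->free_list;
        p->free_list = n->next;
        return n;
    }
    if (p->used < POOL_CAP)        /* otherwise bump-allocate a fresh slot */
        return &p->slots[p->used++];
    return NULL;                   /* pool exhausted */
}

static void pool_free(Pool *p, Node *n) {
    /* The slot's memory is reused to hold the free-list link, the
     * classic trick; the node's old contents are clobbered. */
    n->next = p->free_list;
    p->free_list = n;
}
```

The capacity is fixed here for brevity; a production version would chain additional chunks when the pool runs out instead of returning NULL.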

I can't believe people are still coping with this "Java is n times faster than native" meme that's been going around for years now. Stop it, get some help.

morisil (Member) commented Jan 21, 2021

Q.E.D.
