The Brilirs memory extension implementation is very slow #190
Comments
Indeed, using a flat vector of values could really help. But I also agree with your initial premise: at the moment, we don't have many benchmarks that do anything intensive with memory, mostly because it's hard to write these by hand and we haven't compiled any realistic applications from higher-level languages. Doing that would be a whole project unto itself, but it would also really help stress the GC task for 6120 (https://github.com/sampsyo/cs6120/discussions/297)…
I am working on (and mostly have) an interpreter whose memory is natively allocated using malloc/free, and it is showing a 97-98% speedup over brili and about an 84% improvement over brilirs!
Hey, I assume you're referencing sampsyo/cs6120#304? It's super cool that another "fast" interpreter is being implemented, and I'm pretty interested in its comparison against brilirs. I took a look at your code, and it makes sense that your interpreter will be faster (it seems to elide the checks that brilirs performs). One observation about this particular test: Rust (for safety) allocates a default value for each element in the array (unless you use https://doc.rust-lang.org/stable/std/mem/union.MaybeUninit.html, which maybe…).
I mean, this is down to the OS/implementation of malloc, right? I'm not zeroing out the memory, so I'd assume that yes, this optimization will happen.
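For reference, here is a minimal sketch of the two allocation strategies being contrasted above: eagerly default-initializing a `Vec` touches every element up front, while reserving capacity and writing into the uninitialized spare slots (via `MaybeUninit`) defers that work until the program actually stores a value. The `Value` struct here is a hypothetical stand-in, not the interpreter's actual type.

```rust
// Hypothetical stand-in for the interpreter's runtime value type.
#[derive(Clone, Copy, Debug, Default)]
struct Value(i64);

fn main() {
    let n = 1 << 20;

    // Eager initialization: every element is written with a default value up
    // front, so the whole allocation is touched before the program uses it.
    let eager: Vec<Value> = vec![Value::default(); n];

    // Lazy initialization: capacity is reserved but nothing is written.
    // `spare_capacity_mut` exposes the uninitialized tail as MaybeUninit
    // slots; an element may only be read after it has been written.
    let mut lazy: Vec<Value> = Vec::with_capacity(n);
    lazy.spare_capacity_mut()[0].write(Value(42));
    // Safety: the first element was just initialized above.
    unsafe { lazy.set_len(1) };

    println!("{} {} {:?}", eager.len(), lazy.capacity(), lazy[0]);
}
```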
As was shown in #189 (comment), the current memory extension implementation for `brilirs` is surprisingly slow.

Currently, there are no good benchmarks that are memory intensive enough, which makes it difficult to know what to optimize for. The closest program seems to be `test/mem/alloc_large.bril`, which only allocates and frees the same sized chunk of memory in a loop.

The current implementation of `Heap`, which supports the memory extension in `brilirs`, is ported from `brili` and could be greatly improved. A couple of possible pain points include:
- `Heap` is implemented as a map from the "base" of the pointer to a `Vec<Value>`. This is nice in that it grows and shrinks with the amount of memory in use at any given time. However, the use of maps like `HashMap<usize, Vec<Value>>` has historically been a performance hit in `brilirs` due to needing to hash the key and perform the lookup. An alternative is a `Vec<Vec<Value>>` or a single `Vec<Value>` with the "base" of the pointer just indexing into the start of its allocation in the array. In either case we will want the ability to reuse indexes as allocations are freed up. `Vec<Vec<Value>>` is probably slower than the alternative because it will require two array accesses to read/write a value; however, it's probably easier to implement. `Vec<Value>` is more along the lines of implementing something like malloc, which gives all of the performance benefits / fragmentation complexities involved. Either way, `Heap` still needs to enforce errors on buffer over/underflows, use after frees, use of uninitialized memory, and memory leaks. (See the first sketch after this list.)
- A new `Vec<Value>` is allocated on every call to alloc. Ideally, `brilirs` should be able to reuse previously freed allocations. This would especially target `alloc_large.bril` and potentially provide a significant improvement over `brili`. This could be done by implementing `Heap` as just one large `Vec<Value>`, or by using something like a slab allocator and delegating the reuse to an external library. (See the second sketch after this list.)
- Estimating what size of `Heap` to use during the course of interpretation. `brilirs` already employs some static analysis for type checking, and it would be interesting if it could use a data flow analysis to get a very rough estimate of whether it should start out with a small or large `Heap` size. Some programs don't use the memory extension or use it sparingly, which is the default case; on the other hand, we also want to optimize for programs which make intense use of memory.
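To make the first pain point concrete, here is a rough sketch of a heap backed by one flat `Vec<Value>`, where a pointer's "base" indexes a small allocation table and the required safety checks (bounds, use after free, uninitialized reads) are still enforced with plain index comparisons instead of hashing. The names (`Value`, `Allocation`, `FlatHeap`) are hypothetical and this is not brilirs' actual API; reuse of the space inside `data` is omitted for brevity, since doing that properly is the malloc-like part described above.

```rust
// Hypothetical stand-in for brilirs' runtime value type.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Value {
    Uninit,
    Int(i64),
}

// One entry per allocation; a pointer's "base" indexes this table.
struct Allocation {
    offset: usize, // start of the allocation within `data`
    len: usize,
    live: bool, // false once freed, so use-after-free can be reported
}

struct FlatHeap {
    data: Vec<Value>,        // one flat vector backing every allocation
    allocs: Vec<Allocation>, // base -> allocation metadata
    free_bases: Vec<usize>,  // freed bases, reused by later allocs
}

impl FlatHeap {
    fn new() -> Self {
        FlatHeap { data: Vec::new(), allocs: Vec::new(), free_bases: Vec::new() }
    }

    // Allocate `len` values and return the pointer base. For brevity this
    // only reuses table entries, not the space in `data`; real reuse of
    // `data` would need malloc-style free ranges or a slab allocator.
    fn alloc(&mut self, len: usize) -> usize {
        let offset = self.data.len();
        self.data.resize(offset + len, Value::Uninit);
        match self.free_bases.pop() {
            Some(base) => {
                self.allocs[base] = Allocation { offset, len, live: true };
                base
            }
            None => {
                self.allocs.push(Allocation { offset, len, live: true });
                self.allocs.len() - 1
            }
        }
    }

    fn free(&mut self, base: usize) -> Result<(), String> {
        let a = self.allocs.get_mut(base).ok_or("free of invalid pointer")?;
        if !a.live {
            return Err("double free".into());
        }
        a.live = false;
        self.free_bases.push(base);
        Ok(())
    }

    // Liveness- and bounds-checked translation of (base, idx) to a flat slot.
    fn slot(&self, base: usize, idx: usize) -> Result<usize, String> {
        let a = self.allocs.get(base).ok_or("access of invalid pointer")?;
        if !a.live {
            return Err("use after free".into());
        }
        if idx >= a.len {
            return Err("out-of-bounds access".into());
        }
        Ok(a.offset + idx)
    }

    fn read(&self, base: usize, idx: usize) -> Result<Value, String> {
        let slot = self.slot(base, idx)?;
        match self.data[slot] {
            Value::Uninit => Err("read of uninitialized memory".into()),
            v => Ok(v),
        }
    }

    fn write(&mut self, base: usize, idx: usize, v: Value) -> Result<(), String> {
        let slot = self.slot(base, idx)?;
        self.data[slot] = v;
        Ok(())
    }
}

fn main() {
    let mut heap = FlatHeap::new();
    let p = heap.alloc(4);
    heap.write(p, 0, Value::Int(7)).unwrap();
    assert_eq!(heap.read(p, 0), Ok(Value::Int(7)));
    heap.free(p).unwrap();
    assert!(heap.read(p, 0).is_err()); // use after free is detected
}
```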
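And for the second pain point, a sketch of the simpler reuse idea while keeping the existing map-based layout: freed `Vec<Value>` buffers go into a size-keyed pool and are handed back out by later `alloc` calls instead of hitting the global allocator each time, which is exactly the pattern `alloc_large.bril` exercises. Again, the types and names here are hypothetical, not brilirs' actual implementation.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for brilirs' runtime value type.
#[derive(Clone, Copy, Default)]
struct Value(i64);

struct PoolingHeap {
    live: HashMap<usize, Vec<Value>>,      // base -> live allocation
    pool: HashMap<usize, Vec<Vec<Value>>>, // size -> freed buffers of that size
    next_base: usize,
}

impl PoolingHeap {
    fn new() -> Self {
        PoolingHeap { live: HashMap::new(), pool: HashMap::new(), next_base: 0 }
    }

    fn alloc(&mut self, len: usize) -> usize {
        // Reuse a previously freed buffer of the same size if one exists;
        // only fall back to a fresh allocation when the pool is empty.
        let buf = match self.pool.get_mut(&len).and_then(|bufs| bufs.pop()) {
            Some(buf) => buf,
            None => vec![Value::default(); len],
        };
        let base = self.next_base;
        self.next_base += 1;
        self.live.insert(base, buf);
        base
    }

    fn free(&mut self, base: usize) -> Result<(), String> {
        let buf = self.live.remove(&base).ok_or("free of invalid pointer")?;
        self.pool.entry(buf.len()).or_default().push(buf);
        Ok(())
    }
}

fn main() {
    let mut heap = PoolingHeap::new();
    let a = heap.alloc(1024);
    heap.free(a).unwrap();
    // This allocation reuses the buffer freed above instead of reallocating.
    let b = heap.alloc(1024);
    println!("second allocation has {} slots", heap.live[&b].len());
}
```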