No bounds checking on fill/trace buf access #1
Comments
Sorry, I haven't watched this repo for a long time. Is your image very large? Are you sure the image is a skeleton image? I think 131072 is enough to contain an edge. Maybe I should malloc 1/10 the length of the 3D image; I think 1/10 the size is enough for any edge. Numba cannot resize an array dynamically; we can only np.zeros(...) a longer one and copy the data in.
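For context, a minimal sketch (not code from this repository) of the grow-by-copy pattern described above, assuming an int64 buffer; `append_with_growth` is a hypothetical helper name:

```python
import numpy as np
from numba import njit

@njit
def append_with_growth(buf, length, value):
    # numba cannot resize an array in place, so when the buffer is full
    # we np.zeros(...) a larger one and copy the existing data across.
    if length >= buf.shape[0]:
        bigger = np.zeros(buf.shape[0] * 2, dtype=np.int64)
        bigger[:length] = buf[:length]
        buf = bigger
    buf[length] = value
    return buf, length + 1

# Example: start small and let the buffer grow as values are appended.
buf = np.zeros(4, dtype=np.int64)
n = 0
for v in range(10):
    buf, n = append_with_growth(buf, n, v)
```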
Yeah, the images we're looking at are very big, up to 1024x1024x1024, with lots of connectivity in the skeleton. I don't have the maximum edge lengths we acquired to hand, but I imagine some factor of the total image size would work well as an initialiser for the edge buffer. The main thing I needed to fix locally, though, was the … Thanks!
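A sketch of that sizing heuristic, assuming the buffer holds int64 entries; the fraction and the 131072-entry floor (the size mentioned above) are illustrative, not values from the library:

```python
import numpy as np

def edge_buffer_size(img, fraction=0.1):
    # Size the edge buffer as a fraction of the total voxel count,
    # never smaller than the 131072 entries discussed in this thread.
    return max(131072, int(img.size * fraction))
```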
Can you export your data as a PNG sequence and email it to me? I think 1024^3 is not large; 131072 = 1024 * 128. Maybe there is something wrong somewhere else!

131072 * 8 bytes (int64) = 1 MB
I'm having some trouble processing big 3D volumes using the library. The cause seems to be a lack of bounds checking on `buf` access in `fill()` and `trace()`, `buf` being statically sized. Due to the numba.jit usage it manifests as a segfault from the out-of-bounds write.

Here's a PR that adds that bounds checking and allows the caller to specify the fixed buffer size passed to `build_sknw()`: #2
I guess it could also be dynamically resized, but I'm not familiar with numba.jit-flavoured Python, so I'll leave that.
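To make the failure mode concrete, here is a minimal sketch, not the library's actual `fill()`/`trace()`, of the kind of guard being proposed: check the write index against the buffer length before each store, since an out-of-bounds write inside a numba.jit function shows up as a segfault rather than an IndexError.

```python
import numpy as np
from numba import njit

@njit
def trace_run(line, start, buf):
    # Record the indices of consecutive nonzero entries of `line` into
    # `buf`. Returns the number written, or -1 if `buf` was too small.
    n = 0
    idx = start
    while idx < line.shape[0] and line[idx] != 0:
        if n >= buf.shape[0]:
            return -1  # buffer full: stop instead of writing out of bounds
        buf[n] = idx
        n += 1
        idx += 1
    return n

# The caller picks the fixed buffer size, analogous to the buffer-size
# argument the PR adds to build_sknw() (the exact name there may differ).
line = np.array([0, 1, 1, 1, 0], dtype=np.int64)
buf = np.zeros(131072, dtype=np.int64)
count = trace_run(line, 1, buf)
```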