Implement grid_sample for sampling from a spatial tensor at arbitrary locations, similar to PyTorch (#2674)
Comments
I'm taking a stab at implementing this over at https://github.com/timstr/burn/tree/feat/grid_sample; so far I seem to have a working implementation. I'm quite new to Burn and it's quite a large project, so any early feedback would be very welcome!
One thing I would like to support is different padding modes for when indices extend beyond the data, such as zero-padding, repeat-padding, and mirror-padding. In principle this choice would be well captured by a simple enum parameter.
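As a rough sketch of what that could look like, here is a plain-Rust enum; the name and variants are only illustrative (they mirror PyTorch's padding_mode options) and are not part of any existing Burn API:

```rust
/// Hypothetical padding-mode parameter for a grid_sample op.
/// Variant names mirror PyTorch's `padding_mode` argument; this type
/// does not exist in Burn today.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum GridSamplePaddingMode {
    /// Out-of-range locations read as zero.
    Zeros,
    /// Out-of-range locations are clamped to the nearest edge value (repeat).
    Border,
    /// Out-of-range locations are mirrored back into the tensor.
    Reflection,
}
```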
I have the same requirement here, and I tried using the slice method, but it forces my grid to be cloned repeatedly, which makes the loop very slow.
For convolutions, anything other than zero padding is just an explicit padding operation that is applied before the convolution. For padding, we only have zero (constant) padding at the moment.
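To illustrate that point with a toy example (plain Rust, not Burn's actual pad implementation): a reflect-style padding mode can be expressed as an explicit pad of the input, after which an ordinary zero-padded or valid convolution runs unchanged.

```rust
// 1D reflect padding written as an explicit pre-processing step.
// Mirrors without repeating the edge sample, as in PyTorch's ReflectionPad.
fn reflect_pad_1d(data: &[f32], pad: usize) -> Vec<f32> {
    let n = data.len();
    assert!(pad < n, "reflect padding requires pad < len");
    let mut out = Vec::with_capacity(n + 2 * pad);
    out.extend((1..=pad).rev().map(|i| data[i]));      // mirrored left border
    out.extend_from_slice(data);                        // original samples
    out.extend((1..=pad).map(|i| data[n - 1 - i]));     // mirrored right border
    out
}

fn main() {
    // Prints [3.0, 2.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0]
    println!("{:?}", reflect_pad_1d(&[1.0, 2.0, 3.0, 4.0], 2));
}
```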
Looks like you're on the right track! Have you looked at the contributor book? There are details about adding a new tensor op; maybe that could be helpful.
I have implemented a custom backend op that supports grid sampling on 4-dimensional tensors with zero padding and bilinear interpolation. Would you mind taking a look to see if there are any other improvements that could be made? I tested it on my 4070s, and it takes 5 microseconds. Burn and CubeCL are truly impressive.
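For anyone following along, here is a minimal CPU-side sketch of the arithmetic such a kernel performs for bilinear sampling with zero padding on a single channel; this is plain Rust with unnormalized pixel coordinates and is not the CubeCL kernel from the comment above.

```rust
// Bilinear sample of an H x W channel at a fractional location (x, y),
// treating everything outside the tensor as zero (zero padding).
fn bilinear_sample_zero_pad(data: &[f32], h: usize, w: usize, x: f32, y: f32) -> f32 {
    // Returns 0.0 for any index outside the tensor.
    let at = |ix: i64, iy: i64| -> f32 {
        if ix < 0 || iy < 0 || ix >= w as i64 || iy >= h as i64 {
            0.0
        } else {
            data[iy as usize * w + ix as usize]
        }
    };
    let x0f = x.floor();
    let y0f = y.floor();
    let (tx, ty) = (x - x0f, y - y0f); // fractional offsets within the cell
    let (x0, y0) = (x0f as i64, y0f as i64);
    // Blend the four neighbouring values by their bilinear weights.
    at(x0, y0) * (1.0 - tx) * (1.0 - ty)
        + at(x0 + 1, y0) * tx * (1.0 - ty)
        + at(x0, y0 + 1) * (1.0 - tx) * ty
        + at(x0 + 1, y0 + 1) * tx * ty
}

fn main() {
    // 2 x 2 image; sampling at the centre averages all four values.
    let img = [0.0, 1.0, 2.0, 3.0];
    println!("{}", bilinear_sample_zero_pad(&img, 2, 2, 0.5, 0.5)); // 1.5
}
```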
Feature description
In brief, I propose implementing PyTorch's grid_sample method in Burn. This method allows one to pass a tensor of points in 1D, 2D, or 3D Euclidean space and look up and interpolate values from a tensor with 1, 2, or 3 dimensions (plus features and minibatch). This differs from the existing interpolate method because the sample locations need not lie on a regular grid. This differs from the existing gather method because it uses floating-point locations instead of indices, it interpolates, and its autodiff computes gradients for the sampling locations themselves as well.
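For concreteness, PyTorch expresses the sample locations as normalized coordinates in [-1, 1], independent of the input resolution; a Burn version could reuse the same convention. The small helper below only illustrates that mapping (it is not proposed API), with the align_corners flag controlling whether -1 and 1 refer to pixel centres or pixel edges.

```rust
// Map a normalized coordinate in [-1, 1] to a pixel coordinate along an axis
// of the given size, following PyTorch's grid_sample convention.
fn unnormalize(coord: f32, size: usize, align_corners: bool) -> f32 {
    if align_corners {
        // -1 and 1 map to the centres of the first and last pixels.
        (coord + 1.0) / 2.0 * (size as f32 - 1.0)
    } else {
        // -1 and 1 map to the outer edges of the first and last pixels.
        ((coord + 1.0) * size as f32 - 1.0) / 2.0
    }
}

fn main() {
    println!("{}", unnormalize(0.0, 8, true));  // 3.5 (spatial centre)
    println!("{}", unnormalize(-1.0, 8, true)); // 0.0 (centre of first pixel)
    println!("{}", unnormalize(1.0, 8, false)); // 7.5 (edge of last pixel)
}
```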
Feature motivation
grid_sample is a powerful low-level primitive, closely related to the existing methods for interpolating and gathering, but with more freedom to sample values at arbitrary spatial locations and with gradient information for describing changes in the sample locations in addition to the grid values. Its possible use cases go beyond neural networks and it's well suited for more general-purpose (differentiable) tensor computation.

The tensor being sampled could be an image and the sample locations could come from a non-linear projection, such as cartesian-to-polar. The tensor being sampled could be a terrain heightmap, and the sample locations and their gradients could be used to model rainfall and erosion. Or the tensor being sampled could be a volumetric scene, and the sample locations could be particles in a simulation.
I've personally used PyTorch's grid_sample during my thesis research project to, among other things, implement a basic but GPU-parallelized renderer that performs sphere tracing against a signed distance field represented as a volumetric tensor (link to research; the 3D figures were generated this way). I would love to be able to do similar work in Rust using Burn.