
Iterative refinement insert slice: griddata version. #44

Merged · 67 commits · Apr 8, 2022

Conversation

thisFreya
Collaborator

Inserts slices back into the 3D matrix by using torch's sparse tensors to interpret the xyz coordinates of each slice voxel. By taking advantage of this, torch reconstructs the 3D matrix directly from the tensor.

In the future we will want to remove some lines from this so that the inserted slice stays in tensor format, since the numpy conversion is costly and not computationally useful beyond getting a basic workflow going (which is the current goal).
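
A minimal sketch of the idea, with illustrative names and shapes rather than the PR's exact code:

    import torch

    n_pix = 4
    # Hypothetical integer voxel coordinates (3, n_pix**2) and slice values (n_pix**2,).
    xyz = torch.randint(0, n_pix, (3, n_pix**2))
    values = torch.rand(n_pix**2)

    # sparse_coo_tensor scatters each value to its (x, y, z) index; duplicate
    # coordinates are summed when the tensor is densified, giving the 3D volume.
    volume = torch.sparse_coo_tensor(xyz, values, (n_pix, n_pix, n_pix)).to_dense()
    print(volume.shape)  # torch.Size([4, 4, 4])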

@thisFreya thisFreya self-assigned this Apr 4, 2022
@thisFreya
Collaborator Author

I added a test that uses generate_slices to pull a slice out of a simple map; insert_slice then puts it back in. Of course, this depends on generate_slices working, and that hasn't been pulled into this branch just yet. I did run the test in a Jupyter notebook and it passed.

@geoffwoollard

PR #40 merged


@geoffwoollard geoffwoollard left a comment


Use map_coordinates for now. It returns a dense array, so there is no need for sparse tensors.

Comment on lines 508 to 515
inserted_slice_tensor = torch.sparse_coo_tensor(
xyz + n_pix // 2, slice_real.reshape((n_pix**2,)), (n_pix, n_pix, n_pix)
)
count_tensor = torch.sparse_coo_tensor(
xyz + n_pix // 2, np.ones((n_pix**2,)), (n_pix, n_pix, n_pix)
)
inserted_slice_3d = inserted_slice_tensor.to_dense().numpy()
count_3d = count_tensor.to_dense().numpy()


sparse_coo_tensor does not do any interpolation.

You'll still have to use map_coordinates, which returns a dense 3D array.

Until there is a (custom in PyTorch?) linear interpolation that returns sparse indices at the inserted voxels, and the values inserted at those voxels, there will be no use for sparse_coo_tensor.

So for now, just stick with what I did in the notebook using map_coordinates.
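
For reference, a minimal sketch of the map_coordinates call pattern (it interpolates values out of an existing grid; shapes and names are illustrative):

    import numpy as np
    from scipy.ndimage import map_coordinates

    n_pix = 8
    volume = np.random.rand(n_pix, n_pix, n_pix)

    # Hypothetical rotated-plane coordinates, shape (3, n_pix**2), in voxel units.
    xyz = np.random.rand(3, n_pix**2) * (n_pix - 1)

    # Trilinear interpolation (order=1) of the volume at each coordinate;
    # the result is an ordinary dense array, no sparse tensors involved.
    slice_values = map_coordinates(volume, xyz, order=1).reshape(n_pix, n_pix)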

Collaborator Author


Sounds good, problem for tomorrow.

Collaborator Author

@thisFreya thisFreya Apr 5, 2022


@geoffwoollard I've thought some more on this, and I think the problem needs some clarification.

Unfortunately, map_coordinates won't serve our purposes exactly, as it goes the wrong direction (it can only pull points out of an existing grid, not put them back in). The method I've found that is most tailored to our purposes is scipy.interpolate.griddata. However, this method returned a rather illuminating error: a 3D grid of points cannot be interpolated from just a 2D slice.

Thinking more about this, of course that is true. The problem of interpolating a point in 3D near a plane is ill-defined if that plane has zero width: is the result always zero unless the point is on the plane? And how can a point lie exactly on a zero-width plane? Float precision limitations would mean that all points not precisely defined during the creation of the plane would evaluate to 0 when interpolated.

This is to say that we need to specify a width for slices if we want to interpolate values from them, perhaps something dependent on n_pix, say 1 / (8 * n_pix). What also needs to happen is that we generate the coordinates of the other "side" of the slice in order to give it its width, ideally by "tiling" the slice once along its normal.
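
A minimal sketch of the issue and the proposed fix, assuming scipy.interpolate.griddata and an axis-aligned slice for simplicity (names and values are illustrative):

    import numpy as np
    from scipy.interpolate import griddata

    n_pix = 8
    # A flat slice: an n_pix x n_pix grid of points with z = 0 (coplanar).
    ij = np.stack(np.meshgrid(np.arange(n_pix), np.arange(n_pix), indexing="ij"), -1)
    slice_xyz = np.column_stack([ij.reshape(-1, 2), np.zeros(n_pix**2)])
    slice_values = np.random.rand(n_pix**2)

    # Linear 3D interpolation from the coplanar points alone fails (the
    # triangulation is degenerate), so tile the slice once along its normal to
    # give it a small width; values are assumed constant across the tiles.
    offset = np.array([0.0, 0.0, 1.0 / (8 * n_pix)])
    points = np.concatenate([slice_xyz - offset, slice_xyz, slice_xyz + offset])
    values = np.tile(slice_values, 3)

    # Query points inside the thin slab now interpolate cleanly.
    query = slice_xyz + 0.5 * offset
    interpolated = griddata(points, values, query, method="linear")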

Collaborator Author


@geoffwoollard What I've done is add a "depth" to the rotated xy plane output by generate_slices (the depth is added before rotating). This way, when we go to insert a slice, we have a volume that the slice occupies rather than just a 2D area. Since this volume has depth << pixel size by design, it shouldn't affect the validity of the result. The slice (upon insertion) is assumed to be constant along this depth axis.

With this change, I was able to get interpolation working for insert_slice.


@thisTyler

Yes, good point about map_coordinates. I had used another solution here:
https://github.com/geoffwoollard/learn_cryoem_math/blob/master/src/interp.py
https://github.com/geoffwoollard/learn_cryoem_math/blob/master/nb/fourier_slice_2D_3D_with_trilinear.ipynb

Can you push a notebook or code snippet to this branch illustrating the griddata issue and error?

It needs to insert density into nearby voxels, so it should be aware of the 3D nature of the arr_3d it is inserting into and the "size" between voxels.
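
A minimal sketch of that kind of insertion, spreading each slice value over its eight neighbouring voxels with trilinear weights (names and the coordinate convention are illustrative, not the linked interp.py code):

    import numpy as np

    n_pix = 8
    arr_3d = np.zeros((n_pix, n_pix, n_pix))
    count_3d = np.zeros((n_pix, n_pix, n_pix))

    # Hypothetical slice coordinates (3, n_pix**2) in voxel units, and slice values.
    xyz = np.random.rand(3, n_pix**2) * (n_pix - 1)
    values = np.random.rand(n_pix**2)

    lower = np.floor(xyz).astype(int)
    frac = xyz - lower
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # Trilinear weight of this corner of the surrounding voxel cube.
                w = (
                    (frac[0] if dx else 1 - frac[0])
                    * (frac[1] if dy else 1 - frac[1])
                    * (frac[2] if dz else 1 - frac[2])
                )
                idx = (lower[0] + dx, lower[1] + dy, lower[2] + dz)
                np.add.at(arr_3d, idx, w * values)   # accumulate density
                np.add.at(count_3d, idx, w)          # accumulate weights for later averaging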


@geoffwoollard

geoffwoollard commented Apr 7, 2022

Re this error

        xy_plane_padded = np.concatenate(
            (xy_plane + offset, xy_plane, xy_plane - offset), axis=1
        )
        xyz_rotated_padded = np.empty((n_rotations, 3, 3 * n_pix**2))
        for i in range(n_rotations):
>           xyz_rotated_padded[i] = rots[i] @ xy_plane_padded
E           ValueError: could not broadcast input array from shape (3,192) into shape (3,27)

https://github.com/compSPI/reconstructSPI/runs/5874963028?check_suite_focus=true#step:6:248

Perhaps you can reshape the 3 planes to a different axis, i.e. (3*n**2,) -> (3, n**2) or (n**2, 3), and then vectorize how the rotation matrices multiply, so that all three planes are rotated at once by the same rotation.
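
A minimal sketch of one way to vectorize the rotation (assuming rots stacks the rotation matrices along the first axis; names are illustrative):

    import numpy as np

    n_pix, n_rotations = 4, 2
    rots = np.stack([np.eye(3)] * n_rotations)           # hypothetical (n_rotations, 3, 3)
    xy_plane_padded = np.random.rand(3, 3 * n_pix**2)    # hypothetical padded plane, (3, 3*n_pix**2)

    # One einsum applies every rotation to all three stacked planes at once,
    # avoiding the per-rotation loop and the shape mismatch it triggered.
    xyz_rotated_padded = np.einsum("rij,jk->rik", rots, xy_plane_padded)
    assert xyz_rotated_padded.shape == (n_rotations, 3, 3 * n_pix**2)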

@thisFreya
Collaborator Author

> Re this error ... perhaps you can reshape the 3 planes to a different axis and vectorize how the rotation matrices multiply, so that all three planes are rotated at once by the same rotation.

I found the source of the error and am working around it. The 3 planes get rotated simultaneously due to a convenience of how they are stored.

@geoffwoollard geoffwoollard changed the title from "Iterative refinement insert slice: numpy version." to "Iterative refinement insert slice: griddata version." on Apr 8, 2022

@geoffwoollard geoffwoollard left a comment


Very good work refactoring.

The notebook should be put in a folder and tested to ensure it runs.

We need more tests, especially of insert_slice on arbitrary rotations. I have opened an issue for this: #44

All tests are passing, so I will merge now.

@geoffwoollard geoffwoollard merged commit 0aae692 into dev Apr 8, 2022
@geoffwoollard

Continued in #50
