From Gemini #663
Code Review
This pull request addresses a consistency bug in the reshape operation for F-contiguous tensors. The fix correctly removes the special handling for F-contiguous tensors, ensuring they are converted to a C-contiguous layout before reshaping, which prevents data corruption. My review highlights a potential pre-existing bug in the reshape_with_copy function that this change now makes more prominent. Addressing this related issue is recommended to prevent potential memory access errors.
src/arraymancer/tensor/private/p_shapeshifting.nim (72-74)

Removing this special handling for F-contiguous tensors is the correct approach to fix the reshape inconsistency. Now, non-C-contiguous tensors (including F-contiguous ones) will be handled by reshape_with_copy, which forces a copy to a C-contiguous layout.
However, I've noticed a potential issue in reshape_with_copy that will now be triggered more often. The implementation of reshape_with_copy is:
proc reshape_with_copy*[T](t: Tensor[T], new_shape: varargs[int]|Metadata|seq[int], result: var Tensor[T]) =
  contiguousImpl(t, rowMajor, result)
  reshape_no_copy(t, new_shape, result, rowMajor)
contiguousImpl creates a new tensor in result with fresh storage, with its data packed starting at offset 0. But reshape_no_copy is then called with the original tensor t and sets result.offset = t.offset. If t was a view with a non-zero offset, that offset is meaningless for result's new packed storage, so reads would start at the wrong position and could run past the end of the buffer.
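To make the failure mode concrete, here is a minimal Python sketch of the same pattern. The `Tensor` class below is hypothetical (not the Arraymancer API): it just models a flat buffer plus a view offset, mirroring the `contiguousImpl`-then-reuse-old-offset sequence.

```python
# Minimal model of the offset bug: a "view" shares a flat buffer and
# reads elements starting at `offset`. Names here are illustrative only.

class Tensor:
    def __init__(self, data, offset=0):
        self.data = data      # flat storage
        self.offset = offset  # start of this view within `data`

    def elements(self, n):
        return self.data[self.offset:self.offset + n]

# Original storage; `t` is a view that skips the first two elements.
storage = [9, 9, 10, 20, 30, 40]
t = Tensor(storage, offset=2)

# contiguousImpl-style step: pack t's elements into fresh storage at offset 0.
result = Tensor(t.elements(4), offset=0)

# Bug: reusing the *old* view offset against the *new* packed storage
# starts the read two elements too late (and can run off the buffer).
result.offset = t.offset
print(result.elements(4))  # [30, 40] -- wrong; should be [10, 20, 30, 40]
```

With `result.offset` left at 0 (the analogue of calling reshape_no_copy on result itself), the read returns the full packed data as expected.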
A fix would be to call reshape_no_copy on result itself:
reshape_no_copy(result, new_shape, result, rowMajor)
Since this part of the code is not in the diff, I can't suggest a direct change, but I strongly recommend addressing it as part of this PR or in a follow-up, as it could cause out-of-bounds memory access.