
Bugfix for IntegralImageNormalEstimation #191

Open · wants to merge 2 commits into master
Conversation

@taketwo (Member) commented Jul 13, 2013

This pull request fixes the bugs in IntegralImageNormalEstimation reported by James Stout in this pcl-dev thread and also introduces a new minimum normal smoothing size parameter.

The introduced fixes seem to play well with:

  • COVARIANCE_MATRIX (screenshot: cm)

  • SIMPLE_3D_GRADIENT (screenshot: sg)

  • AVERAGE_DEPTH_CHANGE (screenshot: adc; the green borders are an okay thing for this method)

As for AVERAGE_3D_GRADIENT, there are weird artifacts (screenshot: ag). Unfortunately I did not have time to dig deeper into this problem.

taketwo added 2 commits July 14, 2013 00:01
This commit implements the fixes proposed by James Stout on the pcl-dev mailing list (http://www.pcl-developers.org/Bugs-in-IntegralImageNormalEstimation-distance-map-computation-td5708055.html).

Specifically:
- In depth change map computation the diagonal neighbors are now
  considered.
- In distance map computation Chessboard distance is used instead of an
  approximation of Euclidean distance.
- Use distance map values as smoothing window radius (instead of width).
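For reference, here is a minimal sketch of what such a two-pass chessboard (L-infinity) distance transform can look like. This is illustrative code with made-up names, not the actual PCL implementation:

#include <algorithm>
#include <cstddef>
#include <vector>

// Two-pass chessboard (L-infinity) distance transform over a binary depth
// change map, where 0 marks a depth discontinuity and 255 a smooth pixel.
// Border pixels are left untouched for brevity.
std::vector<float>
computeChessboardDistanceMap (const std::vector<unsigned char>& depth_change_map,
                              unsigned width, unsigned height)
{
  const float inf = static_cast<float> (width + height);
  std::vector<float> distance_map (depth_change_map.size ());

  // Initialize: zero at discontinuities, "infinity" everywhere else.
  for (std::size_t i = 0; i < distance_map.size (); ++i)
    distance_map[i] = (depth_change_map[i] == 0) ? 0.0f : inf;

  // Forward pass: propagate distances from the W, NW, N and NE neighbors.
  for (unsigned ri = 1; ri < height; ++ri)
    for (unsigned ci = 1; ci + 1 < width; ++ci)
    {
      const std::size_t idx = ri * width + ci;
      float d = distance_map[idx];
      d = (std::min) (d, distance_map[idx - 1] + 1.0f);          // W
      d = (std::min) (d, distance_map[idx - width - 1] + 1.0f);  // NW
      d = (std::min) (d, distance_map[idx - width] + 1.0f);      // N
      d = (std::min) (d, distance_map[idx - width + 1] + 1.0f);  // NE
      distance_map[idx] = d;
    }

  // Backward pass: propagate distances from the E, SE, S and SW neighbors.
  for (unsigned ri = height - 1; ri-- > 0;)
    for (unsigned ci = width - 1; ci-- > 1;)
    {
      const std::size_t idx = ri * width + ci;
      float d = distance_map[idx];
      d = (std::min) (d, distance_map[idx + 1] + 1.0f);          // E
      d = (std::min) (d, distance_map[idx + width + 1] + 1.0f);  // SE
      d = (std::min) (d, distance_map[idx + width] + 1.0f);      // S
      d = (std::min) (d, distance_map[idx + width - 1] + 1.0f);  // SW
      distance_map[idx] = d;
    }

  return distance_map;
}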
This commit implements the change to IntegralImageNormalEstimation proposed on the pcl-dev mailing list (http://www.pcl-developers.org/Bugs-in-IntegralImageNormalEstimation-distance-map-computation-td5708055.html).
@@ -731,30 +731,65 @@
 memset (depthChangeMap, 255, input_->points.size ());

 unsigned index = 0;
-for (unsigned int ri = 0; ri < input_->height-1; ++ri)
+for (unsigned int ri = 1; ri < input_->height-1; ++ri)


The first row and first column are now getting skipped. Most of them get touched by the NE and SW checks, but that's not sufficient (for example if there is a left-right or up-down discontinuity in the first row it won't be counted).

Actually this bug was also somewhat present before (e.g. left-right discontinuities were not checked in the last row). This seems to get handled later by BORDER_POLICY_IGNORE, although it looks like BORDER_POLICY_MIRROR isn't doing the right thing. Maybe just zero out the border rows and columns? This effectively makes BORDER_POLICY_MIRROR a no-op, but honestly I'm not sure how useful that mode is (it seems like it would produce really bad normals).
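Roughly something like this (illustrative names, not the actual PCL code):

// Mark every border pixel as a discontinuity (value 0): the distance
// transform then assigns it distance 0, so no smoothing window extends past
// the image edge, and BORDER_POLICY_MIRROR becomes effectively a no-op.
void
zeroOutBorders (unsigned char* depth_change_map, unsigned width, unsigned height)
{
  for (unsigned ci = 0; ci < width; ++ci)
  {
    depth_change_map[ci] = 0;                        // first row
    depth_change_map[(height - 1) * width + ci] = 0; // last row
  }
  for (unsigned ri = 0; ri < height; ++ri)
  {
    depth_change_map[ri * width] = 0;             // first column
    depth_change_map[ri * width + width - 1] = 0; // last column
  }
}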

@taketwo (Member, Author) commented Jul 17, 2013

Thanks for the comments.

I also think that BORDER_POLICY_MIRROR is not quite useful. Should we get rid of it then?

As for BORDER_POLICY_IGNORE, I think we could do the following. Previously the width of the ignored border strip was chosen to be equal to the normal smoothing size. Since we now have the additional minimum_normal_smoothing_size_ parameter, I think it makes sense to use it instead (divided by two, by the way). Nevertheless, it is still important to have the "up-down" and "left-right" discontinuities of the outermost rows/columns registered in the depth discontinuity map, so I will add additional for loops to handle this.
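Something along these lines (a hypothetical sketch; getDepth() and max_depth_change stand in for the real accessors and threshold):

// Register "left-right" discontinuities along the first and last rows and
// "up-down" discontinuities along the first and last columns, which the
// interior loop skips. Requires <cmath> for std::fabs.
for (unsigned ci = 0; ci + 1 < width; ++ci)
{
  // first and last row: compare each pixel with its right neighbor
  if (std::fabs (getDepth (0, ci) - getDepth (0, ci + 1)) > max_depth_change)
    depth_change_map[ci] = depth_change_map[ci + 1] = 0;
  if (std::fabs (getDepth (height - 1, ci) - getDepth (height - 1, ci + 1)) > max_depth_change)
    depth_change_map[(height - 1) * width + ci] =
        depth_change_map[(height - 1) * width + ci + 1] = 0;
}
for (unsigned ri = 0; ri + 1 < height; ++ri)
{
  // first and last column: compare each pixel with its lower neighbor
  if (std::fabs (getDepth (ri, 0) - getDepth (ri + 1, 0)) > max_depth_change)
    depth_change_map[ri * width] = depth_change_map[(ri + 1) * width] = 0;
  if (std::fabs (getDepth (ri, width - 1) - getDepth (ri + 1, width - 1)) > max_depth_change)
    depth_change_map[ri * width + width - 1] =
        depth_change_map[(ri + 1) * width + width - 1] = 0;
}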

@wolfmanstout

Regarding the border policies: your proposals sound good. I'm not sure if we can actually remove BORDER_POLICY_MIRROR since it's part of the public interface, but we could at least make it a no-op.

@taketwo (Member, Author) commented Jul 27, 2013

Hi, I am back working on this issue. I started to explore the weird artifacts of AVERAGE_3D_GRADIENT, and it seems like they are due to a wrong smoothing window size computation. I implemented it as you suggested on the mailing list:

The other bug is that the square dimensions are half of what they ought to be. The distance map value is used
as the lower bound of the window width, when in fact it should be the bound of the window radius. In other words,
a pixel with distance 2 to a discontinuity should be using a 5x5 window, not 3x3 as it currently does.

I.e. the smoothing size used is D * 2 + 1, where D is the value in the distance map. Shouldn't it rather be D * 2 - 1?

The image below shows the results for D * 2 - 1 on the left and D * 2 + 1 on the right (screenshot: n).

It is strange, though, that no similar artifact shows up for the other methods...
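For reference, the two variants side by side (a sketch using the distanceMap and normal_smoothing_size_ names from the current code):

// D = distanceMap[index], the chessboard distance to the nearest discontinuity:
//   D         :  1    2    3
//   D * 2 + 1 :  3x3  5x5  7x7   (distance treated as window radius)
//   D * 2 - 1 :  1x1  3x3  5x5   (one step more conservative)
const float D = distanceMap[index];
float smoothing = (std::min) (D * 2.0f + 1.0f, normal_smoothing_size_);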

@taketwo (Member, Author) commented Jul 28, 2013

Another thing I noticed is the so-called depth-dependent smoothing. When enabled, static_cast<float>(depth)/10.0f is added to normal_smoothing_size_:

float smoothing = (std::min)(distanceMap[index] * 2 + 1, normal_smoothing_size_ + static_cast<float>(depth)/10.0f);

Assuming that depth is less than 10 m, this addend is always less than 1. If the user sets normal_smoothing_size_ to an integer value, the addend has no effect because later on the sum is cast to int. You suggested using round instead, which means that for depths larger than 5 meters the smoothing size will be increased by 1. I wonder if this makes any sense at all.
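To illustrate with made-up numbers (these values are not from the actual code path):

// Illustrative values only: normal_smoothing_size_ = 10.0f and depth = 6.0f
// (metres), so the depth-dependent addend is 0.6.
const float smoothing = normal_smoothing_size_ + static_cast<float> (depth) / 10.0f; // 10.6
const int rect_trunc = static_cast<int> (smoothing);               // 10: the addend is truncated away
const int rect_round = static_cast<int> (std::lround (smoothing)); // 11: rounding takes effect only for depth > 5 m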

@wolfmanstout

Hi Sergey,

Sorry for the slow response! I've been super busy lately.

D * 2 + 1 is what you want. Think of the discontinuity as occurring at the edge between pixels, and the distances are from the edge of the current pixel to that discontinuity. Then a distance of 1 means we can get away with a 3x3 pixel square. It makes sense that using D*2-1 would fix it, but only because it's hiding a bug elsewhere.

Regarding the casting that I think may be causing trouble, I'm referring to the static_cast in the line where the smoothing is passed to setRectSize. Try a round instead. The cast you added looks fine.

@taketwo (Member, Author) commented Aug 20, 2013

Hi James,

Consider the following depth buffer:

05 00
00 14

Assuming 10 is the max depth change, the depth change map will be:

255 000
000 000

This will yield the following distance map:

1 0
0 0

According to D * 2 + 1 formula we should use 3x3 window for the upper left pixel. But this window will cover two depth discontinuities!

@taketwo (Member, Author) commented Aug 20, 2013

Regarding the casting that I think may causing trouble, I'm referring to the static_cast in the line where the
smoothing is passed to setRectSize. Try a round instead. The cast you added looks fine.

Yes, I got that. I added round there, but it did not seem to change anything.

In my comment I actually questioned the utility of this whole "depth dependent smoothing" thing. As I said, it boils down to increasing normal_smoothing_size_ by 1 for distances greater than 5 m.

@wolfmanstout

The diagonal case is a good example; I hadn't considered that. This seems like a failure of the distance mapping itself, not the window sizing. Specifically, the base-case we are propagating is disobeying its contract. There are a few ways we could fix this:

  1. Modify the base case to check for these discontinuities. This would be an additional 8 checks.
  2. Make the base case more conservative by halving the allowed discontinuity in the existing checks. By transitivity this would guarantee that the unchecked cases are also within the threshold.

Both options should fix the base case and, inductively, the rest of the distance map. The tradeoff is that the first option is slower, but it would sometimes allow a larger window size where the second option is too conservative. I'm fine with either solution; a sketch of the second option follows below.

I don't think this is the cause of the strange artifacts. As the second option suggests, all this bug really does is effectively double the max depth change parameter. It won't let any really severe discontinuities in.
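Roughly, for the second option (a minimal sketch with illustrative names, not the actual PCL code):

// Halve the allowed discontinuity in the existing base-case checks. If every
// checked neighbor pair differs by at most max_depth_change / 2, then any
// unchecked pair bridged by one checked step differs by at most
// 2 * (max_depth_change / 2) = max_depth_change. Requires <cmath>.
const float threshold = max_depth_change * 0.5f;
if (std::fabs (depth_center - depth_right)   > threshold ||
    std::fabs (depth_center - depth_down)    > threshold ||
    std::fabs (depth_center - depth_diag_se) > threshold)
  depth_change_map[index] = 0; // mark as depth discontinuity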

@wolfmanstout

Also, is there an easy way to run the tests you posted earlier? Are those just existing unit tests? When I have some time I'd like to run them and take a closer look at what might be going wrong.

@rbrusu (Member) commented Dec 16, 2013

What is the consensus on this issue?

@taketwo (Member, Author) commented Dec 16, 2013

This issue became irrelevant for my work in TOCS so I haven't had enough time/motivation to continue with it. Perhaps I will come to it later, but for now we may put this pull request on hold.

@wolfmanstout

I'm in a similar situation. This isn't actually a problem for me, it's just something I noticed when reading the code for research purposes. Based on Sergey's results, however, it's clear that there is real room for improvement, ready to be picked up and finished by anyone who is impacted by this.


@mcopejans

This issue is very relevant to my current work. I agree that the proposed fixes result in the improvements discussed above, which I have tested for the COVARIANCE_MATRIX case. Would it be possible to merge these fixes into the next version of PCL?

@taketwo (Member, Author) commented Nov 24, 2014

This pull request is not finished. As you can see, there were some discussions/suggestions/ideas after my initial attempt, but I never got back to work on them. We cannot merge the pull request as is, but you are welcome to address all the issues we discussed and submit a new one :)

@vfdev-5 (Contributor) commented Aug 3, 2017

I wonder whether this problem will be solved? Normal estimation fails when there are NaN points between non-NaN points: example 1 and example 2. Thanks.

stale bot commented Feb 21, 2020

This pull request has been automatically marked as stale because it hasn't had any activity in the past 60 days. Commenting or adding a new commit to the pull request will revert this.

Come back whenever you have time. We look forward to your contribution.

@kunaltyagi removed the stale label Feb 27, 2020
@stale bot removed the status: stale label Feb 27, 2020
@taketwo added the needs: more work and status: stale labels and removed the changelog: ABI break label Feb 27, 2020
stale bot commented May 19, 2020

Marking this as stale due to 30 days of inactivity. Commenting or adding a new commit to the pull request will revert this.

@stale stale bot added the status: stale label May 19, 2020
Labels: needs: more work, status: stale
8 participants