
bad effect #3

Open
muzizhi opened this issue Dec 30, 2019 · 8 comments

Comments

@muzizhi

muzizhi commented Dec 30, 2019

When I run the C++ code from this repo, I get really bad results.
So I visualized the depth edges obtained by Canny alongside the soft edges, and they look very poor.
First I just changed the parameters to match the paper, specifically τhigh = 0.04, τlow = 0.01, τflow = 0.3, but the result is still bad.
Sometimes there are many texture edges. More often, the depth edges it finds are incomplete, and large unrecognized blank areas appear.
I need some help: did I make a mistake with the parameters, or is the code wrong?
[screenshots attached]

@mpottinger

I think the edge detection code in this implementation is incomplete. I noticed that too.

@muzizhi
Author

muzizhi commented Dec 30, 2019

Do you have any advice about this code?

@mpottinger

@muzizhi Well, not really. I decided it was still too slow anyway, even if it could be corrected.

It may be possible to speed the code up to real time (30 fps), but it is difficult.

ARCore will have a depth-from-motion API that should give results similar to this, and I have already achieved better results with a ToF depth sensor.

@muzizhi
Author

muzizhi commented Dec 31, 2019

Well, I actually have two more questions.
First: will the Python version produce better results than the C++ one, or do they just look similar?
Second: I'm also a little confused about evaluation. No code is provided for that part, so I want to write it myself following the paper, but it is difficult to understand. For example, for Occlusion Error the paper says: "We extract a profile of 10 depth samples {di} perpendicular to the edge." How are the samples chosen? Also, the occlusion.png in the annotations is almost blank. Could I refer to your evaluation code for better testing?

@mpottinger

@muzizhi Yes, I think the Python version does produce better results; however, the Python code is also 100x slower or more, and not suitable for mobile apps.

Sorry, that is about all I know right now. I have moved on to other solutions since playing around with it.

@muzizhi
Author

muzizhi commented Dec 31, 2019

Thanks.

@limacv

limacv commented Aug 14, 2020

It seems that the author tried to reproduce the paper's modified Canny detection but failed, so OpenCV's stock Canny is used instead.
I found that the solution below approximates the paper's soft-edge-weighted Canny well, using the overload of OpenCV's Canny that accepts precomputed gradients. Here is what I tried, and the result looks better.

In ARDepth.cpp, around line 480, replace the canny() call with:

cv::Mat edges;
{
    cv::Mat grad_x, grad_y;
    // Image gradients as 16-bit signed, which the Canny overload below requires.
    cv::Sobel(base_img, grad_x, CV_16S, 1, 0, 5);
    cv::Sobel(base_img, grad_y, CV_16S, 0, 1, 5);
    // Scale each gradient by the soft-edge confidence at that pixel, so
    // texture edges with low soft-edge support are suppressed.
    auto elem_mul = [&](cv::Vec3s& val, const int* pos) {
        val *= soft_edges.at<double>(pos[0], pos[1]);
    };
    grad_x.forEach<cv::Vec3s>(elem_mul);
    grad_y.forEach<cv::Vec3s>(elem_mul);
    // Canny overload that takes precomputed dx/dy instead of an image.
    cv::Canny(grad_x, grad_y, edges, 80, 300);
    edges.convertTo(edges, CV_64FC1);
}

@Tord-Zhang

It seems that the result is temporally unstable.
