Implement Gradient Ascent #3
Hi trishume,
If you have access to a university library you might be able to find it under "Machine Vision for Inspection and Novelty Detection" by Fabian Timm. Otherwise email me at [email protected] and I may send it to you. It's somewhere in among all those pages; it shouldn't be too hard to find. The basic idea is that you take the partial derivatives in the x and y directions of the "centreness" formula; you can then use standard gradient ascent formulas to find the maximum. The derivative in the thesis will need a little tweaking to account for the eye-specific changes to the general circle-finding algorithm, specifically to deal with only counting black circles rather than white circles by only counting one gradient direction.
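For anyone digging into this, here is a rough sketch of what that derivative works out to, assuming the usual form of the objective (a mean of squared dot products between displacement vectors d_i and unit image gradients g_i); the clamp to positive dot products is the "one gradient direction" tweak mentioned above. The notation is mine, not the thesis's, so double-check against the original:

```latex
% Centre-ness objective at a candidate centre c, counting only one gradient
% direction (dark circles on light backgrounds):
J(c) = \frac{1}{N} \sum_{i=1}^{N} \big( \max(d_i^\top g_i,\, 0) \big)^2,
\qquad d_i = \frac{x_i - c}{\lVert x_i - c \rVert}

% Writing e_i = x_i - c, r_i = \lVert e_i \rVert and s_i = d_i^\top g_i,
% the gradient with respect to the centre works out to
\nabla_c J(c) = \frac{2}{N} \sum_{i:\, s_i > 0} \frac{s_i\,( s_i d_i - g_i)}{r_i}

% and the ascent update is simply
c \leftarrow c + \alpha\, \nabla_c J(c)
```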
I am going to try and implement this at some point down the line.
@abhisuri97 yup, that's the one. Nice that his thesis is public online, I didn't know that was the case. That algorithm isn't specialized for eye tracking like in the other paper, so it doesn't include things like weighting by darkness (which you could possibly skip, I'm not sure it helps that much) and only counting dark circles on light backgrounds (see my blog post, this is super important to adapt it to do). I recommend doing the partial derivatives yourself so that you understand the equation he gives and how you might adapt it. I did them once and figured out how to adapt it, but I don't remember the details; I just remember it wasn't that difficult to adapt or to do the derivatives. I hadn't even taken high school calculus back when I did them, I just looked up the rules for derivatives online, so it doesn't need anything advanced.
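To make those two adaptations concrete, here is a minimal sketch (illustration only, not code from eyeLike or the thesis) of the centre-ness score at a single candidate pixel, with the darkness weighting and the dark-circles-only clamp; it assumes the image gradients g_i are unit vectors pointing from dark to light:

```python
import numpy as np

def centreness(c, points, grads, darkness_weight):
    """Centre-ness score at candidate centre c.

    c               : (2,) candidate centre
    points          : (N, 2) coordinates x_i of pixels with strong gradients
    grads           : (N, 2) unit image gradients g_i at those pixels
    darkness_weight : scalar weight for c, e.g. smoothed inverted intensity
    """
    d = np.asarray(points, dtype=float) - c          # displacement to each pixel
    d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
    dots = np.einsum('ij,ij->i', d, grads)           # d_i . g_i
    dots = np.maximum(dots, 0.0)                     # only dark circles on light backgrounds
    return darkness_weight * np.mean(dots ** 2)
```

The full scan in eyeLike effectively evaluates something like this at every candidate pixel, which is where the O(n^2) comes from; gradient ascent only needs it (plus the derivative above) at a handful of points.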
I think I have a good idea of how to go about it. One part I can't quite figure out is how to compute the step size via Armijo's rule. Would you happen to know a good programmatic way to do this?
@abhisuri97 I have no idea. I've never looked into it and haven't studied optimization enough to know what it is. Hope Google helps, I guess ¯\_(ツ)_/¯
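For anyone who hits the same wall: Armijo's rule is just a backtracking line search, i.e. start with a big step and shrink it until the objective improves by a sufficient amount. A generic sketch for the ascent case (names and constants are mine, not from any of the papers):

```python
import numpy as np

def armijo_step(f, grad_f, x, alpha0=1.0, beta=0.5, sigma=1e-4, max_halvings=30):
    """One gradient-ascent step with the step size chosen by the Armijo rule:
    shrink alpha until f increases by at least sigma * alpha * ||grad||^2."""
    g = grad_f(x)
    fx = f(x)
    alpha = alpha0
    for _ in range(max_halvings):
        if f(x + alpha * g) >= fx + sigma * alpha * np.dot(g, g):
            break
        alpha *= beta
    return x + alpha * g

# Toy usage: climb a concave bowl whose maximum is at (1, 2).
f = lambda x: -((x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2)
grad_f = lambda x: np.array([-2.0 * (x[0] - 1.0), -2.0 * (x[1] - 2.0)])
x = np.zeros(2)
for _ in range(50):
    x = armijo_step(f, grad_f, x)
print(x)  # converges to roughly [1. 2.]
```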
@trishume Did anybody manage to implement it in some way? It would be great to speed up the process. Also, it would be nice to have some optimization for real-time processing, such as passing in the previous pupil coordinates and starting the search from there (although I'm not sure there would be any benefit; we'd still have to search the entire area in case of rapid changes).
Currently the main tracker algorithm is quite slow, which necessitates scaling the image down, which in turn reduces accuracy. There is a method proposed by Dr. Timm (the creator of the algorithm) in his thesis for using gradient ascent to speed up the algorithm from O(n^2) to O(n), where n is the number of pixels.
The basic idea is that the eye centre-ness field can be sampled at any pixel independently in O(n) time. Currently eyeLike samples every pixel and finds the one with the maximum centre-ness. Instead of doing this, it is possible to rearrange the formula to compute the gradient (slope) of the centre-ness field at any point. That direction can then be used to "climb" the gradient towards the maximum in a small number of iterations.
The method for doing this is detailed in his thesis; I might be able to email it to anyone who is interested in working on this. The method in the thesis is stated in terms of general circle identification, which would need some tweaking to make it work for eyes. It wouldn't require much code, but it would require a good level of understanding.
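For completeness, the climb itself is only a few lines once the value and gradient of the centre-ness field are available. Below is a rough sketch of the outer loop, with multi-start from a coarse grid to reduce the chance of landing on a local maximum, plus an optional warm start from the previous frame's pupil position for real-time use. This structure is my guess at a reasonable setup, not something prescribed by the thesis:

```python
import numpy as np

def find_centre(objective, gradient, starts, step=1.0, iters=50, tol=1e-3):
    """Gradient ascent on the centre-ness field from several starting points,
    returning the best centre found. `objective(c)` and `gradient(c)` evaluate
    the field and its derivative at a 2D point c (e.g. the formulas above)."""
    best_c, best_val = None, -np.inf
    for c in starts:
        c = np.asarray(c, dtype=float)
        for _ in range(iters):
            g = gradient(c)
            if np.linalg.norm(g) < tol:
                break
            c = c + step * g        # fixed step; the armijo_step sketch above would also work
        val = objective(c)
        if val > best_val:
            best_c, best_val = c, val
    return best_c

# starts: a coarse grid over the eye region, plus (optionally) the previous
# frame's pupil estimate when tracking in real time.
```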