Use Kalman Filter to fuse DroneID and Vision information into an object's 3D state in ENU frame #173
Just making notes for myself on this thread:
After switching to a particle filter (still one per track), the covariance actually converges even when adding a motion model, but the position isn't always converging to (0, 0). The filter shouldn't be doing this for the simple test case I'm running, so there's still more work to do. At first, the particle filter had similarly bad performance until I removed the code that re-ran the initialization step (a uniform distribution of radii) on every update step.
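A minimal bootstrap particle filter sketch of the structure described above (all names, noise levels, and the position-measurement model here are illustrative assumptions, not taken from the repo). The key point is that the uniform-radius initialization runs once at track creation and the per-step loop only predicts, weights, and resamples:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000  # particles per track (assumed)

def init_particles():
    # One-time initialization: uniform distribution of radii around the
    # sensor. This must NOT be re-run inside the update loop.
    r = rng.uniform(0.0, 50.0, N)
    theta = rng.uniform(0.0, 2 * np.pi, N)
    pos = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    vel = rng.normal(0.0, 1.0, (N, 2))
    return np.hstack([pos, vel])            # state per particle: [x, y, vx, vy]

def predict(particles, dt=0.1, q=0.5):
    # Constant-velocity motion model plus process noise.
    particles[:, :2] += particles[:, 2:] * dt
    particles[:, 2:] += rng.normal(0.0, q * dt, (N, 2))
    return particles

def update(particles, z, r=1.0):
    # Weight by Gaussian likelihood of a noisy 2D position measurement z.
    d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / r**2)
    w /= w.sum()
    # Systematic resampling keeps the particle count fixed.
    u = (rng.random() + np.arange(N)) / N
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), N - 1)
    return particles[idx]

particles = init_particles()                # once, at track birth
for _ in range(50):
    particles = predict(particles)
    particles = update(particles, z=np.array([0.0, 0.0]))

est = particles[:, :2].mean(axis=0)         # posterior mean position
```

With repeated measurements at the origin, the posterior mean collapses toward (0, 0) instead of staying smeared over the initial radius prior, which is the behavior re-initializing every step would destroy.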
At this point I've completely removed all assumptions that were hand-tuned to this specific test case, and the filter still converges to a physically feasible value, so I think I can start expanding test cases now. For the test I already have, though, I need to change the definition of success: with all assumptions off, the filter also accounts for the possibility that the drone is smaller and circling the origin rather than just stationary at the origin. Remaining TODOs:
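The "smaller drone circling the origin" ambiguity mentioned above is the usual scale-range ambiguity of a monocular camera: apparent angular size only constrains the ratio of physical span to range. A hypothetical illustration (the sizes and ranges are made up):

```python
import math

def angular_size(span_m, range_m):
    # Apparent angular size (radians) of a target of physical span
    # `span_m` seen at distance `range_m`.
    return 2 * math.atan(span_m / (2 * range_m))

big_far    = angular_size(0.50, 20.0)   # 0.5 m drone at 20 m
small_near = angular_size(0.25, 10.0)   # 0.25 m drone at 10 m
# Both subtend the same angle, so vision alone cannot distinguish them;
# breaking the tie needs extra information (e.g. DroneID position).
```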
Use DroneID info (docs) and camera information to get the best estimate of the state of the object of interest.
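The fusion named in the title could be sketched as a linear Kalman filter over the object's ENU position and velocity; everything below (state layout, noise covariances, the position-only measurement model) is an illustrative assumption, not the repo's implementation. DroneID supplies noisy ENU position fixes, and a vision-derived fix could reuse the same update with its own `R`:

```python
import numpy as np

dt = 0.1
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                    # constant-velocity model in ENU
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
Q = 0.1 * np.eye(6)                           # process noise (assumed)
R_droneid = 4.0 * np.eye(3)                   # DroneID position noise (assumed)

x = np.zeros(6)                               # state: [e, n, u, ve, vn, vu]
P = 100.0 * np.eye(6)                         # diffuse initial covariance

def step(x, P, z, R):
    # Predict with the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a measurement z ~ N(Hx, R).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Feed repeated DroneID fixes at a fixed ENU point; the state converges there.
for _ in range(50):
    x, P = step(x, P, np.array([10.0, 5.0, 2.0]), R_droneid)
```

Interleaving measurement sources is then just calling `step` with whichever `(z, R)` arrives next; the covariance `P` tracks how much each source has tightened the estimate.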