Use Kalman Filter to fuse DroneID and vision information into the object's 3D state in the ENU frame #173

Tracked by #171
Vince-C156 opened this issue Apr 13, 2024 · 4 comments

Vince-C156 commented Apr 13, 2024

Use the DroneID info docs and camera information to get the best estimate of the state of the object of interest.

EricPedley self-assigned this Apr 14, 2024

EricPedley commented:

Just making notes for myself on this thread:
Right now I'm using a very simple test case: the drone stays in one spot and gets a single bounding box every frame, with the same position and size each time. The goal is for the filter to converge on a state where the other drone is the appropriate size and distance away. With a linear motion model in the UKF, the position covariance was blowing up to very high values, on the order of hundreds of meters; without the motion model, it stays around 1-2 meters.

With motion model:
[image]

Without motion model:
[image]
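For reference, here's a minimal sketch of the kind of single-track UKF setup described above, written against filterpy (an assumption; this repo may use its own filter implementation). The state layout, camera pose, focal length, and noise values are all placeholders rather than the project's actual parameters; the point is just the shape of the constant-velocity motion model and the bounding-box measurement function:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

FOCAL_PX = 800.0  # hypothetical focal length in pixels

def fx(x, dt):
    """Constant-velocity motion model: position integrates velocity, radius is static."""
    out = x.copy()
    out[0:3] += x[3:6] * dt
    return out

def hx(x):
    """Pinhole projection of target center and apparent size.
    Assumes the camera sits at the ENU origin looking down +x (a simplification)."""
    px, py, pz, r = x[0], x[1], x[2], x[6]
    depth = max(px, 1e-3)            # avoid division by zero behind the camera
    u = FOCAL_PX * py / depth        # horizontal pixel offset
    v = FOCAL_PX * pz / depth        # vertical pixel offset
    size = FOCAL_PX * 2 * r / depth  # apparent diameter in pixels
    return np.array([u, v, size])

dt = 1 / 30
points = MerweScaledSigmaPoints(n=7, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=7, dim_z=3, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([5.0, 0, 0, 0, 0, 0, 0.5])  # rough initial guess: [pos, vel, radius]
ukf.P *= 10.0
ukf.R = np.diag([4.0, 4.0, 4.0])  # pixel measurement noise
ukf.Q = np.eye(7) * 1e-2          # process noise

# Repeated identical detections, as in the stationary test case
z = np.array([0.0, 0.0, 80.0])
for _ in range(100):
    ukf.predict()
    ukf.update(z)
print(ukf.x[:3], np.sqrt(np.diag(ukf.P)[:3]))
```

One plausible mechanism for the blow-up in the plot above is that depth is only weakly observable from box size, so process noise injected along the poorly-observed directions accumulates faster than the update can remove it.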


EricPedley commented Apr 19, 2024

After switching to a particle filter (still one per track), the covariance actually converges even with a motion model, but the position doesn't always converge to (0, 0). The filter shouldn't be doing this for the simple test case I'm running, so there's still more work to do. At first, the particle filter had similarly bad performance, until I removed the part that re-ran the initialization step (a uniform distribution over radii) on every update step.

[image]
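A minimal NumPy sketch of a per-track particle filter with the fix described above: the initialization (including the uniform radius distribution) runs once, outside the predict/update loop. The camera model, priors, and noise values are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000

# Initialization runs ONCE. Re-running this inside the update loop (the bug
# described above) resets the posterior every frame, so it could never converge.
particles = np.column_stack([
    rng.normal(5.0, 3.0, N),   # x (ENU east), hypothetical prior
    rng.normal(0.0, 3.0, N),   # y
    rng.normal(0.0, 3.0, N),   # z
    rng.uniform(0.1, 1.0, N),  # target radius in meters
])
weights = np.full(N, 1.0 / N)

def predict(particles, pos_noise=0.05):
    """Stationary motion model plus process noise (random walk on position)."""
    particles[:, :3] += rng.normal(0, pos_noise, (N, 3))
    return particles

def update(particles, weights, z, focal=800.0, meas_std=4.0):
    """Weight particles by likelihood of the observed (u, v, size) bounding box."""
    depth = np.maximum(particles[:, 0], 1e-3)
    pred = np.column_stack([
        focal * particles[:, 1] / depth,
        focal * particles[:, 2] / depth,
        focal * 2 * particles[:, 3] / depth,
    ])
    err = np.sum(((pred - z) / meas_std) ** 2, axis=1)
    weights *= np.exp(-0.5 * err)
    weights += 1e-300  # avoid degenerate all-zero weights
    weights /= weights.sum()
    # Systematic resampling when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < N / 2:
        positions = (rng.random() + np.arange(N)) / N
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
        particles[:] = particles[idx]
        weights[:] = 1.0 / N
    return particles, weights

z = np.array([0.0, 0.0, 80.0])  # fixed detection, as in the stationary test
for _ in range(100):
    particles = predict(particles)
    particles, weights = update(particles, weights, z)
print(np.average(particles, axis=0, weights=weights))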


EricPedley commented Apr 19, 2024

At this point I've removed all the assumptions that were hand-tuned to this specific test case, and the filter still converges to a physically feasible value, so I think I can start expanding the test cases now. For the case I already have, though, I need to change the definition of success: with all the assumptions off, the filter also accounts for the possibility that the drone is smaller and circling the origin rather than stationary at the origin (the usual monocular scale-depth ambiguity, where a smaller, closer target can produce the same bounding boxes as a larger, farther one).

Remaining TODOs:

  • write an auto test-case generator (just for the filter)
    • simple version that generates cases with perfect bounding boxes and the camera always facing the target (see the sketch after this list)
    • drop and add noise to bounding boxes, and add false positives
    • make the drone being tracked not always in frame
  • put multiple drones per test-case and work on target association / track creation+deletion / occlusion problems
  • expand filter and test case generation to multi camera view
  • integrate remoteID information into filter
  • schedule process noise and initialization hyperparameters based on the zone we're in (landing area, waypoint flying, and dropzone)
  • vectorize all operations with pytorch
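
A sketch of what the simple version of the test-case generator in the first TODO might look like; the geometry, parameter names, and output format here are all assumptions, not a spec:

```python
import numpy as np

def make_stationary_case(n_frames=100, target_pos=(0.0, 0.0, 10.0),
                         target_radius=0.5, cam_pos=(0.0, -20.0, 10.0),
                         focal=800.0, noise_px=0.0, drop_rate=0.0, seed=0):
    """Generate (frame, bbox-or-None) pairs for a stationary target with the
    camera always facing it, so the box center sits at the image center.
    bbox = (u, v, apparent diameter) in pixels; noise_px and drop_rate cover
    the 'add noise / drop boxes' TODO."""
    rng = np.random.default_rng(seed)
    offset = np.asarray(target_pos) - np.asarray(cam_pos)
    depth = np.linalg.norm(offset)  # camera faces the target, so range == depth
    frames = []
    for t in range(n_frames):
        if rng.random() < drop_rate:
            frames.append((t, None))  # simulated missed detection
            continue
        size = focal * 2 * target_radius / depth
        bbox = np.array([0.0, 0.0, size]) + rng.normal(0, noise_px, 3)
        frames.append((t, bbox))
    return frames

# Perfect boxes first, then the noisy/dropped variant
clean = make_stationary_case()
noisy = make_stationary_case(noise_px=2.0, drop_rate=0.1)
```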

Vince-C156 commented:

suiiiiiiiiiii
