Replies: 1 comment
-
I would focus first on getting the expected results off-board. How is the method performing on a validation set, for example? That will help you determine whether it is an implementation problem or a problem with the method itself or with how it was trained.
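For example, here is a minimal off-board evaluation sketch, assuming the classifier was trained with TensorFlow/Keras, a single sigmoid output, and validation images sorted into `val/background` and `val/parcel` (the model filename, folder layout, and input size are assumptions, not the actual setup):

```python
# Off-board evaluation sketch: accuracy and confusion matrix on a held-out set.
# Assumptions: Keras model file, val/background and val/parcel folders,
# 96x96 input, single sigmoid output where >0.5 means "parcel".
import numpy as np
import tensorflow as tf

IMG_SIZE = (96, 96)                                           # assumed input size
model = tf.keras.models.load_model("parcel_classifier.h5")   # assumed filename

# Class indices follow alphabetical folder order: background=0, parcel=1.
val_ds = tf.keras.utils.image_dataset_from_directory(
    "val", image_size=IMG_SIZE, batch_size=32, shuffle=False)

y_true, y_pred = [], []
for images, labels in val_ds:
    # NOTE: apply the same rescaling/normalization here as during training.
    probs = model.predict(images, verbose=0)
    y_pred.extend((probs.ravel() > 0.5).astype(int))  # adapt if you use softmax
    y_true.extend(labels.numpy())

y_true, y_pred = np.array(y_true), np.array(y_pred)
print("accuracy:", (y_true == y_pred).mean())

# 2x2 confusion matrix: rows = true class, columns = predicted class.
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1
print("confusion matrix (rows=true, cols=pred):\n", cm)
```

If the confusion matrix already shows many background images predicted as "parcel" off-board, the issue is with the data or training rather than with the on-device implementation.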
-
As part of a university project, we are implementing autonomous parcel recognition with the Crazyflie 2.1 drone, which is equipped with the AI deck.
At the current stage, the drone, controlled by a Python script, autonomously flies a defined flight path and scans the environment for an orange parcel with the camera mounted on the AI deck (classification between parcel and background). The position data of the parcel is then stored and transmitted to a swarm of drones for further processing.
Our main problem at the moment is that we are not able to train the network to reliably recognize the orange parcel. For the most part, it seems to detect a parcel more or less at random in places where there is none (false positives). So far, we have recorded training data under many different environmental conditions (lighting, background, etc.) in the hope of training a more robust model this way, but without improvement.
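For reference, here is a rough sketch of how one could quantify these false positives per recording condition on held-out background-only images (the model filename, folder layout, input size, and single-sigmoid output are assumptions, not our actual code):

```python
# Diagnostic sketch: false-positive rate per recording condition.
# Assumptions: held-out background-only images stored as heldout/<condition>/*.jpg,
# a Keras model file, 96x96 input, and a sigmoid output where >0.5 means "parcel".
import pathlib
import numpy as np
import tensorflow as tf

IMG_SIZE = (96, 96)                                           # assumed input size
model = tf.keras.models.load_model("parcel_classifier.h5")   # assumed filename

for condition_dir in sorted(pathlib.Path("heldout").iterdir()):
    if not condition_dir.is_dir():
        continue
    images = []
    for img_path in sorted(condition_dir.glob("*.jpg")):
        img = tf.keras.utils.load_img(str(img_path), target_size=IMG_SIZE)
        images.append(tf.keras.utils.img_to_array(img))
    if not images:
        continue
    batch = np.stack(images)
    # NOTE: apply the same rescaling/normalization here as during training.
    probs = model.predict(batch, verbose=0).ravel()
    fp_rate = float((probs > 0.5).mean())   # fraction of background flagged as parcel
    print(f"{condition_dir.name}: false-positive rate {fp_rate:.2%} "
          f"on {len(images)} background images")
```

A breakdown like this would at least show whether the false positives are spread evenly or concentrated in particular lighting or background conditions.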
We would greatly appreciate it if somebody could take the time to give us suggestions for improvement.
Many thanks in advance!