Class project that analyzes running form to provide tips for improvement.
ELEC5790/6970-004 Special Topics: AI & Neuromorphic Hardware
- Create a `/models` folder to store the model
- Create a `/data` folder to store data
  - Create a `/videos` subfolder to store videos
- Create a
- Create a
- Create the virtual environment
- Is there some way to simultaneously create the virtual environment and install the dependencies listed in `requirements.txt`?
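There isn't a single built-in command for both steps, but they chain cleanly because the venv's own `pip` can be called without activating the environment first. A minimal sketch, assuming a POSIX shell and that `requirements.txt` sits in the project root (the guard line just lets the sketch run even where no requirements file exists yet):

```shell
# Guard so the sketch runs anywhere; in the real project requirements.txt already exists
[ -f requirements.txt ] || touch requirements.txt

# One line: create the venv, then use its own pip to install the dependencies
python3 -m venv .venv && .venv/bin/pip install -r requirements.txt

# Activate it for interactive work afterwards (Windows: .venv\Scripts\activate)
. .venv/bin/activate
```

Pointing at `.venv/bin/pip` directly is what makes the one-liner work: activation only changes your shell's `PATH`, it is not required for the install itself.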
There are a few key phases of the stride to examine when evaluating a runner's form. We want to train a neural network to detect these phases:
- When the runner's foot strikes the ground (this will help us identify overstriding).
- When the runner's foot leaves the ground (this will help us find limited triple extension).
In the first iteration of this neural network, it will take in pose information generated by Mediapipe's pose landmark model (limb coordinates) and tell us whether the given pose represents a runner whose foot just struck the ground, just left the ground, or neither. To build the training data, we will first segment a video of a person running into clips where the runner's foot just struck the ground, clips where the foot is about to leave the ground, and clips of neither. Then we will run Mediapipe's pose landmark model on each frame of each clip to generate landmark data. The landmark data, which is all numerical, will represent a single datapoint during training and will be stored in a CSV file. Finally, we will go in and label each datapoint with the running phase it corresponds to (foot struck the ground, foot is about to leave the ground, or neither): the "y_actual", so to speak. After doing this, the training data will be ready for use.
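As a sketch of the CSV format described above (the column names and the numeric label encoding are assumptions, not the project's actual schema), each frame's 33 Mediapipe pose landmarks can be flattened into one row, with the phase label appended as the last column:

```python
import csv

NUM_LANDMARKS = 33  # Mediapipe's pose model outputs 33 landmarks per frame
PHASES = {"strike": 0, "toe_off": 1, "neither": 2}  # hypothetical label encoding

def landmarks_to_row(landmarks, phase):
    """Flatten one frame's landmarks (x, y, z, visibility each) into a CSV row."""
    row = []
    for lm in landmarks:
        row.extend([lm["x"], lm["y"], lm["z"], lm["visibility"]])
    row.append(PHASES[phase])  # the "y_actual" goes in the last column
    return row

# Column headers: x0, y0, z0, v0, x1, ... plus the phase label
header = [f"{axis}{i}" for i in range(NUM_LANDMARKS) for axis in ("x", "y", "z", "v")]
header.append("phase")

# Dummy frame standing in for real Mediapipe output
frame = [{"x": 0.5, "y": 0.5, "z": 0.0, "visibility": 1.0}] * NUM_LANDMARKS

with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerow(landmarks_to_row(frame, "strike"))
```

With 33 landmarks at 4 values each, every datapoint is a fixed-length vector of 132 features plus 1 label, which maps directly onto a fully connected network's input layer.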
PyTorch dependency: https://pytorch.org/get-started/locally/
- OpenCV (a dependency of the `mediapipe` pip package) runs on the CPU instead of the GPU.
- How to import a Python file from another directory?
- How to show a video as it is made in real time with a `VideoWriter` without it being extremely laggy
  - Use case: each frame of a video is processed, and currently we are using `cv2.imshow` in a loop to show every new frame
- Reduce resolution of giant videos... One video is 3840 x 2160 pixels
  - SOLVED: use the `shrink_video.py` script