🚀 This is an original project by I Putu Krisna Erlangga, modified here to control a wheelchair.
This project is still under development. It controls the wheelchair with an invisible steering wheel: the user makes hand gestures as if turning a wheel, and an LSTM model classifies each gesture into one of five classes and sends the corresponding output to the wheelchair.
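The repository does not show the real-time inference loop, so here is a minimal sketch of how fixed-length landmark sequences could be buffered before being fed to the LSTM. The `GestureBuffer` name and the 126-feature size (two hands × 21 MediaPipe landmarks × 3 coordinates) are my assumptions, not taken from the project.

```python
from collections import deque

import numpy as np

class GestureBuffer:
    """Keep the most recent landmark frames and expose them as one LSTM input batch."""

    def __init__(self, sequence_length=6, num_features=126):
        # deque with maxlen silently drops the oldest frame once full
        self.frames = deque(maxlen=sequence_length)
        self.sequence_length = sequence_length
        self.num_features = num_features

    def push(self, keypoints):
        """Append one frame of flattened hand keypoints."""
        self.frames.append(np.asarray(keypoints, dtype=np.float32))

    def ready(self):
        """True once a full window of frames has been collected."""
        return len(self.frames) == self.sequence_length

    def batch(self):
        # shape (1, sequence_length, num_features), as model.predict expects
        return np.stack(list(self.frames))[np.newaxis, ...]
```

Each camera frame would be pushed into the buffer; once `ready()` returns True, `batch()` yields an array the trained model can classify.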
Please use separate files for collecting the dataset and for training, and a separate folder for the control program.
venv setup for training
python --version
python -m venv nama_venv
nama_venv\Scripts\activate
pip install opencv-python
pip install mediapipe
pip install numpy
pip install matplotlib
pip install tensorflow
pip install seaborn
pip install scikit-learn
You need an ESP32 and the wheelchair itself to run the control program.
venv setup for control
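The README does not describe the protocol between the PC and the ESP32, so the mapping below is purely hypothetical: one byte per predicted class, sent over a serial link with pyserial (which is not in the install lists above). The actual commands depend on the wheelchair firmware.

```python
# Hypothetical mapping from the five model classes to one-byte serial
# commands for the ESP32; the real protocol is firmware-specific.
COMMANDS = {
    "Follow": b"F",
    "Github": b"G",
    "Agung": b"A",
    "Hari": b"H",
    "Bos": b"B",
}

def command_for(action):
    """Return the serial command for a predicted class, or None if unknown."""
    return COMMANDS.get(action)

# Sending would look something like this (requires `pip install pyserial`
# and the right port name for your machine):
# import serial
# ser = serial.Serial("COM3", 115200)
# ser.write(command_for("Follow"))
```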
python --version
python -m venv nama_venv
nama_venv\Scripts\activate
pip install mediapipe
pip install opencv-python
Change the dataset folder name and the class labels for your setup.
DATA_PATH = os.path.join('p_ganti_ini_pake_namaDataset') # Change this to whatever folder name you like
actions = np.array(["Follow", "Github", "Agung", "Hari", "Bos"])
no_sequences = 50 # number of sequences recorded per class
sequence_length = 6 # number of frames captured per sequence
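The collection notebook presumably saves each frame's keypoints as a `.npy` file under `DATA_PATH/<action>/<sequence>`. As a sketch of that folder layout (the `make_dataset_dirs` helper is mine, not from the repo):

```python
import os

def make_dataset_dirs(data_path, actions, no_sequences):
    """Create data_path/<action>/<sequence_number> folders for the .npy frames."""
    for action in actions:
        for seq in range(no_sequences):
            # exist_ok lets the cell be re-run without errors
            os.makedirs(os.path.join(data_path, action, str(seq)), exist_ok=True)

# Example call using the settings above:
# make_dataset_dirs(DATA_PATH, actions, no_sequences)
```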
Run all cells in AMBIL_DATASET_AGUNG. It should open a camera feed like this.
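Per frame, the notebook has to turn MediaPipe's hand landmarks into a fixed-length feature vector for the model. A common way to do this (the function name and the two-hands-of-63-values layout are assumptions, not taken from the repo):

```python
import numpy as np

def extract_keypoints(results, num_hands=2):
    """Flatten MediaPipe Hands landmarks into a fixed-length vector.

    Missing hands are zero-padded so the vector length is always
    num_hands * 21 landmarks * 3 coordinates (126 by default).
    """
    vec = np.zeros(num_hands * 21 * 3)
    hands = results.multi_hand_landmarks or []
    for i, hand in enumerate(hands[:num_hands]):
        coords = np.array([[lm.x, lm.y, lm.z] for lm in hand.landmark]).flatten()
        vec[i * 63:(i + 1) * 63] = coords
    return vec
```

In the capture loop this would be called on each `hands.process(frame)` result and the returned vector saved with `np.save`.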
After you have normalized the dataset, you can start training the model.
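The repository does not show its normalization step; one common choice for hand-landmark data is to make each frame wrist-relative and scale-invariant, which is what this sketch does (single hand assumed for simplicity):

```python
import numpy as np

def normalize_sequence(sequence):
    """Normalize a landmark sequence of shape (frames, 21, 3).

    Each frame is translated so the wrist (landmark 0) sits at the origin,
    then divided by the largest wrist-to-landmark distance in that frame,
    so the gesture is independent of hand position and size.
    """
    normalized = []
    for frame in sequence:
        centered = frame - frame[0]                      # wrist-relative
        scale = np.max(np.linalg.norm(centered, axis=1))
        if scale > 0:
            centered = centered / scale                  # scale-invariant
        normalized.append(centered)
    return np.stack(normalized)
```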
A CNN-LSTM approach tends to work better than a plain LSTM here: the convolution extracts local patterns from each frame's landmarks before the LSTM models the motion over time.
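The actual architecture is not given in the README, so the following is only a sketch of one plausible CNN-LSTM for this task. The layer sizes and the 126-feature input (two hands × 21 landmarks × 3 coordinates) are assumptions; `sequence_length=6` matches the dataset settings above, and the five outputs match the five classes.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(sequence_length=6, num_features=126, num_classes=5):
    """Conv1D layers extract per-window patterns; the LSTM models the motion."""
    model = models.Sequential([
        layers.Input(shape=(sequence_length, num_features)),
        layers.Conv1D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling1D(2),
        layers.LSTM(64),
        layers.Dense(32, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would then be the usual `model.fit(X_train, y_train, ...)` on the normalized sequences with one-hot labels.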