We've developed a Raspberry Pi-based recycling bin with a camera capable of recognizing and classifying trash.
This README will explain:
- The hardware and software requirements
- How to set it up (both the electric circuits and software)
- Information about its operation
Our objective is to incentivize people to recycle by making it easier to correctly separate trash. The product tells the user the correct place to put their recyclable trash (cardboard, metal/plastic, and glass).
- Raspberry Pi 4B - 1 unit - Development board for ...
- Ultrasonic Sensor - 1 unit - For detecting trash in the box
- Webcam - 1 unit - for taking a photo of the trash
- Servo Motor - 1 unit - for opening the lid
- Raspberry Pi OS - Operating system
- Python - Programming Language and Platform
- Tensorflow - Machine learning framework
- Base dataset
- gRPC - For communication
- gpiozero - Python library for controlling GPIO Pins
These instructions will get you a copy of the project up and running for testing purposes.
- Connect the webcam to the Raspberry Pi (USB 2.0)
- Place the ultrasonic sensor inside the box of the trash can, making sure there is a direct line of sight to the other side of the box
NOTE: The ranges of the ultrasonic sensor depend heavily on the box used. If any changes are necessary, edit these constants in `clientService.py`: `TRASH_DISTANCE = 0.25`, `CLOSER_PLASTIC = 0`, `FARTHER_PLASTIC = 0.09`, `CLOSER_CARDBOARD = 0.09`, `FARTHER_CARDBOARD = 0.19`, `CLOSER_GLASS = 0.19`, `FARTHER_GLASS = 0.25`
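To illustrate how these constants partition the sensor's range, here is a hypothetical sketch of the distance-to-compartment mapping; the actual logic in `clientService.py` may differ, and the function name below is our own.

```python
from typing import Optional

# Constants from clientService.py (distances in metres).
TRASH_DISTANCE = 0.25
CLOSER_PLASTIC = 0
FARTHER_PLASTIC = 0.09
CLOSER_CARDBOARD = 0.09
FARTHER_CARDBOARD = 0.19
CLOSER_GLASS = 0.19
FARTHER_GLASS = 0.25

def compartment_for(distance: float) -> Optional[str]:
    """Hypothetical sketch: map an ultrasonic reading to a compartment.

    Returns None when nothing is within TRASH_DISTANCE of the sensor.
    """
    if distance >= TRASH_DISTANCE:
        return None  # no trash detected in the box
    if CLOSER_PLASTIC <= distance < FARTHER_PLASTIC:
        return "plastic"
    if CLOSER_CARDBOARD <= distance < FARTHER_CARDBOARD:
        return "cardboard"
    if CLOSER_GLASS <= distance < FARTHER_GLASS:
        return "glass"
    return None
```

When tuning the constants for a new box, the zones should stay contiguous and sum to `TRASH_DISTANCE`, as they do in the defaults above.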
The program `util/testUltra.py` prints the sensor's reading.
- Tape a metal wire to the servo and place it in a position where it can open the lid
- Connect power to the Raspberry Pi
Please ensure that both the remote computer (server) and the Raspberry Pi (client) are on the same network and reachable from each other.
We tested the server program in Linux environments.
The client uses GPIO functions and can therefore only be executed on a Raspberry Pi.
These steps have to be performed on both the remote computer and the Raspberry Pi:
- Download the repository
- Install the virtual environment and packages with `make install`
- Activate the virtual environment with `source .venv/bin/activate`
- Compile the protobuf files with `make compile`
IMPORTANT! It may be necessary to change the address and port of the server. This can be done by editing the variable `SERVER_PORT` at the top of the `client.py` and `server.py` files.
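As a hint of what to change, `SERVER_PORT` presumably holds an address/port string; the example value and format below are assumptions, so check the actual variable in `client.py` and `server.py`.

```python
# Hypothetical example of the SERVER_PORT variable; replace the value
# with your server machine's real address. The "<address>:<port>"
# format is an assumption, not confirmed by the repository.
SERVER_PORT = "192.168.1.100:50051"

host, port = SERVER_PORT.rsplit(":", 1)
assert port.isdigit(), "port must be numeric"
print(f"client will connect to {host} on port {port}")
```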
On the remote computer (server), enter the server folder with `cd server`
The server uses a pre-trained model (`retrained_model.keras`). If this file is not already present, we will need to generate it:
- Download the Waste Image Dataset from here and place the folder `WasteImagesDataset` in the root of this repository
- Inside `WasteImagesDataset`, remove all subfolders except:
- Aluminium (please rename to Metal)
- Plastic
- Paper and Cardboard (please rename to Cardboard)
- Glass
- Now, inside the server folder, run `python modelTrain.py` (a heavy operation that may take several hours)
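For orientation, here is a transfer-learning sketch of what `modelTrain.py` plausibly does with the four classes above. The real script's architecture, hyperparameters, and paths are not documented here, so everything below (MobileNetV2 base, image size, optimizer) is an assumption.

```python
# Hypothetical sketch of a retraining script for the four waste classes.
# The actual modelTrain.py may use a different architecture and settings.
import tensorflow as tf

CLASS_NAMES = ["Cardboard", "Glass", "Metal", "Plastic"]  # dataset subfolders
IMG_SIZE = (224, 224)  # assumed input resolution

def build_model(num_classes: int = len(CLASS_NAMES)) -> tf.keras.Model:
    # Frozen ImageNet-pretrained feature extractor + small classification head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Typical usage (assumes WasteImagesDataset/ at the repository root):
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "../WasteImagesDataset", image_size=IMG_SIZE, batch_size=32)
# model = build_model()
# model.fit(train_ds, epochs=10)
# model.save("retrained_model.keras")
```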
To run the server, execute `python server.py`
NOTE: The model may take a few minutes to initialize. Once the server is ready, it will show `Started Server on <address>`
On the Raspberry Pi (client), enter the client folder with `cd client` and run `python client.py`
We haven't implemented any unit or integration tests. There are, however, some utilities that may help test and debug the circuit/code:
cd util
python testServo.py # tests the servo movement (from the MIN position to the MAX position)
python testUltra.py # prints the output of the ultrasonic sensor (1 s interval)
When it's all ready, you should have a setup like this:
When someone wants to deposit trash, they place it under the webcam and press the button on the left. This makes the camera take a photo. Then the lid opens and the RGB LED shows the corresponding trash code (blue: paper, yellow: plastic and metal, green: glass). Once the user deposits the trash, the lid closes.
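The colour code above can be summarized as a small mapping. The label strings below are assumptions based on the dataset's class folders; the actual identifiers in the code may differ.

```python
# Hypothetical mapping from the model's predicted class to the RGB LED
# colour described above; label strings are assumed, not confirmed.
LED_FOR_CLASS = {
    "Cardboard": "blue",   # paper/cardboard compartment
    "Plastic": "yellow",   # plastic and metal share a compartment
    "Metal": "yellow",
    "Glass": "green",
}

def trash_code(predicted_class: str) -> str:
    """Return the LED colour for a predicted class, or 'unknown'."""
    return LED_FOR_CLASS.get(predicted_class, "unknown")
```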
If the machine learning model is not confident in its result, or if the user notices the model made a mistake (by pressing the button on the right), the lid will open and a photo of the item, together with the location where the user placed it, will be sent to the remote machine as training data. Once enough photos are collected (an absolute minimum of 10), the model can be further trained on these images, (hopefully) making it more accurate.
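The feedback loop above boils down to two small decisions, sketched here. Only the 10-sample minimum comes from the text; the confidence threshold is an invented parameter.

```python
# Sketch of the feedback logic described above. MIN_TRAINING_SAMPLES is
# from the text; CONFIDENCE_THRESHOLD is a hypothetical parameter.
MIN_TRAINING_SAMPLES = 10   # absolute minimum before further training
CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off for "not confident"

def should_collect_sample(confidence: float, user_flagged_mistake: bool) -> bool:
    """Should the photo + placement be sent to the remote machine?"""
    return user_flagged_mistake or confidence < CONFIDENCE_THRESHOLD

def ready_to_retrain(num_collected: int) -> bool:
    """Has the absolute minimum number of training photos been reached?"""
    return num_collected >= MIN_TRAINING_SAMPLES
```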
- Adriana Nunes ist199172
- Francisco Carvalho ist199219
- Martim Baltazar ist199280