HAi is an assistive-technology project for elderly users with visual and auditory impairments. By pairing on-device hardware with cloud-hosted neural networks, the system aims to offer a seamless, intuitive experience that supports daily life and promotes independence.
- Advanced Neural Networks: Integrates four different neural networks to interpret visual and auditory data accurately.
- Comprehensive Datasets: Utilizes well-known datasets such as RAVDESS, FER, LibriSpeech, and COCO for training our models.
- YOLOv8 Integration: Employs YOLOv8 (You Only Look Once) for efficient, accurate real-time object detection.
- External Processing: Offloads compute-heavy inference to Linode cloud servers via the Linode API, keeping the system smooth and responsive on Raspberry Pi devices.
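The offloading flow above can be sketched as a small client routine: the Raspberry Pi packages a captured frame into a JSON payload and POSTs it to the cloud inference server. The endpoint path, payload schema, and task names below are illustrative assumptions, not part of the project's actual protocol.

```python
# Sketch of how a Raspberry Pi client might package a captured frame
# for remote inference. Endpoint URL and JSON schema are hypothetical.
import base64
import json

def build_inference_request(image_bytes: bytes, task: str) -> str:
    """Encode an image and task name into a JSON request body."""
    payload = {
        "task": task,  # e.g. "object_detection" or "emotion" (assumed names)
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(payload)

# On the device this body would be POSTed (e.g. with urllib.request)
# to the cloud server; here we just build and inspect it locally.
frame = b"\x89PNG...fake image bytes..."
body = build_inference_request(frame, "object_detection")
decoded = json.loads(body)
print(decoded["task"])
```

Keeping the payload plain JSON with base64-encoded image data keeps the Pi-side dependencies to the standard library alone.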
- Neural Networks: Custom-designed models for specific tasks related to visual and auditory processing.
- Datasets:
  - RAVDESS: Auditory emotion recognition.
  - FER: Facial expression recognition.
  - LibriSpeech: Speech recognition and processing.
  - COCO: Object detection and scene understanding.
- YOLOv8: Real-time object detection and localization.
- Raspberry Pi: Cost-effective and efficient hardware for on-site processing.
- Linode API: Cloud computing for heavy-duty processing and neural network inferences.
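To illustrate what the detection side of the stack computes, here is a minimal sketch of the intersection-over-union (IoU) and non-max suppression step that YOLO-family detectors, including YOLOv8, apply to raw box predictions. The `(x1, y1, x2, y2, score)` box format and the 0.5 threshold are assumptions for the example, not values taken from the project.

```python
# Minimal IoU + non-max suppression, the post-processing step behind
# YOLO-style detectors. Box format (x1, y1, x2, y2, score) is assumed.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, iou_thresh=0.5):
    """Keep only the highest-scoring box in each overlapping cluster."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for box in boxes:
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept

# Two overlapping detections of one object, plus a separate object:
dets = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (50, 50, 60, 60, 0.7)]
print(len(nms(dets)))  # 2 — the 0.8 box overlaps the 0.9 box and is dropped
```

In production this step is handled inside the YOLOv8 pipeline itself; the sketch only shows the logic the cloud server applies before returning detections to the Pi.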