header pic

Jetsy

GitHub_Action_Linux_CI Build status Language grade: Python

Personal assistant robot that interacts with humans on a physical and emotional level.


What is this?

This project aims to create an autonomous robot focused entirely on emotional interaction with the user. It takes the form of a desktop assistant with the typical existing functionalities, but thanks to artificial intelligence you can interact with it through voice and video, enhancing robot-human interaction with the greatest possible fluidity.

We also believe that the aesthetic design of the robot is very important for transmitting emotions to the user, so we have put great care into it.

We also want all of the software to be open source, from the deep learning models to the libraries used. No internet connection is needed for most of its features.

Demo

Demo

Requirements

To run the sample code you need Python 3 and the libraries listed in Requirements/requirements.txt (see How to use below).

How to use

  1. Clone this repo.

git clone https://github.com/juanmacaaz/Jetsy

  2. Go to the directory where the code is located.

cd Jetsy

  3. Install the required libraries using pip:

pip install -r Requirements/requirements.txt

  4. Execute the following command to run the sample code.

python3 run.py

  5. Star this repo if you like it 😃.

Components

SainSmart Display

Raspberry Pi Camera

SG90

GA12-N20 DC motor

Infrared barrier module

L298N

Plastic AA Battery Holder

Arduino Uno

Power Bank

ALT-37

USB Microphone

Front cover

Back cover

Arms

Battery Cover

Hardware Scheme

hardware pic

Software Architecture

software pic

Software Modules

software modules pic

Eyes

Affirmation

affirmation pic

Loved

loves pic

Suspicious

suspicious pic

Angry

angry pic

Happy

happy pic

Normal

normal pic

Sad

sad pic

Arms

arm move pic

Movements

Front

go pic

Left

left pic

Right

right pic

Back

back pic

Proximity Sensors

Frontal

sensor pic

Voice Commands

Voice Controller

Voice Controller

Dance

Dance

Tell me a joke

tell me a joke

Object Classification

object classification

Additional Implementations

  • Where am I
  • Emotion Detection

Built With

  • Tinkercad - model design program.
  • Arduino - IDE used to program the servos.
  • VSCode - code editor used to program the hardware components.
  • Python - main programming language.
  • Adobe Suite - for visual content creation.

License

This project is under the MIT License - see the LICENSE file for details.

Use-Case

If this project helps your robotics project, please let us know by creating an issue.

Amazing Contribution

  • An emotional human-robot interaction never seen before.
  • A desktop assistant 2.0 equipped with artificial intelligence.
  • An assistant with next-level computer vision, leaving current commercial assistants behind.
  • An easy-to-program framework to add new functionality to the robot.
  • All the code is open source and does not require the internet to work.

How to contribute

New states

To create a new state, follow these steps:

  1. Add the initial state of the new functionality to the Default state, found in state/StateMove.py.
  class Default(State):
    next_states = {
        -1: 'Default',
        0: 'Default',
        1: 'InitObject',
        2: 'InitEmotion',
        3: 'InitRepite',
        4: 'InitJoke',
        5: 'InitWhereAmI',
        6: 'Ejemplo1'
    }

    def run(self, kwargs):
        # Identifier produced by the voice detector (see step 2)
        audio_state = self.state_machine.global_data['audio']
        kwargs = {}
        self.go_to(self.next_states[audio_state], kwargs)
  2. Create a new voice instruction in VoiceDetector/StringMaching.py, inside the FRASES list.
 FRASES = [
    ("hola detecta objeto", 1),        # "hello, detect an object"
    ...
    ("hola, esto es un ejemplo", 6)    # "hello, this is an example"
  ]

Note: the second value of each tuple corresponds to the identifier of the new state created in step 1; the sketch below illustrates this lookup.
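As a rough illustration of how this phrase-to-identifier lookup could work, here is a minimal sketch. The real matching logic lives in VoiceDetector/StringMaching.py; the match_phrase helper and the use of difflib below are assumptions for illustration, not the project's actual code:

import difflib

# Hypothetical helper: return the state identifier of the FRASES entry
# that best matches the recognized utterance, or -1 to stay in Default.
def match_phrase(utterance, frases, threshold=0.8):
    best_id, best_score = -1, 0.0
    for phrase, state_id in frases:
        score = difflib.SequenceMatcher(None, utterance.lower(), phrase).ratio()
        if score > best_score:
            best_id, best_score = state_id, score
    return best_id if best_score >= threshold else -1

# e.g. match_phrase("hola esto es un ejemplo", [("hola, esto es un ejemplo", 6)]) returns 6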

  3. You can now create the different states needed to implement the new functionality:
class Ejemplo1(State):
    def run(self, kwargs):
        kwargs = {}
        kwargs['variable'] = 'Vengo del estado uno'    # "I come from state one"
        print("Inicializando secuencia de test")       # "Initializing test sequence"
        self.go_to("Ejemplo2", kwargs)

class Ejemplo2(State):
    def run(self, kwargs):
        # Read the incoming value before rebuilding kwargs
        print(f"Mostramos variable {kwargs['variable']}")   # "Showing the variable"
        print("Modificamos variable y creamos otra")        # "Modifying the variable and creating another"
        kwargs = {}
        kwargs['variable']  = 'He pasado por el estado 2'        # "I went through state 2"
        kwargs['variable2'] = 'He sido creado por el estado 2'   # "I was created by state 2"
        print("Volvemos al estado 1")                       # "Back to state 1"
        self.go_to("Ejemplo1", kwargs)

Any contribution is welcome!

Contact us via email.

Citing

If you use this project's code for your academic work or in industry, we'd love to hear from you as well; feel free to reach out to the developers directly.

Support

Authors

Bibliography
