# Pictomood

Pictomood predicts emotions from images.

## What?

Pictomood is an implementation of the Object-to-Emotion Association (OEA) model, which adds object annotations to the set of features used to predict a human's emotional response to an image.

Makes use of the following features:

| Feature | Derived from |
| --- | --- |
| Object annotations | Microsoft COCO: Common Objects in Context (Lin et al., 2015) |
| Colorfulness score | Measuring colourfulness in natural images (Hasler and Süsstrunk, 2003) |
| Dominant colors palette | Color Thief |
| Mean GLCM contrast | Textural Features for Image Classification (Haralick et al., 1973) |

Built on top of scikit-learn and the TensorFlow Object Detection API.
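
To make the colorfulness and GLCM contrast features concrete, here is a minimal sketch of how they could be computed with scikit-image and NumPy. The helper names and the image path are illustrative only; this is not Pictomood's actual implementation.

```python
import numpy as np
from skimage import color, img_as_ubyte, io
from skimage.feature import greycomatrix, greycoprops

def colorfulness(rgb):
    # Hasler & Süsstrunk (2003): spread and mean of the opponent colour channels
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g
    yb = 0.5 * (r + g) - b
    return (np.sqrt(rg.std() ** 2 + yb.std() ** 2)
            + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))

def mean_glcm_contrast(rgb):
    # Haralick et al. (1973): grey-level co-occurrence matrix contrast,
    # averaged over four directions at pixel distance 1
    gray = img_as_ubyte(color.rgb2gray(rgb))
    glcm = greycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return greycoprops(glcm, 'contrast').mean()

img = io.imread('some_image.jpg')  # illustrative path
print(colorfulness(img), mean_glcm_contrast(img))
```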

## Dependencies

- Python 3.6
- Python packages at `requirements.txt`:

  ```
  # Available on both conda and pip
  scikit-image==0.13.0
  scikit-learn==0.19.1
  tensorflow==1.3.0
  pillow==5.0.0
  pandas==0.20.1
  numpy<=1.12.1
  opencv-python
  imutils

  # Available on pip only
  colorthief==0.2.1
  ```

- Repo-included APIs

## Contributing

### Setup

1. Fork this repo.
2. Clone the fork.
3. Clone the [dataset](https://github.com/pic2mood/training_p2m) to the clone's root path.

   ```
   # BEFORE
   pictomood # clone's root path, clone dataset here
   L .git
   L pictomood
   L # other repo files

   # TO CLONE,
   $ git clone https://github.com/pic2mood/training_p2m.git {clone's root path}

   # AFTER
   pictomood # clone's root path, clone dataset here
   L .git
   L pictomood
   L training_p2m # dataset clone path
   L # other repo files
   ```

4. Set up a Python environment (see the example after this list).
5. Install dependencies.

   ```
   $ pip install -r requirements.txt
   ```
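
For step 4, one possible setup uses conda; the environment name `pictomood` below is just an example, not required by the project.

```
$ conda create -n pictomood python=3.6
$ conda activate pictomood
```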

## Run

### Typical usage

```
python -m pictomood.pictomood --montage --score
```

### Help

```
$ python -m pictomood.pictomood --help
usage: pictomood.py [-h] [--model MODEL] [--parallel] [--batch] [--montage]
                    [--single_path SINGLE_PATH] [--score]

optional arguments:
  -h, --help            show this help message and exit
  --model MODEL         Models are available in /conf. Argument is the config
                        filename suffix. (e.g. --model oea_all # for
                        config_oea_all config file).
  --parallel            Enable parallel processing for faster results.
  --batch               Enable batch processing.
  --montage             Embed result on the image.
  --single_path SINGLE_PATH
                        Single image path if batch is disabled.
  --score               Add model accuracy score to output.
```
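
For example, to run on a single image instead of a batch (the image path below is illustrative):

```
python -m pictomood.pictomood --single_path path/to/image.jpg --montage --score
```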

## Train

### Typical usage

```
python -m pictomood.trainer --model oea_all
```

### Help

```
$ python -m pictomood.trainer --help
usage: trainer.py [-h] [--model MODEL] [--dry_run]

optional arguments:
  -h, --help     show this help message and exit
  --model MODEL  Models are available in /conf. Argument is the config filename
                 suffix. (e.g. --model oea_all # for config_oea_all config
                 file).
  --dry_run      When enabled, the trained model won't be saved.
```
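
For instance, to exercise the training pipeline without overwriting the saved model:

```
python -m pictomood.trainer --model oea_all --dry_run
```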

## Authors

- raymelon
- gorejuice