Software for Continuous Affect Rating and Media Annotation (c) Jeffrey M Girard, 2014-2023
CARMA is a media annotation program that collects continuous ratings while displaying audio and video files. It is designed to be highly user-friendly and easily customizable. CARMA enables researchers and study participants to provide moment-by-moment ratings of multimedia files using a computer mouse, keyboard, or joystick. The rating scale can be configured along several parameters, including its labels and numerical range. Annotations can be displayed alongside the multimedia file and saved for easy import into statistical analysis software. CARMA is a tool for researchers in affective computing, human-computer interaction, and the social sciences who need to capture the unfolding of subjective experience and observable behavior over time.
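Because ratings are exported as plain CSV files, they can be loaded directly into common analysis environments. Below is a minimal sketch in Python using pandas; the file name, single-column layout, and one-rating-per-second sampling are assumptions for illustration and may differ by CARMA version and export settings.

```python
import pandas as pd

# ASSUMPTION: the export is a single-column CSV of ratings sampled once
# per second; the actual layout may vary with CARMA version and settings.
ratings = pd.read_csv("session01_ratings.csv", header=None, names=["rating"])

# Reconstruct a time axis from the assumed 1 Hz sampling rate.
ratings["seconds"] = range(len(ratings))

print(ratings.describe())  # summary statistics of the continuous ratings
print(ratings.head())      # first few moment-by-moment samples
```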
- The latest release of CARMA will always be available from https://github.com/jmgirard/CARMA/releases
- Documentation for CARMA can be accessed via the wiki at https://github.com/jmgirard/CARMA/wiki
- Issues can be reported and features can be requested at https://github.com/jmgirard/CARMA/issues
CARMA was first published by Jeffrey Girard in 2014 under the GNU General Public License version 3 (GPLv3). Users are free to use, distribute, and modify the program as outlined in the license. CARMA is meant to be a modernization of Gottman & Levenson's affect rating dial. A journal article describing CARMA and its use was published in 2014; note, however, that the program and its functionality have changed substantially since that initial publication.
Users must agree to cite the following article in all publications making use of CARMA:
Girard, J. M. (2014). CARMA: Software for continuous affect rating and media annotation. Journal of Open Research Software, 2(1), e5. https://doi.org/10.5334/jors.ar
@article{Girard2014e,
  author  = {Girard, Jeffrey M},
  journal = {Journal of Open Research Software},
  title   = {{CARMA: Software for continuous affect rating and media annotation}},
  year    = {2014},
  volume  = {2},
  number  = {1},
  pages   = {e5},
  doi     = {10.5334/jors.ar}
}
The following publications have made use of CARMA:
- Kaczmarek, L. D., Behnke, M., Enko, J., Kosakowski, M., Guzik, P., & Hughes, B. M. (in press). Splitting the affective atom: Divergence of valence and approach-avoidance motivation during a dynamic emotional experience. Current Psychology. https://doi.org/10/gf2trk
- Dhamija, S., & Boult, T. E. (2018). Automated action units vs. expert raters: Face off. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision.
- Leins, D. A., Zimmerman, L. A., & Polander, E. N. (2017). Observers' real-time sensitivity to deception in naturalistic interviews. Journal of Police and Criminal Psychology. https://doi.org/10.1007/s11896-017-9224-2
- Hammal, Z., Cohn, J. F., Heike, C., & Speltz, M. L. (2015). What can head and facial movements convey about positive and negative affect? In Proceedings of the International Conference on Affective Computing and Intelligent Interaction. https://doi.org/10.1109/ACII.2015.7344584
- Hammal, Z., Cohn, J. F., Heike, C., & Speltz, M. L. (2015). Automatic measurement of head and facial movement for analysis and detection of infants' positive and negative affect. Frontiers in ICT, 2(21). https://doi.org/10.3389/fict.2015.00021
- Dworkin, J. (2015). Capturing emotional suppression as it naturally unfolds in couple interactions. Haverford College. Retrieved from http://hdl.handle.net/10066/16644