ACM CHI 2022

EmoBalloon - Conveying Emotional Arousal in Text Chats with Speech Balloons

TOSHIKI AOKI*†, RINTARO CHUJO*†, KATSUFUMI MATSUI†, SAEMI CHOI‡, ARI HAUTASAARI†

* Both authors contributed equally to this research.
† The University of Tokyo
‡Samsung Research

EmoBalloon is a novel method that uses speech balloons to increase the socio-emotional cues available in text-based chat.

We found that EmoBalloon outperforms emoticons in decreasing the differences between message senders’ and receivers’ perceptions about the level of emotional arousal.

Preview Video [0:30]

Main Video [6:44]

Abstract

Text chat applications are an integral part of daily social and professional communication. However, messages sent over text chat applications do not convey vocal or nonverbal information from the sender, and detecting the emotional tone in text-only messages is challenging. In this paper, we explore the effects of speech balloon shapes on the sender-receiver agreement regarding the emotionality of a text message. We first investigated the relationship between the shape of a speech balloon and the emotionality of speech text in Japanese manga. Based on these results, we created a system that automatically generates speech balloons matching linear emotional arousal intensity using an Auxiliary Classifier Generative Adversarial Network (ACGAN). Our evaluation results from a controlled experiment suggested that the use of emotional speech balloons outperforms the use of emoticons in decreasing the differences between message senders’ and receivers’ perceptions about the level of emotional arousal in text messages.
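
For readers unfamiliar with ACGANs, the sketch below illustrates the general conditioning idea in PyTorch: a generator produces a balloon-shape image from random noise plus a discrete arousal label, while a discriminator both scores realism and predicts the arousal label through an auxiliary classification head. This is a minimal illustrative sketch, not the authors' implementation; all names and settings (BalloonGenerator, BalloonDiscriminator, N_AROUSAL_LEVELS, image size) are assumptions made for the example.

# Minimal ACGAN sketch (illustrative only, not the EmoBalloon code).
# Assumes PyTorch; names and hyperparameters are hypothetical placeholders
# for the idea of conditioning balloon-shape generation on an arousal label.

import torch
import torch.nn as nn

N_AROUSAL_LEVELS = 5   # assumed number of discrete arousal bins
LATENT_DIM = 100
IMG_SIZE = 64          # assumed balloon-mask resolution (grayscale)

class BalloonGenerator(nn.Module):
    """Maps (noise, arousal label) to a balloon-shape image."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_AROUSAL_LEVELS, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM * 2, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 512), nn.ReLU(inplace=True),
            nn.Linear(512, IMG_SIZE * IMG_SIZE), nn.Tanh(),
        )

    def forward(self, z, arousal):
        x = torch.cat([z, self.label_emb(arousal)], dim=1)
        return self.net(x).view(-1, 1, IMG_SIZE, IMG_SIZE)

class BalloonDiscriminator(nn.Module):
    """Outputs (real/fake score, arousal logits): the 'auxiliary classifier'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(IMG_SIZE * IMG_SIZE, 512), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 256), nn.LeakyReLU(0.2, inplace=True),
        )
        self.adv_head = nn.Linear(256, 1)                  # real vs. generated
        self.aux_head = nn.Linear(256, N_AROUSAL_LEVELS)   # arousal classification

    def forward(self, img):
        h = self.features(img.view(img.size(0), -1))
        return self.adv_head(h), self.aux_head(h)

def train_step(G, D, real_imgs, real_labels, opt_g, opt_d):
    """One simplified ACGAN step: D learns realism + arousal classification,
    G learns to produce balloons that look real and carry the intended label."""
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    bsz = real_imgs.size(0)
    z = torch.randn(bsz, LATENT_DIM)
    fake_labels = torch.randint(0, N_AROUSAL_LEVELS, (bsz,))
    fake_imgs = G(z, fake_labels)

    # Discriminator update
    opt_d.zero_grad()
    adv_real, aux_real = D(real_imgs)
    adv_fake, aux_fake = D(fake_imgs.detach())
    d_loss = (bce(adv_real, torch.ones_like(adv_real))
              + bce(adv_fake, torch.zeros_like(adv_fake))
              + ce(aux_real, real_labels)
              + ce(aux_fake, fake_labels))
    d_loss.backward()
    opt_d.step()

    # Generator update
    opt_g.zero_grad()
    adv_fake, aux_fake = D(fake_imgs)
    g_loss = bce(adv_fake, torch.ones_like(adv_fake)) + ce(aux_fake, fake_labels)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

A text-to-arousal estimation step, not shown here, would supply the arousal label for a given chat message before a matching balloon is generated.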

Publication

Toshiki Aoki*, Rintaro Chujo*, Katsufumi Matsui, Saemi Choi, and Ari Hautasaari. 2022. EmoBalloon - Conveying Emotional Arousal in Text Chats with Speech Balloons. In CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 527, 1–16. https://doi.org/10.1145/3491102.3501920 (🏆 Best Paper Award, top 1%) (*: equal contribution)

Download

Paper [PDF]

Talk slide [PDF] [Speaker Deck]

Authors

TOSHIKI AOKI

is a senior student in the Faculty of Engineering, the University of Tokyo. His research focuses on applying machine learning techniques to Human-Computer Interaction, especially its computer graphics aspects. He currently works at Igarashi CREST.

RINTARO CHUJO

is a senior student in the Department of Psychology, the University of Tokyo. His main research areas are Educational Technology, computer-supported cooperative learning, and Human-Computer Interaction, especially human communication.

Dr. KATSUFUMI MATSUI

is a Project Researcher in the Division of University Corporate Relations and the Director of Hongo Tech Garage at The University of Tokyo. His research interests are focused on entrepreneurship education and Human-Computer Interaction.

Dr. SAEMI CHOI

is a Computer Scientist at Samsung Research (Think Tank Team) based in South Korea. Her research interests are in Affective Computing, Multimedia Processing, and Computer-Human Interaction.

Dr. ARI HAUTASAARI

is a Project Associate Professor in the Interfaculty Initiative in Information Studies at the University of Tokyo. His research interests are in multilingual and socio-emotional computer-mediated communication (CMC), computer-supported cooperative work (CSCW), and emotional value exchange in online C2C markets.


MEDIA COVERAGE

  • 2022.05 "The University of Tokyo develops automatic generation of speech balloons matching the emotion of voice input, such as an 'explosion' shape when excited" | ITMedia: link

ACKNOWLEDGMENTS

This work was supported by JSPS KAKENHI Grant Number JP18K18085. This research is part of the results of Value Exchange Engineering, a joint research project between Mercari, Inc. and the RIISE.