Softness, Warmth, and Responsiveness Improve Robot Hugs

authors: Alexis E. Block and Katherine J. Kuchenbecker
publication: International Journal of Social Robotics, October 2018

HuggieBot Teaser

The left image shows the PR2 robot in the Soft-Warm condition, wearing its custom-made outfit. The right image shows a participant hugging the PR2 in this Soft-Warm outfit during the experiment.

Abstract

Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, roboticists are naturally interested in having robots one day hug humans as seamlessly as humans hug other humans. This project’s purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a soft, warm, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty relatively young and rather technical participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Analysis of the results showed that people significantly prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end. Taking part in the experiment also significantly increased positive user opinions of robots and robot use.
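
The responsiveness tested here amounts to a simple closed loop: the robot matches the strength of the user's squeeze and releases as soon as the user begins to let go, so each hug's duration is set by the user rather than by a timer. The sketch below illustrates that logic only; it is not the authors' code, and the sensor and actuator functions (read_chest_pressure, set_arm_squeeze), the release threshold, and the loop rate are hypothetical stand-ins for the PR2's custom pressure sensing and arm control.

import time

RELEASE_FRACTION = 0.5   # hypothetical: release when pressure falls to half its peak
LOOP_HZ = 20             # hypothetical control-loop rate

def read_chest_pressure(samples):
    """Hypothetical sensor read; pops simulated pressure values (0 to 1)."""
    return samples.pop(0) if samples else 0.0

def set_arm_squeeze(level):
    """Hypothetical actuator command; prints instead of moving robot arms."""
    print(f"arm squeeze level: {level:.2f}")

def responsive_hug(samples):
    """Match the user's squeeze pressure, then release when they let go."""
    peak = 0.0
    while samples:
        pressure = read_chest_pressure(samples)
        peak = max(peak, pressure)
        set_arm_squeeze(pressure)  # mirror the user's current squeeze
        # Release as soon as pressure drops well below its peak, i.e.,
        # the moment the user signals they are ready for the hug to end.
        if peak > 0.0 and pressure < RELEASE_FRACTION * peak:
            set_arm_squeeze(0.0)
            print("user released -> robot releases immediately")
            return
        time.sleep(1.0 / LOOP_HZ)

if __name__ == "__main__":
    # Simulated trace: the user squeezes, holds, and then lets go.
    trace = [0.2, 0.5, 0.8, 1.0, 1.0, 0.9, 0.4, 0.1]
    responsive_hug(trace)

Triggering release on a drop from peak pressure, rather than after a fixed hold time, is what lets the robot end the hug exactly when the participant does, which is the behavior users preferred in the study.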

Accompanying Video

Citation

@Article{HuggieBot2018,
	author="Block, Alexis E. and Kuchenbecker, Katherine J.",
	title="Softness, Warmth, and Responsiveness Improve Robot Hugs",
	journal="International Journal of Social Robotics",
	year="2018",
	month="Oct",
	day="25",
	issn="1875-4805",
	doi="10.1007/s12369-018-0495-2",
	url="https://doi.org/10.1007/s12369-018-0495-2"
}

Acknowledgments

Open access funding provided by the Max Planck Society. This work is supported in part by funding from the Max Planck ETH Center for Learning Systems. We thank Elyse Chase for developing the artistic rendering of our project goal, as well as Elisabeth Smela and Ying Chen for providing us with custom-made sensors. We would also like to thank Joe Romano, whose PR2 props code served as a foundation for this experiment; Siyao (Nick) Hu for his help and expertise with the PR2; and Naomi Fitter for her help with the design and analysis of the user study.