{"id":1246,"date":"2017-09-29T11:30:18","date_gmt":"2017-09-29T10:30:18","guid":{"rendered":"http:\/\/blogs.kent.ac.uk\/unikentcomp-news\/?p=1246"},"modified":"2017-09-29T11:30:39","modified_gmt":"2017-09-29T10:30:39","slug":"ai-can-predict-whether-your-relationship-will-last-based-on-how-you-speak-to-your-partner","status":"publish","type":"post","link":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/2017\/09\/29\/ai-can-predict-whether-your-relationship-will-last-based-on-how-you-speak-to-your-partner\/","title":{"rendered":"AI can predict whether your relationship will last based on how you speak to your partner"},"content":{"rendered":"<p><a href=\"https:\/\/theconversation.com\/profiles\/ian-mcloughlin-217755\">Ian McLoughlin<\/a>, <em><a href=\"http:\/\/theconversation.com\/institutions\/university-of-kent-1248\">University of Kent<\/a><\/em><\/p>\n<p>Any child (or spouse) who has been scolded for their tone of voice \u2013 such as shouting or being sarcastic \u2013 knows that the <em>way<\/em> you speak to someone can be just as important as the <em>words<\/em> that you use. Voice artists and actors make great use of this \u2013 they are skilled at imparting meaning in the way that they speak, sometimes much more than the words alone would merit.<\/p>\n<p>But just how much information is carried in our tone of voice and conversation patterns, and how does that impact our relationships with others? Computational systems can already establish <a href=\"https:\/\/theconversation.com\/can-voice-recognition-technology-really-identify-a-masked-jihadi-52787\">who people are from their voices<\/a>, so could they also tell us anything about our love life? Amazingly, it seems like it.<\/p>\n<p>New research, <a href=\"http:\/\/journals.plos.org\/plosone\/article?id=10.1371\/journal.pone.0185123\">just published in the journal PLOS ONE<\/a>, has analysed the vocal characteristics of 134 couples undergoing therapy. 
Researchers from the University of Southern California used computers to extract standard speech analysis features from recordings of therapy session participants over two years. The features \u2013 including <a href=\"http:\/\/iitg.vlab.co.in\/?sub=59&amp;brch=164&amp;sim=1012&amp;cnt=1\">pitch<\/a>, variation in pitch and intonation \u2013 all relate to voice aspects like tone and intensity.<\/p>\n<p>A machine-learning algorithm was then trained to learn a relationship between those vocal features and the eventual outcome of therapy. This wasn\u2019t as simple as detecting shouting or raised voices \u2013 it included the interplay of conversation, who spoke when and for how long, as well as the sound of the voices. It turned out that ignoring what was being said and considering only these patterns of speaking was sufficient to predict whether or not couples would stay together. This was purely data-driven, so it didn\u2019t relate outcomes to specific voice attributes.<\/p>\n<figure><div class=\"kent-video-wrapper\"><span class='embed-youtube' style='text-align:center; display: block;'><iframe class='youtube-player' type='text\/html' width='1140' height='672' src='https:\/\/www.youtube.com\/embed\/GVqJtvRjuns?version=3&#038;rel=1&#038;fs=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;wmode=transparent' frameborder='0' allowfullscreen='true'><\/iframe><\/span><\/div><figcaption><span class=\"caption\">How a tone of voice can change the meaning of a few words.<\/span><\/figcaption><\/figure>\n<p>Interestingly, the full video recordings of the therapy sessions were then given to experts to classify. Unlike the AI, they made their predictions using psychological assessment based on the vocal (and other) attributes \u2013 including the words spoken and body language. Surprisingly, their prediction of the eventual outcome (they were correct in 75.6% of the cases) was inferior to predictions made by the AI based only on vocal characteristics (79.3%). 
Clearly there are elements encoded in the way we speak that not even experts are aware of. But the best results came from combining the automated assessment with the experts\u2019 assessment (79.6% correct).<\/p>\n<p>The significance of this is not so much about involving AI in marriage counselling or getting couples to speak more nicely to each other (however meritorious that would be). The significance is revealing how much information about our underlying feelings is encoded in the way we speak \u2013 some of it completely unknown to us.<\/p>\n<p>Words written on a page or a screen have lexical meanings derived from their dictionary definitions. These are modified by the context of surrounding words. There can be great complexity in writing. But when words are read aloud, they take on additional meanings that are conveyed by word stress, volume, speaking rate and tone of voice. In a typical conversation there is also meaning in how long each speaker talks for, and how quickly one or other might interject.<\/p>\n<p>Consider the simple question \u201cWho are you?\u201d. Try speaking this with stress on different words: \u201cWho are <em>you<\/em>?\u201d, \u201cWho <em>are<\/em> you?\u201d and \u201c<em>Who<\/em> are you?\u201d. <a href=\"http:\/\/www.lintech.org\/clip1\">Listen to these<\/a> \u2013 the semantic meaning can change with how we read even when the words stay the same.<\/p>\n<h2>Computers reading \u2018leaking senses\u2019?<\/h2>\n<p>It is unsurprising that words convey different meanings depending on how they are spoken. It is also unsurprising that computers can interpret some of the meaning behind how we choose to speak (maybe one day they will even be able to <a href=\"http:\/\/felix.syntheticspeech.de\/publications\/ironyDB.pdf\">understand irony<\/a>).<\/p>\n<p>But this research takes matters further than just looking at the meaning conveyed by a sentence. 
It seems to reveal underlying attitudes and thoughts that lie behind the sentences. This is a much deeper level of understanding.<\/p>\n<p>The therapy participants were not reading words like actors. They were just talking naturally \u2013 or as naturally as they could in a therapist\u2019s office. And yet the analysis revealed information about their mutual feelings that they were \u201cleaking\u201d inadvertently into their speech. This may be one of the first steps in using computers to determine what we are really thinking or feeling. Imagine for a moment conversing with future smartphones \u2013 will we \u201cleak\u201d information that they can pick up? How will they respond?<\/p>\n<figure class=\"align-center \"><img src=\"https:\/\/cdn.theconversation.com\/files\/183737\/width754\/file-20170829-10431-1te3x9t.jpg\" alt=\"\" \/><figcaption><span class=\"caption\">Congratulations. Changes in your voice, pulse and pupil size all indicate you\u2019ve found a romantic match.<\/span><br \/>\n<span class=\"attribution\"><span class=\"source\">Astarot\/Shutterstock<\/span><\/span><\/figcaption><\/figure>\n<p>Could they advise us about potential partners by listening to us talking together? Could they detect a propensity towards antisocial behaviour, violence, depression or other conditions? 
It would not take a leap of imagination to picture the <a href=\"https:\/\/www.fastcompany.com\/3047894\/how-to-turn-your-smartphone-into-your-personal-therapist\">devices themselves as future therapists<\/a> \u2013 interacting with us in various ways to track the effectiveness of interventions that they are delivering.<\/p>\n<p>Don\u2019t worry just yet because we are years away from such a future, but it does raise <a href=\"http:\/\/fc15.ifca.ai\/preproceedings\/wearable\/paper_2.pdf\">privacy issues<\/a>, especially as we interact more deeply with computers at the same time as they are becoming more powerful at analysing the world around them.<\/p>\n<p><img loading=\"lazy\" src=\"https:\/\/counter.theconversation.com\/content\/81420\/count.gif?distributor=republish-lightbox-basic\" alt=\"The Conversation\" width=\"1\" height=\"1\" \/>When we also pause to consider the other human senses apart from sound (speech), perhaps we also leak information through sight (such as body language, blushing), touch (temperature and movement) or even smell (pheromones). If smart devices can learn so much by listening to how we speak, one wonders <a href=\"https:\/\/www.wired.com\/insights\/2013\/01\/coming-soon-computers-will-use-the-five-senses-to-enhance-our-lives\/\">how much more they could glean from the other senses<\/a>.<\/p>\n<p><a href=\"https:\/\/theconversation.com\/profiles\/ian-mcloughlin-217755\">Ian McLoughlin<\/a>, Professor of Computing, Head of School (Medway), <em><a href=\"http:\/\/theconversation.com\/institutions\/university-of-kent-1248\">University of Kent<\/a><\/em><\/p>\n<p>This article was originally published on <a href=\"http:\/\/theconversation.com\">The Conversation<\/a>. 
Read the <a href=\"https:\/\/theconversation.com\/ai-can-predict-whether-your-relationship-will-last-based-on-how-you-speak-to-your-partner-81420\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Ian McLoughlin, University of Kent Any child (or spouse) who has been scolded for their tone of voice \u2013 such as shouting or being sarcastic &hellip; <a href=\"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/2017\/09\/29\/ai-can-predict-whether-your-relationship-will-last-based-on-how-you-speak-to-your-partner\/\">Read&nbsp;more<\/a><\/p>\n","protected":false},"author":5321,"featured_media":1247,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[124,70],"tags":[149989,149987],"_links":{"self":[{"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/posts\/1246"}],"collection":[{"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/users\/5321"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/comments?post=1246"}],"version-history":[{"count":2,"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/posts\/1246\/revisions"}],"predecessor-version":[{"id":1249,"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/posts\/1246\/revisions\/1249"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/media\/1247"}],"wp:attachment":[{"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/media?parent=1246"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/categories?post=1246"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.kent.ac.uk\/unikentcomp-news\/wp-json\/wp\/v2\/tags?post=1246"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}