The 100 suicide notes that taught us about creating more empathetic chatbots
While the art of conversation in machines is limited, there are improvements with every iteration. As tools for managing complex conversations are developed, there will be technical and ethical challenges in how they detect and deal with sensitive human matters.
Our work involves building chatbots for a range of uses in health care. Our system, which incorporates a number of algorithms from artificial intelligence (AI) and natural language processing, has been in development at the Australian e-Health Research Centre since 2014.
The system has generated several chatbot apps that are being trialled among selected people, usually those with an underlying health condition or a need for reliable health-related information.
They include HARLIE for Parkinson's disease and autism spectrum disorder, Edna for people receiving genetic counseling, Dolores for people living with chronic pain, and Quin for people who want to quit smoking.
The RECOVER residential robot was a big hit at our recent photoshoot. Our team is currently developing two #chatbots for people with #whiplash and #chronicpain. Dolores will be released at local pain clinics next month. pic.twitter.com/ThG8danV8l
- UQ RECOVER Injury Research Center (@RecoverResearch) May 18, 2021
Research has shown that people with underlying medical conditions are more likely than the general population to experience suicidal thoughts. We need to make sure our chatbots take this into account.
We believe the safest way to understand the language patterns of people experiencing suicidal thoughts is to examine their writing. The choice and arrangement of their words, the sentiment and the rationale all reflect the author's state of mind.
For our recent work we examined more than 100 suicide notes from various texts and identified four relevant language patterns: negative sentiment, constricted thinking, idioms, and fallacies of reasoning.
Read more: Introducing Edna: the chatbot trained to help patients make a difficult medical decision
Negative sentiment and constricted thinking
As might be expected, many of the phrases in the notes we reviewed expressed negative emotions such as:
… Just this heavy despair.
There was also language that signified constricted thinking. For instance:
I will never escape from darkness or misery…
The concept of constricted thinking and language is well documented. Constricted thinking takes an absolutist view of the world when dealing with a prolonged source of distress.
For the author, there is no compromise. The resulting language often features terms such as either/or, always, forever, never, nothing, completely, totally and only.
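As a rough illustration of how such absolutist language might be flagged automatically, here is a minimal sketch in Python. The word list and scoring are hypothetical and purely illustrative; they are not the lexicon or algorithm used in our system.

```python
# Minimal illustrative sketch: flagging absolutist ("constricted") terms with a
# simple keyword match. The word list is a hypothetical example, not a
# clinically validated lexicon.
import re

ABSOLUTIST_TERMS = {
    "always", "never", "forever", "nothing",
    "completely", "totally", "only", "everything", "all",
}

def constricted_score(text: str) -> float:
    """Return the fraction of tokens that are absolutist terms."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in ABSOLUTIST_TERMS)
    return hits / len(tokens)

print(constricted_score("I will never escape from darkness or misery"))
```

A higher score would simply be one cue, among many, that a message deserves closer attention.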
Idioms such as "the grass is greener on the other side" were also common, though not directly linked to suicidal ideation. Idioms are often colloquial and culturally derived, and their true meaning can be very different from the literal interpretation.
Such idioms are difficult for chatbots to understand. If a bot has not been programmed with the intended meaning, it will operate on the assumption of a literal one.
Chatbots can make catastrophic mistakes if they are not coded with knowledge of the true meaning behind certain idioms. In the example below, a more appropriate response from Siri would have been to redirect the user to a crisis helpline.
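One simple way a chatbot could avoid taking an idiom literally is to check the input against a table of known idioms before attempting a literal interpretation. The sketch below is purely illustrative; the idiom table and responses are invented for this example and are not drawn from our deployed bots.

```python
# Illustrative sketch: look up known idioms before falling back to a literal
# interpretation. The idiom table here is a toy example.
IDIOM_MEANINGS = {
    "the grass is greener on the other side": "others' circumstances seem better than one's own",
    "at the end of my rope": "feeling that no options are left",
}

def interpret(utterance: str) -> str:
    text = utterance.lower()
    for idiom, meaning in IDIOM_MEANINGS.items():
        if idiom in text:
            return f"non-literal meaning detected: {meaning}"
    return "no idiom matched; fall back to literal interpretation"

print(interpret("Lately I feel like I'm at the end of my rope."))
```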
The fallacies of reasoning
Words such as therefore and ought, along with their various synonyms, warrant special attention from chatbots. That is because they are often bridging words between thought and action. Behind them lies a logic made up of a premise leading to a conclusion, such as:
If I were dead, she would go on living, laughing, trying to make her fortune. But she has thrown me over and she still does those things. Therefore, I might as well be dead.
This closely resembles a common fallacy (an example of flawed reasoning) called affirming the consequent. Below is a more pathological example of this, called catastrophic logic:
I have failed at everything. If I do this, I will succeed.
This is an example of semantic reflection (and dichotomous thinking) regarding the meaning of I, which alternates between the two clauses that make up the second sentence.
This reflection occurs when the author declares they will experience emotions such as joy or success after completing suicide, which is what this refers to in the note above. This kind of "autopilot" mode of thinking was often described by people in psychological interviews after a suicide attempt.
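To make the earlier point about bridging words concrete, here is a toy Python sketch that simply flags sentences containing words such as therefore or ought, so they can be passed to a human reviewer or a more capable model for deeper analysis. It is an assumption-laden illustration, not the reasoning engine in our system.

```python
# Toy illustration: flag sentences containing "bridging" words that often link
# a premise to a conclusion. The word list is a hypothetical example.
import re

BRIDGING_WORDS = {"therefore", "ought", "so", "hence", "thus", "should"}

def flag_bridging_sentences(text: str) -> list:
    """Return sentences that contain at least one bridging word."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    flagged = []
    for sentence in sentences:
        tokens = set(re.findall(r"[a-z']+", sentence.lower()))
        if tokens & BRIDGING_WORDS:
            flagged.append(sentence)
    return flagged

note = ("But she has thrown me over and she still does those things. "
        "Therefore, I might as well be dead.")
print(flag_bridging_sentences(note))
```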
Preparing future chatbots
The good news is that detecting negative sentiment and constricted language can be achieved with off-the-shelf algorithms and publicly available data. Chatbot developers can (and should) implement these algorithms.
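For example, negative sentiment can be scored with freely available tools such as NLTK's VADER analyser. The snippet below is a generic illustration of one off-the-shelf approach, not the specific pipeline used in our system.

```python
# Generic example of off-the-shelf sentiment detection using NLTK's VADER.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off download of the lexicon

analyzer = SentimentIntensityAnalyzer()
print(analyzer.polarity_scores("Just this heavy despair."))
# Prints neg/neu/pos proportions and a compound score in [-1, 1];
# strongly negative text yields a compound score close to -1.
```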
In general, a bot's detection accuracy will depend on the quality and quantity of its training data. As such, the task of detecting language related to poor mental health should never be left to a single algorithm.
Detecting styles of logical reasoning is a new and promising area of research. Formal logic is well established in mathematics and computer science, but building a machine logic for commonsense reasoning that captures these fallacies is no small feat.
Here is an example of our system reasoning over a short conversation that includes the semantic reflection mentioned earlier. Note that it first forms a hypothesis about what this could be referring to, based on the user interaction.
While this technology still needs further research and development, it gives machines an essential - albeit rudimentary - understanding of how words can relate to complex real-world situations (which is essentially what semantics is about).
And machines need this capability if they are to ultimately deal with sensitive human issues - first by detecting warning signs, and then delivering the appropriate response.
This article by David Ireland, Principal Research Scientist at the Australian e-Health Research Centre, CSIRO, and Dana Kai Bradford, Principal Research Scientist, Australian e-Health Research Centre, CSIRO, is republished from The Conversation under a Creative Commons licence. Read the original article.