The project aims to experimentally verify whether physicians are willing to collaborate with an AI system on a diagnosis and to identify factors that increase physicians' trust in AI. The second goal is to develop settings in which patients' well-being flourishes during AI-assisted diagnosis.
Trust is a vital component of the doctor-patient relationship. When patients trust their physician, their satisfaction increases, and they are more willing to disclose private information and to follow the physician's recommendations. The question arises whether patients can establish a comparable relationship with a virtual doctor. Studies on healthy populations have shown that people can trust an AI system under specific conditions. First, people need a human counterpart in order to trust; anthropomorphic virtual agents (avatars) should therefore be created. Next, such an agent needs to provide nonverbal cues through facial expressions. Trust can also be evoked by creating a so-called virtual doppelganger that resembles the user in gender, age, and appearance. When these conditions are met, AI can be expected to inspire trust in users. However, being a patient is a distinct situation involving a sense of threat and helplessness; trust may therefore be harder to evoke, as patients may feel left alone. Indeed, the results of the first literature review on this question indicate that patients are willing to follow AI guidance only in low-risk settings. With this in mind, the project aims to develop virtual settings in which patients' well-being flourishes during the AI diagnosis process.