While advances in AI have led to many achievements in the last decade, AI cannot yet completely replace humans in complex or critical decision-making. Many human traits, such as intuition, ethical reasoning, and empathy, cannot yet be reliably modeled computationally. Furthermore, AI applications have yet to achieve full autonomy, even as they have enabled the automation of many traditional tasks.
Human interaction with AI applications differs from traditional HCI in several ways. First, the role of humans has shifted from giving detailed, step-by-step instructions to providing high-level goals to the application. Humans serve a more supervisory role when interacting with AI applications. This new role requires humans to identify errors made by the AI and to correct them. In certain scenarios, such error identification and correction need to happen quickly to avoid undesirable consequences (e.g., accidents in the case of autonomous driving).
Second, the relationship between AI applications and humans becomes more collaborative. Instead of a one-way interaction, where the application receives instructions from humans, human-AI interaction is two-way -- humans may receive instructions or guidance from the AI to jointly manage the application's outcomes.
Third, many AI applications, such as voice assistants and chatbots, strive to exhibit human-like behavior. The user interaction paradigms have also shifted from CLI and WIMP interfaces to natural language-based conversational UIs, opening up a huge design space and implications for applying human-human social interaction theories to human-AI interactions.
Finally, AI applications can learn and evolve from interacting with users, either proactively (as in active learning) or passively (as in recommendation systems). Such evolving, personalized behavior may present itself as non-deterministic responses to human inputs, whereas traditional non-AI applications behave deterministically.
Furthermore, when a user produces an error in the interaction (e.g., accidentally clicking on a wrong item or intentionally giving a wrong label to a training sample), the AI application may inadvertently learn from that error, leading to a misbehaving AI over time.
For this research, we aim to design effective AI applications from an HCI perspective, focusing on how humans can collaborate with AI, how humans perceive AI, and how humans can mislead AI.
One domain we are interested in is conversational chatbots for health applications. We are working on using a conversational chatbot to interact regularly with patients to understand their behavior and, in the long term, to change their behavior. One dimension of our research is to evaluate the chatbot's effectiveness, particularly the quality of the information it collects and the long-term behavioral change it produces, compared to traditional survey forms and interview sessions.
From an HCI perspective, we look at the impact of various design factors of the chatbot, such as its persona and the timing of the conversation, on its effectiveness. From an AI perspective, we are interested in how the conversational agent can change its strategy and persona to become more effective after chatting with users. This is part of our vision to push the envelope of AI agents to be not just more accurate, but also more effective in improving users' quality of life.
In addition to health, we are also interested in other domains such as e-commerce and education.