Bio/CV

To view my most recent CV, please click on this link (updated September 7, 2023).

Dr. Chun-Hua Tsai is an Assistant Professor in the Department of Information Systems and Quantitative Analysis (ISQA) at the College of Information Science & Technology, University of Nebraska at Omaha (UNO). Prior to his appointment at UNO, he served as an assistant research professor at the College of Information Sciences and Technology at Penn State University. In 2019, Dr. Tsai received his Ph.D. from the University of Pittsburgh.

His research agenda aims to contribute to the understanding and design of sociotechnical systems through the application of Human-Computer Interaction (HCI) and Human-Centered Computing (HCC) methodologies. His research lies at the intersection of HCI, Intelligent User Interface (IUI), and Artificial Intelligence (AI), with a focus on developing fair, trustworthy, and transparent AI through data-driven and human-centered computing approaches. Dr. Tsai adopts mixed quantitative and qualitative approaches to understand user interactions and experiences with AI-driven systems, exploring design solutions for building trustworthy services and improving human-AI interaction, particularly for recommender systems, educational systems, and social media.

He is particularly interested in designing solutions that empower non-expert users and marginalized groups. His research goals are twofold: first, to generate empirical, conceptual, and theoretical insights into how users of AI-driven systems and social media engage in information retrieval, information seeking, and personal decision-making; and second, to offer practical recommendations for designing more controllable and explainable AI (XAI) mechanisms that support and empower individuals' interactions with AI systems. Dr. Tsai enjoys adopting mixed methods to explore and design solutions that empower stakeholders. He applies qualitative methods such as narrative interviews, thematic analysis, and content analysis to capture people's needs, behaviors, perceptions, and practices in multiple contexts (e.g., social media, health recommendations, and online communities) and translates the results into design implications.

Dr. Tsai's current research focuses on helping users understand the rationale behind the data and computing methods underlying AI-driven systems. He and his research team aim to make the inner logic and data behind recommendations interpretable and understandable to users who are not trained professionals and may lack AI literacy. He seeks to explore and design everyday explanations from which even non-expert users or groups with low AI literacy can benefit, drawing on cross-disciplinary knowledge (e.g., HCI, social science, cognitive science, and psychology) and users' mental models rather than only the domain expert's scientific intuition. More broadly, he explores how explainable AI can be designed to empower human-AI interaction across sociotechnical systems. In 2022, Dr. Tsai received an NSF CRII award to support this research.