Empowering Human Decision-Making on Social Media and Intelligent Systems

My first line of research focuses on designing, implementing, and evaluating novel recommender systems and intelligent user interfaces in social contexts through data science and human-centered approaches. This research line consists of a series of studies on social networks, user control, diversity, and engagement. My work aims to expand our understanding of human decision-making on social media and intelligent systems, such as social recommenders, and to propose novel designs and insights that empower users.

  • Understanding Social Collaborations: In this work [31], I proposed a novel research design for capturing real-time network structures from large-scale news reporting data (~1 million). I characterized relationships between countries by analyzing the similarity of their news coverage of the 2013 APEC (Asia-Pacific Economic Cooperation) Summit, revealing meaningful international relations among member countries through network visualizations. In the project [18,25,28,30], I adopted data science approaches to suggest future collaborations, identifying businesses and scholars likely to attract the same collaborators through emergent social connections. This work identified features that are useful for predicting future social collaborations, and offline evaluations demonstrated the prediction model's high accuracy using supervised learning approaches.
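The feature-based link-prediction idea can be illustrated with a minimal sketch. The toy graph, the single Jaccard feature, and the threshold below are illustrative stand-ins; the actual studies combined several features in a supervised classifier.

```python
from itertools import combinations

# Toy collaboration graph: node -> set of current collaborators.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}

def jaccard(u, v):
    """Jaccard similarity of the two nodes' neighbor sets."""
    nu, nv = graph[u], graph[v]
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def predict_links(threshold=0.3):
    """Score every unconnected pair; high scores suggest future collaboration."""
    predicted = []
    for u, v in combinations(sorted(graph), 2):
        if v not in graph[u] and jaccard(u, v) >= threshold:
            predicted.append((u, v))
    return predicted
```

In a supervised setting, scores like this one become feature columns, and pairs observed to collaborate later serve as positive training labels.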

  • Putting the User in Control: Social recommender systems aim to provide useful suggestions to the user and mitigate the social overload problem. Most research efforts focus on pushing highly relevant items to the top of a ranked list using a weighted-ensemble approach. However, I argued that a pre-trained static fusion is insufficient for specific contexts. In this project [9,10,15,17,26], I developed a series of visual recommendation components and a control panel that let users interact with recommendation results at an academic conference. The system offered better recommendation transparency and user-driven fusion of multiple data sources. The experimental results showed that users did fuse the different recommendation sources and exhibited distinct exploration patterns across tasks. The user-centric evaluation indicated the effectiveness of the system and the quality of the social explanations. These findings shed light on future research questions about designing recommender systems with human intervention and interfaces beyond the static ranked list.
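User-driven fusion can be sketched as a weighted combination of per-source relevance scores, re-ranked whenever the user moves a slider. The source names, items, and weights below are hypothetical; the deployed system used its own data sources and scoring.

```python
def fuse(scores_by_source, weights):
    """Combine per-source relevance scores with user-set weights, return a ranking."""
    fused = {}
    for source, scores in scores_by_source.items():
        w = weights.get(source, 0.0)
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + w * s
    return sorted(fused, key=fused.get, reverse=True)

# Two hypothetical relevance sources scoring three candidate people.
sources = {
    "text":   {"p1": 0.9, "p2": 0.4, "p3": 0.1},
    "social": {"p1": 0.1, "p2": 0.5, "p3": 0.8},
}

# Moving a control-panel slider re-weights the fusion and re-ranks on the fly.
text_only = fuse(sources, {"text": 1.0, "social": 0.0})
blended = fuse(sources, {"text": 0.3, "social": 0.7})
```

The point of the interface is exactly this loop: the weights are not fixed at training time but exposed to the user, so different contexts yield different rankings from the same underlying scores.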

  • Breaking the Filter Bubble: Increasing the diversity of a recommender system's output is an active research question. Most current approaches focus on ranked-list optimization to improve recommendation diversity, but little is known about the effect a visual interface can have on this issue. My project [9,10,16,20] showed that a multidimensional visualization promotes diverse social exploration at academic conferences. I conducted a series of user studies to evaluate my solutions. The results showed significant differences in exploration patterns between the ranking-based and the interactive user interface, and demonstrated that a visual interface can help users explore a more diverse set of recommended items, which could help address the filter-bubble and echo-chamber challenges in such systems.
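Exploration diversity of this kind is commonly quantified with an intra-list diversity metric: the average pairwise dissimilarity of the items a user examined. A minimal sketch, with hypothetical topic tags standing in for real item features:

```python
from itertools import combinations

def intra_list_diversity(items, features):
    """Average pairwise dissimilarity (1 - Jaccard over feature sets)."""
    pairs = list(combinations(items, 2))
    if not pairs:
        return 0.0
    total = 0.0
    for a, b in pairs:
        fa, fb = features[a], features[b]
        union = fa | fb
        sim = len(fa & fb) / len(union) if union else 0.0
        total += 1.0 - sim
    return total / len(pairs)

# Hypothetical topic tags for recommended conference papers.
features = {
    "p1": {"HCI", "vis"},
    "p2": {"HCI", "vis"},
    "p3": {"NLP", "ethics"},
}
```

A near-duplicate slate (e.g., `["p1", "p2"]`) scores near 0, while a varied one (e.g., `["p1", "p3"]`) scores near 1, which makes such a metric a natural instrument for comparing exploration patterns across interfaces.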

  • Investigating Designs for the post-COVID-19 Era: In my recent project [1,5,7], I aim to understand human decision-making and experience during the COVID-19 crisis. I have conducted crisis informatics research on government social media, marginalized groups, and education practitioners' experiences during the crisis transition. I used content analysis, inductive thematic analysis, and narrative interviews to explore stakeholders' practices, experiences, and perceptions. My goal is to identify useful design implications for intelligent and sociotechnical systems that could impact human decision-making and collaboration, such as public engagement with the government, marginalized communities' cross-local collaborations, and new pedagogies for students and faculty.

Improving Human-AI Interaction through Transparency and Explanation

My second line of research focuses on exploring and improving the transparency and explainability of AI-driven systems. I have conducted a series of user studies to understand user perception and experience in human-AI interaction, focusing on hybrid recommender systems that use social relevance from multiple sources to recommend relevant items or people to a user. To fulfill this need, I adopt mixed methods to analyze and understand users' mental models when interacting with AI systems.

  • Making Explainable Recommendations: In my work [9,13,14,27], I attempted to increase the transparency of a recommendation process by augmenting an interactive hybrid recommendation interface with several types of explanations, e.g., for text, social, topic, and item similarities. I evaluated behavior patterns and subjective feedback through user-centric evaluations at three academic conferences. The results show the effectiveness of the proposed explanation models and visualizations. The post-treatment survey and structural equation modeling indicated a significant improvement in perceived explainability, but such improvement came with trade-offs in controllability and cognitive load. This finding sheds light on future research questions about designing better explanations in AI-driven recommenders for non-expert users.

  • Exploring Mental Models on Transparency: My experience demonstrated that explainable user interfaces do not always ensure that users understand the underlying rationale of the data or methods; that is, the recommendations may remain opaque or unpredictable in the user's mental model. In my work [19], I presented a preliminary exploration of how users establish their understanding of the prompted explanations. I collected user-generated text, i.e., users' stated reasons for selecting social recommendations, and applied linguistic analysis to the data along three linguistic dimensions of the LIWC library. The comparison allowed me to observe user feedback and psychological changes across different interface components. These findings shed light on incorporating users' mental models and cross-disciplinary knowledge into human-AI interaction research.
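The LIWC-style analysis can be sketched as computing, for each text, the proportion of words that fall into each linguistic category. The tiny lexicon and category names below are illustrative only; the real LIWC dictionaries are far larger and proprietary.

```python
# Illustrative stand-in for LIWC category dictionaries.
LEXICON = {
    "cognitive": {"think", "because", "know"},
    "affect":    {"happy", "worry", "like"},
    "social":    {"friend", "talk", "we"},
}

def category_proportions(text):
    """Fraction of words in the text that fall into each category."""
    words = text.lower().split()
    n = len(words) or 1
    return {
        cat: sum(w in vocab for w in words) / n
        for cat, vocab in LEXICON.items()
    }

reason = "I like this talk because we know the speaker"
props = category_proportions(reason)
```

Comparing such per-category proportions across interface components is one way to surface the psychological shifts in how users justify their selections.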

  • Promoting Healthy Lifestyles through Explanations: In my work [2,6], I examined how non-collocated families exchange support and collaborate on healthy living via explainable health recommendations. I conducted a week-long field study in which participants, paired as families or friends, completed a daily photo-journaling task on private social media. I used the collected data to generate health recommendations and explanations for promoting healthy daily activities, and adopted a mixed-methods approach to analyze the experimental data, including statistical measures, quantitative survey data, qualitative content analysis, and inductive thematic analysis. I showed that the explainable health recommendations significantly increased health awareness between family members and friends, which sheds light on the future design of explainable health recommendations.

  • Supporting Medical Decisions through Diagnostic Transparency: In the high-stakes healthcare domain, little attention has been paid to transparency and explainability, despite the enormous popularity of intelligent health tools. I aim to bring explanation design to the underexplored health and medical domains through a user-centric evaluation framework. My work [3] promoted diagnostic transparency by augmenting online symptom checkers (OSCs) with explanations. I first identified the explanation content and styles users desire when using OSCs. Based on this empirical data, I proposed and evaluated three explanation styles in a COVID-19 online symptom checker. My research echoes previous work on how explanations can enhance the transparency of intelligent systems: I showed that a more transparent OSC helps reduce uncertainty and confusion about the medical recommendation and supports learning about COVID-19.