The speakers

Simone Stumpf

Simone Stumpf is Professor of Responsible and Interactive AI in the School of Computing Science at the University of Glasgow. She has a long-standing research focus on user interactions with AI systems. Her research spans self-management systems for people living with long-term conditions, teachable object recognisers for people who are blind or have low vision, and AI fairness. Her work has contributed to shaping Explainable AI (XAI) through the Explanatory Debugging approach to interactive machine learning, providing design principles for better human-computer interaction and investigating the effects of greater transparency. The prime aim of her work is to empower all users to use AI systems effectively.

More about Simone Stumpf's topics

Simone will present two distinct sessions during the Winter School: Sessions A1 and A2.

Session A1 - Responsible AI

AI technologies are rapidly advancing and are transforming our work and lives. However, there are grave concerns that AI carries risks and may cause harm to individuals, groups and society. There have been many calls to develop more responsible AI (RAI) systems. In this session, you will learn what RAI is, explore fundamental aspects of developing RAI, and cover current research strands in RAI. We will have hands-on tasks to critically evaluate how ‘responsible’ current AI technologies are, to explore possible ways forward in creating responsible AI systems, and to consider the role of AI in your own research programme.

Session A2 - Explainable AI

Transparency is one of the cornerstones of Responsible AI, and Explainable AI has been seen as the solution to transparency issues. In this session, we will cover different ways of explaining AI systems and their pitfalls. We will center human interpretability as the main purpose of explaining AI systems and delve into aspects that need to be considered when providing explanations of AI systems, as well as when measuring the effects of providing explanations. We will explore current research gaps and integrate exercises and activities to deepen your understanding of explanations.


 

Ujwal Gadiraju

Ujwal Gadiraju is a tenured Assistant Professor in the Software Technology Department of the Faculty of Electrical Engineering, Mathematics, and Computer Science at Delft University of Technology in the Netherlands. He co-directs the TU Delft “Design@Scale” AI Lab and is a member of the program management team of the TU Delft AI Labs. He is the Vice-Chair of CHI Netherlands, representing the human-computer interaction community of academics and industry practitioners in the Netherlands, and has served as an ACM Distinguished Speaker. Ujwal leads a research line on Human-Centered AI and Crowd Computing, actively collaborating with experts in healthcare, finance, and education, and working with a variety of industry partners and NGOs to tackle important societal problems. Before joining the WIS group, Ujwal worked at the L3S Research Center as a postdoctoral researcher from 2017 to 2020. He received a PhD degree (Dr. rer. nat.) in Computer Science with a summa cum laude recognition from the Leibniz University of Hannover, Germany, in 2017, an MSc degree in Computer Science from TU Delft, the Netherlands, in 2012, and a B.Tech. degree in Computer Science and Engineering from VIT University, India, in 2010. His research interests lie at the intersection of Human-Computer Interaction (HCI), Artificial Intelligence (AI), and Information Retrieval (IR). Ujwal has published over 200 peer-reviewed articles in these fields. His work has been recognized with several honors, including 10 paper awards at top-tier HCI and AI conferences. His current research focuses on creating novel methods, interfaces, systems, and tools to overcome existing challenges on the path toward building more effective and inclusive AI systems and facilitating appropriate reliance of humans on such systems.

More about Ujwal Gadiraju's topics

Ujwal will present two distinct sessions during the Winter School: Sessions B1 and B2.

Session B1 - Fostering Appropriate Trust and Reliance in Human-AI Decision-making

Advances in AI and machine learning technologies have accelerated the proliferation and adoption of AI systems across domains ranging from finance to health and education. Researchers and practitioners in different communities exploring the societal impact of integrating AI systems into our everyday lives have recognized the dangers of over-trust and blind reliance on AI systems. In equal measure, there has been a recognition of the potential benefits of collaborating with AI systems that can aid humans in domains and contexts beyond their expertise or otherwise complement human capabilities. Striving to foster appropriate reliance (i.e., simultaneously preventing over-reliance and under-reliance) on AI systems has been akin to walking a tightrope. Over the last decade, several methods and interventions have been proposed to this end, but with limited success. This lecture will present an overview of the empirical pursuit of facilitating appropriate reliance in human-AI decision-making and the lessons we have learned along the way. It will also discuss the open opportunities and challenges that lie ahead in the near future.

Session B2 - Designing Effective Conversational Interfaces for Human-AI Collaboration

The rise in popularity of conversational agents has enabled humans to interact with machines more naturally. People are increasingly familiar with technology-mediated conversational interaction due to the widespread use of LLM agents, mobile devices, and messaging services. Over half the population on our planet has access to the Internet, with ever-lowering barriers to accessibility. Though text is the dominant modality for conversational user interfaces (CUIs) today, foundational AI models enable the implementation of multimodal CUIs using voice and visual modalities. Adopting visual and auditory cues in addition to text-based responses provides an engaging user experience, especially in complex scenarios such as health guidance and job interviewing. This lecture will review state-of-the-art research and best practices on building and deploying multimodal CUIs and synthesize the open research challenges in supporting such CUIs. It will also showcase the benefits of employing novel conversational interfaces in the domains of human-AI decision-making, health and well-being, information retrieval, and crowd computing, and discuss the potential of conversational interfaces in facilitating and mediating people's interactions with AI systems.


 

Pat Pataranutaporn

Pat Pataranutaporn, Ph.D., is a technologist and researcher at the Massachusetts Institute of Technology (MIT), where he explores Human-AI interaction, Cyborg Psychology, synthetic virtual humans, and synthetic biology. Specifically, he focuses on the intersections between biological and digital systems. He completed his Ph.D. in the Fluid Interfaces research group at the MIT Media Lab and was a KBTG Fellow, working in collaboration with teams at NASA, NTT DATA, IBM, KBTG, UCSB, Stanford, and Harvard on advancing the future of human-computer interaction. Pat has also held a position as a researcher at the NASA SETI Frontier Development Lab.

More about Pat Pataranutaporn's topics

Pat will present two distinct sessions during the Winter School: Sessions C1 and C2.

Session C1 - Cyborg Psychology: Designing Human-AI Systems that Support Human Flourishing

As Artificial Intelligence (AI) becomes increasingly integrated into our daily lives, understanding the psychological implications of human-AI interaction is crucial for developing systems that truly support human capabilities. This talk introduces "Cyborg Psychology," an interdisciplinary, human-centered approach to understanding how AI systems influence human psychological processes. Cyborg Psychology emphasizes applying these insights to design and develop AI systems that support human flourishing through the cultivation of Wisdom, Wonder, and Wellbeing. For example, the "Wearable Reasoner" seeks to enhance human rationality, "Personalized Virtual Characters" aim to support learning motivation, and "Future You" is designed to encourage long-term oriented thinking and behavior. The ultimate goal is to empower the development of AI systems that foster human flourishing by nurturing intellectual growth, cultivating motivation, stimulating critical thinking, and preserving individual autonomy in decision-making.

Session C2 - Research Methodologies for Investigating Human Cognitive Vulnerabilities in Cyborg Psychology

This presentation examines methodological approaches for studying the cognitive vulnerabilities that emerge in human-AI interactions. Through an interdisciplinary lens combining human-computer interaction and the behavioral sciences, we analyze how AI systems can inadvertently influence human psychological processes, particularly in the areas of critical thinking, memory, and belief formation. Our research framework identifies key vulnerabilities in human cognition when interfacing with AI, including over-reliance, confirmation bias amplification, and reduced cognitive awareness. We present empirical approaches for investigating these phenomena and discuss implications for protective design measures in AI systems.


 

Joseph Paradiso

Joseph A. Paradiso is the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, where he directs the Responsive Environments group. He received his PhD in Physics from MIT in 1981 and a BSEE from Tufts University in 1977, and joined the Media Lab in 1994. His current research explores how sensor networks augment and mediate human experience, interaction, and perception, encompassing wireless sensing systems, wearable and body sensor networks, energy harvesting and power management for embedded sensors, ubiquitous and pervasive computing, human-computer interfaces, electronic music, and interactive media. He has previously worked in high-energy physics, spacecraft control, and underwater sonar.

More about Joe Paradiso's topic

Joe will present one session during the Winter School: Session E.

Session E - Joe Paradiso's Insights on Human-AI Collaboration

During this session, Joe will share his insights on the notion of the Augmented Human and summarize the concepts covered during the Winter School.