The speakers

Simone Stumpf

Simone Stumpf is Professor of Responsible and Interactive AI at the School of Computing Science at the University of Glasgow. She has a long-standing research focus on user interactions with AI systems. Her research includes self-management systems for people living with long-term conditions, developing teachable object recognisers for people who are blind or have low vision, and investigating AI fairness. Her work has contributed to shaping Explainable AI (XAI) through the Explanatory Debugging approach for interactive machine learning, providing design principles for better human-computer interaction and investigating the effects of greater transparency. The prime aim of her work is to empower all users to use AI systems effectively.

More about Simone Stumpf's topics

Simone will present two distinct sessions during the Winter School: Session A1 and A2.

Session A1 - Responsible AI

AI technologies are rapidly advancing and transforming our work and lives. However, there are grave concerns that AI carries risks and might harm individuals, groups and society. There have been many calls to develop more responsible AI (RAI) systems. In this session, you will learn what RAI is, study fundamental aspects of developing RAI, and cover current research strands in RAI. Hands-on tasks will let you critically evaluate how ‘responsible’ current AI technologies are, possible ways forward in creating responsible AI systems, and the role of AI in your own research programme.

Session A2 - Explainable AI

Transparency is one of the cornerstones of Responsible AI, and Explainable AI has been seen as the solution to transparency issues. In this session, we will cover different ways of explaining AI systems and their pitfalls. We will center human interpretability as the main purpose of explaining AI systems and delve into aspects that need to be considered when providing explanations of AI systems as well as when measuring the effects of providing explanations. We will explore current research gaps and integrate exercises and activities to deepen your understanding of explanations.


 

Ujwal Gadiraju

Ujwal Gadiraju is a tenured Assistant Professor in the Software Technology Department of the Faculty of Electrical Engineering, Mathematics, and Computer Science at Delft University of Technology in the Netherlands. He co-directs the TU Delft “Design@Scale” AI Lab and is a member of the program management team of the TU Delft AI Labs. He is the Vice-Chair of CHI Netherlands, representing the human-computer interaction community of academics and industry practitioners in the Netherlands, and has served as an ACM Distinguished Speaker. Ujwal leads a research line on Human-Centered AI and Crowd Computing, actively collaborating with experts in healthcare, finance, and education and working with a variety of industry partners and NGOs to tackle important societal problems. Before joining the WIS group, Ujwal worked at the L3S Research Center as a postdoctoral researcher from 2017 to 2020. He received a PhD degree (Dr. rer. nat.) in Computer Science with summa cum laude recognition from the Leibniz University of Hannover, Germany, in 2017, an MSc degree in Computer Science from TU Delft, the Netherlands, in 2012, and a B.Tech. degree in Computer Science and Engineering from VIT University, India, in 2010. His research interests lie at the intersection of Human-Computer Interaction (HCI), Artificial Intelligence (AI), and Information Retrieval (IR). Ujwal has published over 200 peer-reviewed articles in these fields. His work has been recognized with several honors, including 10 paper awards at top-tier HCI and AI conferences. His current research focuses on creating novel methods, interfaces, systems, and tools to overcome existing challenges on the path toward building more effective and inclusive AI systems and facilitating appropriate reliance of humans on such systems.

More about Ujwal Gadiraju's topics

Ujwal will present two distinct sessions during the Winter School: Session B1 and B2.

Session B1 - Fostering Appropriate Trust and Reliance in Human-AI Decision-making

Advances in AI and machine learning technologies have accelerated the proliferation and adoption of AI systems across domains ranging from finance to health and education. Researchers and practitioners exploring the societal impact of integrating AI systems into our everyday lives have recognized the dangers of over-trust and blind reliance on AI systems. In equal measure, there has been a recognition of the potential benefits of collaborating with AI systems that can aid humans in domains and contexts beyond their expertise or otherwise complement human capabilities. Striving to foster appropriate reliance (i.e., preventing both over-reliance and under-reliance) on AI systems has been akin to walking a tightrope. Over the last decade, several methods and interventions have been proposed to this end, but with limited success. This lecture will present an overview of the empirical pursuit of facilitating appropriate reliance in human-AI decision-making and the lessons we learned along the way. The lecture will also discuss the open opportunities and challenges that lie ahead of us in the imminent future.

Session B2 - Designing Effective Conversational Interfaces for Human-AI Collaboration

The rise in popularity of conversational agents has enabled humans to interact with machines more naturally. There is a growing familiarity among people with conversational interactions mediated by technology due to the widespread use of LLM agents, mobile devices, and messaging services. Over half the population on our planet has access to the Internet, with ever-lowering barriers to accessibility. Though text is the dominant modality for implementing conversational user interfaces (CUIs) today, foundational AI models enable the implementation of multimodal CUIs using voice and visual modalities. Adopting visual and auditory cues in addition to text-based responses provides an engaging user experience, specifically in complex scenarios like health guidance and job interviewing, among others. This lecture will review state-of-the-art research and best practices on building and deploying multimodal CUIs and synthesize the open research challenges in supporting such CUIs. The lecture will also showcase the benefits of employing novel conversational interfaces in the domains of human-AI decision-making, health and well-being, information retrieval, and crowd computing, and discuss the potential of conversational interfaces in facilitating and mediating the interactions of people with AI systems.


 

Emmanuel Senft

Dr. Emmanuel Senft is a Research Scientist at the Idiap Research Institute in Martigny, Switzerland and a member of the Swiss Young Academy. He leads the Human-centered Robotics and AI group, which develops assistive robots by exploring the intersection of participatory design, end-user programming, and interactive machine learning. He obtained his MSc in micro-engineering from EPFL and his PhD in human-robot interaction from the University of Plymouth, UK. Before joining Idiap, he was a cooperative researcher at ATR in Japan and a Research Associate at the University of Wisconsin-Madison in the USA.

More about Emmanuel Senft's topic

Emmanuel will present one session during the Winter School: Session C1.1.

Session C1.1 - End-user programming (EUP) for AI

End-user programming (EUP), or no-code programming, tools aim to lower the barrier of entry for end-users to directly specify AI and robot programs. By transferring this capability from engineers to end-users, EUP enables users to customize their experiences according to their unique needs and preferences. This talk will discuss challenges specific to EUP and traditional modalities, with a particular focus on recent work in EUP for human-robot interaction.


 

Sandrine Tornay

Dr. Sandrine Tornay is a Postdoctoral Researcher at the Idiap Research Institute (Martigny, Switzerland) in the group of Dr. Mathew Magimai Doss. She is currently working on the R&D of sign language learning technologies. She received her Ph.D. degree in Electrical Engineering from the Swiss Federal Institute of Technology (EPFL, Switzerland) after completing a Master's degree in Mathematics at the University of Fribourg (Switzerland). Her thesis focused on explainable phonology-based approaches for sign language recognition and assessment. Her research interests lie at the intersection of sign language processing, accessibility, education, child-oriented technology and human-AI interaction.

More about Sandrine Tornay's topic

Sandrine will present one session during the Winter School: Session C1.2.

Session C1.2 - Assistive Technology for Sign Language Learning

In language learning, learners need to develop comprehension and production skills, both of which are necessary for successful interaction. The use of digital technologies to support the acquisition of these skills has proven effective in spoken language learning and is emerging in sign language learning. Most existing tools for sign language learning have been developed for the acquisition of comprehension, while the production side involves only self-comparison. However, learning effective sign language production requires good proprioception, spatial reasoning and observation skills, as sign language is a gestural mode of communication that uses multiple channels of information to convey meaning: hand gestures, body posture, facial expression and mouthing. There is a need to develop applications that can guide the learner in different aspects of sign language production. This talk will present the methods behind an AI-driven, web-based sign language learning application that automatically assesses sign language production. The talk will provide insights into an assistive technology that deals with hand movement and handshape modeling, multi-channel modeling, and explainability requirements, in a low-resource framework.


 

Pat Pataranutaporn

Pat Pataranutaporn, Ph.D. is a technologist and a researcher at the Massachusetts Institute of Technology (MIT), where he explores Human-AI interaction, Cyborg Psychology, synthetic virtual humans, and synthetic biology. Specifically, he focuses on the intersections between biological and digital systems. He finished his Ph.D. in the Fluid Interfaces research group at the MIT Media Lab and was a KBTG Fellow, working in collaboration with teams at NASA, NTT DATA, IBM, KBTG, UCSB, Stanford, and Harvard on advancing the future of human-computer interaction. Pat has also held a position as a researcher at the NASA SETI Frontier Development Lab.

More about Pat Pataranutaporn's topics

Pat will remotely present part of a session during the Winter School: Session C1.2.

Session C1.2 - Cyborg Psychology: Designing Human-AI Systems that Support Human Flourishing

As Artificial Intelligence (AI) becomes increasingly integrated into our daily lives, understanding the psychological implications of human-AI interaction is crucial for developing systems that truly support human capabilities. This talk introduces "Cyborg Psychology," an interdisciplinary, human-centered approach to understanding how AI systems influence human psychological processes. Cyborg Psychology emphasizes applying these insights to design and develop AI systems that support human flourishing through the cultivation of Wisdom, Wonder, and Wellbeing. For example, the "Wearable Reasoner" seeks to enhance human rationality, "Personalized Virtual Characters" aims to support learning motivation, and "Future You" is designed to encourage long-term oriented thinking and behavior. The ultimate goal is to empower the development of AI systems that foster human flourishing by nurturing intellectual growth, cultivating motivation, stimulating critical thinking, and preserving individual autonomy in decision-making.


 

André Freitas

André Freitas is a Research Group Leader at the Idiap Research Institute, an Associate Professor (Senior Lecturer) at the Department of Computer Science at the University of Manchester and an AI Group Leader at the CRUK National Biomarker Centre. He leads the Neuro-symbolic AI Group. His main research interests are in enabling the development of AI methods to support explainable, safe and flexible reasoning for complex analytical problems, by integrating contemporary Generative AI paradigms with formal reasoning methods. He has published at major conferences and journals at the AI/Natural Language Processing (NLP) interface, including AAAI, NeurIPS, ACL, EMNLP, EACL, NAACL, COLING, TACL and CL.

More about André Freitas's topic

André will present one session during the Winter School: Session C2.

Session C2 - Teaming with Generative AI Agents

This tutorial offers an in-depth exploration of human-expert and multi-agent collaboration in the context of Generative AI Agents, with a particular focus on domains requiring advanced analytical capacity—such as policymaking, biomedicine, and the physics of novel materials. As these specialized domains increasingly adopt Generative AI systems, we witness an unprecedented opportunity to realize a vision of augmented rationality, wherein experts and AI agents cooperate in a synergistic manner to address complex analytical challenges. We approach this topic through the lens of neuro-symbolic AI, emphasizing the importance of reasoning mechanisms that can guide, justify, and refine the outputs of generative models. We introduce the notion of analytical Generative AI Agents: systems designed to tackle domain-specific complexities while upholding the principles of Responsible AI. The tutorial will dissect the architectural elements needed to implement such agents effectively, from model composition to interpretability and transparent decision-making workflows. In addition, participants will be introduced to emerging practices for evaluating these multi-agent systems in human-AI teaming contexts. By the end of the tutorial, attendees will have gained a deep understanding of how to design, deploy, and assess analytical Generative AI Agents, aiming to foster responsible and effective collaborations between humans and AI-driven multi-agent systems.


 

Joseph Paradiso

Joseph A. Paradiso is the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, where he directs the Responsive Environments group. Joe Paradiso received his PhD in Physics from MIT in 1981 and a BSEE from Tufts University in 1977, and joined the Media Lab in 1994. His current research explores how sensor networks augment and mediate human experience, interaction, and perception—encompassing wireless sensing systems, wearable and body sensor networks, energy harvesting and power management for embedded sensors, ubiquitous and pervasive computing, human-computer interfaces, electronic music, and interactive media. He has previously worked in high-energy physics, spacecraft control, and underwater sonar.

More about Joe Paradiso's topic

Joe will present one session during the Winter School: Session E.

Session E - Joe Paradiso's insights on Human-AI Collaboration

During this session, Joe will share his insights on the notion of the Augmented Human and summarize the concepts covered during the Winter School.