Bayesian Theory of Surprise to Quantify Degrees of Uncertainty
Nelly Bencomo, Durham University, United Kingdom
Empowering AI Through Frugality
Amparo Alonso Betanzos, University of A Coruña, Spain
Learning Compatible Representation
Alberto Del Bimbo, Università degli Studi di Firenze, Italy
The Challenge of Computing Responsible AI
Thomas B. Moeslund, Aalborg University, Denmark
Bayesian Theory of Surprise to Quantify Degrees of Uncertainty
Nelly Bencomo
Durham University
United Kingdom
Brief Bio
Nelly Bencomo is a professor in the Computer Science Department at Durham University and the leader of the Research Team at SE@Durham. In 2019, Nelly was granted the Leverhulme Fellowship “QuantUn: quantification of uncertainty using Bayesian surprises”. Before that, she held a Marie Curie Fellowship at INRIA Paris-Rocquencourt, in a project called Requirements-aware Systems (nickname: Requirements@run.time). Nelly exploits the interdisciplinary aspects of model-driven engineering (MDE) and software engineering, comprising both technical and human concerns, while developing techniques for intelligent, autonomous and highly distributed systems. With other colleagues, she coined the research topic models@run.time. Her research informs the design of systems that involve communities of people and technology (https://aihs.webspace.durham.ac.uk/socio-technical-systems/). She is the PI of the EPSRC Twenty20Insight research project, an interdisciplinary project bringing together academic experts in Software Engineering (SE), Requirements Engineering (RE), Design Thinking and Machine Learning (ML) to help system stakeholders and developers understand and reason about the impact of intelligent systems on the world in which they operate. Twenty20Insight actively supports the explainability of the behaviour exposed by the running system. She also leads the EPSRC IAA project weDecide: Clinical Tool for Shared Decision-Making for Treatment of Menopause Symptoms.
Nelly has actively participated in European and UK EPSRC projects on self-adaptive and autonomous systems. She was the program chair of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS) in 2014, and co-program chair of the 12th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO) in 2018 and of the 25th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MODELS) in 2022. Nelly is an Associate Editor of ACM Transactions on Autonomous and Adaptive Systems, and was an Associate Editor of IEEE Transactions on Software Engineering (TSE) and a member of the Editorial Board of the Journal of Systems and Software. She is also a member-at-large of the IEEE TCSE (Technical Council on Software Engineering) (2020-24) and a member of the Steering Committee of MODELS. She has served as a PC member and organizing team member of multiple SE-related conferences (e.g., ICSE, ASE, MODELS, RE, REFSQ, ICSA).
Website: www.nellybencomo.me
Abstract
In the specific area of software engineering (SE) for self-adaptive systems (SASs), there is growing research awareness of the synergy between SE and artificial intelligence (AI). We are just starting. In this talk, we will present a novel and formal Bayesian definition of surprise as the basis for quantitative analysis to measure degrees of uncertainty and deviations of self-adaptive systems from normal behaviour. Surprise measures how observed data affect the models or assumptions of the world at runtime. The key idea is that a “surprising” event can be defined as one that causes a large divergence between the belief distributions prior to and posterior to the event occurring. In such a case, the system may decide either to adapt accordingly or to flag that an abnormal situation is happening. We will discuss possible applications of the Bayesian theory of surprise to self-adaptive systems using Bayesian inference and Partially Observable Markov Decision Processes (POMDPs). We will also cover different surprise-based approaches to quantifying uncertainty (Bayesian Surprise, Shannon Surprise, and Bayes Factor Surprise) and related work on Digital Twins.
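To make the definition concrete, below is a minimal Python sketch of Bayesian surprise for a Beta-Bernoulli belief model. It illustrates the general idea only and is not the speaker's implementation; the prior parameters and observation counts are assumptions chosen for the example. Surprise is computed as the KL divergence between the posterior belief (after observing a batch of binary runtime outcomes, e.g., request successes and failures) and the prior belief.

# A minimal sketch of Bayesian surprise for a Beta-Bernoulli belief model.
# Illustrative only: the prior parameters and observations are assumptions.
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """Closed-form KL( Beta(a1, b1) || Beta(a2, b2) ), in nats."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 + b2 - a1 - b1) * digamma(a1 + b1))

def bayesian_surprise(alpha, beta, successes, failures):
    """Surprise = KL(posterior || prior) after observing Bernoulli outcomes."""
    return kl_beta(alpha + successes, beta + failures, alpha, beta)

# Prior belief: the monitored operation succeeds roughly 80% of the time.
print(bayesian_surprise(8, 2, successes=9, failures=1))  # consistent data: small surprise
print(bayesian_surprise(8, 2, successes=1, failures=9))  # anomalous data: large surprise

A running system could compare this value against a calibrated threshold: below it, the observation is absorbed as a routine belief update; above it, the system flags an abnormal situation or triggers adaptation.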
Empowering AI Through Frugality
Amparo Alonso Betanzos
University of A Coruña
Spain
Brief Bio
Amparo Alonso Betanzos is a Full Professor in Computer Science and Artificial Intelligence at CITIC-University of A Coruña (UDC). Her research lines include scalable machine learning models; frugal, reliable and explainable artificial intelligence; and agent-based models for environmental applications.
She has published more than 200 articles in journals, international conferences, books, and book chapters, and has participated in more than 30 competitive European, national and local research projects.
She is a Senior Member of IEEE and ACM, and a corresponding member of the Royal Spanish Academy of Exact, Physical, and Natural Sciences. She has also been a member of the Advisory Council on Artificial Intelligence of the Ministry of Digital Transformation and Public Function of the Government of Spain since 2020, and a member of the Spanish Research Ethics Committee of the Ministry of Science, Innovation and Universities of the Government of Spain since 2023.
Abstract
Frugal AI represents a new generation of artificial intelligence focused on maximizing performance and accessibility through optimized, cost-effective, and sustainable solutions. Unlike traditional AI models, often resource-intensive and costly, Frugal AI promotes the development of efficient systems deployable in resource-constrained contexts. This approach enables businesses and organizations of various scales to adopt AI technologies tailored to their needs and capacities, democratizing access to artificial intelligence while fostering a more ethical and sustainable approach.
Learning Compatible Representation
Alberto Del Bimbo
Università degli Studi di Firenze
Italy
http://www.micc.unifi.it/delbimbo/
Brief Bio
Alberto Del Bimbo is Emeritus Professor of Computer Engineering at the University of Firenze, Italy. He has authored over 500 scientific publications in Computer Vision, Artificial Intelligence and Multimedia. His current research interests are in compatibility learning, intelligent surveillance and automation, and smart environments for cultural heritage. He served as Associate Editor of IEEE TPAMI and IEEE TMM, and as Editor-in-Chief of the ACM TOMM journal. He was the General Chair of ACM Multimedia (ACMMM 2010 and 2022), ICPR 2020 (the International Conference on Pattern Recognition), ECCV 2012 (the European Conference on Computer Vision) and IEEE ICMCS 1999 (the International Conference on Multimedia Computing and Systems), and the Program Chair of other highly ranked scientific conferences. He led important collaborations with industry and has been the scientific advisor of institutions worldwide. He is a member of the Scientific Advisory Board of the INSIGHT Centre on Artificial Intelligence, Computer Vision and Data Analysis in Ireland, and of the Intelligent Media Research Centre of the Harbin Institute of Technology, Shenzhen, China. He was the recipient of the ACM SIGMM Award for Outstanding Technical Contributions to Multimedia Computing, Communications and Applications. He is an ACM Distinguished Scientist, ELLIS Fellow and IAPR Fellow. Presently, he is the Chair of ACM SIGMM, the ACM Special Interest Group on Multimedia, bringing together the scientific community in Multimedia.
Abstract
Representation learning is a fundamental aspect of deep learning and underpins core tasks such as search, retrieval, and recognition. These systems operate by matching input query images against a gallery set: a trained model encodes the gallery images into feature representations, and when a query arrives, the system retrieves the gallery representations most similar to the encoded query. Early research in these areas has largely considered models that do not change over time. Less attention has been given to scenarios where the availability of new training data requires model updates, or where adopting a more expressive model is needed to enhance performance, which can lead to completely different representations. In these cases, recomputing the feature vectors for all images in the gallery set, a process known as backfilling or re-indexing, becomes essential. However, this can be prohibitively expensive for real-world galleries containing vast amounts of data, sometimes even billions of images, or even infeasible if the original data is no longer available due to privacy concerns or storage restrictions. As this issue has gained more research attention, learning compatible representations has become a central focus. This approach tackles the difficult problem of learning a new representation model without needing to recalculate gallery features using the updated model. Our talk will explore the foundational challenges of compatible representation learning and the key role of representation stationarity in this process, along with novel training techniques for developing compatible feature representations through stationarity.
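As a concrete illustration of what compatibility buys at retrieval time, here is a minimal NumPy sketch; it is an assumption-laden example, not the speaker's training method. The hypothetical old_model and new_model stand in for the original and upgraded encoders: the gallery is indexed once with the old model, queries are encoded with the new one, and if the representations are compatible, similarity search works across the two without backfilling.

# A minimal sketch of cross-model retrieval with compatible representations.
# old_model / new_model below are assumed placeholders, not a specific method.
import numpy as np

def l2_normalize(x, eps=1e-12):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def retrieve(query_feats, gallery_feats, top_k=5):
    """Return the indices of the top-k most similar gallery items per query."""
    sims = l2_normalize(query_feats) @ l2_normalize(gallery_feats).T
    return np.argsort(-sims, axis=1)[:, :top_k]

# Random features stand in for encoder outputs so the sketch runs as-is:
#   gallery_feats = old_model(gallery_images)  # computed once at indexing time
#   query_feats   = new_model(query_images)    # upgraded encoder at query time
rng = np.random.default_rng(0)
gallery_feats = rng.normal(size=(1000, 128))
query_feats = rng.normal(size=(4, 128))
print(retrieve(query_feats, gallery_feats, top_k=3))

If the two models are trained to be compatible, retrieve() can be applied directly to this mixed pair of feature sets; otherwise the gallery features would first have to be backfilled, i.e., re-encoded with the new model.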
The Challenge of Computing Responsible AI
Thomas B. Moeslund
Aalborg University
Denmark
Brief Bio
Professor Moeslund received his PhD from Aalborg University in 2003 and currently leads the Visual Analysis and Perception lab at Aalborg University (~25 people), the Media Technology section at Aalborg University (~40 people) and the AI for the People Center at Aalborg University (~140 people). His research covers all aspects of software systems for the automatic analysis of data, especially visual data. He has been involved in more than 50 national and international research projects. Professor Moeslund has (co-)edited eight special journal issues and (co-)chaired 30+ scientific events. He has 300+ publications, including books and peer-reviewed journal and conference papers, and has been cited 19,500+ times (h-index 55). Awards include a most cited paper award, a teacher of the year award, an innovation award, and 8 best paper awards.
https://thbm.blog.aau.dk/
Abstract
Current and upcoming regulatory frameworks for governing the use of AI include notions like ‘Human-centric AI’, ‘Fair AI’, ‘Robust AI’, ‘The right to be forgotten’, etc. While such concepts seem reasonable, matters become fuzzier when defining how to make them auditable. This, in turn, has put a new and increased spotlight on both classical and novel scientific topics such as Machine Unlearning, XAI, Uncertainty, Model Drift and Accuracy. In this talk, we will delve into these matters by unfolding the relevant concepts and exemplifying the current gap between high-level notions and practical solutions.