
Keynote Lectures

Forecast the Forecasting
Marco Cristani, University of Verona, Italy

A Review on Malicious Facial Image Processing and Possible Counter-Measures
Jean-Luc Dugelay, MULTIMEDIA, EURECOM, France

Combining Images and Words in Deep Networks that Identify People from Body Shape
Alice J. O'Toole, School of Behavioral and Brain Sciences, The University of Texas at Dallas, United States

Letting Go of the Numbers: Measuring AI Trustworthiness
Carol J. Smith, AI Division, Trust Lab, Carnegie Mellon University, Software Engineering Institute, United States

 

Forecast the Forecasting

Marco Cristani
University of Verona
Italy
 

Brief Bio
Marco Cristani is Professor at the Department of Engineering for Innovation Medicine at the University of Verona, Associate Member at the National Research Council (CNR), and External Collaborator at the Italian Institute of Technology (IIT). His main research interests are in statistical pattern recognition and computer vision, particularly generative modeling, with a strong emphasis on industrial applications. On these topics, he has published more than 200 papers. In 2016 he co-founded Humatics, an academic start-up dealing with e-commerce for fashion, which was sold in 2021 to SYS-DAT Group. He is now the founder of Qualyco, a new company focused on industrial quality control. He is or has been the Principal Investigator of several national and international projects, including PRIN and H2020 projects. He will be Program Chair of ICIAP 2025 in Rome. He is an IAPR Fellow.


Abstract
Traditionally, forecasting belongs to the area of probability and statistics and mainly concerns numerical series in the econometric and financial fields. However, several prediction-related topics are emerging in pattern recognition, such as predicting trajectories, human poses, behaviors, and many others. Some of these are not limited to academic research and have important implications in the industrial field. This talk offers an updated overview of this landscape, presenting some of the most effective technologies and introducing little-explored but very promising applications where traditional forecasting and machine learning could, and should, meet.



 

 

A Review on Malicious Facial Image Processing and Possible Counter-Measures

Jean-Luc Dugelay
MULTIMEDIA, EURECOM
France
http://image.eurecom.fr/~dugelay
 

Brief Bio
Jean-Luc Dugelay obtained his PhD in Information Technology from the University of Rennes in 1992. His thesis work was undertaken at CCETT (France Télécom Research) in Rennes between 1989 and 1992. He then joined EURECOM in Sophia Antipolis, where he is now a Professor in the Department of Digital Security. His current work focuses on multimedia image processing, in particular security (image forensics, biometrics and video surveillance, mini drones) and facial image processing. He has authored or co-authored over 285 publications in journals and conference proceedings, 1 book on 3D object processing published by Wiley, 5 book chapters, and 3 international patents. His research group is involved in several national and European projects. He has delivered several tutorials on digital watermarking, biometrics, and compression at major international conferences such as ACM Multimedia and IEEE ICASSP. He has participated in numerous scientific events as a member of technical committees, an invited speaker, or a session chair. He is a Fellow of IEEE, IAPR, and AAIA, and an elected member of the EURASIP BoG. He is (or has been) an associate editor of several international journals (IEEE Trans. on Image Processing, IEEE Trans. on Multimedia) and is the founding Editor-in-Chief of the EURASIP Journal on Image and Video Processing (SpringerOpen). He is co-author of several conference articles that received IEEE awards in 2011, 2012, 2013, and 2016. He co-organized the 4th IEEE International Conference on Multimedia Signal Processing held in Cannes, 2001, and the Multimodal User Authentication workshop held in Santa Barbara, 2003. In 2015, he served as general co-chair of IEEE ICIP (Québec City) and EURASIP EUSIPCO (Nice).


Abstract
Recent advances in visual sensor technologies and computer vision enable the creation of increasingly efficient face recognition systems and, more generally, facial image processing tools. These new capabilities raise ethical and societal concerns. In parallel, such advances, in deep learning for example, contribute to the proliferation of malicious digital manipulations. The main objectives in designing such attacks are to access resources illegally, to harm individuals, or to render certain technologies ineffective. In this presentation, we give an overview of existing attacks and related techniques attached to applications in the domains of biometrics (e.g., identity spoofing), media (e.g., fake news), and video surveillance (e.g., de-anonymization). We also discuss recent works that investigate potential counter-measures for detecting such attacks.



 

 

Combining Images and Words in Deep Networks that Identify People from Body Shape

Alice J. O'Toole
School of Behavioral and Brain Sciences, The University of Texas at Dallas
United States
https://profiles.utdallas.edu/alice.otoole
 

Brief Bio
Alice J. O’Toole is a Professor with the School of Behavioral and Brain Sciences at The University of Texas at Dallas. She currently holds the Aage and Margareta Moller Endowed Chair. She received her Ph.D. in experimental psychology from Brown University, Providence, RI, USA. Following postdoctoral work at the Ecole Nationale Supérieure des Télécommunications, Paris, France, she came to The University of Texas at Dallas, where she established a laboratory for visual perception and face/object recognition experiments. Over the last 30 years, she has published over 200 peer-reviewed manuscripts spanning the fields of psychology, neuroscience, and computational vision. Her research has been funded by the National Institute of Justice, the National Institute of Standards and Technology, as well as DARPA, IARPA, the National Eye Institute, and the Alexander von Humboldt Foundation. She currently serves as an Associate Editor for the British Journal of Psychology and the Journal of Vision. She is a Fellow of the Association for Psychological Science.


Abstract
Many important applications of person identification occur at distances and viewpoints from which the face is not visible or is insufficiently resolved to be useful. In these cases, body shape can serve as a biometric modality that generalizes across distance and viewpoint variation. I will discuss our approach to body identification, which combines standard object classification networks with representations based on word-based descriptions of bodies. In our previous work, we showed that a relatively small number of linguistic descriptors (e.g., broad shoulders, muscular, curvy, pear-shaped, short legs) can be used in conjunction with a PCA-based model of 3D bodies to generate an accurate 3D reconstruction of a body. We reasoned that this description might also be sufficient for identifying individual bodies. I will give an overview of our work implementing body shape identification networks with and without linguistic training. We compared these models on their ability to identify people from body shape using images captured across a large range of distances/views (close-range, 100m-1000m, and in images from unmanned aerial vehicles [UAVs]). Accuracy, as measured by identity-match ranking and false accept errors, was surprisingly good. Non-linguistic models were generally more accurate for close-range images, whereas linguistic models fared better at farther distances and for UAV images. Fusion of the linguistic and non-linguistic embeddings improved performance in nearly all cases. This work shows that linguistic and non-linguistic representations of body shape can offer complementary identity information that improves identification in applications of interest. From a broader perspective, the evolution of language into a highly effective and concise form of communication may underlie its current success in a wide range of machine learning applications.



 

 

Letting Go of the Numbers: Measuring AI Trustworthiness

Carol J. Smith
AI Division, Trust Lab, Carnegie Mellon University, Software Engineering Institute
United States
https://scholars.cmu.edu/8816-carol-smith
 

Brief Bio
Carol Smith is the AI Division Trust Lab Lead and Principal Researcher at the Carnegie Mellon University (CMU) Software Engineering Institute. She leads research focused on development practices that result in trustworthy, human-centered, and responsible AI systems. She has been conducting research to improve the human experience with complex systems across industries for over 20 years. Since 2015, she has led research to integrate ethics into, and improve human experiences with, AI systems, autonomous vehicles, and other complex and emerging technologies. She is recognized globally as a leading researcher and user experience advocate and has presented over 250 talks and workshops in over 40 cities around the world. Her writing can be found in publications from organizations including AAAI, ACM, and the UXPA, and she has taught courses and lectured at CMU and other leading institutions. Ms. Smith is currently an ACM Distinguished Speaker and a Working Group member of the IEEE P7008™ Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems. She holds a Master of Science degree in Human-Computer Interaction from DePaul University.


Abstract
AI systems need to be designed to work with, and for, the people using them. A person’s willingness to trust a particular system is based on their expectations of the system’s behavior. Their trust is complex, transient, and personal – it cannot easily be measured. However, an AI system’s trustworthiness can be measured. A trustworthy AI system demonstrates that it will fulfill its promise by providing evidence that it is dependable in the context of use, and the end user has awareness of its capabilities during use. We can measure reliability and instrument systems to monitor usage (or lack thereof) quantitatively. However, AI’s potential is bound to perceptions of its trustworthiness, which requires qualitative measures to fully ascertain. Doing AI well requires a reset – letting go of (some of) the numbers, and learning new methods that provide a more complete assessment of the system.


