
Keynote Lectures

Privacy-preserving Machine Learning for Multimedia Data
Andrea Cavallaro, Queen Mary University of London, United Kingdom

Toward User-adaptive Visualizations
Cristina Conati, University of British Columbia, Canada

Integrating Generative Modeling into Deep Learning
Max Welling, University of Amsterdam, Netherlands

 

Privacy-preserving Machine Learning for Multimedia Data

Andrea Cavallaro
Queen Mary University of London
United Kingdom
 

Brief Bio
Andrea Cavallaro is Professor of Multimedia Signal Processing at Queen Mary University of London (QMUL) and a Turing Fellow at the Alan Turing Institute, the UK National Institute for Data Science and Artificial Intelligence. He is the founding Director of the Centre for Intelligent Sensing and Director of Research of the School of Electronic Engineering and Computer Science at QMUL. He received his Ph.D. in Electrical Engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, in 2002, and was a Research Fellow with British Telecommunications (BT) in 2004/2005. He was awarded the Royal Academy of Engineering Teaching Prize in 2007; three student paper awards on target tracking and perceptually sensitive coding at IEEE ICASSP in 2005, 2007 and 2009; and the best paper award at IEEE AVSS 2009. Prof. Cavallaro is Senior Area Editor for the IEEE Transactions on Image Processing and Associate Editor for the IEEE Transactions on Circuits and Systems for Video Technology and IEEE Multimedia. He is a past Area Editor for the IEEE Signal Processing Magazine (2012-2014) and a past Associate Editor for the IEEE Transactions on Image Processing (2011-2015), IEEE Transactions on Signal Processing (2009-2011), IEEE Transactions on Multimedia (2009-2010) and IEEE Signal Processing Magazine (2008-2011). He is the 2019 chair of the IEEE Signal Processing Society Image, Video, and Multidimensional Signal Processing Technical Committee and an elected member of the IEEE Video Signal Processing and Communication Technical Committee. He is a past elected member of the IEEE Multimedia Signal Processing Technical Committee and of the Image, Video, and Multidimensional Signal Processing Technical Committee, and chair of its Awards committee. Prof. Cavallaro has published over 250 journal and conference papers, one monograph, Video Tracking (Wiley, 2011), and three edited books: Multi-camera Networks (Elsevier, 2009); Analysis, Retrieval and Delivery of Multimedia Content (Springer, 2012); and Intelligent Multimedia Surveillance (Springer, 2013). He is a Fellow of the International Association for Pattern Recognition (IAPR).


Abstract
The pervasiveness of cameras and associated sensors offers incredible opportunities to improve services and our quality of life, while simultaneously posing important societal challenges. Enabling multimedia applications and services while protecting privacy is a major challenge that must be addressed to keep pace with the increasing capabilities of classifiers and the changing demands of users. In this talk I will cover new privacy-preserving analytical frameworks for multimedia data from single and multiple sensors, such as images, videos, audio, and accelerometer and gyroscope data. Specifically, I will discuss applications including image sharing in social media, mobile health, and video capture, and talk about desirable (and undesirable) inferences from various types of data.
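To make the notion of desirable versus undesirable inferences concrete, here is a minimal Python sketch (my own toy construction, not a framework from the talk): it builds synthetic accelerometer-like features in which one direction predicts an activity label (desirable) and another leaks an identity label (undesirable), then suppresses the leaking direction with a simple linear projection. All feature values, labels and the projection baseline are assumptions for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
activity = rng.integers(0, 2, n)   # desirable label (e.g., walking vs. running)
identity = rng.integers(0, 2, n)   # sensitive label (e.g., which user)

# Synthetic accelerometer-like features: axis 0 carries activity,
# axis 1 leaks identity, the remaining axes are noise.
X = rng.normal(size=(n, 8))
X[:, 0] += 2.0 * activity
X[:, 1] += 2.0 * identity

def accuracy(features, labels):
    Xtr, Xte, ytr, yte = train_test_split(features, labels, random_state=0)
    return LogisticRegression().fit(Xtr, ytr).score(Xte, yte)

print("raw features:       activity", accuracy(X, activity),
      "identity", accuracy(X, identity))

# Privacy step: fit a linear probe for the sensitive attribute and
# project the features onto the subspace orthogonal to it.
w = LogisticRegression().fit(X, identity).coef_[0]
w /= np.linalg.norm(w)
X_priv = X - np.outer(X @ w, w)

print("projected features: activity", accuracy(X_priv, activity),
      "identity", accuracy(X_priv, identity))

After the projection, identity accuracy drops to roughly chance while activity accuracy is largely preserved; real multimedia data of course entangles the two far more than this linear toy does.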



 

 

Toward User-adaptive Visualizations

Cristina Conati
University of British Columbia
Canada
 

Brief Bio
Dr. Conati is a Professor of Computer Science at the University of British Columbia, Vancouver, Canada. She received an M.Sc. in Computer Science at the University of Milan, as well as an M.Sc. and a Ph.D. in Intelligent Systems at the University of Pittsburgh. Conati's research is at the intersection of Artificial Intelligence (AI), Human-Computer Interaction (HCI) and Cognitive Science, with the goal of creating intelligent interactive systems that capture relevant user properties (states, skills, needs) and personalize the interaction accordingly. Her areas of interest include User Modeling, Affective Computing, Intelligent Virtual Agents, and Intelligent Tutoring Systems. Conati has over 100 peer-reviewed publications in these fields, and her research has received awards from a variety of venues, including UMUAI, the Journal of User Modeling and User-Adapted Interaction (2002); the ACM International Conference on Intelligent User Interfaces (IUI 2007); the International Conference on User Modeling, Adaptation and Personalization (UMAP 2013, 2014); TiiS, the ACM Transactions on Interactive Intelligent Systems (2014); and the International Conference on Intelligent Virtual Agents (IVA 2016). Dr. Conati is an associate editor for UMUAI, ACM TiiS, IEEE Transactions on Affective Computing, and the Journal of Artificial Intelligence in Education. She served as President of AAAC (the Association for the Advancement of Affective Computing), as well as Program or Conference Chair for several international conferences, including UMAP, ACM IUI, and AI in Education. She is a member of the Executive Committee of AAAI (the Association for the Advancement of Artificial Intelligence).


Abstract
Information Visualization (InfoVis) is becoming increasingly important given the continuous growth of applications that allow users to view and manipulate complex data, not only in professional settings but also for personal use. To date, visualizations are typically designed based on the types of tasks and data to be handled, without taking user differences into account. However, there is mounting evidence that visualization effectiveness depends on a user's specific preferences, abilities, states, and even personality.

These findings have triggered research on user-adaptive visualizations, i.e., visualizations that can track and adapt to relevant user characteristics and specific needs. In this talk, I will present results on which user differences can impact visualization processing and on how these differences can be captured by predictive machine learning models based on eye-tracking data. I will also discuss how to leverage these models to provide personalized support that can improve the user's experience with a visualization.
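As a hedged illustration of the kind of pipeline described above (synthetic data and invented feature names, not Dr. Conati's actual models), the Python sketch below predicts a binary user trait from summary eye-tracking features with a standard classifier.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
trait = rng.integers(0, 2, n)   # hypothetical trait, e.g., low vs. high visualization literacy

# Invented per-user gaze summaries; the trait effects are assumptions.
fix_dur  = rng.normal(250, 40, n) + 30 * (1 - trait)       # mean fixation duration (ms)
fix_rate = rng.normal(3.0, 0.5, n) + 0.4 * trait           # fixations per second
saccade  = rng.normal(4.0, 1.0, n)                         # mean saccade amplitude (deg)
legend   = np.clip(rng.normal(0.15, 0.05, n)
                   + 0.05 * (1 - trait), 0, 1)             # share of gaze time on the legend

X = np.column_stack([fix_dur, fix_rate, saccade, legend])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, trait, cv=5).mean())

In a real study the features would come from an eye tracker and the labels from validated tests; the interesting step, as the talk discusses, is then using such a model's online predictions to drive adaptive support.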



 

 

Integrating Generative Modeling into Deep Learning

Max Welling
University of Amsterdam
Netherlands
https://staff.fnwi.uva.nl/m.welling/
 

Brief Bio
Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a VP Technologies at Qualcomm. He has a secondary appointment as a senior fellow at the Canadian Institute for Advanced Research (CIFAR). He is a co-founder of Scyfer BV, a university spin-off in deep learning that was acquired by Qualcomm in the summer of 2017. He previously held postdoctoral positions at Caltech ('98-'00), UCL ('00-'01) and the University of Toronto ('01-'03). He received his PhD in '98 under the supervision of Nobel laureate Prof. G. 't Hooft. Max Welling served as Associate Editor-in-Chief of IEEE TPAMI from 2011 to 2015 (impact factor 4.8). He has served on the board of the NIPS Foundation (the largest conference in machine learning) since 2015, and was program chair and general chair of NIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016, and general chair of MIDL 2018. He has served on the editorial boards of JMLR and JML, and was an associate editor for Neurocomputing, JCGS and TPAMI. He has received multiple grants from Google, Facebook, Yahoo, NSF, NIH, NWO and ONR-MURI, including an NSF CAREER grant in 2005. He received the ECCV Koenderink Prize in 2010. Welling is on the board of the Data Science Research Center in Amsterdam, directs the Amsterdam Machine Learning Lab (AMLAB), and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA). He has over 250 scientific publications in machine learning, computer vision, statistics and physics, and an h-index of 58.


Abstract
Deep learning has boosted the performance of many applications tremendously, such as object classification and detection in images, speech recognition and understanding, machine translation, and game playing in chess and Go. However, these are all reasonably narrow, well-defined tasks for which it is feasible to collect very large datasets. For artificial general intelligence (AGI) we will need to learn from a small number of samples, generalize to entirely new domains, and reason about a problem. What do we need in order to make progress toward AGI? I will argue that we need to combine the data generating process, such as the physics of the domain and the causal relationships between objects, with the tools of deep learning. In this talk I will present a first attempt to integrate the theory of graphical models, which was arguably the dominant machine learning modeling paradigm around the turn of the twenty-first century, with deep learning. Graphical models express the relations between random variables in an interpretable way, and probabilistic inference in such networks can be used to reason about these variables. I will propose a new hybrid paradigm in which probabilistic message passing in such networks is enhanced with graph convolutional neural networks to improve the ability of such systems to reason and make predictions.
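To ground the proposal, here is a minimal Python sketch of classical sum-product message passing on a chain of binary variables (a deliberate simplification on my part, not the model from the talk). The hybrid paradigm would keep this factor-graph structure but let a graph (convolutional) neural network learn or refine the message updates instead of using only the hand-derived ones below; the potentials are assumed values.

import numpy as np

K = 2                                  # binary variables
T = 5                                  # chain x1 - x2 - ... - x5
unary = np.array([[0.7, 0.3]] * T)     # local evidence p(obs | x_t), assumed values
pair = np.array([[0.9, 0.1],           # psi(x_t, x_{t+1}): favors agreement
                 [0.1, 0.9]])

# Forward and backward messages along the chain (sum-product).
fwd = np.ones((T, K))
for t in range(1, T):
    m = (fwd[t - 1] * unary[t - 1]) @ pair
    fwd[t] = m / m.sum()

bwd = np.ones((T, K))
for t in range(T - 2, -1, -1):
    m = pair @ (bwd[t + 1] * unary[t + 1])
    bwd[t] = m / m.sum()

# Exact posterior marginals for each variable.
marginals = fwd * unary * bwd
marginals /= marginals.sum(axis=1, keepdims=True)
print(marginals)

On a tree like this chain, message passing is exact; on loopy graphs it is only approximate, which is precisely where a learned, GNN-enhanced message update has room to improve reasoning and prediction.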


