INDEED 2018 Abstracts


Full Papers
Paper Nr: 31
Title: A Deep Learning based Food Recognition System for Lifelog Images
Authors: Binh T. Nguyen, Duc-Tien Dang-Nguyen, Tien X. Dang, Thai Phat and Cathal Gurrin

Abstract: In this paper, we propose a deep learning based system for food recognition from personal life archive images. The system first identifies eating moments based on multi-modal information, then focuses on and enhances the food images available in these moments, and finally exploits GoogLeNet as the core of the learning process to recognise the food category of the images. Preliminary results from experiments on the food recognition module show that the proposed system achieves 95.97% classification accuracy on food images taken from the personal life archives of several lifeloggers, and the approach can potentially be extended and applied in broader scenarios and to different types of food categories.
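The abstract gives no implementation details; as an illustration only, a minimal sketch of fine-tuning a pretrained GoogLeNet-style backbone (via torchvision) for food-category classification. The number of categories, folder layout, and hyper-parameters are assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code): fine-tune a pretrained GoogLeNet on
# a folder of food images, one sub-folder per food category.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 10  # assumption: number of food categories in the archive

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),          # GoogLeNet expects 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed layout: food_images/<category_name>/<image>.jpg
train_set = datasets.ImageFolder("food_images", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.googlenet(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:               # one training pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```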

Paper Nr: 33
Title: Social Relation Trait Discovery from Visual LifeLog Data with Facial Multi-Attribute Framework
Authors: Tung Duy Dinh, Dinh-Hieu Nguyen and Minh-Triet Tran

Abstract: Social relation defines the status of interactions among individuals or groups. Although people's happiness is significantly affected by the quality of their social relationships, few studies have focused on this aspect. This motivates us to propose a method to discover potential social relation traits, the interpersonal feelings between two people, from visual lifelog data in order to improve the status of our relationships. We propose the Facial Multi-Attribute Framework (FMAF), a flexible network that can embed different sets of multiple pre-trained Facial Single-Attribute Networks to capture various facial features, such as head pose, expression, age, and gender, for social trait evaluation. We adopt the Inception-ResNet-V2 architecture for each single-attribute component to exploit the flexibility of the Inception model and to avoid the degradation problem through its residual modules. We use a Siamese network with two FMAFs to evaluate social relation traits for the two main persons in an image. Our experiment on the social relation trait dataset of Zhangpeng Zhang et al. shows that our method achieves an accuracy of 77.30%, which is 4.10% higher than the state-of-the-art result (73.20%). We also develop a prototype system integrated into Facebook to analyse and visualize the chronological changes in social traits between a user and their friends in daily life via uploaded photos and video clips.
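To illustrate the Siamese arrangement the abstract describes (not the authors' FMAF code), a minimal sketch in which one shared embedding branch processes two face crops and a small head predicts relation traits from the concatenated features. The backbone (ResNet-18 instead of the paper's Inception-ResNet-V2 attribute sub-networks), feature size, and number of traits are illustrative assumptions.

```python
# Minimal sketch: shared-weight Siamese branches over two face crops.
import torch
import torch.nn as nn
from torchvision import models

NUM_TRAITS = 8  # assumption: number of social relation traits to predict

class SiameseRelationNet(nn.Module):
    def __init__(self, feat_dim=256, num_traits=NUM_TRAITS):
        super().__init__()
        # Stand-in for one FMAF branch: a backbone producing a fixed-size
        # facial feature vector.
        backbone = models.resnet18(pretrained=True)
        backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
        self.branch = backbone                      # reused for both faces -> shared weights
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_traits),             # one logit per relation trait
        )

    def forward(self, face_a, face_b):
        f_a = self.branch(face_a)                   # same branch for both inputs
        f_b = self.branch(face_b)
        return self.head(torch.cat([f_a, f_b], dim=1))

# Usage: two batches of aligned face crops for the two main persons.
model = SiameseRelationNet()
faces_a = torch.randn(4, 3, 224, 224)
faces_b = torch.randn(4, 3, 224, 224)
logits = model(faces_a, faces_b)                    # shape: (4, NUM_TRAITS)
```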

Paper Nr: 35
Title: Lightweight Deep Convolutional Network for Tiny Object Recognition
Authors: Thanh-Dat Truong, Vinh-Tiep Nguyen and Minh-Triet Tran

Abstract: Object recognition is an important problem in Computer Vision with many applications, such as image search, autonomous cars, and image understanding. In recent years, Convolutional Neural Network (CNN) based models have achieved great success in object recognition, especially VGG, ResNet, and Wide ResNet. However, these models involve a large number of parameters that should be trained with large-scale datasets on powerful computing systems. Thus, it is not appropriate to train a heavy CNN on a small-scale dataset with only thousands of samples, as such a model easily overfits. Furthermore, it is not efficient to use an existing heavy CNN to recognize small images, such as those in CIFAR-10 or CIFAR-100. In this paper, we propose a Lightweight Deep Convolutional Neural Network architecture for tiny images, codenamed “DCTI”, to significantly reduce the number of parameters for such datasets. Additionally, we use batch normalization to deal with the change in the distribution of inputs to each layer. To demonstrate the efficiency of the proposed method, we conduct experiments on two popular datasets: CIFAR-10 and CIFAR-100. The results show that the proposed network not only significantly reduces the number of parameters but also improves performance. The number of parameters in our method is only 21.33% of that of Wide ResNet, yet our method achieves up to 94.34% accuracy on CIFAR-10, compared to 96.11% for Wide ResNet. Our method also achieves an accuracy of 73.65% on CIFAR-100.
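The abstract does not detail the DCTI architecture; as an illustration only, a minimal sketch of a small convolutional network for 32x32 CIFAR images in which every convolution is followed by batch normalization, showing how such a network stays light on parameters. The layer widths and depth are assumptions, not the paper's design.

```python
# Minimal sketch (not the DCTI architecture): conv -> batch norm -> ReLU blocks
# for tiny 32x32 images, with a parameter count printed for comparison.
import torch
import torch.nn as nn

def conv_bn(in_ch, out_ch):
    # 3x3 convolution followed by batch normalization and ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyCifarNet(nn.Module):
    def __init__(self, num_classes=10):            # 10 for CIFAR-10, 100 for CIFAR-100
        super().__init__()
        self.features = nn.Sequential(
            conv_bn(3, 32), conv_bn(32, 32), nn.MaxPool2d(2),      # 32x32 -> 16x16
            conv_bn(32, 64), conv_bn(64, 64), nn.MaxPool2d(2),     # 16x16 -> 8x8
            conv_bn(64, 128), conv_bn(128, 128), nn.MaxPool2d(2),  # 8x8 -> 4x4
        )
        self.classifier = nn.Linear(128 * 4 * 4, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyCifarNet(num_classes=10)
print(sum(p.numel() for p in model.parameters()))   # parameter count of this sketch
```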

Short Papers
Paper Nr: 32
Title: HealthyClassroom - A Proof-of-Concept Study for Discovering Students’ Daily Moods and Classroom Emotions to Enhance a Learning-teaching Process using Heterogeneous Sensors
Authors: Minh-son Dao, Duc-Tien Dang-Nguyen, Asem Kasem and Hung Tran-The

Abstract: This paper introduces an interactive system that discovers students’ daily moods and classroom emotions to enhance the teaching and learning process using heterogeneous sensors. The system is designed to enable (1) detecting students’ daily moods and classroom emotions using physiological signals, physical activity data, and event tags coming from wristband sensors and smartphones, (2) discovering associations/correlations between students’ lifestyle and daily moods, and (3) displaying statistical reports and the distribution of students’ daily moods and classroom emotions, in both individual and group modes. A pilot proof-of-concept study was carried out using Empatica E4 wristband sensors and Android smartphones, and a preliminary evaluation and findings, which show promising results, are reported and discussed.
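As an illustration of the association/correlation step (2), a minimal sketch of correlating daily lifestyle features derived from wristband data with a self-reported daily mood score. The column names, feature choices, and values are assumptions for illustration only, not the authors' data or analysis.

```python
# Minimal sketch: correlate assumed daily lifestyle features with mood scores.
import pandas as pd

# Assumed daily log: one row per student per day.
daily = pd.DataFrame({
    "steps":       [4200, 9800, 7600, 3100, 11200],
    "sleep_hours": [5.5, 7.8, 6.9, 5.0, 8.1],
    "mean_eda":    [0.42, 0.31, 0.35, 0.48, 0.29],   # electrodermal activity
    "mood_score":  [2, 4, 3, 1, 5],                  # 1 (negative) .. 5 (positive)
})

# Pearson correlation of each lifestyle feature with the daily mood score.
correlations = daily.drop(columns="mood_score").corrwith(daily["mood_score"])
print(correlations.sort_values(ascending=False))
```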