Introduction

Today's digital content is inherently multimedia: text, image, audio, video and more, thanks to the advancement of multimodal sensors. Images and videos, in particular, have become a new medium of communication among Internet users with the proliferation of sensor-rich mobile devices. Accelerated by the tremendous increase in Internet bandwidth and storage, multimedia data is being generated, published and spread explosively, becoming an indispensable part of today's big data. Such large-scale multimedia data has opened up challenges and opportunities for intelligent multimedia analysis, e.g., management, retrieval, recognition, categorization and visualization. Meanwhile, recent advances in deep learning techniques allow us to boost the intelligence of multimedia analysis significantly and to initiate new research directions for analyzing multimedia content. For instance, convolutional neural networks have demonstrated high capability in image and video recognition, while recurrent neural networks are widely exploited to model temporal dynamics in videos. Therefore, deep learning for intelligent multimedia analysis is emerging as a research area at the intersection of multimedia and computer vision.

The goal of this workshop is to call for a coordinated effort to understand the scenarios and challenges emerging in multimedia analysis with deep learning techniques, identify key tasks and evaluate the state of the art, showcase innovative methodologies and ideas, introduce large-scale real systems and applications, propose new real-world datasets, and discuss future directions. The multimedia data of interest cover a wide spectrum, ranging from text, audio, images and click-through logs to Web and surveillance videos. We solicit manuscripts in all fields of multimedia analysis that explore the synergy between multimedia understanding and deep learning techniques.

Topics of Interest

The workshop will offer a timely collection of research updates to benefit researchers and practitioners working in broad fields ranging from computer vision and multimedia to machine learning. To this end, we solicit original research and survey papers addressing the topics listed below (but not limited to them):

  • Multimedia Retrieval (image search, video search, speech/audio search, music search, retrieval models, learning to rank, hashing).
  • Web IR and Social Media (link analysis, click models, user behavior mining, social tagging, social network analysis, community-based QA).
  • Deep image/video understanding (object detection and recognition, localization, summarization, highlight detection, action recognition, multimedia event detection and recounting, semantic segmentation, tracking).
  • Vision and language (image/video captioning, visual Q&A, image/video commenting, storytelling).
  • Multimedia data browsing, visualization, clustering and knowledge discovery.
  • Home/public video surveillance analysis (motion detection and classification, scene understanding, event detection and recognition, people analysis, object tracking and segmentation, human computer/robot interaction, behavior recognition, crowd analysis).
  • Multimedia-based security and privacy analysis.
  • Data collections, benchmarking, and performance evaluation.
  • Other applications of large-scale multimedia data.

Important Dates

Paper submission: March 13, 2017 (extended from March 3, 2017)
Notification of acceptance: April 7, 2017
Camera-ready submission: April 19, 2017

Submission Guideline

Paper Format & Page Limit: 6 pages (see details)
Submission: CMT*

*: Please choose "Deep Learning for Intelligent Multimedia Analytics" when submitting via CMT.

Program Committee

Anan Liu (Tianjin University, China)
Bailan Feng (Huawei Technologies Co., Ltd., China)
Caiyan Jia (Beijing Jiaotong University, China)
Changqing Zhang (Tianjin University, China)
Chong-Wah Ngo (City University of Hong Kong, Hong Kong)
Efstratios Gavves (University of Amsterdam, The Netherlands)
Fen Xiao (Xiangtan University, China)
Hongtao Xie (Chinese Academy of Sciences, China)
Hongzhi Li (Microsoft Research, USA)
Huazhu Fu (Agency for Science, Technology and Research (A*STAR), Singapore)
Ji Wan (Baidu Research, China)
Kaihua Zhang (Nanjing University of Information Science and Technology, China)
Kuiyuan Yang (Microsoft Research Asia, China)
Lamberto Ballan (University of Florence, Italy)
Lei Huang (Ocean University of China, China)
Lei Pang (iFlight Technology Company, China)
Liang Yang (Tianjin University of Commerce, China)
Lin Ma (Tencent AI Lab, China)
Ling Du (Tianjin Polytechnic University, China)
Luheng Jia (Beijing University of Technology, China)
Seung-won Hwang (Yonsei University, Korea)
Tao Mei (Microsoft Research Asia, China)
Vasili Ramanishka (Boston University, USA)
Wei Hu (Peking University, China)
Wen-Huang Cheng (Academia Sinica, Taiwan)
Wengang Zhou (University of Science and Technology of China, China)
Wu Liu (Beijing University of Posts and Telecommunications, China)
Xavier Giro-i-Nieto (Universitat Politecnica de Catalunya, Spain)
Xinmei Tian (University of Science and Technology of China, China)
Xirong Li (Renmin University of China, China)
Xueming Qian (Xi'an Jiaotong University, China)
Yazhe Tang (National University of Singapore, Singapore)
Yingwei Pan (University of Science and Technology of China, China)
Yuncheng Li (University of Rochester, USA)
Zhaofan Qiu (University of Science and Technology of China, China)

Program

When: July 14, 2017

Where: TBD

Keynote Session

08:30 - 09:10 AM Unsupervised Incremental Learning of Deep Descriptors from Video Streams

Prof. Alberto Del Bimbo

University of Florence, Italy
Bio: Prof. Del Bimbo is a Full Professor in the Department of Information Engineering at the University of Florence, where he serves as Director of MICC, the Media Integration and Communication Center. He was President of the Foundation for Research and Innovation, Deputy Rector for Research and Innovation, and Director of the Department of Systems and Computer Science. Prof. Del Bimbo leads a research team at MICC investigating cutting-edge solutions in the fields of computer vision, multimedia content analysis, indexing and retrieval, and advanced multimedia and multimodal interactivity. He is the author of over 350 publications in some of the most prestigious journals and conferences, and has coordinated many research and industrial projects at the national and international levels.
He has served the scientific community in many roles, including Program Chair of the International Conference on Pattern Recognition (ICPR 2016 and ICPR 2012) and of ACM Multimedia 2008, and General Chair of the European Conference on Computer Vision (ECCV 2012), the ACM International Conference on Multimedia Retrieval (ICMR 2011), ACM Multimedia 2010, and the IEEE International Conference on Multimedia Computing and Systems (ICMCS 1999).
Presently, he is Editor-in-Chief of ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) and Associate Editor of the journals Multimedia Tools and Applications and Pattern Analysis and Applications. He was previously Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence and IEEE Transactions on Multimedia, and has served as Guest Editor of many special issues in highly ranked journals.
Prof. Del Bimbo is an IAPR Fellow and the recipient of the 2016 ACM SIGMM Award for Outstanding Technical Contributions to Multimedia Computing, Communications and Applications.
Abstract: We present a novel unsupervised method for face identity learning from video sequences. The method exploits the ResNet deep network for face detection and VGGface fc7 face descriptors, together with a smart learning mechanism that exploits the temporal coherence of visual data in video streams. We present a novel feature matching solution based on Reverse Nearest Neighbour and a feature forgetting strategy that supports incremental learning with memory size control as time progresses. We show that the proposed learning procedure is asymptotically stable and can be effectively applied to relevant applications such as multiple face tracking.
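For readers unfamiliar with the matching step mentioned in the abstract, the following minimal Python sketch illustrates the reverse nearest neighbour idea in isolation: an observation is assigned to a memorized descriptor only if that descriptor is its closest match. This is purely an illustrative sketch, not the speaker's implementation; the array names, sizes and random descriptors are hypothetical stand-ins for real face descriptors.

```python
import numpy as np

def reverse_nearest_neighbours(memory, observations):
    """For each memorized descriptor, return the observations whose nearest
    memorized descriptor is that one (its reverse nearest neighbours).

    memory:       (M, D) array of stored face descriptors (hypothetical).
    observations: (N, D) array of descriptors from new video frames (hypothetical).
    Returns a dict mapping memory index -> list of observation indices.
    """
    # Pairwise Euclidean distances between every memory item and every observation.
    dists = np.linalg.norm(memory[:, None, :] - observations[None, :, :], axis=2)
    # For each observation, the index of its closest memory item.
    nearest_memory = dists.argmin(axis=0)
    matches = {}
    for obs_idx, mem_idx in enumerate(nearest_memory):
        matches.setdefault(int(mem_idx), []).append(obs_idx)
    return matches

# Toy usage with random vectors standing in for deep face descriptors.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    memory = rng.normal(size=(5, 128))         # 5 memorized identities
    observations = rng.normal(size=(20, 128))  # 20 detections from a stream
    print(reverse_nearest_neighbours(memory, observations))
```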
09:10 - 09:50 AM Video Content Analysis with Deep Learning

Prof. Yu-Gang Jiang
Fudan University, China
Bio: Yu-Gang Jiang is a Full Professor in the School of Computer Science and Vice Director of the Shanghai Engineering Research Center for Video Technology and System at Fudan University, China. His Lab for Big Video Data Analytics conducts research on all aspects of extracting high-level information from big video data, such as video event recognition, object/scene recognition and large-scale visual search. He is the lead architect of several top-performing video analytics systems in worldwide competitions such as the annual U.S. NIST TRECVID evaluation. His work has led to many awards, including the "emerging leader in multimedia" award from IBM T.J. Watson Research in 2009, an early career faculty award from Intel and the China Computer Federation in 2013, the 2014 ACM China Rising Star Award, and the 2015 ACM SIGMM Rising Star Award. He holds a PhD in Computer Science from City University of Hong Kong and spent three years working at Columbia University before joining Fudan in 2011.
Abstract: People nowadays produce a huge number of videos, many of which are uploaded to social media sites on the Internet. There is a strong need for automatic solutions that analyze the contents of these videos. Potential applications of such techniques include effective video content management and retrieval, open-source intelligence analysis, etc. In this talk, I will introduce our recent work on video content analysis. I will start by introducing a few recently constructed Internet video datasets, and then present several recent approaches developed in my group, with a focus on deep learning based methods tailored for video analysis.
09:50 - 10:30 AM Deep Learning for Multimedia Content Security
Prof. Xiaochun Cao
Institute of Information Engineering (IIE), Chinese Academy of Sciences, China
Bio: TBD
Abstract: TBD
10:30 - 11:00 AM Coffee break
Oral 1: Detection and Localization

11:00 - 11:20 AM PBG-NET: Object Detection with a Multi-feature and Iterative CNN Model
Yingxin Lou (Beijing University of Posts and Telecommunications, China)
Guangtao Fu (Academy of Broadcasting Science, China)
Zhuqing Jiang (Beijing University of Posts and Telecommunications, China)
Aidong Men (Beijing University of Posts and Telecommunications, China)
Yun Zhou (Academy of Broadcasting Science, China)
11:20 - 11:40 AM Locally Optimal Detection of Adversarial Inputs to Image Classifiers
Pierre Moulin (University of Illinois, USA)
Amish Goel (University of Illinois, USA)
11:40 AM - 12:00 PM Spatiotemporal Utilization of Deep Features for Video Saliency Detection
Trung-Nghia Le (National Institute of Informatics & SOKENDAI, Japan)
Akihiro Sugimoto (National Institute of Informatics, Japan)
12:00 - 12:20 PM Hierarchical Pedestrian Attribute Recognition Based on Adaptive Region Localization
Chunfeng Yao (Huawei Technologies, China)
Bailan Feng (Huawei Technologies, China)
Defeng Li (Huawei Technologies, China)
Jian Li (Huawei Technologies, China)
12:30 - 1:20 PM Lunch break
Oral 2: Search and Applications

1:20 - 1:40 PM Image Search Re-ranking with an Improved Visualrank Algorithm and Multi-layer DCNN Features
Ai Wei (University of Science and Technology of China, China)
Xinmei Tian (University of Science and Technology of China, China)
1:40 - 2:00 PM Analysis and Prediction of "Yuru-chara" Mascot Popularity Using Visual and Auditory Features
Yuri Nakasato (Tokyo University of Agriculture and Technology, Japan)
Toshihisa Tanaka (Tokyo University of Agriculture and Technology, Japan)
2:00 - 2:20 PM Why My Photos Look Sideways or Upside Down? Detecting Canonical Orientation of Images using Convolutional Neural Networks
Kunal Swami (Samsung R&D Institute-Bangalore, India)
Pranav Deshpande (Samsung R&D Institute-Bangalore, India)
Gaurav Khandelwal (Samsung R&D Institute-Bangalore, India)
Ajay Vijayvargiya (Samsung R&D Institute-Bangalore, India)
2:20 - 2:40 PM Learning Spatial-temporal Consistent Correlation Filter for Visual Tracking
Han Lou (Beijing University of Posts and Telecommunications, China)
Dongfei Wang (Academy of Broadcasting Science, China)
Zhuqing Jiang (Beijing University of Posts and Telecommunications, China)
Aidong Men (Beijing University of Posts and Telecommunications, China)
Yun Zhou (Academy of Broadcasting Science, China)
2:40 - 3:00 PM Deep Hashing with Mixed Supervised Losses for Image Search
Dawei Liang (Peking University, China)
Ke Yan (Peking University, China)
Wei Zeng (Peking University, China)
Yaowei Wang (Beijing Institute of Technology, China)
Qingsheng Yuan
Xiuguo Bao
Yonghong Tian (Peking University, China)
3:00 - 3:30 PM Coffee break
Poster Session 3:30 - 4:30 PM

MFC: A Multi-scale Fully Convolutional Approach for Visual Instance Retrieval
Solar Radio Spectrum Classification with LSTM
Supervised Deep Quantization for Efficient Image Search
Image Blur Classification and Blur Usefulness Assessment
PU-LP: A Novel Approach for Positive and Unlabeled Learning by Label Propagation
Center Contrastive loss regularized CNN for tracking
Inception Single Shot MultiBox Detector for object detection
Two-layer Video Fingerprinting Strategy for Near-duplicate Video Detection
CRF Estimation Based HDR Image Generation Method
Deep Saliency Quality Assessment Network
Frame-Skip Convolutional Neural Networks for Action Recognition
Deep Hash Learning for Efficient Image Retrieval