No.040 Deep Learning: Theory, Algorithms, and Applications


NII Shonan Meeting Seminar 040


Pierre Baldi, UCI

Kenji Fukumizu, ISM

Tomaso Poggio, MIT



Pierre Baldi University of California, Irvine
Kenji Fukumizu Institute of Statistical Mathematics
Tomaso Poggio Massachusetts Institute of Technology
Shotaro Akaho AIST
Shun-ichi Amari RIKEN Brain Science Institute
Hideki Asoh AIST
Joachim Buhmann ETH Zurich
Paolo Frasconi Università di Firenze
Sepp Hochreiter Johannes Kepler University Linz, Institute of Bioinformatics
Takio Kurita Hiroshima University
Tomoko Matsui Institute of Statistical Mathematics
Klaus-Robert Müller Technical University of Berlin
Takayuki Osogami IBM Research – Tokyo
Michal Rosen-Zvi IBM Research
Alessandro Sperduti University of Padova, Italy
Masashi Sugiyama Tokyo Institute of Technology
Jean-Philippe Vert Mines ParisTech
Takayuki Okatani Tohoku University
Kevin Duh Nara Institute of Science and Technology
Marco Cuturi Kyoto University
Surya Ganguli Stanford University
Aapo Hyvärinen University of Helsinki
Shimon Edelman Cornell University
Eric Mjolsness University of California, Irvine
Erik De Schutter Okinawa Institute of Science and Technology (OIST)
Kunihiko Sadamasa NEC Corporation
Hiroyuki Nakahara RIKEN Brain Science Institute
Koji Tsuda National Institute of Advanced Industrial Science and Technology (AIST)
Chiyuan Zhang MIT




Daily Meal Schedule:

- Breakfast: 7:30-9:00
- Lunch: 11:30-13:30
- Dinner: 18:00-19:30


Welcome, Introduction, Overview, History:

(Pierre Baldi, Tomaso Poggio, and Kenji Fukumizu)

1) THEORY (Monday Morning) Chair: Kenji Fukumizu


Basic Principles of Self-Organization and Supervised Learning (Shun-ichi Amari)

Autoencoders for Structured Data (Alessandro Sperduti)

Deep Targets and Dropout (Pierre Baldi)


Nonlinear Dynamics of Learning in Deep Linear Networks (Surya Ganguli)

Deep Sequential Decision Making (Takayuki Osogami)

Black Box and Representation Aspects of Neural Networks (Klaus-Robert Müller)

Panel: Chair and Speakers

2) NEUROBIOLOGY (Monday Afternoon) Chair: Hiroyuki Nakahara


Stochasticity of Biological Synapses (Erik De Schutter)

A Mathematical Theory of Semantic Cognition (Surya Ganguli)

Learning, Decision-Making, and Neural Coding (Hiroyuki Nakahara)


Circuits and Large Scale Architecture of the Brain (Shimon Edelman)

Towards Modelling Cortical Response Properties Using Multilayer Models of Natural Images (Aapo Hyvärinen)

Artificial Neural Networks versus Biological Neural Networks (Pierre Baldi)

Brain Machine Interfaces (Klaus-Robert Müller)

Panel: Chair and Speakers

3) ALGORITHMS (Tuesday Morning) Chair: Klaus-Robert Müller


What is the Information Content of an Algorithm? (Joachim Buhmann)

M-Theory (Tomaso Poggio)

Generative vs Discriminative Scaling to Big Data (Klaus-Robert Müller)


Going from Text Analysis to Structural Analysis (Michal Rosen-Zvi)

Dropout (Sepp Hochreiter)

Long-Short Term Memory Units (Sepp Hochreiter)

Evolutionary/Developmental Neural Networks, Because That’s What Worked For Us (Eric Mjolsness)

Panel: Chair and Speakers

4) SYMBOLIC/SEMANTIC VS STATISTICS/CONNECTIONIST (Tuesday Afternoon) Chair: Paolo Frasconi


Languages for Machine Learning: What Role for Neural Networks? (Paolo Frasconi)

Cognitive Architectures Expressible as or Hybridized with Neural Networks and/or Graphical Models (Eric Mjolsness)

A Design for a Brain? (Shimon Edelman)

SPECIAL SESSION ON OPEN PROBLEMS (Paolo Frasconi, Shimon Edelman, Eric Mjolsness)

5) APPLICATIONS (Wednesday Morning) Chair: Pierre Baldi


Applications in Physics, Chemoinformatics, and Bioinformatics (Pierre Baldi)

Applications in Genetics and Quantum Chemistry (Klaus-Robert Müller)

Big Data in Neuroscience (Joachim Buhmann)


Deep Density-Ratio Estimation (Masashi Sugiyama)

Bayesian Optimization in Materials Informatics (Koji Tsuda)

Decoding EEG, MEG, and EMG Data: Three-way Analysis and Deep Learning (Aapo Hyvärinen)

Large Scale Identification of Brain Cells (Paolo Frasconi)

Panel: Chair and Speakers

6) EXCURSION (Wednesday Afternoon) and OPEN PROBLEMS

7) APPLICATIONS (Thursday Morning) Chair: Takio Kurita


What is Learned by Convolutional Networks for Image Recognition Tasks? (Takayuki Okatani)

Recent Developments of Deep Learning in Natural Language Processing (Kevin Duh)

Limitations of Neural Network Approaches in Natural Language Processing (Kunihiko Sadamasa)


Applications to Health Care (Michal Rosen-Zvi)

Expected Applications of Deep Learning (Hideki Asoh)

Wasserstein Means: Efficient Detection of Invariances with Optimal Transport (Marco Cuturi)

Panel: Chair and Speakers


We gratefully acknowledge the support from the Shonan Center and the co-sponsorship by the Research Center for Statistical Machine Learning of The Institute of Statistical Mathematics.

Short Description: The ability to learn is essential to the survival and robustness of biological systems. There is also growing evidence that learning is essential to build robust artificial intelligent systems and solve complex problems in most application domains. Indeed, one of the success stories in computer science over the past three decades has been the emergence of machine learning and data mining algorithms as tools for solving large-scale problems in a variety of domains such as text analysis, computer vision, robotics, and bioinformatics. However, we are still far from having a complete understanding of machine learning and its role in AI, and plenty of challenges, both theoretical and practical, remain to be addressed.

Complex problems cannot be solved in a single step and often require multiple processing stages, in both natural and artificial systems. For instance, visual recognition in humans is not an instantaneous process: it requires the activation of a hierarchy of processing stages and pathways. The same is true for all the best-performing computer vision systems available today. Thus deep learning architectures, comprising multiple adaptable processing layers, are important for the understanding and design of both natural and artificial systems and are today at the forefront of machine learning research. In the past year alone, deep architectures and deep learning have achieved state-of-the-art performance in many application areas, ranging from computer vision to speech recognition to bioinformatics.

It is this recent wave of progress that provides the relevant context for this meeting, which will focus on all aspects of deep architectures and deep learning, with a particular emphasis on understanding fundamental principles: in spite of the recent progress, there is still very little theoretical understanding of deep learning. Thus a major thrust of the meeting will be to foster theoretical analyses of deep learning. In addition to theory, topics to be covered will also include algorithms and applications. The primary intellectual focus of the meeting will be on deep learning in artificial systems. However, deep learning draws some of its inspiration from, and has close connections to, neuroscience. Thus presentations and discussions bridging learning in natural and artificial learning systems will also be encouraged.