Paper submission deadline
5:00pm Eastern Standard Time, October 27, 2017

Location
Vancouver Convention Center, Vancouver, BC, Canada, April 30 - May 3, 2018

Overview
The performance of machine learning methods is heavily dependent on the choice of data representation (or features) to which they are applied. The rapidly developing field of deep learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data. We take a broad view of the field and include topics such as feature learning, metric learning, compositional modeling, structured prediction, reinforcement learning, and issues regarding large-scale learning and non-convex optimization. The range of domains to which these techniques apply is also very broad, from vision and speech recognition to text understanding, gaming, and music.

A non-exhaustive list of relevant topics:
unsupervised, semi-supervised, and supervised representation learning
representation learning for planning and reinforcement learning
metric learning and kernel learning
sparse coding and dimensionality expansion
hierarchical models
optimization for representation learning
learning representations of outputs or states
implementation issues, parallelization, software platforms, hardware
applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field
The program will include keynote presentations from invited speakers, oral presentations, and posters.

ICLR features two tracks: a Conference Track and a Workshop Track. Submissions of extended abstracts to the Workshop Track will be accepted after decision notifications for Conference Track submissions have been sent. A future call for extended abstracts will provide more details on the Workshop Track.

Some of the submitted Conference Track papers that are not accepted to the conference proceedings will be invited for presentation in the Workshop Track.
