Past work on computational models of semantics is often fragmented across different levels of semantics, with developments in one level disconnected from the others. LSDSem aims to provide a venue for researchers from lexical, sentential and discourse-level semantics to interact, encouraging the development of models of natural language understanding that use multiple levels of semantics.

LSDSem 2017 will continue the theme of linking lexical, sentential, and discourse-level semantics, and submissions describing efforts in these areas are strongly encouraged. This includes core research on joint or ensemble models, new evaluations that measure different levels of semantics, and applications whose solution requires more than one level of semantics.

In addition, this year we place a special focus on the comprehensive understanding of narrative structure in language. Recently, a range of tasks have been proposed in the area of learning and applying commonsense/procedural knowledge. Such tasks include, for example, learning prototypical event sequences and event participants, modeling the plot structure of novels, and resolving anaphora in Winograd schemas. Knowledge at the level of scripts and narratives is not only useful for representing stories, recipes, and how-to instructions in a meaningful way, but can also be applied in downstream applications, in particular those that require reasoning at the document level.

In line with this focus, LSDSem 2017 will include two special sessions: a shared task session (see below) and a discussion session on challenges related to new datasets, evaluation techniques, and models for richer semantics.
Shared Task

Our shared task is the Story Cloze Test, one of the recently proposed frameworks for learning commonsense/procedural knowledge, which introduces a new evaluation for story understanding and script learning. In this test, the system reads a four-sentence story along with two alternative endings and is tasked with choosing the correct ending to the story. The Story Cloze Test requires systems to link various levels of semantics with commonsense knowledge for successful narrative understanding.

We have released an additional ~53K ROCStories, which can be used for training purposes. We encourage participants to use a variety of resources and any training corpora of their choice, not only the ROCStories corpus, when tackling the Story Cloze Test. More details will be available on the Shared Task Page.
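As a rough, unofficial illustration of the task setup (the field names, the example story, and the word-overlap heuristic below are our own assumptions, not part of the official data format or task materials), a test instance can be thought of as a four-sentence context, two candidate endings, and a gold label. The following minimal Python sketch represents such an instance and scores it with a trivial baseline:

# Minimal, unofficial sketch of a Story Cloze Test instance and a trivial
# word-overlap baseline. Field names and the example story are illustrative
# assumptions only; see the Shared Task Page for the actual data format.
from dataclasses import dataclass
from typing import List

@dataclass
class StoryClozeInstance:
    context: List[str]      # the four-sentence story
    endings: List[str]      # the two alternative fifth sentences
    correct_ending: int     # index (0 or 1) of the right ending

def overlap_baseline(instance: StoryClozeInstance) -> int:
    """Pick the ending that shares more word types with the story context."""
    context_words = set(" ".join(instance.context).lower().split())
    overlaps = [len(context_words & set(ending.lower().split()))
                for ending in instance.endings]
    return max(range(len(overlaps)), key=overlaps.__getitem__)

example = StoryClozeInstance(
    context=["Tom had never ridden a horse before.",
             "His friends invited him to a ranch for the weekend.",
             "He nervously climbed into the saddle.",
             "After a short ride, he started to relax and enjoy himself."],
    endings=["Tom decided to take riding lessons.",
             "Tom vowed never to go near a horse again."],
    correct_ending=0,
)
print("predicted ending:", example.endings[overlap_baseline(example)])

On this toy example the shallow heuristic is fooled by lexical overlap and picks the contradictory ending, which illustrates why the task calls for models that integrate lexical, sentential, and discourse-level semantics with commonsense knowledge.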
Call for Submissions
Long and short papers

We solicit long (8-page) and short (4-page) papers on topics that include:

Approaches for enriching models of lexical and sentence semantics with discourse information
Joint models of lexical, sentential and discourse semantics (or pairs thereof)
Evaluation methods and analysis of system performance that emphasize different levels of semantics
Applications, such as summarization, text generation and question answering, that build on multiple layers of semantic information
Models for narrative structure, script learning, and applying commonsense knowledge for inference
Meaning representations for procedural texts such as technical instructions and recipes

System description papers

We invite system description papers from participants in the shared task. These papers should describe the approach and results. We also encourage papers that provide insights from negative results on the task.
Submission instructions
All submissions must follow the EACL 2017 formatting instructions described here.
Please submit your papers at http://www.softconf.com/eacl2017/LSDSem/.
For a list of accepted papers at the previous iteration of this workshop, please have a look here.

Organizers:
Michael Roth, UIUC and Saarland University
Nasrin Mostafazadeh, University of Rochester
Nate Chambers, United States Naval Academy
Annie Louis, University of Essex