ICMC2020: International Conference on Multimodal Communication CFP

Building on ICMC2017 (https://sites.google.com/a/case.edu/icmc2017/) and ICMC2018 (http://cognitivescience.hunnu.edu.cn/ICMC2018/home/index.html), the conference organizers are pleased to announce ICMC2020: the International Conference on Multimodal Communication, 8-10 May 2020, Osnabrück, Germany.

The 2020 International Conference on Multimodal Communication is newly conjoined with three other events. All four take place at Osnabrück University, with a coordinated deadline of 1 December 2019 for the submission of abstracts to the conferences themselves:

  1. 7 May 2020. Red Hen Lab Workshop. See the bottom of this page. Organizer: Professor Francis Steen, UCLA.
  2. 8-10 May 2020. ICMC2020. Email: 2020icmc@gmail.com. Deadline for submission of abstracts: 1 December 2019.
  3. 11-13 May 2020. Spring School on "Cognitive Science meets the humanities and arts: building the next generation".
  4. 15-17 May 2020. 2020 Conference on Cognitive Futures in the Arts and Humanities. Deadline for submission of abstracts: 1 December 2019.

Deadline for abstracts: 1 December 2019.

Conference website: https://sites.google.com/a/case.edu/icmc2020/

Abstract submission: See the conference website for further particulars. A submission may take one of three forms:


  1. a proposal for a group parallel session at ICMC2020, or
  2. an abstract for a solo talk in a parallel session at ICMC2020, or
  3. an abstract for a poster presentation at ICMC2020.

All non-plenary talks are 25 minutes: 15 for the talk, 10 for Q&A.

All submissions will be reviewed (quickly) for acceptance.

We encourage presentations on any aspect of multimodal communication, including topics that deal with language and multimodal constructions, paralinguistics, facial expressions, gestures, cinema, theater, role-playing games, artificial intelligence, and machine learning. The content domains can be drawn from personal interaction, social media, mass media, group communication, and beyond. We invite conceptual papers, observational studies, experiments, and computational, technical, and statistical approaches.

Group parallel sessions. Deadline for submission of proposals: 1 December 2019.

  • Submit a proposal for a group parallel session by emailing it to 2020icmc@gmail.com
  • Design:
    • Use as the subject line of the email message "group session proposal, ICMC2020"
    • At the top of the body of the message, provide
      • Names of organizers
      • Affiliations of organizers
      • Email addresses of organizers
      • Title of group session
  • Description of group session: maximum of 500 words + maximum of 15 references
  • List of potential presenters in the group session, each with a title and an abstract of maximum 125 words


Solo talks. Deadline for submission of abstracts: 1 December 2019.


  • Submit an abstract by emailing it to 2020icmc@gmail.com
  • Design:
    • Use as the subject line of the email message "solo talk abstract, ICMC2020"
    • At the top of the body of the message, provide
      • Name(s)
      • Affiliation(s)
      • Email address(es)
      • Title
  • Content: maximum of 500 words + maximum of 15 references
  • Please avoid explicit personal identifiers in the body of the email message except for the three elements at the top: name, affiliation, email address. This will make it easier for the organizers to administer double-blind reviewing.


Posters. Deadline for submission of abstracts: 1 December 2019.


  • Submit an abstract by emailing it to 2020icmc@gmail.com
  • Design:
    • Use as the subject line of the email message "poster abstract, ICMC2020"
    • At the top of the body of the message, provide
      • Name(s)
      • Affiliation(s)
      • Email address(es)
      • Title
  • Content: maximum of 250 words + maximum of 15 references
  • Please avoid explicit personal identifiers in the body of the email message except for the three elements at the top: name, affiliation, email address. This will make it easier for the organizers to administer double-blind reviewing.

Pre-conference Red Hen Lab Workshop, 7 May 2020: Separate registration required.

Confirmed plenary speakers

1. Hans Boas

Title: Comparing multimodal constructions

Abstract: This talk proposes systematic criteria for classifying and comparing constructions. Part 1 reviews different approaches to discovering and documenting constructions and frames. Part 2 focuses on a comparison of different classifications of constructions to show that different constructions may require different types of classifications. Finally, Part 3 shows how multimodal constructions can be compared with each other (within one language) and across languages.

Bio: http://sites.la.utexas.edu/hcb/bio/

2. Jana Bressem

Title: The interplay of verbal and gestural negation – a cross-linguistic perspective

Abstract: A growing body of research underlines a tight relation of gestures with different types of verbal negation (e.g., Andrén 2014; Antas & Gembalczyk 2017; Beaupoil-Hourdel, Boutet & Morgenstern 2015; Harrison 2018; Kendon 2004). In particular, for explicit negation it is assumed that verbal negation imposes positional constraints on co-speech gestures, such that "specific bindings of grammatical and gestural form" occur when "speakers use particular types of linguistic negations or perform certain negative speech acts" (Harrison 2018, p. 45). Recent studies, however, show a higher correlation of gestures with implicit than with explicit verbal negation (Inbar & Shor 2019; Wegener & Bressem 2019). Based on German, Australian English, and Savosavo, a non-Austronesian language spoken on Savo Island, Solomon Islands (http://dobes.mpi.nl/projects/savosavo/), the talk explores the relation of verbal negation with gestures (hand, head, and eyebrow movements) in these languages and asks whether, and how tightly, gestures are indeed linked with verbal negation. The talk discusses potential factors that could explain their co-occurrence and considers general implications of these findings with regard to the question of a possible "grammar-gesture nexus" (Harrison 2018).

Bio: http://www.janabressem.de/en/

3. Cristóbal Pagán Cánovas

Title: What drives the relation between speech and gesture in the expression of time?

Abstract: How are the construction of meaning and the presentation of information articulated during face-to-face, multimodal communication? Is gesture complementing speech, or are they both serving (and perhaps competing for) common goals? I will be presenting novel evidence on the expression of time, the classical domain for research into mappings in language and cognition. Basic spatial relations are indeed used to construct temporal meanings, but we still know little about how exactly this takes place in live communication, beyond the lab and the dictionary. Thanks to the digital resources developed by the Red Hen Lab, we can now quantify, with unprecedented detail and power, the relation between specific verbal expressions and the gestural patterns co-occurring with them. I will be reporting on several studies examining frequency of co-occurrence and formal correspondences between some of the most conventional time phrases (e.g. from beginning to end, earlier/later than, in the near future) and gestural patterns that signal lines, points, or directionality to represent temporal relations. Our results suggest that the relation between speech and gesture is closely related to the organization of information and the reduction of uncertainty during communication. Time is spatialized in complex, dynamic ways that depend heavily on context and specific purposes, as well as on our cognitive capacities to integrate disparate elements into meaningful wholes.

Bio: https://sites.google.com/site/cristobalpagancanovas/cv-resume

4. Susanne Flach

Title: Multimodal usage data: What's in it for usage-based corpus linguistics?

Abstract: Quantitative, multifactorial corpus linguistics has advanced rapidly over the last two decades and has also informed theory-building, particularly in usage-based models of linguistic knowledge. At the same time, corpus data are also often seen as incomplete data, as the multimodal nature of human communication is poorly represented in corpora. This talk will revisit well-known alternation phenomena, adding multimodality to assess the new insights that can be gained from multimodal usage data.

Bio: https://sfla.ch/ — Susanne Flach is a post-doctoral researcher in English corpus linguistics at the Université de Neuchâtel in Switzerland. She primarily works on contemporary lexico-grammatical phenomena from a corpus linguistic perspective, but has also employed experimental methods. She is also interested in language change in (Late) Modern English.

5. Thomas Hoffmann

Title: Multimodal Communication: Insights from and for Construction Grammar

Abstract: Human communication is inherently multimodal from internet memes to face-to-face communication using language and gesture. In this talk, I will explore how a usage-based Construction Grammar approach combined with insights from Conceptual Blending can provide a cognitive explanation of on-line multimodal semiosis. At the same time, I will outline how Construction Grammar as a theory has to develop in order to fully account for multimodal communication.

Bio: Thomas Hoffmann is Professor and Chair of English Language and Linguistics at the Catholic University Eichstätt-Ingolstadt. His main research interests are usage-based Construction Grammar, language variation and change, and multimodal communication. He has published widely in international journals such as Cognitive Linguistics, Journal of English Linguistics, English World-Wide and Corpus Linguistics and Linguistic Theory. His 2011 monograph Preposition Placement in English as well as his 2019 book Comparing English Comparatives were both published by Cambridge University Press and he is currently writing a textbook on Construction Grammar: The Structure of English for the Cambridge Textbooks in Linguistics series. He is also Area Editor for Syntax of the Linguistics Vanguard and Editor-in-Chief of the Open Access journal Constructions. https://ku-eichstaett.academia.edu/ThomasHoffmann.

6. Kai-Uwe Kühnberger

Title: Facilitating Concept Learning by Multimodal Communication

Abstract: Human concept learning functions fundamentally differently from classical machine learning approaches. Whereas machine learning algorithms usually require a large amount of data – which should furthermore be well balanced, equally distributed, unbiased, etc. – human concept learning often allows reliable generalizations from very few examples. In this presentation, I will argue that the multimodal aspect of natural language, together with methodologies like analogy-making and conceptual blending, can potentially be used as a model for a more cognitively inspired approach to concept learning. Based on models originally developed for abstract disciplines like mathematics, we will extend these to models for concept learning facilitated by multimodal communication.

Bio: Kai-Uwe Kühnberger is currently director of the Institute of Cognitive Science at Osnabrück University, where he heads the Artificial Intelligence working group. He received his PhD in Computational Linguistics / General Linguistics from the University of Tübingen in 2002. He has (co-)published more than 130 scientific articles in different areas of Artificial Intelligence, such as knowledge representation, neuro-symbolic integration, machine learning, and cognitive architectures. A special focus of his research is the field of computational creativity, in particular with respect to concept invention, learning abstract concepts from data, and conceptual blending in domains such as music and mathematics. He was a SICSA Fellow (Scottish Informatics and Computer Science Alliance) in 2009 and won an IBM Faculty Award in 2016.

7. Kiki Nikiforidou

Title: A grammarian's look at non-verbal correlates of constructions: Multimodality and conventionality in the grammar of genre

Abstract: In this talk I present and discuss an array of grammatical constructions that are associated with specific genres and discourse settings, including folk tales, stage directions, Alcoholics Anonymous, and empathetic narration. I sketch the relations of such constructions to the rest of the grammar (through inheritance) and investigate posture and gestural correlates. While not all of these can be unequivocally integrated into constructional descriptions due to their non-obligatory, dissociable nature, I suggest that a multimodal view of grammatical constructions offers an ideal ground for exploring in depth a relevant and more subtle concept of conventionality not necessarily covered by Langackerian entrenchment.

Bio: http://en.enl.uoa.gr/academic-staff/language-and-linguistics/nikiforidou-vassiliki0.html

8. Peter Uhrig

Title: Using big-data methods in the analysis of multimodal communication

Abstract: Much of the analysis of co-speech gesture has been based on the careful and detailed manual analysis of video recordings, which is so time-consuming that it does not scale to large datasets. In my talk, I would like to show how the semi-automatic and fully-automatic analysis of multimodal communication on a much coarser level enables us to answer a different set of questions than the manual method did. We will see that for the analysis of both the video and the audio, automatic analysis can reveal patterns that are hard to spot in small datasets. To this end, computer vision software and automatic measurements of audio features are combined with corpus-linguistic methods into a unified workflow for data analysis. The presentation will include case studies and a live demo of the dataset and tools developed.

Bio: Peter Uhrig is a post-doctoral researcher in English Linguistics at FAU Erlangen-Nürnberg. In 2018/19 he was interim professor of Computational Linguistics at the University of Osnabrück. His current research project on large-scale multimodal corpus linguistics aims at creating new methods for research on multimodal communication by integrating insights and tools from corpus linguistics, computational linguistics, speech recognition and computer vision.

9. Vera Tobin

Title: Cognitive Bias and the Multimodal Pragmatics of Artful Concealment

Abstract: Cognitive scientists such as Michael Tomasello and Stephen Levinson, among many others, have argued that cooperation, as humans practice it, is a natural and unique feature of human social behavior: a key element of what distinguishes our cognition from that of other primates, central to the language faculty, the complexity of our cultural institutions, and more. At the same time, humans can be tremendously uncooperative on a monumental scale, in baroque, creative, and even monstrous ways. What's more, it is very tricky for us to assess, in the moment, how transparent or opaque we have actually been. This talk takes up these issues with respect to the case of "cooperative uncooperativity" in discourse: when we join forces with others in pursuit of shared and mutually enjoyed goals that involve deception, misinformation, and other kinds of ostensibly uncooperative results. Very often, understanding these kinds of discourse is impossible without considering them from a multimodal perspective. This talk will present a range of examples from film, news media, experimental studies, legal discourse, puzzles, and more to show how the difficulty of being (appropriately) difficult shapes conversation, rhetoric, and narrative, and how crucial multimodal approaches are to the study of these phenomena.

Bio: https://veratobin.org

Pre-Conference Red Hen Lab Research Workshop, 7 May 2020: Tools and Workflows

Participation is free but requires registration separate from the conference registration. See https://sites.google.com/a/case.edu/icmc2020/.

This is a pre-conference event, provided courtesy of Francis Steen and Mark Turner, co-directors of the International Distributed Little Red Hen Lab. The workshop will be a hands-on introduction to the Red Hen datasets, research tools, and integrated workflows. Learn how to develop your research questions, craft them into testable hypotheses, and utilize the full range of search tools available. You will be introduced to the command-line interface, the Edge and Edge2 search engines, and CQPweb, and learn how to output and export the datasets you want to work on. Learn how to search in Parts of Speech, Named Entities, Conceptual Frames, and Gesture classifications. The annotation tools available to you include the Red Hen Rapid Annotator, UCLA's Online Video Annotation Tool, ELAN, and Dartmouth's Semantic Annotation Tool. Depending on your interests, you can also be introduced to computational tools that can be used to expand your manual annotations into much larger datasets, such as neural networks and deep learning, and to the use and creation of portable applications on high-performance computing clusters. Related topics that can be discussed if there is interest include the use of Singularity containers for portability, network security through RSA keys, network communication and sharing of data, and aspects of computational corpus linguistics. Attendance will be recorded; registered attendees who complete the workshop will be awarded a Red Hen Certificate of Completion.
