<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>* Apologies for cross-postings *</p>
<p><br>
</p>
<p> ------------------------------------<br>
Call for Papers<br>
------------------------------------<br>
<br>
<a href="https://www.frontiersin.org/research-topics/10091/cognitive-architectures-for-hri-embodied-models-of-situated-natural-language-interactions#overview">"Cognitive Architectures for HRI: Embodied Models of Situated Natural Language Interactions"</a><br>
<br>
Journal: <br>
Frontiers in Robotics and AI<br>
<br>
Manuscript Submission Deadline: <br>
15 December 2019<br>
<br>
Topic Editors:<br>
Stephanie Gross<br>
Matthias Scheutz<br>
Brigitte Krenn<br>
<br>
Keywords: <br>
Human-Robot Interaction, Human-Human Interaction, Situated Task
Description, Embodied Language Acquisition, Acquiring Semantic
Representations <br>
<br>
<br>
------------------------------------<br>
About this Research Topic<br>
------------------------------------<br>
<br>
In many application fields of human-robot interaction, robots need
to adapt to changing contexts and thus be able to learn tasks from
non-expert humans through verbal and non-verbal interaction.
Inspired by human cognition and social interaction, we are
interested in mechanisms for representation and acquisition, in
memory structures, and ultimately in full models of socially guided,
situated, multi-modal language interaction. Such models can be
used to test theories of situated multi-modal human interaction,
as well as to inform computational models in this area of
research.<br>
<br>
This article collection aims to bring together linguists,
computer scientists, cognitive scientists, and psychologists, with
a particular focus on embodied models of situated natural language
interaction.<br>
<br>
Articles should address at least one of the following questions:<br>
• Which kinds of data are adequate for developing socially guided
models of language acquisition, e.g. multi-modal interaction data,
audio, video, motion tracking, eye tracking, or force data (from
individual or joint object manipulation)?<br>
• How should empirical data be collected and preprocessed in order
to develop socially guided models of language acquisition, e.g.
should human-human or human-robot data be collected?<br>
• Which mechanisms does an artificial system need to deal with the
multi-modal complexity of human interaction? And how can
information transmitted via different modalities be combined at a
higher level of abstraction?<br>
• Models of language learning through multi-modal interaction:
What should semantic representations or mechanisms for language
acquisition look like to allow extension through multi-modal
interaction?<br>
• Based on the above representations, which machine learning
approaches are best suited to handle multi-modal, time-varying,
and possibly high-dimensional data? How can the system learn
incrementally in an open-ended fashion?<br>
<br>
Relevant Topics include (but are not limited to) the following:<br>
• models of embodied language acquisition<br>
• models of situated natural language interaction<br>
• multi-modal situated interaction data<br>
• individual / joint manipulation & task description data<br>
• multi-modal human-human interaction<br>
• multi-modal human-robot interaction<br>
• acquiring multi-modal semantic representations<br>
• multi-modal reference resolution<br>
• machine learning approaches for multi-modal situated interaction<br>
• embodied models of incremental learning<br>
<br>
<br>
------------------------------------<br>
Information for Authors<br>
------------------------------------<br>
<br>
Author guidelines:<br>
<a href="http://www.frontiersin.org/about/authorguidelines">http://www.frontiersin.org/about/authorguidelines</a></p>
<p>Frontiers' publishing fees:<br>
<a href="http://www.frontiersin.org/about/publishingfees">http://www.frontiersin.org/about/publishingfees</a><br>
(A list of Frontiers Institutional Members: <a href="https://www.frontiersin.org/about/institutional-membership">https://www.frontiersin.org/about/institutional-membership</a>)<br>
</p>
</body>
</html>