[CogSci] CFP Special Issue "Cognitive Architectures for HRI: Embodied Models of Situated Natural Language Interactions" in Frontiers in Robotics and AI

Stephanie Gross stephanie.gross at gmx.at
Tue Oct 1 00:54:28 PDT 2019


* Apologies for cross-posting *


------------------------------------
Call for Papers
------------------------------------

"Cognitive Architectures for HRI: Embodied Models of Situated Natural
Language Interactions"
<https://www.frontiersin.org/research-topics/10091/cognitive-architectures-for-hri-embodied-models-of-situated-natural-language-interactions#overview>

Journal:
Frontiers in Robotics and AI

Manuscript Submission Deadline:
15 December 2019

Topic Editors:
Stephanie Gross
Matthias Scheutz
Brigitte Krenn

Keywords:
Human-Robot Interaction, Human-Human Interaction, Situated Task
Description, Embodied Language Acquisition, Acquiring Semantic
Representations


------------------------------------
About this Research Topic
------------------------------------

In many application fields of human-robot interaction, robots need to
adapt to changing contexts and must therefore be able to learn tasks
from non-expert humans through verbal and non-verbal interaction.
Inspired by human cognition and social interaction, we are interested
in mechanisms for representation and acquisition, memory structures,
and so on, up to full models of socially guided, situated, multi-modal
language interaction. Such models can then be used to test theories of
human situated multi-modal interaction, as well as to inform
computational models in this area of research.

This article collection aims to bring together linguists, computer
scientists, cognitive scientists, and psychologists with a particular
focus on embodied models of situated natural language interaction.

Articles should answer at least one of the following questions:
• What kind of data is adequate for developing socially guided models
of language acquisition, e.g., multi-modal interaction data, audio,
video, motion tracking, eye tracking, or force data (from individual or
joint object manipulation)?
• How should empirical data be collected and preprocessed in order to
develop socially guided models of language acquisition, e.g., should
human-human or human-robot data be collected?
• Which mechanisms does an artificial system need in order to deal with
the multi-modal complexity of human interaction? And how should
information transmitted via different modalities be combined, e.g., at
a higher level of abstraction?
• Models of language learning through multi-modal interaction: What
should semantic representations or mechanisms for language acquisition
look like in order to allow extension through multi-modal interaction?
• Based on the above representations, which machine learning approaches
are best suited to handling multi-modal, time-varying, and possibly
high-dimensional data? How can the system learn incrementally in an
open-ended fashion?

Relevant Topics include (but are not limited to) the following:
• models of embodied language acquisition
• models of situated natural language interaction
• multi-modal situated interaction data
• individual / joint manipulation & task description data
• multi-modal human-human interaction
• multi-modal human-robot interaction
• acquiring multi-modal semantic representations
• multi-modal reference resolution
• machine learning approaches for multi-modal situated interaction
• embodied models of incremental learning


------------------------------------
Information for Authors
------------------------------------

Author guidelines:
http://www.frontiersin.org/about/authorguidelines

Frontiers' publishing fees:
http://www.frontiersin.org/about/publishingfees
(A list of Frontiers Institutional Members:
https://www.frontiersin.org/about/institutional-membership)
