[CogSci] ONION 2020: Second CFP

Albert Gatt albert.gatt at um.edu.mt
Mon Jan 13 00:54:16 PST 2020


Second Call for Papers
ONION: peOple in laNguage, visIOn and the miNd

Workshop to be held at the 12th Edition of the Language Resources and
Evaluation Conference, Palais du Pharo, Marseille, France, on Saturday,
May 16, 2020.

https://onion2020.github.io/

We invite paper submissions for the first workshop on People in Language,
Vision, and the Mind (ONION 2020), which addresses how people, their bodies
and faces, and their mental states are described in text. We welcome
contributions from diverse areas including language generation, language
analysis, cognitive computing, and affective computing.

Detailed Workshop goals
------------------------
The workshop will provide a forum to present and discuss current research
focusing on multimodal resources as well as computational and cognitive
models aiming to describe people in terms of their bodies and faces,
including their affective state as it is reflected physically. Such models
might either generate textual descriptions of people, generate images
corresponding to people’s descriptions, or in general exploit multimodal
representations for different purposes and applications.  Knowledge of the
way human bodies and faces are perceived, understood and described by
humans is key to the creation of such resources and models; the workshop
therefore also invites contributions in which the human body and face are
studied from a cognitive, neurocognitive or multimodal communication
perspective.

Human body postures and faces are being studied by researchers from
different research communities, including those working with vision and
language modeling, natural language generation, cognitive science,
cognitive psychology, multimodal communication and embodied conversational
agents. The workshop aims to reach out to all these communities to explore
the many different aspects of research on the human body and face,
including the resources that such research needs, and to foster
cross-disciplinary synergy.

The ability to adequately model and describe people in terms of their body
and face is interesting for a variety of language technology applications,
e.g., conversational agents and interactive multimodal narrative
generation, as well as forensic applications in which people need to be
identified or their images generated from textual or spoken descriptions.

Such systems need resources and models in which images of human bodies and
faces are coupled with linguistic descriptions; the research needed to
develop them therefore lies at the interface between vision and language
research.

At the same time, this line of research raises important ethical questions,
both from the perspective of data collection methodology and from the
perspective of bias detection and avoidance in models trained to process
and interpret human attributes.

By focussing on the modelling and processing of physical characteristics of
people, and the ethical implications of this research, the workshop will
explore and further develop a particular area within vision and language
research. Furthermore, it will foster novel cross-disciplinary knowledge by
soliciting contributions from different fields of research. By attempting
to bring results from the cognitive and neurocognitive fields to the
attention of the HLT community, it is also in line with the “Language and
the Brain” hot topic of LREC 2020.

Relevant topics
----------------
We are inviting short and long papers reporting original research, surveys,
position papers, and demos. Authors are strongly encouraged to identify and
discuss ethical issues arising from their work, insofar as it involves the
use of image data or descriptions of people.

Relevant topics include, but are not limited to, the following:
- Datasets of facial images, as well as body postures, gestures and their
descriptions
- Methods for the creation and annotation of multimodal resources dedicated
to the description of people
- Methods for the validation of multimodal resources for descriptions of
people
- Experimental studies of facial expression understanding by humans
- Models or algorithms for automatic facial description generation
- Emotion recognition by humans
- Multimodal automatic emotion recognition from images and text
- Subjectivity in face perception
- Communicative, relational and intentional aspects of head pose and
eye-gaze
- Collection and annotation methods for facial descriptions
- Coding schemes for the annotation of body posture and facial expression
- Understanding and description of the human face and body in different
contexts, including commercial applications, art, forensics, etc.
- Modelling of the human body, face and facial expressions for embodied
conversational agents
- Generation of full-body images and/or facial images from textual
descriptions
- Ethical and data protection issues related to the collection and/or
automatic description of images of real people
- Any form of bias in models which seek to make sense of human physical
attributes in language and vision.

Important dates
----------------
Paper submission deadline:   February 14, 2020
Notification of acceptance:  March 13, 2020
Camera-ready papers:         April 2, 2020
Workshop:                    May 16, 2020 (afternoon)

Submission guidelines
-----------------------
Short paper submissions may consist of up to 4 pages of content, while long
papers may have up to 8 pages of content. References do not count towards
these page limits.

All submissions must follow the LREC 2020 style files, which are available
for LaTeX (preferred) and MS Word and can be retrieved from the following
address: https://lrec2020.lrec-conf.org/en/submission2020/authors-kit/

Papers must be submitted digitally, in PDF, and uploaded through the online
submission system here: https://www.softconf.com/lrec2020/ONION2020/

The authors of accepted papers will be required to submit a camera-ready
version for inclusion in the final proceedings; further details will be sent
with the notification of acceptance.

Identify, Describe and Share your LRs!
----------------------------------------
Describing your LRs in the LRE Map is now a normal practice in the
submission procedure of LREC (introduced in 2010 and adopted by other
conferences). To continue the efforts initiated at LREC 2014 about “Sharing
LRs” (data, tools, web-services, etc.), authors will have the possibility,
when submitting a paper, to upload LRs in a special LREC repository. This
effort of sharing LRs, linked to the LRE Map for their description, may
become a new “regular” feature for conferences in our field, thus
contributing to creating a common repository where everyone can deposit and
share data.
As scientific work requires accurate citation of referenced work, so that the
community can understand the full context and replicate the experiments
conducted by other researchers, LREC 2020 endorses the need to uniquely
identify LRs through the use of the International Standard Language Resource
Number (ISLRN, www.islrn.org), a persistent unique identifier to be assigned
to each Language Resource. The assignment of ISLRNs to LRs cited in LREC
papers will be offered at submission time.

Organisers
-----------
Patrizia Paggio, University of Copenhagen and University of Malta,
paggio at hum.ku.dk
Albert Gatt, University of Malta, albert.gatt at um.edu.mt
Roman Klinger, University of Stuttgart, roman.klinger at ims.uni-stuttgart.de

Programme committee
---------------------
Adrian Muscat, University of Malta
Andreas Hotho, University of Würzburg
Andrew Hendrickson, University of Tilburg
Catherine Pelachaud, Institute for Intelligent Systems and Robotics, UPMC
and CNRS
Costanza Navarretta, CST, University of Copenhagen
David Hogg, University of Leeds
Diego Frassinelli, University of Stuttgart
Isabella Poggi, Roma Tre University
Jonas Beskow, KTH Speech, Music and Hearing
Jordi Gonzalez, Universitat Autònoma de Barcelona
Kristiina Jokinen, National Institute of Advanced Industrial Science and
Technology (AIST)
Mihael Arcan, National University of Ireland, Galway
Raffaella Bernardi, CIMeC, University of Trento
Sebastian Padó, University of Stuttgart


-- 
Albert Gatt
Institute of Linguistics and Language Technology
University of Malta
http://staff.um.edu.mt/albert.gatt/