<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div>*** Information in International Sign (IS): <a href="https://s.gwdg.de/ZZsMdS">https://s.gwdg.de/ZZsMdS</a> ***</div><div><br></div><div><b><u>Sign Language Grammars, Parsing Models, & the Brain (Interdisciplinary Workshop) [EXTENDED DEADLINE: 15 September 2025]</u></b></div><div><br></div><div>Date: 06-Nov-2025 - 07-Nov-2025</div><div>Location: Leipzig, Germany</div><div>Contact: Patrick C. Trettenbrein</div><div>Contact Email: <a href="mailto:trettenbrein@cbs.mpg.de">trettenbrein@cbs.mpg.de</a></div><div>Meeting URL: <a href="https://sign-language-grammars-parsers-brain.github.io">https://sign-language-grammars-parsers-brain.github.io</a></div><div><br></div><div><b><u>Background</u></b></div><div><br></div><div>The world’s different sign languages offer a unique perspective on the human capacity for language, and their rigorous scientific study within linguistics since the 1960s has provided a multitude of novel insights. Some of these insights have significantly and lastingly changed how we conceptualize and investigate our species’ faculty of language: We now understand language as a seemingly universal and modality-independent capacity.</div><div><br></div><div>As in research on spoken language, it is not straightforward how theoretical descriptions of different phenomena in sign languages apply or relate to the phenomena usually studied by psychologists and neuroscientists. 
We nevertheless believe that any serious experimental investigation of (sign) language should be grounded in a well-motivated theoretical framework provided by linguistics as the scientific study of grammar.</div><div><br></div><div><b><u>Goal</u></b></div><div><br></div><div>The goal of this workshop is to bring together sign language researchers of different theoretical persuasions with practitioners in psycho- and neurolinguistics of sign language to jointly determine:</div><div><ul class="MailOutline"><li>To what extent are current theoretical approaches accurate, and to what extent do they need to be expanded to capture phenomena of the visuo-spatial modality of sign languages?</li><li>How do different formal descriptions and theoretical approaches relate to, and how are they relevant for, psycho- and neurolinguistic studies of sign language processing and language processing in general?</li><li>In this context, we also explicitly invite contributions dealing with similar issues in research on spoken language that adopt a multimodal perspective on speech or integrate speech and gesture.</li></ul></div><div><br></div><div><b><u>Key Questions</u></b></div><div><br></div><div>Topics of particular interest for discussion at the workshop are:</div><div><ul class="MailOutline"><li>What impact has research on the grammar (in a broad sense) of sign languages had on how we look at and study spoken languages and conceptualize the human language capacity and its neurocognitive basis?</li><li>How can seemingly modality-specific phenomena of sign languages (e.g., the impact of iconicity) be accounted for theoretically, and what is the impact of such “enlarged” theoretical accounts on the psycho- and neurolinguistics of sign language (e.g., algorithmic accounts aiming to create parsing models that account for sign language processing)?</li><li>How can non-manual components of sign languages best be accounted for and integrated into theories of grammar, what is their linguistic and 
neurocognitive status, and how can we integrate them into theories of (sign) language processing?</li><li>What, if anything, can psycho- and neurolinguistic studies on sign languages feed back into our theoretical understanding of grammar (of sign languages, but also of language in general)?</li></ul></div><div><br></div><div><b><u>Invited Presenters</u></b></div><div><br></div><div>To get the discussion going, our workshop will feature invited presentations by leading researchers in the study of sign language grammars, parsing models, and the neural basis of sign language processing. The invited presenters will also participate in the scheduled round-table session as discussants.</div><div><br></div><div>The following Invited Presenters have confirmed their participation:</div><div><br></div><div><b>Carlo Cecchetto: "The Challenge of Simultaneity for Formal Accounts of Sign Language Grammars”</b></div><div><br></div><div>Abstract: In sign languages, different articulators (i.e., the two hands, facial expressions, etc.) can simultaneously represent different characters involved in the same action or different facets of the same event. In this talk, I will ask to what extent these massive cases of simultaneity can be integrated into the core grammatical system of sign languages and to what extent they can be described by the analytical tools coming from spoken language linguistics. My answer to these questions will build on the assumption that, as in bilingual bimodal productions, signers can utter at the same time two propositional units or two constituents inside the same clause.</div><div><br></div><div><b>David Corina: "Neurobiological Perspectives on Sign Production: Implications for Predictive Models of Sign Language Processing"</b></div><div><br></div><div><div>Abstract: Neurolinguistic research on signed languages has significantly advanced our understanding of sign language structure and processing. 
In this talk, I present two case studies that offer novel insights into the production of American Sign Language (ASL), highlighting distinct roles for the somatosensory and proprioceptive systems. The first case involves electrocorticography (ECoG) recordings from a deaf ASL signer undergoing awake neurosurgery. We analyze the neural responses associated with articulatory features, specifically handshape and location, focusing on their timing and cortical specificity during sign production. The second case features a deaf individual with conduction aphasia, whose sign language errors suggest a disruption in somatosensory feedback mechanisms involved in hierarchical motor planning. Together, these findings support a forward model of sign language production, emphasizing the central role of articulatory location and motivating further exploration of holistic “postures” in sign perception.</div></div><div><br></div><div><b>Karen Emmorey: "From Perception to Phonology: Neural Tuning for Visual-Manual Phonological Structure”</b></div><div><br></div><div>Abstract: Many linguistic and psycholinguistic studies have demonstrated that sign languages exhibit phonological structure that is parallel, but not identical, to that of spoken languages. I will present ERP and fMRI data that are beginning to reveal how the brain adapts to phonological structure in a visual-manual language. Our ERP studies demonstrate that deaf ASL signers, in contrast to hearing non-signers, exhibit form-priming effects on the N400 component that differ for shared handshape vs. place of articulation (location on the body). An fMRI adaptation study reveals repetition suppression (reduced BOLD response) for signs with handshape overlap (compared to no phonological overlap) in bilateral parietal cortex, and for place of articulation, repetition suppression was observed in the extrastriate body area (EBA), bilaterally. These effects were not observed for hearing non-signers. 
Thus, the EBA appears to become neurally tuned to where the hands are positioned on the body (place of articulation) rather than to hand configurations, and neural representations of linguistic handshapes may reside in parietal cortex. Overall, these studies indicate how psycholinguistic processes and linguistic units are mapped onto a functional neuroanatomical network for a visual-manual language.</div><div><br></div><div><b>Vadim Kimmelman: "Can Computer Vision Provide Insight Into the Nature of Nonmanual Markers in Sign Languages?”</b></div><div><br></div><div>Abstract: Recent developments in Computer Vision have made it possible to track and measure movements of signers in video recordings, including the nonmanual components: head and eyebrow movements, eye blinks and eye gaze, mouth shapes, etc. Previous theoretical and empirical research in sign linguistics has demonstrated the crucial role nonmanual markers play in sign language grammar and use. These technological developments offer the possibility of investigating the subtle formal properties of the markers, that is, their kinematics. These formal properties have consequences for theoretical accounts of nonmanual markers, and possibly for future experimental research on them. In my talk, I will discuss some case studies of nonmanuals using Computer Vision, their theoretical implications, and methodological limitations.</div><div><br></div><div><b>Rachel I. Mayberry: "Childhood Language Shapes the Adult Brain-Language System: Insights from American Sign Language”</b></div><div><br></div><div>Abstract: Young children typically develop language quickly and show responses in the brain language system shortly after birth, giving the impression that the human language faculty requires little or no linguistic experience to become fully functional. But is this an accurate picture of how the brain language system acquires its functionality? 
Children born deaf cannot hear the language spoken around them, and the lipreading signal is too impoverished to enable spontaneous language acquisition. The majority of children born deaf do not encounter sign language until they have matured well beyond infancy. This naturally occurring variation in linguistic experience as a function of biological maturation over childhood and adolescence provides a unique window into how the brain language system develops its capacity to parse linguistic structure. The results of behavioral, neurolinguistic, and anatomical studies investigating the sequelae of this unique situation all show that the adult ability to comprehend and process linguistic structure is shaped by the timing of language experience in relation to neural maturation.</div><div><br></div><div><b>Marloes Oomen: "Do Signers Interpret R-Loci as Regions or Points? Insights From an Online Probe Recognition Task”</b></div><div><br></div><div>Abstract: Sign languages use space referentially by associating discourse referents with R(eferential)-loci. The traditional model of referential use of space assumes that R-loci can, in principle, be set up anywhere in horizontal signing space. However, this implies there are infinitely many potential R-loci, which poses a theoretical problem (it makes R-loci ‘unlistable’) and also seems empirically inaccurate. Recently, it has been proposed that R-loci represent regions—not points—in space, where referents get associated with spatial regions via a recursive system of maximal contrast. In this talk, I present an online probe recognition task in which 30 deaf signers of Dutch Sign Language (NGT) participated. This reaction-time-based experiment was designed to discover which of these theoretical proposals is best empirically supported. 
The results offer us greater insight into the processing of anaphoric elements in languages rooted in the visuo-spatial modality.</div><div><br></div><div><b><u>Call for Papers</u></b></div><div><br></div><div>Call for papers in International Sign (IS): <a href="https://s.gwdg.de/ZZsMdS">https://s.gwdg.de/ZZsMdS</a></div><div><br></div><div>In addition to the contributions by our invited speakers, we invite contributions from the scientific community, for which we have allocated generous time slots in the preliminary Workshop Programme. Accordingly, we currently have an open Call for Papers and are looking forward to receiving your submissions.</div><div><br></div><div>The workshop will feature a select number of on-stage presentations, but will also include a poster session. We particularly encourage submissions from junior researchers (advanced master’s or PhD students, as well as early post-docs).</div><div><br></div><div>Visit the Call for Papers page to learn more about the requirements for abstract submission: <a href="https://sign-language-grammars-parsers-brain.github.io/call-for-papers/">https://sign-language-grammars-parsers-brain.github.io/call-for-papers/</a> [EXTENDED DEADLINE: 15 September 2025]</div><div><br></div><div>
<meta charset="UTF-8"><div dir="auto" style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none; overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div dir="auto" style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none; overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div dir="auto" style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none; overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div dir="auto" style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none; overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div>—<br>Dr. Patrick C. Trettenbrein<br>Department of Neuropsychology, Max Planck Institute for Human Cognitive & Brain Sciences<br>Experimental Sign Language Laboratory, Department of German Philology, University of Göttingen<br><br>Office address (room 219): Stephanstraße 1a, 04103 Leipzig, Germany<br>Phone: +49 (0) 341 9940 2625 <br>E-mail: trettenbrein@cbs.mpg.de<br><br>Web: https://trettenbrein.biolinguistics.eu</div></div></div></div></div>
</div>
<br></body></html>