[CogSci] Call for Briefs: Comparing Consciousness Models – AGI-2025 Workshop (Aug 10, Virtual Option)
Caribbean Center for Collective Intelligence
info at cc4ci.org
Thu Jul 10 11:21:18 PDT 2025
What does it take to compare theories of consciousness side by side—across
disciplines, data types, and ontologies?
CFP: Comparative Testing of Models of Consciousness...
1 Purpose
This workshop invites 2-page briefs on any model of
consciousness of your choice, to explore whether a functional description of
consciousness can be used to facilitate convergence across
consciousness models. Consciousness science lacks a common yardstick: theories
proliferate but rarely converge. This workshop tests whether concise,
function-centred descriptions can reveal where models align, where they
diverge, and which features help—or hinder—comparison.
We welcome submissions from theorists, philosophers, neuroscientists,
modelers, and experimentalists working on consciousness from any disciplinary
lens.
2 Core Question
Which features of a consciousness model promote—or block—convergence?
Consider minimalism, falsifiability, separation of function and
implementation, human-centric framing, or any other property.
Explain, in functional terms, how your own definition of consciousness
enables—or prevents—comparison with rival models. If the friction is purely
implementation-level, say why that is (or is not) a barrier.
3 First- vs. third-person data
External proxies (EEG, fMRI, behaviour) provide reproducible metrics for
consciousness, yet mapping a proxy to experience still reflects the
researcher’s own interpretive choices.
Can your model make first-person reports inter-subjectively reliable, and
should that be a priority for convergence?
4 What to include in the two-page brief
Use numbered headings; total length ≤ 2 pages (A4 or US-letter, PDF).
Existence criterion — baseline capacity
What structural or functional conditions make a system capable of conscious
experience at all, even while it is asleep or anaesthetised?
Examples: integrated-information threshold, global-workspace potential,
specific organisational motifs.
Magnitude metric — baseline degree
If consciousness can vary in overall richness across entities, how is a
higher or lower capacity quantified (e.g., Φ value, network complexity,
representational bandwidth)?
Clarify that this refers to persistent potential, not moment-to-moment
attention to a particular stimulus.
Observable state-transition markers
Measurable events signalling a shift from one awareness state to another
(button press, saccade, EEG microstate, etc.).
If you rely on first-person reports, state the calibration protocol that makes
those reports inter-subjectively testable (training, code-book, reliability
stats).
Non-observable adaptive functions
Internal control processes that guide transitions; explain how their existence
is inferred.
Non-mappable functions
Functions that currently lack any human-centric mapping to external or
internal observables; justify why the gap is principled, not merely technical.
5 Pre-workshop activities
Anonymous terminology poll – Optional; gauges how participants label core
constructs.
Brief booklet – All accepted briefs, a comparison matrix, and poll results
will be circulated one week in advance (DOI-assigned).
6 Workshop agenda (60 min)
Segment                                                      Minutes
Terminology heat-map & quick findings                           5
Three brief presentations (5 min each)                         15
Moderated cross-model comparison                               20
Convergence frameworks – short demos (e.g., Human-Centric
Functional Modeling as one candidate; alternatives welcome)    10
Q & A / next steps                                             10
7 Evaluation rubric
Each submission will be scored across four dimensions to build a shared matrix
of concepts, assumptions, and testability across models.
1. Ontological clarity
What it measures: How precisely you define the core units in your model
(e.g., “awareness state,” “global workspace event,” “Φ-structure”).
Why it matters: If reviewers can’t tell what your primitives are, they can’t
align them with others. Every model, whether mathematical, philosophical, or
neural, must name its building blocks.
Typical evidence: Formal definitions, diagrams, state tables,
information-theoretic expressions.

2. Cross-model mapping
What it measures: The explicit links you draw between your terms and at least
one other published theory.
Why it matters: Convergence requires a shared coordinate system. Even a
radically new model should say, “My X corresponds roughly to IIT’s integrated
information and to GNW’s ignition pattern.”
Typical evidence: Side-by-side glossary, translation matrix, or narrative
comparison.

3. Empirical discriminability
What it measures: Whether your brief offers testable predictions or
differentiators that could, in principle, pit your model against a rival.
Why it matters: Without a potential discriminator, models can agree to
disagree forever. Both data-heavy and concept-heavy submissions can propose
critical tests (“If Φ < 0.2, awareness is impossible,” or “Split-brain
subjects must report dual streams”).
Typical evidence: Concrete experiment outline, simulation results,
falsifiable inequality, or at minimum a thought experiment that yields
opposite outcomes for two models.

4. First-person calibration
What it measures: How you make first-person reports inter-subjectively
reliable if your model depends on them.
Why it matters: Many theories invoke phenomenology. Reviewers need to know
whether your introspective data are ad-hoc anecdotes (score 0) or produced
under a reproducible protocol (score 2). Purely third-person models can still
score above 0 if they justify why first-person data are unnecessary.
Typical evidence: Micro-phenomenology training procedure, code-book,
reliability statistics, or a principled argument for excluding introspection.
How scoring works (0–2 scale)
0 = missing or hand-wavy
The brief omits the dimension or treats it in a single vague sentence.
1 = partially addressed
A definition or test is sketched but lacks precision, mapping, or replication
detail.
2 = fully operationalised
Clear, unambiguous definitions; explicit mapping; concrete, falsifiable tests;
or a validated introspection protocol.
Every submission is evaluated on all four dimensions so the booklet’s
comparison matrix is apples-to-apples. If a dimension truly does not apply
(e.g., a purely mathematical existence proof that makes no empirical claims),
you should mark it “Not Applicable—see justification”; reviewers then decide
whether a 0 or 1 is warranted.
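As a rough illustration only (not part of the official review process), the
per-dimension scores above could be tallied into a matrix row like this; all
function and dimension names here are hypothetical, and None stands in for
“Not Applicable”:

```python
# Hypothetical sketch of one row of the booklet's comparison matrix:
# each brief is scored 0-2 on the four rubric dimensions, with None
# marking "Not Applicable". Names and values are illustrative only.

DIMENSIONS = [
    "ontological_clarity",
    "cross_model_mapping",
    "empirical_discriminability",
    "first_person_calibration",
]

def score_brief(scores):
    """Validate per-dimension scores and return a matrix row.

    `scores` maps each dimension to 0, 1, 2, or None (Not Applicable).
    The total ignores N/A dimensions so briefs stay comparable.
    """
    row = {}
    for dim in DIMENSIONS:
        value = scores.get(dim)
        if value not in (0, 1, 2, None):
            raise ValueError(f"{dim}: score must be 0, 1, 2, or None")
        row[dim] = value
    row["total"] = sum(v for v in row.values() if v is not None)
    return row

# Example: a brief with strong definitions but no empirical test,
# and a principled case that first-person data do not apply.
example = score_brief({
    "ontological_clarity": 2,
    "cross_model_mapping": 1,
    "empirical_discriminability": 0,
    "first_person_calibration": None,  # purely formal model
})
print(example["total"])  # 3
```

The N/A handling mirrors the text above: a dimension marked Not Applicable is
excluded from the total rather than counted as a zero.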
8 Submission details
Upload via EasyChair (specify “Afternoon Session” in the title).
https://easychair.org/conferences2/submissions?a=34995586
Deadline: July 24, 2025.
Presentation: 3-minute lightning talk + live coherence diagnosis.
Date and Schedule: The workshop will be held 1:00 pm to 2:00 pm local time
in Reykjavik, Iceland, where the AGI-2025 conference is being held. The
workshop program is here: https://agi-conf.org/2025/workshops/
Archiving: Accepted briefs are intended for a special issue of a journal
to be decided, and will be cross-linked in an open repository for
post-workshop comparison and iterative refinement.
9 Who should submit
We invite theorists, experimentalists, philosophers, and computational
modellers. Frameworks that challenge the human-centric stance—or extend it—are
equally welcome. Our goal is to leave the hour with a clearer map of why our
models speak past each other and how to bring them into constructive contact.