[CogSci] Invitation to collaborate on preregistered AI-alignment experiments & CME textbook

Caribbean Center for Collective Intelligence info at cc4ci.org
Tue Jul 1 13:35:53 PDT 2025


Colleagues,



This email extends an invitation to anyone who might be interested in
conducting one or more experiments closely related to AI alignment, or in
contributing to a textbook on a new discipline related to AI alignment.


Tomorrow at 11:00 am Central European Summer Time (UTC+2), a talk on
“Computational Meta-Epistemology and the Necessity of Decentralized
Collective Intelligence for AI Alignment” will be presented in the Symposium
on Moral and Legal AI Alignment at the IACAP/AISB 2025 conference.


This talk argues that a functional model of intelligence (FMI), together
with model-theoretic proofs and simulations based on that model, identifies
reasoning errors that philosophy is uniquely positioned to address. It
further argues that the FMI implies a finite time window in which to
implement more robust collective reasoning systems before reasoning
coherence in critical areas collapses. Within that window, philosophy has a
uniquely important role not just in the subject of this symposium (the moral
and legal alignment of AI), but in helping to ensure that mitigating any
existential risk remains possible, let alone reliably achievable, as we
collectively approach that collapse; in short, in helping to ensure
continued human existence. The talk concludes that, given current limits on
the coherence of collective human reasoning, philosophy needs new tools to
make this case, and that a new field, Computational Meta-Epistemology,
provides those tools.


This email announces the related collaboration opportunities, which are as
follows:


Collaborate to conduct a large-scale public experiment called “From
Humanity’s Last Exam to Humanity’s First Adaptive Intelligence Exam:
Recursive Self-Modeling for Civilizational Resilience”. Across multiple
disciplines, scholars have independently shown that systems without explicit
self-modeling and recursive self-correction fail when confronted with novel
or evolving challenges. We unify these strands into a single, rigorously
proven Functional Model of Intelligence (FMI) theorem: an agent that
reliably solves every problem in an open domain must explicitly track and
adapt its own problem-solving function. We then embed this insight within a
crowdfunded, bounty-driven campaign, modeled on Humanity’s Last Exam, to
mobilize resources against concrete existential risks (AI misalignment,
engineered pandemics, climate collapse, nuclear accidents, geoengineering).
Finally, we add a comprehensive Survey of Self-Modeling Across Disciplines
to demonstrate how universally each field demands internal representations
of its own processes. This collaboration is flexible. A draft of the
experiment is here: https://zenodo.org/records/15535035
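
For readers who want a concrete picture of the FMI claim above, here is a
minimal toy sketch in Python. It is purely illustrative and not drawn from
the draft; every name and number in it (SelfModelingAgent, the two
strategies, the simulated success rates) is hypothetical:

    # Purely illustrative toy, not from the cited draft: an agent that
    # explicitly tracks and adapts its own problem-solving function.
    import random

    class SelfModelingAgent:
        def __init__(self):
            # Explicit self-model: estimated success rate per strategy,
            # plus a record of which strategy is currently in use.
            self.strategies = {"broad": 0.5, "narrow": 0.5}
            self.current = "broad"

        def solve(self, problem):
            # Success is simulated here; a real task would compute it.
            success = random.random() < problem["fit"][self.current]
            self.adapt(success)
            return success

        def adapt(self, success):
            # Recursive self-correction: update the self-model from the
            # observed outcome, then switch to whichever strategy the
            # updated model now rates highest.
            est = self.strategies[self.current]
            self.strategies[self.current] = 0.9 * est + 0.1 * float(success)
            self.current = max(self.strategies, key=self.strategies.get)

    agent = SelfModelingAgent()
    stream = [{"fit": {"broad": 0.3, "narrow": 0.8}} for _ in range(100)]
    solved = sum(agent.solve(p) for p in stream)
    print(f"solved {solved}/100; self-model: {agent.strategies}")

The point of the sketch is only the structure: the agent keeps an explicit,
updatable record of its own problem-solving function (self.strategies,
self.current) rather than a fixed policy, which is the property the theorem
says any reliably general problem-solver must have.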


Collaborate on an experiment, to be preregistered, that will test the
breakdown in coherence of collective human reasoning, particularly with
regard to existential risk. This is aligned with a range of research
interests, including Social Epistemology and Existential Risk. This
collaboration is flexible as well. A light role (~2-3 h) might include
providing feedback on the draft of the experimental protocol prior to
preregistration. A deep role (~10-20 h) might include co-leading the data
analysis and writing the results section. Benefits include collaboration on
funding applications to support your participation if funding is needed,
early access to the open dataset, and co-authorship of the paper publishing
the results. A draft of the preregistration is available upon request.


Collaborate on an experiment, to be preregistered, that will validate the
need for a higher-resolution test of what constitutes intelligence, so that
bringing more intelligence to bear on existential risk becomes reliably
achievable. This is aligned with a range of research interests, including
Cognition Metrics and AI Evaluation. This collaboration includes the same
range of roles and benefits as the one above. A draft of the
preregistration is available upon request.


Collaborate to produce the first textbook on Computational Meta-Epistemology;
a preliminary draft already exists.
 • Timeline: Editing begins August 2025; publisher submission targeted for
January 2026.
 • Light role (~2-3 h): Review one chapter and provide margin comments.
 • Deep role (~10-20 h): Draft or co-lead a chapter and integrate
figures/examples.
 • Benefits: Named chapter credit or full co-author status, plus any royalty
share if applicable. 


Logistics
• All experiments will be preregistered on OSF.
• Grant templates and grant-writing support are available if funding is
needed.
• Collaboration is asynchronous; meetings are kept to a minimum.

Next step
Register here if interested: https://forms.gle/Gx9Nsoo14Cpx3UBVA
In the comment field of the registration form, indicate the project(s) and
role tier (light or deep) you prefer.


Andy E. Williams
Caribbean Center for Collective Intelligence (CC4CI)
info at cc4ci.org