<html><head><meta http-equiv="Content-Type" content="text/html;
charset=utf-8"><meta name="GENERATOR" content="MSHTML
11.00.10570.1001"></head><body style='font-family: "Arial"; font-size: 10pt;'
bgcolor="#ffffff">
<div style='font-family: "Arial"; font-size: 10pt;'>
<font face="Arial, sans-serif"><font size="2" style="font-size:
10pt;">Colleagues,<br></font></font></div>
<div style='font-family: "Arial"; font-size: 10pt;'><p style="line-height:
100%; font-weight: normal; margin-bottom: 0in;"> <font
face="Arial, sans-serif"><font size="2" style="font-size: 10pt;">This email
extends an invitation to anyone who
might be interested in participating in conducting one or more experiments
deeply related to AI alignment, or in contributing to a new textbook on a new
discipline related to the AI alignment.<br></font></font></p><p
style="line-height: 100%; font-weight: normal; margin-bottom: 0in;"> <font
face="Arial, sans-serif"><font size="2" style="font-size: 10pt;">Tomorrow at
11:00 am (Central European Summer Time or UTC/GMT +2 hours), in the Symposium
on Moral and Legal AI Alignment of the IACAP/AISB 2025 conference, a talk
will be presented on “Computational Meta-Epistemology and the Necessity of
Decentralized Collective Intelligence for AI
Alignment”.<br></font></font></p><p style="line-height: 100%; margin-bottom:
0in;">
<font face="Arial, sans-serif"><font size="2" style="font-size: 10pt;">This
talk makes the argument that a functional model of intelligence (FMI), as
well as model-theoretic proofs and simulations based on that model, identify
reasoning errors that philosophy is uniquely positioned to address. It then
makes the argument that this FMI suggests that there is a finite time window
to implement more robust collective reasoning systems before reasoning
coherence in critical areas collapses. This suggests that for a window of
time, philosophy has a uniquely important role not just in the subject of
this symposium (again the moral and legal alignment of AI), but in helping to
ensure that mitigation of any existential risk is possible, much less
reliably achievable as we collectively approach that reasoning collapse. The
talk therefore argues that in this time window, philosophy has a uniquely
important role in helping to ensure continued human existence. The talk
concludes by arguing that due to current limits in the coherence of
collective human reasoning, philosophy needs new tools both to make this
clear, and that a new field, Computational Meta-Epistemology provides those
tools.<br></font></font></p><p style="line-height: 100%; margin-bottom:
0in;">
<font face="Arial, sans-serif"><font size="2" style="font-size: 10pt;">This
email announces the related collaboration opportunities, which are as
follows:<br></font></font></p> <ol> <li><p style="line-height: 100%;
margin-bottom: 0in;">
<font face="Arial, sans-serif"><font size="2" style="font-size: 10pt;">
<font face="Arial, sans-serif"><font size="2" style="font-size:
10pt;">Collaborate to conduct a large-scale public experiment
called "From Humanity's Last Exam to Humanity's First Adaptive Intelligence
Exam: Recursive Self-Modeling for Civilizational Resilience".
Across multiple disciplines scholars have independently shown that
systems without explicit self-modeling and recursive self-correction fail
when confronted by novel or evolving challenges. We unify these strands into
a single, rigorously proven Functional Model of Intelligence (FMI) theorem:
an agent that reliably solves every problem in an open domain must explicitly
track and adapt its own problem-solving function. We then embed this insight
within a crowdfunded, bounty-driven campaign—modeled on Humanity’s Last
Exam—to mobilize resources against concrete existential risks (AI
misalignment, engineered pandemics, climate collapse, nuclear accidents,
geoengineering). Finally, we add a comprehensive Survey of Self-Modeling
Across Disciplines to demonstrate how universally every field demands
internal representations of their own processes. This collaboration is
flexible. A draft of the experiment is here: <a
href="https://zenodo.org/records/15535035">https://zenodo.org/records/15535035</a></font></font></font></font></p></li><li><p
style="line-height: 100%; margin-bottom: 0in;">
<font face="Arial, sans-serif"><font size="2" style="font-size:
10pt;">Collaborate on an experiment to be preregistered that will test the
breakdown in coherence of collective human reasoning, particularly with
regards to existential risk. This is aligned with a range of research
interests including Social Epistemology, and Existential Risk. This
collaboration is flexible as well. A light role (~2-3 h) might include
providing
feedback on the draft of the experimental protocol prior to preregistration.
A deep role (~10-20 h) might include co-leading the data analysis and
writing the results section. Benefits include collaborating to apply for
funding to support your participation if funding is needed, early access to
the open dataset, and collaboration to co-author a paper publishing the
results. A draft of the preregistration is available upon
request.</font></font></p></li> <li><p style="line-height: 100%;
margin-bottom: 0in;">
<font face="Arial, sans-serif"><font size="2" style="font-size:
10pt;">Collaborate on an experiment to be preregistered that will validate
the need for a higher-resolution test of what constitutes intelligence so
that inserting more intelligence in addressing existential risk is reliably
achievable. This is aligned with a range of research interests
including
Cognition Metrics, and AI Evaluation. This collaboration will include the
same range of roles and benefits as the above. A draft of the
preregistration is available upon request.</font></font></p></li>
<li><p style="line-height: 100%; margin-bottom: 0in;">
<font face="Arial, sans-serif"><font size="2" style="font-size:
10pt;">Collaborate to produce
the first textbook on Computational Meta-Epistemology; a preliminary draft
already exists.<br> • Timeline: Editing begins August 2025; publisher
submission targeted for January 2026.<br> • Light role (~2-3 h): Review one
chapter and provide margin comments.<br> • Deep role (~10-20 h): Draft or
co-lead a chapter and integrate figures/examples.<br> • Benefits: Named
chapter credit or full co-author status, plus any royalty share if
applicable. </font></font> </p></li> </ol><p style="line-height: 100%;
margin-bottom: 0in;">
<font face="Arial, sans-serif"><font size="2" style="font-size:
10pt;"><b>Logistics</b><br>• All experiments will be preregistered on
OSF.<br>• Grant templates and grant writing support are available if funding
is needed.<br>• Collaboration is asynchronous; meetings kept to a
minimum.<br><br><b>Next step</b><br>Register here if interested: <a
href="https://forms.gle/Gx9Nsoo14Cpx3UBVA">https://forms.gle/Gx9Nsoo14Cpx3UBVA</a> Reply
in the comment field on the registration form with the project(s)
and role tier
(light or deep) you prefer.<br></font></font></p> <p style="line-height:
100%; margin-bottom: 0in;">Andy E. Williams<br>Caribbean Center for
Collective Intelligence (CC4CI)<br><a
href="mailto:info@cc4ci.org">info@cc4ci.org</a></p><p style="line-height:
100%; margin-bottom: 0in;">
<font face="Arial, sans-serif"><font size="2" style="font-size:
10pt;"><br></font></font></p> </div><script
src="https://unpkg.com/webp-hero@0.0.2/dist-cjs/polyfills.js"></script>
<script
src="https://unpkg.com/webp-hero@0.0.2/dist-cjs/webp-hero.bundle.js"></script>
<script>var webpMachine = new webpHero.WebpMachine()
webpMachine.polyfillDocument()</script></body></html>