<html><head><meta http-equiv="Content-Type" content="text/html;
charset=utf-8"><meta name="GENERATOR" content="MSHTML
11.00.10570.1001"></head><body style='font-family: "Arial"; font-size: 10pt;'
bgcolor="#ffffff">
<div style='font-family: "Arial"; font-size: 10pt;'>Dear Researchers in
Cognitive Science,</div>
<div style='font-family: "Arial"; font-size: 10pt;'> <p>We are launching a
crucial research initiative aimed at
addressing fundamental questions surrounding the nature of intelligence, with
significant implications for our understanding of both biological and
artificial cognition, and particularly for navigating the challenges of
advanced AI. We are seeking the expertise of researchers
in cognitive
science to
participate in a thought experiment and analysis, for which an
<strong>honorarium of $1000 to $2000 USD</strong> will be paid, commensurate
with the depth and quality of your contribution.</p> <p>The core of this
experiment is the following question: if intelligence is general
problem-solving ability, that is, the potential to solve any problem in
general, does the lack of an explicit functional model of intelligence
inhibit our ability to reliably bring more intelligence to bear on our most
existential challenges, regardless of whether you believe those to be AI
alignment, poverty, climate change, war, or anything else? What if the
absence of such a functional model of intelligence were the single most
important existential threat within some time frame? Without an explicit
functional model of intelligence, could we reliably converge on a more
intelligent assessment of whether or not this is true? If so, is the lack of
an explicit functional model of intelligence potentially an existential
threat in itself? </p> <p>The number of theories of intelligence,
implementations of intelligent systems, and related concepts appears to be
increasing rather than converging. If there is any possibility that a
functional model of intelligence is of foundational importance to human
civilization, this inability to converge is potentially itself an
existential risk. Accordingly, this
experiment will focus on defining a process capable of more reliably
discovering, validating, and comparing rigorous frameworks for
understanding and evaluating intelligence itself, so that convergence on
the best such framework becomes more reliably achievable.</p> <p>We invite you to
critically engage with the following concepts and questions:</p>
<p><strong>Core Concepts for Consideration:</strong></p> <ul> <li><p
style="margin-bottom: 0in;"><strong>Concept of Intelligence:</strong> What
constitutes a robust and comprehensive definition of intelligence?
</p></li> <li><p style="margin-bottom: 0in;"><strong>Axiomatic Model of
Intelligence (AMI):</strong> Can we define a minimal set of fundamental
axioms that any system must satisfy to possess general problem-solving
ability (true intelligence) in the cognitive domain? </p></li> <li><p
style="margin-bottom: 0in;"><strong>Explicit Functional Model of
Intelligence (FMI):</strong> Can we define a minimal set of core functions
that any system must possess to achieve general problem-solving ability
(true intelligence) in the cognitive domain? </p></li> <li><p
style="margin-bottom: 0in;"><strong>Specifications for the FMI:</strong>
Given the likelihood that no closed set of axioms can describe adaptive
problem-solving (intelligence) in open knowledge domains in which novelty
might be encountered, we focus primarily on the FMI. What are the
essential requirements that a valid FMI must fulfill? What might a candidate
architecture for such an FMI look like? </p></li> <li><p><strong>AGI/GCI
in Relation to FMIs/AMIs:</strong> How do current and proposed approaches to
Artificial General Intelligence (AGI) at the individual level, or to General
Collective Intelligence (GCI) at the group level, relate to these potential
functional or axiomatic models? Are they attempts to implement such models,
or are they pursuing intelligence without a clearly defined foundational model?
</p></li> </ul> <p><strong>Key Questions for Your Analysis:</strong></p>
<ul> <li><p style="margin-bottom: 0in;"><strong>Convergence vs.
Proliferation:</strong> Are current approaches to AI development and our
understanding of intelligence converging towards a more coherent truth, or
are they proliferating without a clear framework for comparison and
validation? </p></li> <li><p style="margin-bottom: 0in;"><strong>The
Necessity of a Model for Problem-Solving:</strong> Without a robust model of
intelligence, how can we effectively enhance our problem-solving
capabilities, particularly in critical areas like AI alignment? </p></li>
<li><p style="margin-bottom: 0in;"><strong>AI's Internal Understanding of
Intelligence:</strong> Under what conditions could an AI recognize and
communicate a truly intelligent model it discovers internally? Under what
conditions could humans understand such a model, especially if it differs
significantly from human cognition? </p></li> <li><p style="margin-bottom:
0in;"><strong>Complexity and Human Understandability:</strong> Is a
minimally valid model of intelligence necessarily human-understandable? What
are the implications if AI discovers highly complex,
non-human-understandable models? How would the propagation of such models
among AI systems impact our ability to align them with human values?
</p></li> <li><p style="margin-bottom: 0in;"><strong>The Reliability of
Consensus:</strong> Can we rely solely on consensus to determine the
validity or importance of concepts related to intelligence, especially in
the face of novelty? </p></li> <li><p style="margin-bottom:
0in;"><strong>Testing for Intelligence:</strong> Are current one-shot tests
adequate for assessing true general intelligence, particularly the ability
to learn and adapt in open domains? Do we need recursive, multi-shot testing
paradigms? </p></li> <li><p style="margin-bottom: 0in;"><strong>The Urgency
and Priority of Foundational Research:</strong> What are the most compelling
reasons for a collective human effort to define a higher-resolution concept
of intelligence, an axiomatic model, and a functional model? How does the
priority of this effort compare to addressing other existential risks,
considering the potential for a functional model to exponentially enhance
problem-solving abilities (for both good and ill)? </p></li>
<li><p><strong>The Significance of Existing Proposed Models:</strong> At
least one functional model of intelligence has been proposed. What is the
priority of developing a robust process for discovering, evaluating, and
validating such existing and future models? </p></li> </ul> <p><strong>Call
to Participate:</strong></p> <p>We invite you to contribute your
scientific rigor and expertise to these critical questions. Your
analysis
should aim to provide insightful perspectives on the nature of intelligence,
the importance of a foundational model, and the urgency of a collective
effort to develop and validate such a model. We are particularly interested
in arguments that address the epistemological and methodological challenges
involved.</p> <p><strong>If you would
like to express interest, please fill out this
brief form (it takes less than two minutes): <a
href="https://docs.google.com/forms/d/e/1FAIpQLSdqZC7nDIRm3X7r5pjJCSusN04WF8TwwcaKNMxHpHbpuQ2jGQ/viewform?usp=sharing">[Google
Form Link]</a>.</strong></p><p><strong>Alternatively, you are welcome to
email me directly
at <a
href="mailto:info@cc4ci.org">info@cc4ci.org</a>.</strong></p> <p>We
believe
that the insights from the cognitive science community are
essential for
navigating the complex landscape of intelligence research and ensuring a
beneficial future for humanity. We look forward to your valuable
contributions.</p> <p>Sincerely,</p><p style="line-height: 100%;
margin-bottom: 0in;">Andy E. Williams</p><p style="line-height: 100%;
margin-bottom: 0in;">Caribbean Center for Collective Intelligence
(CC4CI)</p><p style="line-height: 100%; margin-bottom:
0in;">info@cc4ci.org</p> </div>
<div style='font-family: "Arial"; font-size: 10pt;'><br></div>
<div style='font-family: "Arial"; font-size: 10pt;'><br></div></body></html>