[CogSci] Call for Participants: Foundational Research on the Nature of Intelligence - Honorarium Available
Caribbean Center for Collective Intelligence
info at cc4ci.org
Sat May 17 11:44:24 PDT 2025
Dear Researchers in Cognitive Science,
We are launching a crucial research initiative aimed at addressing fundamental
questions surrounding the nature of intelligence, with significant
implications for our understanding of both biological and artificial
cognition, and particularly for navigating the challenges of advanced AI. We
are seeking the expertise of researchers in cognitive science to participate
in a thought experiment and analysis, for which an honorarium of $1000 to
$2000 USD will be paid, commensurate with the depth and quality of your
contribution.
The core of this experiment is the following question: if intelligence is general problem-solving ability, that is, the potential ability to solve any problem in general, does the lack of an explicit functional model of intelligence inhibit our ability to reliably bring more intelligence to bear on solving our most existential challenges, whether you believe those to be AI alignment, poverty, climate change, war, or anything else? What if the absence of such a functional model of intelligence were the single most important existential threat within some time frame? Without an explicit functional model of intelligence, could we reliably converge on a more intelligent assessment of whether or not this is true? If so, is the lack of an explicit functional model of intelligence potentially an existential threat in itself?
The number of theories of intelligence, implementations of intelligent systems, and related concepts appears to be increasing rather than converging. If there is some possibility that a functional model of intelligence is of foundational importance to human civilization, this inability to converge is potentially itself an existential risk. Accordingly, this experiment will focus on defining a process capable of more reliably discovering, validating, and comparing more rigorous frameworks for understanding and evaluating intelligence itself, so that convergence on the best one becomes more reliably achievable.
We invite you to critically engage with the following concepts and questions:
Core Concepts for Consideration:
Concept of Intelligence: What constitutes a robust and comprehensive
definition of intelligence?
Axiomatic Model of Intelligence (AMI): Can we define a minimal set of
fundamental axioms that any system must satisfy to possess general
problem-solving ability (true intelligence) in the cognitive domain?
Explicit Functional Model of Intelligence (FMI): Can we define a minimal set
of core functions that any system must possess to achieve general
problem-solving ability (true intelligence) in the cognitive domain?
Specifications for the FMI: Focusing more on the FMI, given the likelihood that no closed set of axioms can describe adaptive problem-solving (intelligence) in open knowledge domains where novelty may be encountered, what are the essential requirements that a valid FMI must fulfill? What might a candidate architecture for such an FMI look like?
AGI/GCI in Relation to FMIs/AMIs: How do current and/or proposed approaches to Artificial General Intelligence (AGI) at the individual level, or General Collective Intelligence (GCI) at the group level, relate to these potential functional or axiomatic models? Are they attempts to implement such models, or are they pursuing intelligence without a clearly defined foundational model?
Key Questions for Your Analysis:
Convergence vs. Proliferation: Are current approaches to AI development and
our understanding of intelligence converging towards a more coherent truth, or
are they proliferating without a clear framework for comparison and
validation?
The Necessity of a Model for Problem-Solving: Without a robust model of
intelligence, how can we effectively enhance our problem-solving capabilities,
particularly in critical areas like AI alignment?
AI's Internal Understanding of Intelligence: Under what conditions could an AI
recognize and communicate a truly intelligent model it discovers internally?
Under what conditions could humans understand such a model, especially if it
differs significantly from human cognition?
Complexity and Human Understandability: Is a minimally valid model of
intelligence necessarily human-understandable? What are the implications if AI
discovers highly complex, non-human-understandable models? How would the
propagation of such models among AI systems impact our ability to align them
with human values?
The Reliability of Consensus: Can we rely solely on consensus to determine the
validity or importance of concepts related to intelligence, especially in the
face of novelty?
Testing for Intelligence: Are current one-shot tests adequate for assessing
true general intelligence, particularly the ability to learn and adapt in open
domains? Do we need recursive, multi-shot testing paradigms?
The Urgency and Priority of Foundational Research: What are the most
compelling reasons for a collective human effort to define a higher-resolution
concept of intelligence, an axiomatic model, and a functional model? How does
the priority of this effort compare to addressing other existential risks,
considering the potential for a functional model to exponentially enhance
problem-solving abilities (for both good and ill)?
The Significance of Existing Proposed Models: At least one functional model of
intelligence has been proposed. What is the priority of developing a robust
process for discovering, evaluating, and validating such existing and future
models?
Call to Participate:
We invite you to contribute your scientific rigor and expertise to these
critical questions. Your analysis should aim to provide insightful
perspectives on the nature of intelligence, the importance of a foundational
model, and the urgency of a collective effort to develop and validate such a
model. We are particularly interested in arguments that address the
epistemological and methodological challenges involved.
If you would like to express interest, please fill out this brief form (takes
less than 2 minutes): [Google Form Link].
Alternatively, you are welcome to email me directly at info at cc4ci.org.
We believe that the insights from the cognitive science community are
essential for navigating the complex landscape of intelligence research and
ensuring a beneficial future for humanity. We look forward to your valuable
contributions.
Sincerely,
Andy E. Williams
Caribbean Center for Collective Intelligence (CC4CI)
info at cc4ci.org