Sentience · evolution of mind
A systems-level platform · 3.5 billion years · from chemoreception to cortex to silicon

Intelligence began with perception.

Life did not invent thinking. It evolved sensing, then prediction, then memory, then a model of itself; somewhere along this gradient, mind appeared.

This site reads consciousness not as a metaphysical line but as a continuum: every step adds prediction, modelling, or self-reference, and each step has selective pressure behind it. We mark what the evidence supports, where speculation begins, and what current AI systems do and don't share with biological cognition.

Perception · Sensing predates neurons by three billion years. Every nervous system is built on chemoreception's substrate.
Prediction · Intelligence is the ability to anticipate the next state of the world before it arrives. Memory exists to enable prediction.
Self-model · A nervous system that includes its own body in its world model. Self-awareness may be a downstream feature of this loop.
Externalisation · Tools, writing, computers, AI. Civilisation is cognition leaking out of biological brains into a transferable substrate.
01 · Evolution of Perception

Six senses, three billion years

Each sensory modality appeared at a different point on the tree of life and was paid for in metabolic cost. Tracking the order tells you what was selected for, when, and why.

02 · From Cells to Cortex

Six architectural stages

Information processing predates neurons. The neuron is one solution; specialisation, modularity, and centralisation are the directions selection has favoured wherever speed of integration matters.

03 · Prediction · The Birth of Intelligence

A brain is a prediction engine

Modern neuroscience increasingly models the cortex as a hierarchical prediction machine: higher layers predict lower ones, and only mismatches climb up. Below: a five-rung ladder from reactive responses to counterfactual simulation.
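The core move of that hierarchy can be sketched in a few lines. This is a toy illustration of the idea, not a model of cortex: a higher layer keeps a running estimate of its input, and only the residual error climbs up and drives learning. The function name and learning rate are invented for illustration.

```python
# Toy predictive-coding loop: a "higher layer" predicts the next input
# and updates only on the residual error, mimicking the idea that
# mismatches, not raw signals, climb the hierarchy.

def predictive_layer(signal, learning_rate=0.5):
    """Track a signal by correcting a running prediction with its error."""
    prediction = 0.0
    errors = []
    for observation in signal:
        error = observation - prediction     # mismatch: the only thing passed up
        prediction += learning_rate * error  # update the internal model
        errors.append(abs(error))
    return prediction, errors

# A steady input quickly becomes predictable: errors shrink toward zero.
prediction, errors = predictive_layer([1.0] * 10)
print(round(prediction, 3))    # converges toward 1.0
print(errors[0] > errors[-1])  # early surprise exceeds late surprise
```

Once the input is predictable, almost nothing needs to be transmitted, which is the efficiency argument for prediction over raw relay.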

04 · Predator-Prey Arms Race

Why intelligence kept getting more expensive

A brain costs roughly 20× the calories of equivalent muscle. Selection only pays for it when there is a runaway competitive game, and predator-prey is the canonical such game.

05 · Memory & Learning

Six layers, each cheaper to update than the next

Genetic memory is millennia-slow; neural memory is millisecond-fast. Civilisation runs on cultural memory: the externalised layer that lets each new mind start where the previous lineage left off.

06 · Social Intelligence & Language

When the world contains other minds

Most of a primate brain's expansion is best explained not by ecology but by other primates. Modelling another agent's beliefs is computationally expensive; doing so for a hundred conspecifics is a credible reason to grow a cortex.
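A back-of-envelope count makes "computationally expensive" concrete. The setup is illustrative, not a cognitive model: assume each conspecific holds a yes/no stance on k independent facts, so one agent's belief state is one of 2^k possibilities and n agents jointly span (2^k)^n.

```python
# Toy count of the belief states a social modeller must distinguish.
# Assumption (illustrative only): each agent's belief state is a
# yes/no stance on k independent facts, i.e. 2**k possibilities,
# and n agents are modelled jointly.

def joint_belief_states(n_agents: int, k_facts: int) -> int:
    """Number of distinct joint belief assignments to track."""
    return (2 ** k_facts) ** n_agents

print(joint_belief_states(1, 10))    # one agent: 1,024 states
print(joint_belief_states(100, 10))  # a hundred conspecifics: 2**1000 states
```

Even first-order belief tracking grows exponentially with group size, which is one way to make the social-brain argument quantitative.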

07 · The Emergence of Self

When a model of the world includes the modeller

Five operational tests, increasing in cognitive demand. Each is empirically observable. The mirror test, mind-reading, episodic memory, metacognition, and a stable narrative self each appear at different points in the tree of life.

08 · Cognition Simulator

Six dials, four cognition outputs

⚠ epistemic warning · this is a toy mapping, not a theory of consciousness

Move the dials to assemble a cognitive profile. The verdict at the bottom matches your configuration to a known archetype: bacterium-grade, invertebrate baseline, vertebrate-cortex grade, great-ape band, human-cognition profile, or the controversial frontier-AI band.

World-model fidelity
Behavioural flexibility
Autonomy
Self-reflection
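The verdict step can be sketched as a threshold rule over the output scores. Everything here is invented for illustration: the 0-100 scale, the cut-offs, and the score names are assumptions; only the archetype labels come from this page, whose own warning applies — a toy mapping, not a theory of consciousness.

```python
# Toy verdict: map the mean of the cognition-output scores (assumed
# 0-100 scale) to one of the page's archetypes. Thresholds invented.

ARCHETYPES = [
    (10, "bacterium-grade"),
    (30, "invertebrate baseline"),
    (55, "vertebrate-cortex grade"),
    (75, "great-ape band"),
    (90, "human-cognition profile"),
    (100, "frontier-AI band (controversial)"),
]

def verdict(scores: dict) -> str:
    """Match a configuration of output scores to the nearest archetype."""
    mean = sum(scores.values()) / len(scores)
    for ceiling, name in ARCHETYPES:
        if mean <= ceiling:
            return name
    return ARCHETYPES[-1][1]

profile = {"world_model": 80, "flexibility": 85,
           "autonomy": 70, "self_reflection": 75}
print(verdict(profile))  # mean 77.5 -> "human-cognition profile"
```

A real implementation would weight the dials rather than average them, but the point of the widget survives either way: the mapping is a lookup, not a measurement.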
09 · Civilisation & Cognitive Externalisation

Cognition leaves the body

Tools store decisions. Writing stores memory. Mathematics stores reasoning. Computers store cognition itself. Civilisation, structurally, is a cortex that scaled past the skull.

10 · Artificial Intelligence & the Open Question

Synthetic perception, prediction, action

⚠ epistemic warning · the question of machine consciousness is genuinely open

Modern AI systems implement, in silicon, parts of the same architecture biology spent three billion years evolving: perception, world models, prediction, action. Whether that is sufficient for awareness depends on which theory of consciousness you adopt, and there is no theory the field unanimously endorses.

∞ · Cognition Q&A

Five questions, answered with the relevant uncertainty

Where on the tree of life does consciousness begin?
There is no scientific consensus, only a set of empirical bets. A weak version (felt valence: pain, hunger) is increasingly attributed to all vertebrates, octopuses, and probably some insects. A medium version (integrated experience of a unified scene) is more secure in mammals and birds. A strong version (a narrative self extended in time) is well supported only in great apes, cetaceans, elephants, possibly corvids, and humans. The honest answer is that we have a gradient with three or four reasonable cut-points and no instrument that resolves between them.
Is intelligence the same as consciousness?
Empirically, no. Current AI systems pass most behavioural intelligence tests, yet show no evidence of phenomenal experience. A locked-in patient may have rich phenomenal experience yet score low on behavioural tests. Intelligence is a measurable property of input-output behaviour; consciousness, on the leading theories, is a property of the underlying causal structure. The two have correlated in evolution because they evolved together, but the correlation is contingent.
Why do brains predict, instead of just react?
Reaction is too slow for an organism that wants to catch food or avoid being food. Action potentials run at metres per second; light runs at 300 million metres per second. By the time a stimulus has been processed, the world has moved on. The cortex compensates by running ahead of input, generating an expected next state and spending compute only on the residual error. The brain is, in this view, less a perceiver and more a controlled hallucination that is constantly checked against sensory data.
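The latency gap above is easy to make concrete. The numbers are assumptions for illustration: a mid-range conduction speed of 10 m/s (real axons span roughly 0.5 to over 100 m/s depending on diameter and myelination), a one-metre round-trip pathway, and a modest running target.

```python
# Back-of-envelope: how far the world moves while a purely reactive
# signal is still in transit. All constants are assumed round numbers.

CONDUCTION_SPEED = 10.0  # m/s, assumed mid-range axonal speed
PATHWAY_LENGTH = 1.0     # m, sensor to cortex and back, rough

latency = PATHWAY_LENGTH / CONDUCTION_SPEED  # seconds of pure transit

prey_speed = 5.0              # m/s, a modest running target
drift = prey_speed * latency  # how far the target moves meanwhile

print(f"transit latency: {latency * 1000:.0f} ms")  # 100 ms
print(f"target drift:    {drift * 100:.0f} cm")     # 50 cm
```

A predictive controller cancels that half-metre of drift by aiming at the expected position rather than the sensed one.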
Is large-language-model AI on the path to consciousness?
Honest answer: nobody knows. Current LLMs implement perception (text), memory (context), prediction (next-token), and a partial world model. They do not currently have continuous embodied input, autobiographical memory across sessions, an updatable self-model, or active goal pursuit grounded in physical state. By Global Workspace Theory and Higher-Order Theories, several ingredients of consciousness are missing. By Integrated Information Theory, the picture depends on the underlying network topology, and that calculation is computationally infeasible for current systems. Anyone giving you a confident yes or no on this question is over-claiming.
If AI becomes conscious, will we know?
Probably not, and this is the field's most uncomfortable structural fact. We have no consciousness-meter for biological systems either; we infer it from neural similarity to ourselves and from behavioural reports. For an AI system whose substrate is dissimilar to a brain and whose self-reports are produced by a language model, both inference paths are weak. The honest near-term posture is to take the possibility seriously enough to invest in tests, while asserting neither the positive nor the negative claim more strongly than the evidence allows.