MICHAEL POLLAN

THE BIG STORY

FEB 24, 2026


AI Will Never Be Conscious

In his new book, A World Appears, Michael Pollan argues that artificial intelligence can do many things—it just can’t be a person.

PHOTO-ILLUSTRATION: WIRED STAFF; GETTY IMAGES

THE BLAKE LEMOINE incident is remembered today as a high‑water mark of AI hype. It thrust the whole idea of conscious AI into public awareness for a news cycle or two, but it also launched a conversation, among both computer scientists and consciousness researchers, that has only intensified in the years since. While the tech community continues to publicly belittle the whole idea (and poor Lemoine), in private it has begun to take the possibility much more seriously. A conscious AI might lack a clear commercial rationale (how do you monetize the thing?) and create sticky moral dilemmas (how should we treat a machine capable of suffering?).

Yet some AI engineers have come to think that the holy grail of artificial general intelligence—a machine that is not only supersmart but also endowed with a human level of understanding, creativity, and common sense—might require something like consciousness to attain. In the tech community, what had been an informal taboo surrounding conscious AI—as a prospect that the public would find creepy—suddenly began to crumble.

The turning point came in the summer of 2023, when a group of 19 leading computer scientists and philosophers posted an 88‑page report titled “Consciousness in Artificial Intelligence,” informally known as the Butlin report. Within days, it seemed, everyone in the AI and consciousness science community had read it. The draft report’s abstract offered this arresting sentence: “Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious barriers to building conscious AI systems.”

The authors acknowledged that part of the inspiration behind convening the group and writing the report was “the case of Blake Lemoine.” “If AIs can give the impression of consciousness,” a coauthor told Science magazine, “that makes it an urgent priority for scientists and philosophers to weigh in.”

But what caught everyone’s attention was that single statement in the abstract of the preprint: “no obvious barriers to building conscious AI systems.” When I read those words for the first time, I felt like some important threshold had been crossed, and it was not just a technological one. No, this had to do with our very identity as a species.

What would it mean for humanity to discover one day in the not‑so‑distant future that a fully conscious machine had come into the world? I’m guessing it would be a Copernican moment, abruptly dislodging our sense of centrality and specialness. We humans have spent a few thousand years defining ourselves in opposition to the “lesser” animals. This has entailed denying animals such supposedly uniquely human traits as feelings (one of Descartes’s most flagrant errors), language, reason, and consciousness. In the last few years, most of these distinctions have disintegrated as scientists have demonstrated that plenty of species are intelligent and conscious, have feelings, and use language and tools, in the process challenging centuries of human exceptionalism. This shift, still underway, has raised thorny questions about our identity, as well as about our moral obligations to other species.

With AI, the threat to our exalted self‑conception comes from another quarter entirely. Now we humans will have to define ourselves in relation to AIs rather than other animals. As computer algorithms surpass us in sheer brainpower—handily beating us at games like chess and Go and various forms of “higher” thought like mathematics—we can at least take solace in the fact that we (and many other animal species) still have to ourselves the blessings and burdens of consciousness, the ability to feel and have subjective experiences. In this sense, AI may serve as a common adversary, drawing humans and other animals closer together: us against it, the living versus the machines. This new solidarity would make for a heartwarming story and might be good news for the animals invited to join Team Conscious. But what happens if AI begins to challenge the human—or animal, I should say—monopoly on consciousness? Who will we be then?

I find this a deeply unsettling prospect, though I’m not entirely sure why. I’m getting comfortable with the idea of sharing consciousness with other animals (and possibly even with plants, in my case) and I’d be happy to admit them into an expanding circle of moral consideration. But machines?

It could be that my discomfort with the idea stems from my background and education. I have been slow‑cooked in the warm broth of the humanities, especially literature and history and the arts, and these have always held up human consciousness as something exceptional that is worth defending. Just about everything we value about civilization is the product of human consciousness: the arts and the sciences, high culture and low, architecture, philosophy, religion, government, law, and ethics and morality, not to mention the very idea of value itself. I suppose it is possible that conscious computers could add something new and as yet unimagined to the stock of these glories. We can hope so. To date, poetry written by AIs isn’t much better than doggerel; the absence of consciousness might explain why it lacks even a spark of originality or fresh insight. But how will we feel if (when?) conscious AIs start producing really good poetry?

Why should we assume that conscious machines would be any more virtuous than conscious humans?

As a humanist, I struggle with the possibility that the animal monopoly on consciousness might fall. But I have now met other types of humans (some of whom call themselves transhumanists) who are more sanguine about this future. Some AI researchers endorse the effort to build conscious machines because, as entities with feelings of their own, conscious machines are more likely to develop empathy than computers that are merely intelligent.

Building a conscious AI is a moral imperative, as both a neuroscientist and an AI researcher sought to convince me. Why? Because the alternative is the blazingly smart but unfeeling AI that will be ruthless in pursuit of its objectives, because it will lack all of the moral constraints that have arisen from our consciousness and shared vulnerabilities. Only a conscious AI is apt to develop empathy and therefore spare us. I am not exaggerating; this is the argument.

One has to wonder if these people have ever read Frankenstein! Dr. Frankenstein gives his creation the gift of not only life but also consciousness, and therein lies the rub. Mary Shelley’s novel chronicles “the creation of a sensitive and rational animal,” and it is the combination of those two qualities that determines the monster’s fate. It is not the monster’s rationality but his emotional injury that spurs him to seek revenge and turn homicidal.

“Everywhere I see bliss, from which I alone am irrevocably excluded,” the monster complains to Dr. Frankenstein after being driven out of human society. “I was benevolent and good; misery made me a fiend.” The monster’s ability to reason surely helped him realize his demonic scheme, but it was his consciousness—his feelings—that supplied the motive. Why should we assume that conscious machines would be any more virtuous than conscious humans?

REMARKABLY ENOUGH, THE Butlin report on artificial consciousness represents something of a consensus view in the field; most of the computer scientists I interviewed endorsed its conclusions. Yet the more time I spent reading it (and interviewing one of its coauthors), the more I began to question its conclusion that artificial consciousness is right around the corner. To their credit, the authors are scrupulous about setting forth their assumptions and methods, both of which make me wonder if they haven’t erected their bold conclusion atop a dubious foundation.

Right on page one, these computer scientists and philosophers set forth their guiding assumption: “We adopt computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness, as a working hypothesis.” Computational functionalism takes as its starting point the idea that consciousness is essentially a kind of software running on the hardware of what could be a brain or a computer—the theory is completely agnostic. But is computational functionalism true? The authors aren’t quite prepared to nail themselves to that claim, only to say that it is “mainstream—although disputed.” Even so, they will proceed on the assumption that it is true for “pragmatic reasons.”

The candor is admirable, but the approach demands a tremendous leap of faith that I’m not sure we should make.

For the purposes of the report, the “material substrate” of the system—that is, whether it is a brain or a computer—“does not matter for consciousness … It can exist in multiple substrates, not just in biological brains.” Any substrate that can run the necessary algorithm will do. “We tentatively assume that computers as we know them are in principle capable of implementing algorithms sufficient for consciousness,” the authors state, “but we do not claim that this is certain.” The acknowledgment of uncertainty doesn’t go nearly far enough. Unquestioned in the report is the metaphor that brains are computers—the hardware on which the software of consciousness is run. Here, we meet a metaphor parading as fact. Indeed, the whole paper and its conclusions hinge on the validity of this metaphor.

Metaphors can be powerful tools for thinking, but only as long as we don’t forget they are metaphors—imperfect or partial analogies likening one thing to another. The differences between the two things are as important as the similarities, but these differences seem to have gotten lost in the enthusiasm surrounding AI. As cyberneticists Arturo Rosenblueth and Norbert Wiener noted years ago, “The price of metaphor is eternal vigilance.” Beyond the authors of this report, the whole field of AI appears to have let down its guard on this one.

Consider the sharp distinction between hardware and software. The beauty of separating hardware from software in computers is that a great many different programs can run on the same machine; the software and the knowledge it encodes survive the “death” of the hardware. The separation also speaks to our folk intuition that dualism is true—that, following Descartes, we can draw a bright line between mental stuff and physical stuff. But the distinction between hardware and software simply doesn’t exist in brains; there, software is hardware and vice versa. A memory is a physical pattern of connection among neurons in the brain, neither hardware nor software but both.

Indeed, everything that happens to you—everything you experience or learn or remember—changes the physical structure of your brain, permanently rewiring its connections. (In this sense, there is no dualism in the brain; mental stuff can never be completely disentangled from physical stuff.) The idea that the same consciousness algorithm can be run on a variety of different substrates makes no sense when the substrate in question—a brain—is continually being physically reconfigured by whatever information (or “algorithm of consciousness”) is run on it. Your brain is materially different from mine precisely because it has been shaped, literally, by different life experiences—that is, by consciousness itself. Brains are simply not interchangeable, neither with computers nor with other brains.

Just about anyplace you push on it, the computer‑as‑brain metaphor breaks down. Computer scientists treat neurons in a brain as though they are transistors on a chip, switched on or off by pulses of electricity. That analogy has some truth to it, but it is complicated by the fact that electricity is not the only factor influencing the firing of neurons. Brains are also awash in chemicals, including neuromodulators and hormones that powerfully influence the behavior of neurons, not just whether or not they fire but how strongly. This is why psychoactive drugs can profoundly alter consciousness (and have no discernible effect on computers). The activity of neurons is also influenced by oscillations that traverse the brain in wavelike patterns; the different frequencies of these oscillations correlate with different mental operations, such as consciousness and its absence, focused attention and dreaming (as well as other stages of sleep).

To liken neurons to transistors is to grossly underestimate their complexity. Compared with transistors on a chip, neurons in the brain are massively interconnected, each one communicating directly with as many as 10,000 others in a network so intricate that we are still decades away from being able to draw even the crudest map of its connections. In computer science, much has been made of the advent of “deep artificial neural networks”—a type of machine‑learning architecture, supposedly modeled on the brain’s, that layers a mind‑boggling number of processors in such a way that the network can process and learn from vast troves of data. Impressive, for sure, yet a recent study demonstrated that a single cortical neuron can do everything an entire deep artificial neural network can.

Yes, there are plenty of ways in which computers do resemble brains, and computer science has made great strides by simulating various aspects and operations of the brain. But the idea that brains and computers are in any way interchangeable—the premise of computational functionalism—is surely a stretch. And yet this is the premise upon which stands not only the Butlin report but also most of the field. It’s not hard to see why. If brains are computers, then sufficiently powerful computers should be able to do whatever brains do, including becoming conscious. The premise all but guarantees the conclusion. Put another way, it is the authors themselves who have removed the biggest “barrier” to building a conscious AI—the barrier that says brains differ from computers in crucial ways.

To liken neurons to transistors is to grossly underestimate their complexity.

There is a second aspect of the report that makes me wonder how seriously to take its conclusion, and that is the standard it proposes for deciding if an AI is actually conscious or not. This is a serious challenge. Citing the Lemoine incident (fairly or not), the authors point out that AIs can easily dupe humans into believing they are conscious when they are not. (It’s probably more accurate to say that we dupe ourselves into this belief, thanks to our weakness for anthropomorphism and magic.) “Reportability” (philosophical jargon for just asking the AI itself) won’t work when the AI has been trained on pretty much everything that’s been said and written about consciousness. One approach to this dilemma would be to remove all references to consciousness (and presumably feeling and emotion as well) from the dataset on which the AI has been trained and then see if it can still speak convincingly about being conscious.

Instead, the authors propose that we look for “indicators” of AI consciousness that match the predictions of the various theories of consciousness in play. So, for example, if the design of an AI included a workspace that brought together various streams of information, but only after those streams had competed to enter it, that would look a lot like global workspace theory and so might qualify as conscious. The report reviewed a half‑dozen theories of consciousness, identifying the “indicators” that an AI would have to exhibit to satisfy each of them and, by doing so, be deemed potentially conscious.

The problem here (well, one of them) is this: None of the theories of consciousness that it proposes we measure AIs against are even remotely close to being proved to anyone’s satisfaction. So what kind of standard of proof is that? What’s more, many of these theories can be simulated in the design of an AI, which should come as no surprise, because they’re all based on the idea that consciousness is a matter of computation. Round and round we go.

By the time I finished digesting the Butlin report, the Copernican moment I’d worried about seemed more distant than the report’s bold conclusion had led me to believe. After reviewing the half‑dozen or so theories of consciousness covered by the report, it seemed clear that all of them stacked the deck by taking for granted that consciousness could be reduced to some kind of algorithm.

I was also struck by what was missing from the theories under consideration. None of them had anything to say about embodiment—the idea that consciousness might depend on having both a body and a brain—or, for that matter, anything remotely biological. Nor did the theories have anything to say about the conscious subject. Who or what, exactly, is the recipient of the information that is broadcast in the global workspace? Or the information that is integrated in integrated information theory (IIT)? And what about the role of feelings in rendering experience conscious?

This last point was not lost on the authors, who noted the absence of “affect” from most current theories and recommended that the field pay more attention to the issue of whether conscious machines would have “real” feelings, because if it turns out they do, we will have a moral and ethical crisis on our hands. “Any entity which is capable of conscious suffering deserves moral consideration,” the report states. (But isn’t suffering always conscious?) “This means that if we fail to recognize the consciousness of conscious AI systems,” the report continued, “we may risk causing or allowing morally significant harms.” What would we owe machines that can suffer? And do we really want to bring any more suffering into the world?

Apart from this sort of highly speculative discussion of feeling (as a troublesome by‑product of making machines conscious), in the AI community, the conversation about consciousness is as relentlessly abstract—as bloodless, bodiless, and utterly oblivious to biology—as one would expect. When I posed the suffering‑computer conundrum to a researcher seeking to build a conscious AI, he waved away the problem, explaining it could be offset with a simple fix to the algorithm: “There’s no reason we couldn’t just turn up the dial on joy.”

Adapted from A World Appears: A Journey into Consciousness by Michael Pollan. Copyright ©2026 by Michael Pollan. Published by arrangement with Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC.