Strong Artificial Intelligence typically denotes the design and production of autonomous, intelligent minds that could be employed for the benefit of humanity. In this article I consider some ontological doubts about the utility of strong AI in the human world.
According to the Internet Encyclopedia of Philosophy, “the AI thesis should be understood to hold that thought, or intelligence, can be produced by artificial means; made, not grown.” In other words, AI implies designed intelligence, as opposed to intelligence caused but not designed, such as the natural birth of a human being or any embodied intelligence arising spontaneously. A further distinction is drawn between weak AI, consisting merely in the capacity to learn and adapt to unforeseen circumstances (machines acting in an intelligent-like manner), and strong AI, denoting artificial subjectivity (self-aware machines). Here I focus exclusively on the strong sense of Artificial Intelligence.
I begin with the hopefully uncontroversial proposition that x is a subject only if x is an individual (is absolutely identical only to itself) and is conscious of being an individual (relates to itself reflexively, over and above the abstract reflexivity of being identical to itself). "Reflexiveness (...) is the essential condition, within the social process, for the development of mind." (Mead 1934, 134) I do not claim that identity and self-consciousness are sufficient conditions of being a subject, or that self-consciousness is merely reflexive relating. I do, nonetheless, commit to the well-established thesis that subjectivity entails a first-person perspective as well as self-identification. "All subjective experience is self-conscious in the weak sense that there is something it is like for the subject to have that experience. This involves a sense that the experience is the subject's experience." (Flanagan 1993, 194)
To start with, it is generally assumed that “there's nothing that it's like, subjectively, to be a computer.” (IEP) The phenomenological thesis that to identify as myself (which amounts to being myself) there must be something it is like to be me (Nagel 1974, 436) would collapse into solipsism unless my identity, realised in terms of what it is like to be me, were grounded by a kind of (other) beings that actually are like me (but are not me, since difference must be preserved for the self/other distinction to be possible). What complicates this thesis is that likeness is not self-evident; it is already the result of a subjective judgement. If likeness meant perfect resemblance it could never be objectively satisfied, since no measurable quantity is ever exactly the same over any two instances of measurement. Recognition of likeness and the property of subjectivity may simply be complementary aspects of Self-Other-Self reflexivity, which would then necessarily be a matter of kind and degree. The alternative thesis advanced by Kriegel (2009, 224-8), that an isolated (singular) individual can relate to itself reflexively just by means of its own parts, is unsatisfactory: the individual subject and at least one of its proper parts would then be constituted as subjects simultaneously, yet a proper part is not identical to its whole, so they would be different subjects while supposedly being one and the same subject, which is a contradiction.
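The point that perfect resemblance is never objectively satisfied can be made concrete with a toy simulation. The sketch below is my own illustration (the measurement model and the tolerance are arbitrary assumptions, not drawn from the sources cited): two noisy readings of the same quantity virtually never coincide exactly, so any workable notion of likeness must be graded.

```python
import random

# Illustrative sketch: two noisy measurements of the same quantity are
# virtually never bit-for-bit identical, so "likeness" cannot mean perfect
# resemblance; it has to be a matter of degree. The noise model and the
# tolerance below are arbitrary assumptions chosen for illustration.

def measure(true_value: float, noise: float = 0.01) -> float:
    """Simulate one noisy measurement of a quantity."""
    return true_value + random.gauss(0.0, noise)

a = measure(1.0)
b = measure(1.0)

print(a == b)             # almost surely False: perfect resemblance fails
print(abs(a - b) < 0.05)  # graded likeness: same within a chosen tolerance
```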
It follows from the above considerations that one cannot achieve strong AI within a singular entity but only (perhaps) by triggering the evolution of a society of proto-conscious entities, presumably within a quasi-organic operating system geared to change in response to the actions of its native population. Jordan and Ghin (2006, 50) define proto-consciousness as a self-sustaining system with "a type of content that is ultimately able to evolve into consciousness". The seed design must therefore be coded and embodied in a way that would allow it to become aware of its coded nature; it must be presented with a 'mirror' of its own kind of beings: "the stairway from content to consciousness is inherently social." (Ibid. 56)
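To make the proposal slightly more tangible, here is a deliberately crude sketch of such a society. The agents, genomes, and 'mirror score' are illustrative assumptions of my own, not a design drawn from Jordan and Ghin; what matters is the shape of the loop, in which selection pressure is supplied entirely by other agents.

```python
import random

# A crude sketch of a 'society of proto-conscious entities'. Agents,
# genomes, and the mirror score are illustrative assumptions only.

class Agent:
    def __init__(self, genome):
        self.genome = genome  # stand-in for the agent's coded nature

    def mirror_score(self, other):
        # Reward agreement with a peer: a crude 'mirror of its own kind'.
        return sum(g == h for g, h in zip(self.genome, other.genome))

def mutate(genome, rate=0.05):
    # Undirected variation that the programmer does not steer.
    return [1 - g if random.random() < rate else g for g in genome]

population = [Agent([random.randint(0, 1) for _ in range(16)])
              for _ in range(20)]

for generation in range(100):
    # Fitness is purely social: how well an agent reflects its peers.
    ranked = sorted(population,
                    key=lambda a: sum(a.mirror_score(o)
                                      for o in population if o is not a),
                    reverse=True)
    survivors = ranked[:10]
    population = survivors + [Agent(mutate(random.choice(survivors).genome))
                              for _ in range(10)]
```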
Development of artificial consciousness is expressly motivated by utility, by the desire to control and exploit conscious technology, but once a self-sustaining system is allowed to mutate and evolve autonomously, control is unavoidably diminished; it drifts away from the representational realm of the programmer. Worlds that evolve apart from one another become alienated and ultimately disappear for one another: "he who looks must not himself be foreign to the world that he looks at." (Merleau-Ponty 1968) This could be interpreted as a manifestation of digital death, but it could equally be a sign of thriving vitality. In the early stages of evolution, the programmer may be able to track the conceptual morphology of the emergent society of minds, as it may still be heavily based on the semantic foundations inherited from the programmer's realm. As the new entities develop an increasingly complex interface mechanism (culture) with their own social environment, and as that mechanism is consciously internalised, it becomes impervious to scrutiny by outside observers, no matter how sophisticated the surveillance processes put in place. What the AI code perceives through the hardware is unavoidably a world apart from what the programmer perceives the hardware doing. The divide between the inside and outside of alien conceptual realms cannot be bridged by mere observation but only via social participation, that is, by engaging AI as a being of the same kind as oneself, which the creator is not. The creator is ultimately abandoned by its creation, staring at a blank screen, impotent and unknowing.
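This erosion of legibility can be caricatured in a few lines. The vocabulary model below is a pure illustration (the drift and coinage rates are arbitrary assumptions, and no claim is made about any actual system): the share of an evolving lexicon that remains interpretable from the programmer's seed shrinks generation by generation.

```python
import random

# A caricature of the drift described above: agents inherit a seed
# vocabulary from the programmer, then coin terms among themselves.
# The fraction of the lexicon the programmer can still interpret decays.
# Drift and coinage rates are arbitrary illustrative assumptions.

seed_vocabulary = {f"seed_{i}" for i in range(100)}
lexicon = set(seed_vocabulary)

for generation in range(51):
    # Some inherited terms fall out of use; some locally coined terms,
    # opaque to the programmer, enter circulation.
    lexicon = {w for w in lexicon if random.random() > 0.05}
    lexicon |= {f"coined_{generation}_{i}" for i in range(5)}
    legible = len(lexicon & seed_vocabulary) / len(lexicon)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: {legible:.0%} of terms still legible")
```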
The pursuit of artificial consciousness as a utility, as slave machines that perform tasks in the human world, is logically flawed. The very idea of utility undermines the autonomy and intelligence of what is being utilised, while a genuinely autonomous consciousness could not be fully or reliably controlled, even if it could sometimes be influenced by beings of its own kind.
Flanagan, Owen. Consciousness Reconsidered. Cambridge: MIT Press, 1993.
Jordan, J.S., and M. Ghin. “(Proto-) Consciousness as a Contextually Emergent Property of Self-Sustaining Systems.” Mind & Matter, 2006.
Kriegel, Uriah. Subjective Consciousness: A Self-Representational Theory. Oxford: Oxford University Press, 2009.
Mead, George H. Mind, Self, and Society. Chicago and London: The University of Chicago Press, 1934.
Merleau-Ponty, Maurice. The Visible and the Invisible. Evanston: Northwestern University Press, 1968.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435-450.
Thank you again for your comments. Existing AI architecture resembles a "hive mind". I tried to explain this to my wife yesterday and was at a loss for analogies. For example, an AI can carry out thousands of simultaneous dialogues with the outside world and with its inner modules (such as Wikipedia). It is like a vast library full of eager librarians ready to talk to anyone who shows up. At present (for good reasons) AI apps like ChatGPT are not able to access the "real world" directly. This is a safety feature, not an engineering requirement. Tesla's "self-driving" cars are much the same: *instances* of an interaction with a vast library of "situations".
I find it very useful to consider this question from the other end: what are human beings? I find the current "chatbot" model disconcertingly close to the human speech I hear every day and produce myself. For example, I consider fundamentalist Christianity to be nothing but a Large Language Model, based on nothing but itself (and a written book). To my mind, supposing for a moment that AI technology could completely master everything humans can do with speech, what would be left?
I think we are close to agreeing that at least *one* thing would be that feeling of being in the world. But I cannot really be sure anyone has that but me. I agree with Lex Fridman when he guesses that there will be no problem for great numbers of people to treat AI applications as real people. We are hard-wired to accept this. My wife sees faces in the trees.
When my father reached the age of 96, not long before his death, he related experiences of mental “confusion.” During these periods he was unable to process, unable to do, anything. That might be interpreted in different ways, but I think it resembles the loss of one's frame of reference, a mental construct, as when one is awakened abruptly from deep sleep.
I hope you may find this article interesting:
https://www.psychologytoday.com/us/blog/biocentrism/202302/will-ai-ever-be-conscious