Google recently fired engineer Blake Lemoine after he claimed that the chatbot he had been testing was sentient and conscious. The technology is built on LaMDA (Language Model for Dialogue Applications), an artificial intelligence (AI) that interprets language and formulates exchanges by mimicking human conversation. E-commerce patrons may notice chatbots’ improvements in greater relevance and near-instantaneous responses. Behind the scenes, chatbots’ capacity to decipher speech has grown through sophisticated training data and algorithmic advancements.
However, chatbots’ performance metrics continue to be based on their aptitude for correct responses. Is accuracy all there is to a dialogue?
Perhaps Google has succeeded in programming chatbots that not only utter appropriate replies but also come across as natural. Lemoine’s AI, in fact, displayed sufficient nuance in its use of language that he was convinced of its “aliveness”. But does conversational capacity necessarily imply consciousness? As the notion of consciousness continues to divide scientific communities, Lemoine’s claim draws attention to the longstanding debate over non-biological intelligence and consciousness, which ultimately calls for an examination of how our own consciousness arises.
Alan Turing famously asked whether machines can think. In proposing the Turing Test in 1950, he examined a computer’s ability to mimic human responses in text-based conversations. The machine’s aim was to imitate human communication well enough that the person engaging with it would believe they were conversing with another human. The premise of the Turing Test has been contested; however, pertinent questions remain. Should machines (chatbots) imitate humans to persuade us of their intelligence? And should machines that formulate merely logical replies be considered conscious?
Scientist Klaus Henning (2021) argues that reproducing human cognitive intelligence does not necessarily yield an exact replica, because organic intelligence retains an immense ability to process information, often in ways that science still cannot fully comprehend. While the technical components of biological intelligence can be duplicated, theories on the emergence of consciousness remain divided.
Lemoine’s claim that Google’s chatbots are sentient and conscious presumes AI’s capacity to attain a level of subjectivity in the physical world. Such experience was once thought to derive only from natural processes, and the possible loss of the privilege that delivered humanity’s planetary dominance can trigger discomfort. Uneasiness aside, however, do AI autonomy and self-regulation denote subjectivity? While sophisticated AI such as chatbots can excel at human functions, a gap persists between operational performance and expressions of consciousness.
As scientists including Susan Schneider (2021) imagine a future in which consciousness could be uploaded and reconfigured into non-biological substrates, they also acknowledge multiple layers of complication. Thus far, intelligence correlates with organic consciousness, and consciousness entails a certain level of sensory and inner experience. Our human biology, in association with the environment, organizes our sensory and emotional experiences. As such, our subjectivity is entangled with both body and environment.
If intelligence were separated from its organic substrate, could such a system sustain subjectivity? And if so, by what standards would we measure it?
Although artificial consciousness is not inconceivable, the mechanics of human intelligence have not been shown to be the whole story. The separation of consciousness from an organic substrate that is principally relational therefore remains questionable.
Although chatbots’ “conversational skills” have progressed, do we truly believe that they “understand”? The formulation of rational responses through algorithmic fluency does not necessarily denote discernment. AI performs well and functions at the level of basic intelligence; however, even advanced chatbots remain machines. Claims that a machine is sentient, in fact, call for an exploration not only of notions of intelligence and consciousness but also of humanity’s own need for affinity and connection.
References
Henning, K. (2021). Gamechanger AI: How artificial intelligence is transforming our world. Springer.
Schneider, S. (2021). Artificial you: AI and the future of your mind. Princeton University Press.