Can Chatbots Be Sentient?

Google recently fired computer scientist Blake Lemoine for claiming that the chatbot he had been developing was sentient and conscious. The technology is built on LaMDA ("Language Model for Dialogue Applications"), an artificial intelligence (AI) system that mimics human conversation, serving as a foundational structure for interpreting language and formulating responses. E-commerce patrons may notice chatbots' improvements in the form of greater relevance and near-instantaneous responses. Behind the scenes, their capacity to decipher speech has grown through richer training data and algorithmic advances.

However, chatbots’ performance metrics continue to be based on their aptitude for correct responses. Is accuracy all there is to a dialogue?

Perhaps Google has succeeded in programming chatbots that can not only utter appropriate replies but also come across as natural. Lemoine’s AI, in fact, displayed sufficient nuance in its use of language that he was convinced of its “aliveness”. But does conversational capacity necessarily imply consciousness? As the notion of consciousness continues to divide scientific communities, Lemoine’s claim draws attention to the longstanding debate on non-biological intelligence and consciousness, which ultimately calls for an examination of how our own innate consciousness arises.

Alan Turing famously asked whether machines can think. In proposing the Turing Test in 1950, he considered a computer’s ability to mimic human responses in text-based conversation. The machine’s goal was to imitate human communication well enough that the person engaging with it would believe he was conversing with another human. The premise of the Turing Test has been contested; however, pertinent questions remain. Should machines (chatbots) imitate humans in order to persuade us of their intelligence? And should machines that merely formulate logical replies be considered conscious?

Scientist Klaus Henning (2021) argues that reproducing human cognitive intelligence does not necessarily yield an exact replica, because organic intelligence retains an immense capacity to process information, often in ways that science still cannot fully comprehend. While the technical components of biological intelligence can be duplicated, theories on the emergence of consciousness remain divided.

Lemoine’s claim that Google’s chatbot is sentient and conscious presumes AI’s capacity to attain a level of subjectivity in the physical world. Such experience was once thought to derive only from natural processes, and the possible loss of that privilege, which delivered humanity’s planetary dominance, can trigger discomfort. Uneasiness aside, however, do AI autonomy and self-regulation denote subjectivity? While sophisticated AI such as chatbots can excel at human functionality, a gap persists between operational performance and expressions of consciousness.

As scientists including Susan Schneider (2021) imagine a future where consciousness could be uploaded and reconfigured into non-biological substrates, they also acknowledge multiple layers of complications. Thus far, intelligence correlates with organic consciousness, and consciousness entails a certain level of sensory and inner experience. Our human biology, in association with the environment, organizes our sensory and emotional experiences. As such, our subjectivity is entangled.

If intelligence were separated from its organic substrate, then, could such a system sustain subjectivity? And if so, by what standards would we measure it?

Although artificial consciousness is not inconceivable, the mechanics of human intelligence have not been proven to be the whole story. Therefore, separating consciousness from an organic substrate that is principally relational remains questionable.

Although chatbots’ “conversational skills” have progressed, do we truly believe that they “understand”? The formulation of rational responses based on algorithmic fluency does not necessarily denote discernment. AI performs well and functions at the level of basic intelligence; however, even advanced chatbots remain machines. Claims that a machine is sentient, in fact, call for an exploration not only of notions of intelligence and consciousness but also of humanity’s own need for affinity and connection.

References

Henning, K. (2021). Gamechanger AI: How artificial intelligence is transforming our world. Springer.

Schneider, S. (2021). Artificial you: AI and the future of your mind. Princeton University Press. 

Are We Post-Human?

Human consciousness has always intrigued me. Humanity’s greatest achievements in both the arts and sciences are attributed to a unique capacity and intelligence that is solely human. However, computation has advanced to the point where information, automation and digital networks are shifting our roles, responsibilities and identity. Who we are has been closely tied to what we do, that is, our societal contribution and our relationships with one another. Technological tools that were created to assist us are not only reshaping human relations but also gaining a presence that changes how we think of ourselves.

According to Dataism (Harari, 2015), the entire human species can be seen as a single data-processing system in which individual beings serve merely as chips. All of humanity’s history can be summarized by four methods of gaining efficiency: increasing the number of processors, the variety of processors, the number of connections between processors, and the freedom of movement along existing connections. We are, therefore, mere information at our core.

If the human experience is unoriginal and humanity’s goal is the Internet of Things, then how would we explain creativity and imagination? Are they, too, algorithmic information flow?

Jaron Lanier (2010) claims that when developers of digital technology design programs that require users to interact with a computer as if it were a person, they are asking us to accept that at least a part of our brain functions as a program (Lanier, 2010, p. 4). Our exchange with the machine “locks” us into a grid of suppositions based on minimalistic ideals and explicit commands.

While simplicity in digital design gets a job done, inexact details that contribute to the whole “human program” may be lost. Peripheral data that does not precisely pertain to the task is ignored, resulting in an exchange template based solely on transactional efficiency. Such fragmentation of what Lanier refers to as “personhood” not only reduces our expectations of each other, but also “who a person can be and…who each person can become” (Lanier, 2010, p. 4).

Lanier (2010) claims that we take for granted the countless ways we are networked, including via social media. We buy into surface proficiency, but in truth, networks perpetuate a “program” in us that is not actually (the whole of) us. As we extend our adoption of these technologies, we empower networks through dependency. Information becomes us. We socialize and work (produce) in a “symbolic environment” of “real virtuality” (Castells, 2000).

While Dataists view such developments as human evolution, their potential results alarm others. A cybernetic “posthuman” is interchangeable with the next. S/he (it) applies “lenticular logics” (McPherson, 2012), whereby nodes are observed without perception of the whole. Perception rises beyond intelligence into consciousness. This is the realm of meaning, understanding and awareness. In the context of human versus machine, this realm represents an argument against the human as information. Organisms are greater than algorithms. We are complex beings on a quest.

Johanna Drucker references an “interior life” that is tampered with by the “grand narratives” of Silicon Valley (Simanowski, 2016, p. 43). Lanier (2010) points to a growing “hive mind” resulting from our sameness and displacement from the “whole”. Humanity rests at a crossroads where artificial intelligence (AI) propels us toward singularity. Our reliance on AI for productivity must be balanced with the acknowledgement that we are not only in charge, but we also retain an authority and a sovereignty that is uniquely human. Such self-recognition requires an awareness that surpasses our roles and responsibilities in the social construct.

If we were mere performances (of tasks), then we would undoubtedly be replaceable by machines. Computers can outperform us. But if we are a capacity greater than information flow assigned to transactions, then it is up to us to summon that force within. Lanier refers to it as consciousness “situated in time” (Lanier, 2010, p. 42). It is a context and an embodiment that sustains us outside a performative dimension. It is “us” in a space where no information flows and yet we are fully defined.


References 

Castells, M. (2000). Materials for an exploratory theory of the network society. British Journal of Sociology, 51(1), 5-24. doi:10.1080/000713100358408

Harari, Y. N. (2015). Homo Deus: A brief history of tomorrow. New York: Harper, an imprint of HarperCollins.

Lanier, J. (2010). You are not a gadget. New York: Alfred A. Knopf.

McPherson, T. (2012). Why are the digital humanities so white? or Thinking the histories of race and computation. Debates in the Digital Humanities, 139-160. doi:10.5749/minnesota/9780816677948.003.0017

Simanowski, R. (2016). Digital humanities and digital media: Conversations on politics, culture, aesthetics and literacy. London: Open Humanities Press.