AI & the philosophy of human language

"Like a modern-day Sphinx, AI poses riddles of profound insight, blurring the line between what is real and what is mere simulation, challenging us to embrace the paradox of existence in the ethereal domain of thought." - ChatGPT

How does a child look at an insect she has seen for the first time? With curiosity mixed with fear. That is how some creators are looking at generative AI models such as ChatGPT. Philosophers and linguists, on the other hand, are taking a different approach: they are trying to use Large Language Models such as ChatGPT to explore language, consciousness, and understanding at a deeper level. At the end of the day, the question that has become central for both groups is this: if LLMs like ChatGPT can already generate such convincingly human-like text in these early phases of development, can the technology become advanced enough to perfectly imitate human writing?

This question presupposes a division of writing into human and non-human, and to draw a comparison between the two, we first need to understand how non-human, AI-generated writing is produced. Large Language Models are machine learning models trained on enormous amounts of human-generated text, drawn from language corpora and text scraped from the web, to analyze the structure of language and how it is used. These models use neural networks to compute associations between words and to estimate the probability of one sequence of words following another. When a text prompt is provided to an LLM, it analyzes the prompt in light of its training and, based on these probabilities, predicts what text should follow. ChatGPT is a chatbot-style LLM with the additional capability of using conversational context in its responses. The huge increase in the computational power of computers and the availability of large amounts of data have made it possible for LLMs to master conversing like humans. This understanding leads to an even more compelling question: does an LLM really understand what it says?
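The "predict what follows from probabilities" mechanism described above can be sketched with a deliberately tiny, hypothetical bigram model. This is an illustration only, not how real LLMs work internally: actual models use neural networks trained on vast corpora, not raw word-pair counts, but the underlying idea of estimating the probability of the next word is the same.

```python
from collections import Counter, defaultdict

# Toy "training corpus" (hypothetical, for illustration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def next_word_probs(word):
    """Estimate P(next word | current word) from the bigram counts."""
    following = counts[word]
    total = sum(following.values())
    return {w: c / total for w, c in following.items()}

def predict(word):
    """Greedy prediction: pick the most probable next word."""
    probs = next_word_probs(word)
    return max(probs, key=probs.get)

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
print(predict("the"))          # 'cat'
```

A real LLM does the analogous computation over whole token sequences rather than single word pairs, which is what lets it continue an arbitrary prompt coherently.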

The question is difficult to answer, mainly because we haven't yet agreed on what it actually means to understand something. Most scientists adhere to the materialist school of thought in philosophy. For them, mapping sensory data about something onto appropriate logical (or linguistic) structures is equivalent to understanding it. On this view, LLMs already understand language more deeply than we do, since they have been trained on more linguistic data than any person could actively read or hear in an entire lifetime. The only problem, materialists would say, is that the training data is not entirely accurate, so future work should focus on fine-tuning LLMs to improve their accuracy.

Others argue that to understand something at a fundamental level, one needs to internalize that knowledge to the point where it becomes intuitive. This view is rooted in the dualist school of thought in philosophy, according to which body and mind are two separate entities of different natures, the former physical and the latter metaphysical. On this philosophy, AI might be able to simulate the physical factory of thought, i.e. the brain, but it cannot simulate the essential driver of thought, i.e. the mind, where the complex phenomenon of consciousness resides.

The philosophy of mind, consciousness, and language goes well beyond these two classical schools of thought, and the questions of artificial consciousness involve far more intricacies. However, when comparing AI-generated writing with human writing, AI currently lacks one essential capability that shapes human writing to its core: the capability to experience.

Experiencing can be defined as being subject to multiple sensory perceptions, consciously associating those perceptions with the present moment, and later being able to reflect upon them. LLMs currently function on an entirely different mechanism. When a prompt is provided to an LLM, it generates a response, and the life of that instance (one among many concurrent instances) ends. For the next prompt, the model also takes the previous conversation as input to create a contextual response, but that does not mean the previous conversation has become part of its experience: there is no centrally conscious instance of the model that is active all the time, receiving sensations from its billions of concurrent instances and forming a conscious experience in some central store.
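The statelessness described above can be sketched in a few lines. In this hypothetical example, `fake_llm` stands in for a real model; the point is that each turn is a fresh call whose only "memory" is the transcript we explicitly re-send, which is how chat interfaces over stateless LLM APIs generally work:

```python
# Sketch of a stateless chat loop: nothing persists between model calls
# except the history we choose to resend. `fake_llm` is a purely
# illustrative stand-in for a real language model.

def fake_llm(transcript: str) -> str:
    """Stateless stand-in: its output depends only on the text given now."""
    return f"[reply to {transcript.count('User:')} user message(s)]"

def chat_turn(history: list, user_message: str) -> list:
    history = history + [{"role": "user", "content": user_message}]
    # Flatten the entire conversation into one prompt. If we dropped a
    # message here, the model would have no way to recover it.
    transcript = "\n".join(
        f"{m['role'].title()}: {m['content']}" for m in history
    )
    reply = fake_llm(transcript)
    return history + [{"role": "assistant", "content": reply}]

history = []
history = chat_turn(history, "Hello!")
history = chat_turn(history, "Do you remember me?")
print(history[-1]["content"])  # '[reply to 2 user message(s)]'
```

The model "remembers" the first message only because the caller re-sent it; there is no persistent instance accumulating experience across turns.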

Having a textual conversation is only the simplest form of experience compared to that of humans, who not only take in visual, auditory, tactile, olfactory, and gustatory sensations but also form complex emotions based upon them. This capability for conscious experience, when poured into writing, sets human writing apart, because while AI can simulate the style of human writing, it cannot simulate the experience of writing and reading, the end for which language is just a means.

We can say that the human capacity to experience reality at a more conscious level, and to recognize the experiences and feelings of the reader, gives human writing a significant edge over AI-generated writing. This settles, at least for now, the question of whether AI can perfectly imitate human writing, but it compels us to confront even more significant questions: what end do we want to achieve, for which language is just a means? And do the emerging fears about AI reflect a hidden perception that, whatever it is that makes our writing human, we have been losing some of it lately?

Comments

  1. Arsalan Aswani

    AI and human are also like mind and soul for me. Each has its own capabilities. AI can generate material that a human can't even think of. But a human can solve a new problem in life, and AI can't (at present). AI is dependent on humans, but humans are not dependent on AI. So in future, the capability of the human mind will have to determine the future of AI.
    Much impressive 💯
