Maggie Ciskanik, M.S., MSc. | April 4, 2023 | 6 min read

Why Are We So Obsessed with ChatGPT?

With all the hype, one might think we are on the verge of creating something akin to Star Wars’ C-3PO (See-Threepio) or Star Trek’s Data. Are we?

Submarines move through water and planes fly through the air, but we do not confuse a submarine with a fish or an airplane with a bird. Why, then, do programs like ChatGPT seem “intelligent” just like us? Why do the amazing successes of such programs at creating sonnets, producing artwork “in the style of,” and writing essays (even some science article abstracts!) make them seem, well, “human”? The answer might be more straightforward than you think, but the road to it has a few twists and turns.

To start down that road, it will help to review what ChatGPT is and to explain briefly how it is programmed and trained.

If you ask ChatGPT what it is, this is the answer it gives:

"ChatGPT is a large language model that uses deep learning techniques to generate human-like text. It is based on the GPT (Generative Pre-trained Transformer) architecture, which uses a transformer neural network to process and generate text. The model is pre-trained on a massive dataset of text, such as books, articles, and websites, so it can understand the patterns and structure of natural language. When given a prompt or a starting point, the model uses this pre-trained knowledge to generate text that continues the given input in a coherent and natural way."

Let’s unpack this definition.

Most people are familiar with the term Artificial Intelligence (AI), which refers to software that mimics human-like capabilities. Simple pattern recognition, such as making book suggestions based on past buying history, is an example (a toy sketch of the idea follows below). Such early forms of machine learning relied on explicit lists of instructions (algorithms); now programs have been developed that allow machines to “learn” without such step-by-step instructions.
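To make that book-suggestion example concrete, here is a minimal sketch in Python. The purchase histories are invented for illustration; the point is only to show the kind of pattern matching involved: recommend whatever is most often bought alongside a given title.

```python
from collections import Counter

# Toy purchase histories (invented for illustration).
histories = [
    {"Dune", "Foundation", "Hyperion"},
    {"Dune", "Foundation"},
    {"Dune", "Neuromancer"},
]

def suggest(book):
    """Recommend whatever is most often bought alongside `book`."""
    co_bought = Counter()
    for basket in histories:
        if book in basket:
            co_bought.update(basket - {book})
    return co_bought.most_common(1)[0][0] if co_bought else None

print(suggest("Dune"))  # -> 'Foundation' (bought together twice)
```

Everything here is spelled out by hand; what modern machine learning adds is that far richer patterns are extracted automatically from data rather than being coded rule by rule.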

ChatGPT is an example of a Large Language Model (LLM). As mentioned in the definition, an LLM is “trained” on huge amounts of text and data and, in its simplest form, predicts the next most statistically probable word in a sequence. To “learn,” it needs thousands to millions of examples. The explosion of information available on the internet (millions of videos, audio recordings, articles, and pictures) is the training ground for these models.
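To see what “predicts the next most statistically probable word” means in practice, here is a minimal sketch of the simplest possible language model: a bigram counter trained on a toy corpus of our own invention. A real LLM replaces these counts with a transformer network and billions of parameters, but the core task is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus (a real LLM trains on billions of words).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the statistically most probable next word seen in training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (follows 'the' twice; 'mat' and 'fish' once)
```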

Advances in “computational efficiency” and the “fine-tuning” of these programs through, wait for it, human feedback (Reinforcement Learning from Human Feedback, or RLHF) have enabled the most recent startling improvements. Hence the current buzz and flurry of articles. This TechTalks article explains the process in detail. The fact that direct human feedback on an LLM’s outputs has brought significant improvements is part of the answer to why these chatbots fascinate us so much.
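The real RLHF process is far more involved than can be shown here (it trains a separate reward model on human rankings and fine-tunes the LLM with reinforcement learning), but this toy sketch, with a hard-coded stand-in for the human rater, captures the basic loop: the model proposes answers, feedback rewards the good ones, and the model shifts toward them.

```python
import random

# Toy "model": a probability distribution over three canned replies to one
# prompt. Everything here is invented for illustration; real RLHF fine-tunes
# a neural network, not a list of weights.
replies = ["I don't know.", "Paris is the capital of France.", "42."]
weights = [1.0, 1.0, 1.0]  # the model starts with no preference

def sample_reply():
    """Sample a reply index in proportion to its current weight."""
    return random.choices(range(len(replies)), weights=weights)[0]

# Stand-in for a human rater: we hard-code a preference for the informative
# answer (index 1). In real RLHF, people rank outputs and a separate
# "reward model" learns to imitate their rankings.
def human_reward(index):
    return 1.0 if index == 1 else -0.5

# Crude reinforcement loop: nudge weights toward rewarded replies.
for _ in range(200):
    i = sample_reply()
    weights[i] = max(0.01, weights[i] + 0.1 * human_reward(i))

print(replies[max(range(len(replies)), key=lambda i: weights[i])])
# Prints "Paris is the capital of France." almost every run.
```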

What Does the Fascination with ChatGPT Tell Us About Us?

"Name another entity in the cosmos that tries to prove it is not unique."
—Walker Percy, Lost in the Cosmos

Some in the tech and science world complain about this bias, termed “human exceptionalism”: the belief that human beings are a unique species. A dent in this belief emerged from years of animal research demonstrating that animals have abilities previously thought to be restricted to humans, from learning sign language to using tools. The more we have learned, the more some tend to think that animals are just like us. A thoughtful reminder, however, came from a veteran animal researcher, David Premack. In a seminal 2007 paper, he cautioned that whenever we find a similarity between animals and humans, the next question should be: how are we different?

The same question should be applied to “artificial intelligence.”

Well, anyone who has played around with ChatGPT and similar LLMs is initially impressed by the results. They can write brilliant summaries of information and even suggest new drug targets, for example. But humorous (or not so humorous) stories abound about how they also “make things up” like us (the official tech term is “hallucinate”) and can be rude like us!

Human language has been one of the signs that we are different from other animals.

Now we have created something that IS like us. At least it sounds like us.

Pinocchio and Talking Toys

“Do not mistake similarity for equivalence.” —David Premack

No matter how competent these chatbots become at generating essays, no matter how “human-like” the communication seems, these LLMs are not “just like us.” They generate text based on probabilities of word associations drawn from information generated by humans. They are exceptionally good at detecting patterns and summarizing large amounts of data. They mimic, at light speed, our ability to make judgments and link concepts together; it is not just spitting out a mathematical formula. Another feature that appears “spooky” is that an LLM is trained in such a way that it appears to be “conscious” and to “know” what it is talking about. No wonder we are impressed.

But do they “know” or “understand” the contents of these texts? In 1980, the philosopher John Searle developed the Chinese Room argument. In it he describes a man in a room responding to Chinese characters slipped under the door. By following a series of if-then rules from a computer program, he is able to respond with appropriate Chinese symbols. Although he understands nothing of what is asked or said, he is able to create the illusion that he is a Chinese speaker.
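The argument is easy to simulate. In this toy sketch (the rule book and its Chinese phrases are invented for illustration), the “man in the room” is nothing but a lookup table: he matches the shapes of incoming symbols and copies out the paired reply.

```python
# A toy version of Searle's Chinese Room: the "man in the room" is a rule
# table mapping incoming symbols to outgoing symbols. Following the rules
# requires no understanding of what any of the symbols mean.
rule_book = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我叫小房间。",    # "What is your name?" -> "I am called Little Room."
}

def man_in_the_room(slip_of_paper):
    # He matches shapes, not meanings, and copies out the paired reply.
    return rule_book.get(slip_of_paper, "请再说一遍。")  # "Please say that again."

print(man_in_the_room("你好吗？"))  # fluent-looking output, zero understanding
```

The output looks fluent, yet nothing in the program grasps what the symbols mean; Searle’s claim is that a system manipulating tokens, however sophisticated, is in the same position.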

Such observations generate other considerations. Michael Harris, a professor of mathematics at Columbia University, argues in a series of articles that it takes “courage to continue to use words—like ‘understand’—after we realize we cannot explain their meaning.” He continues:

Whatever understanding may be, critical thinking is the recommended means by which it can be attained. 

Echoing Einstein, who said, “Imagination is more important than knowledge,” Harris claims that the very human and subjective experience of wonder is essential to the enterprise of math and science. It is not enough to “know facts.”

Others see these developments in AI as heralding the age of bionics, in which a new species of enhanced human beings will be created. Quoting futurologist Ray Kurzweil, the philosopher Jonny Thomson explains what this means:

As such, the machines we make will allow us to “transcend the human brain’s limitations of a mere hundred trillion extremely slow connections” and overcome “age-old human problems and vastly amplify creativity.” It will be a transcendent, next-stage humanity with silicon in our brains and titanium in our bodies.

Thomson, however, argues that this is not inevitable. Invoking the ideas of David Chalmers and “the hard problem of consciousness,” he states:

Having a mind, or general human intelligence, involves all manner of complicated (and unknown) neuroscientific and philosophical questions…[and] is a different kind of step altogether; it is not like doubling flash drive memory size.

In other words, the computational theory of mind is a metaphor. There is more to human intelligence than computation and calculation. Doubt and a lack of understanding also play a critical role in the advancement of science and technology. Using Stephen Greenblatt’s definition of wonder (Marvelous Possessions: The Wonder of the New World, 1991), Harris makes this crucial observation:

Wonder… is a state in which understanding is missing; but as [Greenblatt] fails to note, it inspires not resignation but an active effort to acquire an understanding of the mystery.

As the debates continue, we need to remember that chatbots and other AI tools are products of human ingenuity and creativity and are meant to enhance human understanding for the good of humanity and the earth we inhabit.

Let’s not lose sight of that.
