What is the Chinese Room Argument?
The Chinese Room Argument is a thought experiment designed to refute conventional claims about artificial intelligence. According to John Searle, the philosopher who proposed it, the thought experiment shows that a computer cannot truly be said to have a mind or to understand natural language. Instead, the various proposed forms of artificial intelligence are little more than sophisticated symbol processors. Naturally, this argument has numerous opponents in the artificial intelligence community.
In the Chinese Room Argument, one is first asked to imagine a computer which has been programmed to converse in Chinese so convincingly that it passes the Turing Test, meaning that a human would mistake the computer for another human. Next, one is asked to imagine a speaker of English, with no knowledge of Chinese, sitting in a room stocked with English directions for responding to material presented in Chinese: manuals, charts, and so forth.
Next, someone outside the room passes in a slip of paper with a question written in Chinese. Using the English directions, the person inside formulates an answer and passes it back. The exchange continues, with the person outside fully believing that the person inside can converse in Chinese.
In fact, the person inside can neither speak nor write Chinese; to him or her, the questions are meaningless scrawls. Instead, the person in the room looks for patterns, symbols, and syntax, using these as cues to select responses. Although the person in the room has fooled the person outside, he or she cannot truly be said to be conversing in Chinese.
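The room's procedure can be sketched in a few lines of code: a purely syntactic lookup that maps incoming symbols to prescribed outgoing symbols. This is only an illustrative toy, not Searle's actual rule book; the phrases and the fallback response are hypothetical stand-ins.

```python
# A minimal sketch of the room's rule book: pattern-to-response
# lookup with no representation of meaning anywhere in the program.
# The entries below are hypothetical stand-ins for Searle's manuals.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Can you speak Chinese?" -> "Of course."
}

def room_reply(question: str) -> str:
    """Match the incoming symbols against the rule book and emit
    the prescribed symbols. Nothing here models what they mean."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(room_reply("你好吗？"))  # 我很好，谢谢。
```

From the outside, the replies look fluent; internally there is only string matching, which is exactly the gap the argument turns on.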
Using this scenario as an example, Searle suggests that machines which “speak” natural languages are not, in fact, using those languages at all. According to the Chinese Room Argument, such machines are simply manipulating symbols and matching patterns to formulate convincing replies. Therefore, he argued in a 1980 paper, “Minds, Brains, and Programs,” it is impossible to create a true artificial intelligence, because understanding natural language is a key to intelligence and independent thought.
All sorts of counterarguments have been leveled against the Chinese Room Argument. The best known is the systems reply, which points out that the person is only one component of a larger system, and that the system as a whole understands and can communicate in Chinese even if no individual component can. Such replies are used to advance the study of artificial intelligence as well as to respond to the Chinese Room Argument itself.
Understanding goes beyond just producing a reply. A person can look at a photograph and understand the contents and why the photographer took the photograph. No reply is necessary for a person to understand it. What is the equivalent process for a computer if no "reply" is necessary?
Leopold's Theorem on Understanding: to understand requires that the material be capable of changing the observer, so that future responses can be different because of the interaction with the material.
Naturally, there are different levels of such understanding. There can be very simple understanding (a menu-driven phone system) and the more complex understanding of a person reading a book. Regards, Leopold
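One way to make the comment's criterion concrete is to contrast a responder that is never changed by its input with one whose internal state changes on each interaction, so its future responses can differ. This is only a sketch of the criterion as stated in the comment; the class names and behavior are hypothetical.

```python
class StatelessResponder:
    """Produces a reply but is never changed by the material:
    by the comment's criterion, no understanding occurs."""
    def respond(self, material: str) -> str:
        return "noted"

class StatefulResponder:
    """A candidate for the comment's criterion: each interaction
    changes its state, so future responses can be different."""
    def __init__(self):
        self.seen = []
    def respond(self, material: str) -> str:
        self.seen.append(material)
        return f"noted; {len(self.seen)} interaction(s) so far"

s = StatefulResponder()
print(s.respond("photo"))  # noted; 1 interaction(s) so far
print(s.respond("photo"))  # noted; 2 interaction(s) so far
```

The stateless responder answers identically forever, like the room's fixed rule book; the stateful one is altered by what it observes, which is the distinction the comment draws.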