Ontological reply to the Chinese room argument

Post by Roddus »

Dreaming up replies to John Searle's Chinese room argument (CRA) is a great way to idle away the hours, but rebutting it is probably really important to AI: it is arguably the strongest theoretical attack on the computational theory of mind. My five cents' worth is the "ontological reply" (which has actually come out of a fair bit of research).

The ontological reply is this: the room's ontology is deficient. The only things the room has to manipulate are symbols. Searle compares the room (a computer) receiving Chinese symbols to a human receiving Chinese symbols. The room fails to immediately understand what the symbols mean, and from this Searle goes on to argue that computers will never understand anything. But humans don't immediately understand Chinese symbols either: first, they have to learn Chinese. Why doesn't the Chinese room try to learn Chinese? Because its ontology is deficient. Learning Chinese involves developing memory structure, and there are no structural elements in the room's ontology.

The ontology needs structural elements (e.g., connections, nodes) as well as symbols (the content of structure). Then the rule book can mandate manipulation of structural elements as well as symbols, and so build memory structure. The structural elements might or might not be syntactic, and might even be semantic. In any case, Searle's premiss that "all computers can do is manipulate symbols" is false: computers can easily manipulate structure, and often do. The CRA is unsound.
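To make that last point concrete, here is a minimal sketch (mine, not Searle's, and every name in it is a hypothetical illustration) of a "rule book" step that manipulates structure as well as symbols: each incoming symbol is stored in a new node that gets connected to the previous node, so repeated input builds memory structure rather than just producing symbol-for-symbol output.

```python
# A minimal sketch, not from the post: a rule-book step that manipulates
# structural elements (nodes and connections) as well as symbols.
# All names are hypothetical.

class MemoryGraph:
    """Structural elements: nodes carry symbols; edges connect nodes."""

    def __init__(self):
        self.nodes = {}     # node id -> symbol (the content of structure)
        self.edges = set()  # (id, id) pairs: connections between nodes
        self._next_id = 0

    def add_symbol(self, symbol):
        """One rule-book step: store the symbol in a new node and connect
        that node to the previous one, building memory structure."""
        nid = self._next_id
        self.nodes[nid] = symbol
        if nid > 0:
            self.edges.add((nid - 1, nid))  # structural manipulation
        self._next_id += 1
        return symbol                       # symbolic output, as in the room


memory = MemoryGraph()
for s in ["你", "好", "吗"]:
    memory.add_symbol(s)

print(memory.nodes)  # {0: '你', 1: '好', 2: '吗'}
print(memory.edges)  # {(0, 1), (1, 2)}
```

After the loop, the program has left behind a growing graph, not just a stream of output symbols, which is exactly the kind of structural manipulation the reply says the room's ontology lacks.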