Reference Information
Minds, Brains, and Programs
- John R. Searle
- Behavioral and Brain Sciences (1980)
Chinese Room on Wikipedia
Summary
The Chinese Room is a thought experiment Searle devised in which he imagines himself responding to cards with Chinese characters slid under a door to him. He does not understand Chinese; instead, he follows a fixed set of rules for producing responses that look correct to someone outside the room who does understand Chinese. Searle then presented several axioms which reinforced the conclusions he wanted to reach.
A program is bound to syntax: it manipulates symbols according to rules, but it has no semantics, no grasp of why the symbols fit together the way they do. A mind, on the other hand, understands the reasoning behind its actions rather than merely performing them. No amount of complex syntactic manipulation is a substitute for that understanding. Behavior can be simulated well enough to look like what a real person would do, but copying behavior does not imply understanding of that behavior. From this, Searle concludes that a program cannot be a replacement for a mind.
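To make the syntax-versus-semantics point concrete, here is a tiny sketch I put together (my own illustration, not anything from Searle's paper) of what the man in the room is doing: a lookup table pairs incoming characters with outgoing characters, and the "answers" come out right without anything in the program representing what the characters mean. The phrases and the rule table are invented for the example.

```python
# Purely illustrative sketch of the Chinese Room: a rule book maps
# incoming symbols to outgoing symbols. The rules below are made up;
# nothing here encodes meaning, only shape-matching on the input string.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def room(symbols: str) -> str:
    """Return whatever symbols the rule book pairs with the input.

    The function replies fluently whenever a rule exists, yet it has no
    access to what any of the characters mean -- it is syntax only.
    """
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    print(room("你好吗？"))  # a sensible-looking reply, with zero understanding behind it
```

The point of the sketch is that making the rule book bigger or the matching cleverer changes nothing in kind: it is still symbol shuffling, which is exactly Searle's complaint.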
"Look at you, hacker. A pathetic creature of meat and bone, panting and sweating as you run through my corridors. How can you challenge a perfect, immortal machine?" - SHODAN
Searle states that brains cause minds through what he calls "causal powers." If anything else were capable of causing a mind, it would need "equivalent causal powers," so an artificial brain would have to possess them as well. But a program, being pure symbol manipulation, lacks any variant of the causal powers a brain has and cannot produce a mind on its own; it follows that the brain does not produce the mind simply by running some sort of program.
Discussion
What I essentially got from Searle is that programs are incapable of actually being a brain; at best they can only simulate one. Regardless of whether people agree with him, he prompted a great deal of discussion, and for that alone he was successful in my eyes.
One of the first ideas that popped into my head while reading about the Chinese Room was rogue AI. Anyone who has played the System Shock games, especially System Shock 2, knows the dilemmas that situation presents when you are stranded on a ship in space. A similar scenario plays out in the Terminator movies, where Skynet becomes self-aware and decides humans are a major threat. As technology has advanced, the thought of Skynet or SHODAN becoming a reality has been in the back of many of our minds, even if not in a completely serious way. But from what Searle presented here, I drew the conclusion that, at least by today's standards, this is not possible. If becoming self-aware requires a mind with real thoughts that could lead to hostile intent, then no program designed by man is capable of dooming man. At least not in the rogue-AI sense of things.
While this is comforting on many levels, I still do not doubt the capability of humans to push technology further. If we ever truly figure out what makes a mind a mind and find a way to synthesize it, we might require the help of Arnold in the future. But would that synthesized material ever be considered a program? What if programs at that point are almost biological beings performing operations? I suppose Searle's argument would still be valid, but these technologies could certainly raise questions that challenge it.
It's nice to know that I'm not the only one whose mind jumped to fictional AIs. I like your view that even if his argument is wrong, it was still successful because of the debate it sparked.
Yeah. Anything that prompts a fun and entertaining discussion is fine by me. Getting people to view things differently adds a lot of flavor to life.