Thought Shortage: The Weakness of Thought Experiments in Computer Science

(Here’s my RBA for my writing class, the Rhetoric of Research. Unfortunately, I didn’t work through it as much as I had wanted and am not entirely confident in the depth of my research or the validity of my thesis, but I’m not horribly embarrassed by it, so read if you care. Any comments are welcome.)


Imagine you’re standing at a fork in the railroad tracks when you see a train speeding down one track toward five workers. You can save them by flipping the switch, but if you do, the train will run over one worker on the other track. Would you be willing to do it? Now imagine you’re on a bridge over the railroad tracks with a large man standing next to you. On the tracks are five workers who will be run over by the oncoming train unless you push this man onto the tracks. Would you be willing to do it? (McClure).

Imagination frames the thought experiment, or Gedankenexperiment. Imagine your twin going to Alpha Centauri on a spaceship traveling near the speed of light. Imagine a melting piece of wax. Thought experiments challenge the conventional with new rules and show us the underpinnings of our beliefs. Nicholas Rescher points out the gap between what is “conceptually possible” and what is “actually possible” (xi). Although our imaginations contain a vast space, we tend to limit ourselves to a subset of it: reality. In developing a thought experiment, one presents a creative perspective to challenge other researchers in a new way, hopefully changing how they fundamentally view a problem.

Thought experiments have been used effectively in philosophy and physics to enable breakthroughs, yet in computer science, the few thought experiments have caused neither substantial advancement nor empirical study. The problem is that the computer science community comprises scientists more similar to physicists in analysis and critique, while the issues presented are more philosophical. While we tend to treat physics and computer science very similarly, the theoretical worlds of these two domains diverge. Theoretical physics can ask hard questions about reality, but theoretical computer science follows the model of philosophy, with unresolved, open-ended questions and no potential for specific results. Peter Elbow observes that “To write like a historian or biologist involves not just lingo but doing history or biology…” (138). When one presents a philosophical argument to a scientific community, the speaker and audience don’t match. They speak different languages, and the resulting conversation contains no more than mismatched responses without actual productivity. The rhetorical disconnect between these two styles prevents productive exchange and ultimately creates the barrier to thought experiments in computer science.

Physics and Computer Science
I first want to establish that the computer science and physics communities share similar values and methods. Many programmers today come from closely related fields, including physics and math. According to the American Institute of Physics (AIP), in 1998-1999, 20-30% of physics undergraduate and master’s students were employed in software 5 to 8 years after graduating, a larger percentage than even in “science & lab” and engineering. Furthermore, compare explanations of physics skills from Bucknell University and computer science skills from the College Board:

…a good mathematical background (always useful on the job), someone who has had a lot of practice analyzing complicated problems methodically and trying to come up with logical solutions, someone who has probably had some good experience with computers, and someone who will be able to actually understand a lot of the modern technology that is so central to a lot of businesses today.

Precise and mathematical but also able to think abstractly. In order to solve problems, you’ll need to think like a human and like a computer at the same time. This requires creativity, imagination, and the ability to think logically.

These two fields emphasize extremely similar skills despite the different domains where these skills are applied. A logical, methodical approach to deconstruction seems invaluable to evaluating results from thought experiments. Essentially, the methods of both are classically scientific.

Despite the apparent similarities between physics and computer science, a major discrepancy exists in the value and quantity of their thought experiments. Far younger than physics, computer science has progressed almost entirely within the past century. Yet over the same period, physics has produced many thought experiments, including the Twin Paradox, Schrödinger’s Cat, Hawking Radiation, Quantum Suicide, and more. Unusually, the abundance of thought experiments in physics more closely matches that of the more distant field of philosophy.

Physics and Philosophy
Going back to Ptolemy, Aristotle, and beyond, physics has existed as an academic discipline for over two thousand years. Humans behave with assumptions about how the world works, a basic framework to explain normal phenomena. For example, we assume that an object will fall to the ground unless something supports it. Physics attempts to reconcile the borderline cases, the bizarre, and the uncommon into a common, clean understanding. When a lodestone inexplicably suspends iron in mid-air, our assumptions must be wrong. In these cases, we must discard our common understanding, think through the issue, and develop a new model of the world.

In the wider space of thought experiments, researchers reject assumptions to find underlying principles that explain both the common and the uncommon. As a study of real-world phenomena, thought experiments in physics fit naturally into the research being done. Rescher paraphrases physicist Ernst Mach as saying “…any sensibly designed real experiment should be preceded by a thought experiment…” (31). The thought experiment provides a possible explanation and outcome for a belief that could feasibly be studied. If a conclusion doesn’t agree with reality, one can often run physical experiments to confirm or refute it.
Physics also benefits from a relatively unified understanding of the field. At any time, several major paradigms exist with which one either must generally agree or understand well enough to disagree. Physicists share an understanding of black bodies and electromagnetic radiation and can instead focus on underlying causes for what they observe.

Philosophy, however, lacks clarity of jargon and measurable development. Unlike breakthroughs in physics, a breakthrough in philosophy only generates discussion and a different perspective. When philosophers consider “consciousness” and “existence,” their discourse focuses on what defines these vague notions. Philosophers propose thought experiments because, unlike scientists, they cannot analyze spectrometer data or run statistical analyses. The focus here is “not with observation but with conceptualizations” (Rescher 47). As in physics, people have an intuitive sense of the meanings of philosophical terms, known as “folk psychology” (Davies). Unfortunately, while we can conclusively see an object fall and attribute it to equations describing motion, we cannot look at a dog and conclusively assert its consciousness.

Even the rhetorical effects of thought experiments in the two fields differ. Marguerite La Caze explains that thought experiments “function to change our views, to force or persuade us to agree and to help us ignore the underlying assumptions involved” (81). And “many experiments ‘simply’ demonstrate a point” (Rescher x). While these definitions convey a strong rhetorical intent, Lloyd Bitzer would disagree. He outlines three requirements for a rhetorical situation: exigence, audience, and constraints (7). Let’s limit the domain of La Caze’s definition to philosophy. The field lacks a specific exigence beyond the general need to expand knowledge. The audience consists of philosophers who generate more arguments for other philosophers, so this generally closed community bears little impact on reality. Without the capability for change, the audience is not a rhetorical audience (Bitzer 81).

Now let’s limit the argument to physics thought experiments. Humans explore the natural world partly out of curiosity but, more pragmatically, to improve our standard of living. Physics has given us the wheel, fire, the radio, and the slinky, a great enough exigence to enable a potential rhetorical situation. And researchers and entrepreneurs excitedly apply discoveries in new ways to improve reality. Even Hawking Radiation, a particularly esoteric subject, helped enable the construction of the Large Hadron Collider by reassuring us that it would not create an Earth-swallowing black hole (Phillips). Regardless of whether Bitzer properly defines rhetoric, his guidelines demonstrate the difference in intent between these two fields.

Computer Science Thought Experiments
People more commonly associate computer science with “Facebook” or “Windows,” but many researchers focus on theoretical computer science, analyzing questions from natural language processing to complexity theory. In considering thought experiments in computer science, I want to analyze their significance in cognitively oriented artificial intelligence. Humans, from Descartes and earlier, have debated the origin of thought and the distinction between machines and us, and this particular branch of artificial intelligence (AI) applies recent discoveries in computing to these problems.

A long-standing question now relevant to computer science is whether a computer that exhibits intelligence actually “understands” what it is doing. The 17th-century philosopher and mathematician Gottfried Leibniz considered what a thinking machine would look like. Imagine a hypothetical thinking machine whose parts have been expanded until it reaches the size of a barn: what would you see upon walking inside? “Nothing but parts which push and move each other, and never anything that could explain perception” (Leibniz). Effectively, he states that thinking machines are impossible (Sorensen 260). Leibniz’s argument seems quite compelling at first, attacking the “functionalist” perspective, in which the mere physical composition of a mind can explain our consciousness. With recent discoveries about the structure of the brain, however, our own minds seem to fit this strict process, albeit biological instead of mechanical. We can easily see ourselves as computers, made of carbon instead of silicon. From that perspective, Leibniz’s argument would seem to discredit our own perception as well. Perhaps the real question is how one defines “perception.”

David Cole, however, points out a problem with expanding the machine. As a counter thought experiment, imagine expanding a drop of water until its molecules reach our size. At this point, the water would lose “wetness,” a quality that conventionally defines it. Similarly, a larger machine might lose thought, which might still exist on a smaller scale. Cole localizes a weakness of thought experiments: analogy. Analogies tie ideas together into fascinating syntheses, but the distinct nature of their components invites complications. We draw analogies to emphasize large themes while avoiding smaller details, yet the rigor of research often forbids simplification, entirely invalidating these analogies.

A natural response to Leibniz would be to ask what defines a thinking machine. Depending on the qualifications for “thinking,” the possibilities of the interior view are infinite. In “Computing Machinery and Intelligence,” Alan Turing presents the imitation game or, as it’s more popularly known today, the Turing Test. Imagine a human questioner at a terminal typing questions to and reading responses from two different entities: one human, one computer. After a satisfactory amount of time, the questioner must determine which of the two entities is the human and which is the computer, based solely upon the text. Thus, if the computer could fool a real human into labeling it the human at least half of the time, then it would have successfully exhibited intelligence.

Turing’s work was important because it opened the field for discovery. Without a basis for computing, one could hardly consider the nature of computer intelligence beyond its potential existence. The actual impact of his discovery, however, is somewhat discounted by the vagueness of what constitutes intelligence. Since the Turing Test, many researchers have created simple programs that exhibit Turing-styled intelligence, including ELIZA, a parody of a Rogerian therapist (Weizenbaum 42). Beginning in 1990, the annual Loebner Prize has promised $100,000 to the creator of the first program to pass the Turing Test. While the cash would seem to drive innovation in the field, the prize has been controversial. Marvin Minsky, a founder of AI research and an MIT professor, “calls Loebner’s prize ‘obnoxious and stupid’ and has offered a cash award of his own to anybody who can persuade Loebner to abolish his prize and go back to minding his own business” (Sundman). The nature of the Turing Test allows clever tricks to stand in for complicated, cognitive-styled processing, and many researchers today see the so-called “chatterbots” as trivial to actual AI development. Even in ELIZA, “Input sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text” (Weizenbaum 36). As much as the Turing Test has come to represent the culture of computer science research, researchers in the field reject its potential to advance it.
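Weizenbaum’s keyword-and-decomposition scheme can be sketched in a few lines. This is a minimal illustration, not ELIZA’s actual script: the patterns, responses, and the `respond` helper below are invented for the example, while the real program used a much larger, more elaborate rule set with ranked keywords and memory.

```python
import re

# Hypothetical ELIZA-style rules: each pairs a decomposition pattern
# (a keyword plus captured context) with a reassembly template.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no keyword matches

def respond(sentence: str) -> str:
    """Return the reassembly for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

The trick Weizenbaum describes is visible here: the program never models meaning, it only shuffles the user’s own words into canned templates, which is exactly why researchers dismiss such chatterbots as trivial.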

Case Study of the Chinese Room Argument
A deeper example of the interaction between scientific and philosophical perspectives is John Searle’s Chinese Room Argument (CRA). Imagine a computer capable of taking in Chinese as input and outputting perfect responses, also in Chinese. From the outside, the computer would appear to exhibit intelligence, and perhaps even to “understand” Chinese. But imagine that the computer were just a man on the inside, like the Mechanical Turk. And this man (John Searle himself in the original paper; for clarity, I will instead refer to the operator as the Turk and retain Searle as the author) doesn’t understand Chinese. He simply has a book that tells him exactly what squiggle to output for every possible squiggle that comes in, with no actual comprehension of or meaning in his task. Thus, we have an apparently intelligent computer with no capability for understanding within it.

Searle’s work has generated a significant amount of controversy and response. His conclusion has come under significant attack, from users on Internet Relay Chat (IRC) to a publication entirely devoted to CRA responses. According to Bringsjord and Noel, “Pat Hayes, John McCarthy, and Marvin Minsky… have said outright that CRA is silly, and… this argument will wither away to a quaint curiosity” (145). Much of the literature focuses on how the CRA has failed as a thought experiment. Cohen argues that a good thought experiment must be short, transparent, and definitive. Unfortunately, the CRA exhibits none of these qualities. Whereas Leibniz’s mill is three sentences, the CRA is seventeen. Whereas Turing clearly defines “intelligence,” Searle only vaguely defines a thinking machine and the bounds of “understanding.” This lack of definitiveness has allowed counter thought experiments to reach conclusions different from his own through only small tweaks to his design. This community, more familiar with concrete, specific details, found his definitions not rigorous enough to be valuable.

Rescher points out that one of the most common failings of thought experiments is that “they prove inconclusive owing to insufficiencies in their specification” (21). In his effort to apply his theory broadly, Searle constructs a situation whose intent he cannot clearly define. Compare this to Galileo’s clear explanation, where no one debated the existence of mass. Or to the train problem in the introduction, where no philosophical terms appear at all. Stevan Harnad, by contrast, finds it necessary to define what “mechanism” and “implementation” mean in his argument against Searle. To avoid the confusion that Searle introduced, he ensures his own stance’s clarity by qualifying the implications of his conclusions.

Another shortcoming of Searle’s argument comes from the high-level perspective he adopts. Searle proposes that “after a while the programmers get so good at writing the programs and [the Turk] get[s] so good at manipulating symbols that my answers are indistinguishable from those of native Chinese speakers” (5). He argues from the perspective of a philosopher, and his proposals in computer science meet a scientific critique. While philosophers may accept his statement as a theoretical (perhaps inevitable) possibility, Harnad questions the CRA’s feasibility, noting that “there is no evidence at all that it would work.” Searle proposes that the Turk simulates Chinese understanding until he suddenly is capable of it. From a real-world perspective, this seems a large leap; simulations of jet engines don’t directly translate into functional, complete products.

This disconnect separates computer scientists from philosophers: the ability to suspend disbelief for a thought experiment. In the thought-experiment world, where imagination means more than substance, one must break normal constraints to explore possibility more fully. Harnad, a cognitive scientist, follows his attack with references to the technical difficulties of developing a successful chess-playing AI, which seem irrelevant to the importance of the thought experiment. When we bind ourselves only to what our current reality seems to allow, we reject any value that thought experiments could add through extrapolation.

Compare the mentioned responses to those of a philosophical thought experiment from Sydney Shoemaker. Imagine that two men, Robinson and Brown, have their brains exchanged in a surgery accident. Upon reawakening, the man with Robinson’s body and Brown’s brain, whom Shoemaker labels “Brownson,” will see his (Brown’s old) body before him and have all the memories and characteristics of Brown. This thought experiment shows that we treat psychological characteristics as more important to our identity than physical ones (La Caze 74). More importantly for my argument, most consider it valid even without considering the complications of the surgery. Shoemaker’s thought experiment appeals to our instincts about the world: not necessarily our emotions, but what we immediately take to be true. The logical structure of the argument has obvious oversights; one could easily argue that transferring the whole neural network, including all nerve endings and the spinal cord, matters more than transferring the brain alone. But in this case, what matters is not how the events of the thought experiment are executed but the gist of the situation.

Given the apparent lack of progress, I should note that computer science is largely indebted to a thought experiment for its existence. Turing’s idea of a universal machine, capable of all possible algorithmic processing, established the scope and purpose of computing. From a mathematical context, his hypothetical machines laid out a paradigm for how computing should exist, seen most directly today in the programming language Lisp. Clearly, the context for a particular problem matters significantly to its resolution, and the value of thought experiments in computer science exists. By spanning these two perspectives, the philosophical and the scientific, one can hope to gain both deep consideration and quantifiable progress in computer science.
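The universal-machine idea can itself be made concrete with a toy simulator. The sketch below is a deliberate simplification, not Turing’s own formalism: the `run` helper and the bit-inverting transition table are invented for this illustration. The point is that the machine is nothing but a table mapping (state, symbol) pairs to a write/move/next-state action, yet such tables suffice for any algorithmic process.

```python
# Toy Turing-machine simulator (hypothetical helper, for illustration only).
# transitions maps (state, symbol) -> (symbol to write, head move, next state).
def run(tape, transitions, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: invert every bit, halting at the first blank cell.
INVERT = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
```

Running `run("1011", INVERT)` walks the head rightward, flipping each bit before halting; swapping in a different table changes the computation without touching the simulator, which is the essence of universality.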

Bibliography
“Bachelor plus 5 Report, Figure 3.” American Institute of Physics. 1 Nov. 2008.
Bitzer, Lloyd F. “The Rhetorical Situation.” Philosophy & Rhetoric 1 (1968): 1-14. Samford University. 2 Nov. 2008.
Bringsjord, Selmer, and Ron Noel. “Real Robots and the Missing Thought-Experiment in the Chinese Room Dialectic.” Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Ed. John Preston and Mark Bishop. New York: Oxford UP, 2002. 144-66.
Cohen, Martin. Wittgenstein’s Beetle and Other Classic Thought Experiments. Grand Rapids: Blackwell, 2004.
Cole, David. “Thought and Thought Experiments.” Philosophical Studies 45 (1984): 431-44. PAO. ProQuest. Stanford University Library, Stanford. 19 Oct. 2008.
“Computer Science.” College Board. 1 Nov. 2008.
Davies, Todd. “Mind, Body, and World.” Symbolic Systems 100. Stanford University, Stanford. 3 Apr. 2008.
Elbow, Peter. “Reflections on Academic Discourse: How It Relates to Freshmen and Colleagues.” College English 53 (1991): 135-55. JSTOR. 25 Oct. 2008.
Harnad, Stevan. “Minds, Machines and Searle.” Journal of Theoretical and Experimental Artificial Intelligence 1 (1989): 5-25. Cogprints. 20 Oct. 2008.
Horowitz, Tamara, and Gerald J. Massey, eds. Thought Experiments in Science and Philosophy. New York: Rowman & Littlefield, 1991.
La Caze, Marguerite. The Analytic Imaginary. New York: Cornell UP, 2002.
Leibniz, Gottfried Wilhelm von. Monadology. Section 17. 1714. Trans. Paul Schrecker and Anne Martin Schrecker. Indianapolis: Bobbs-Merrill, 1965. 150.
“Majoring or Minoring in Physics.” Bucknell University. 1 Nov. 2008.
McClure, Samuel. “Neuroscience of Decision Making.” Symbolic Systems 100. Stanford University, Stanford. 15 May 2008.
Phillips, Tony. “The Day The World Didn’t End.” Science Daily 15 Oct. 2008. 25 Oct. 2008.
Rapaport, William J. “Searle’s Experiments with Thought.” Philosophy of Science 53 (1986): 271-79.
Rescher, Nicholas. What If?: Thought Experimentation in Philosophy. New York: Transaction, 2005.
Searle, John. “Minds, Brains and Programs.” Behavioral and Brain Sciences (1980): 415-57.
Shoemaker, Sydney. Self-Knowledge and Self-Identity. Ithaca, NY: Cornell UP, 1963. 23-24.
Sorensen, Roy A. Thought Experiments. New York: Oxford UP, 1998.
Sundman, John. “Artificial Stupidity.” Salon.com 26 Feb. 2003. 20 Oct. 2008.
Tittle, Peg. What If… Collected Thought Experiments in Philosophy. New York: Longman, 2004.
Turing, Alan. “Computing Machinery and Intelligence.” Mind (1950): 433-60.
Weizenbaum, Joseph. “ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine.” Communications of the ACM 9 (1966): 36-45. ACM Portal. 2 Nov. 2008.

One thought on “Thought Shortage: The Weakness of Thought Experiments in Computer Science”

  1. I have a rough understanding of John Searle’s thinking, but am generally more aligned with the reading of Heidegger by Hubert Dreyfus. I got to Heidegger as a resource to understand Pierre Bourdieu, which is really helpful to understanding Etienne Wenger.

    The essential issue that I have with thought experiments, as you’ve described them, is that it’s really hard to transmit experience from one human being to another. In that respect, there’s a relationship between this writing and your blog entry “On Coldness”. You can do a thought experiment associated with skating, but it’s much easier for a reader to relate to the story if he or she has prior experience being on skates.
