I hate the formula for combinations. Here it is:
For n objects, the number of different ways to choose k of them is: n! / ((n-k)! k!)
For example, if I have 25 students and need to choose a committee of 5 of them, there are:
25! / (20! 5!) possible committees.
The reason I hate the formula is that it gives virtually no clue about where it comes from or how it makes sense. Students forget it and misapply it because it doesn't fundamentally make sense to them.
I always think of this kind of problem in this way: (25*24*23*22*21) / 5!
The numerator connects very straightforwardly to other counting problems. The 25*24*23, etc. represent how many students I could choose as the 1st student for my committee, the 2nd student, the 3rd student, and so on. I have to divide because the numerator over-counts the number of committees that are really different. For example, the committee with students {A, B, C, D, E} is really the same as the committee with students {B, C, A, D, E}--the order that I choose the students in doesn't matter. How much to divide by? Each committee I might choose is the same as 5! - 1 other committees, because that's how many ways there are to re-order the same 5 people.
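(Just as a sanity check, not something I'd show students first: a few lines of Python confirming that dividing out the re-orderings gives the same count as the textbook formula.)

```python
import math

ordered = 25 * 24 * 23 * 22 * 21            # choices for the 1st, 2nd, 3rd, 4th, 5th student, in order
committees = ordered // math.factorial(5)   # divide out the 5! orderings of the same 5 people

print(committees)         # 53130
print(math.comb(25, 5))   # 53130 -- the n! / ((n-k)! k!) value
```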
This is probably too quick an explanation to feel comfortable unless you've worked with other combinatorics problems recently--but my students have taken to it quite well after understanding the more fundamental types of counting it relies on.
This seemed to me a great leap forward in student understanding. I didn't tell them that there was a "formula for combinations" because I feared that they would be unwilling to just think about how the problem made sense. As I said, this worked out very well until I had to give them more elaborate problems:
How many 5-card poker hands have (exactly) 2 aces and 2 kings? This is a trickier problem to think through from first principles. It's a fairly long process with many subtle places to go wrong. It would be much easier to think about it in terms of a few independent choices:
(# ways to choose 2 aces)*(# ways to choose 2 kings)*(# ways to choose one more card that isn't an ace or a king), or
(4 choose 2)*(4 choose 2)*(44 choose 1)
This is where the abstraction of (n choose k) would be handy. It would let students chunk the problem more easily.
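For the curious, here's a quick Python sketch of that chunked computation, with a brute-force check over all possible hands (my own addition, not part of the class):

```python
import math
from itertools import combinations

# Chunked computation: 2 of the 4 aces, 2 of the 4 kings, 1 of the other 44 cards.
chunked = math.comb(4, 2) * math.comb(4, 2) * math.comb(44, 1)
print(chunked)  # 6 * 6 * 44 = 1584

# Brute-force check over all C(52, 5) hands (about 2.6 million, so it takes a moment).
ranks = [r for r in range(13) for _ in range(4)]  # 0 = ace, 12 = king, four cards of each rank
count = sum(1 for hand in combinations(range(52), 5)
            if sum(ranks[c] == 0 for c in hand) == 2
            and sum(ranks[c] == 12 for c in hand) == 2)
print(count)  # 1584
```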
This reminded me of something I read somewhere about quick thinkers. The author compared the mind to a pipe with water (thoughts) flowing through it. A lot of people think that fast thinkers are like pipes with faster-running water; the thoughts come sequentially at a faster rate. This might happen sometimes, but far more common is for a quick thinker to have a wider pipe--they use abstractions so their thoughts simply contain more, even if they happen at the same speed as everyone else's.
The use of combinations and permutations seems to me a good example of this. A student /could/ reason their way, step by step, to the correct solution, but that's many sequential steps. If they can think of the problem in terms of combinations and permutations, each of their fundamental operations encapsulates several of the old sequential steps at once. This lets their minds arrive at the same place faster, and with less chance of error (or wandering, or boredom).
In my class, students who have an ability to focus their attention for longer periods didn't have trouble with the longer problems. It was students who "got lost" in the steps due to lapses of attention, or an inability to conceptualize the process as a whole, that had trouble.
Next time I'm going to try to achieve a best-of-both-worlds. I'll teach permutations and combinations my way, but then make sure they recognize each as a fundamental type of situation, and help them make the gestalt switch to breaking larger problems down into them, instead of building them from the ground up, as they're used to.
Sunday, May 17, 2009
Mental models
We had a department meeting last week in which we briefly discussed one of my list-item goals: to explicitly teach mental models instead of just procedures.
Some teachers wanted clarification about how I thought of a mental model. The short answer was: a mental model is what a concept means, instead of a procedure to get a certain kind of answer. This can be tricky because often students are looking for "the steps to get the answer"--they're not attuned to what something means. It's also common to accidentally "proceduralize" a question--that is, take a question that required applying one's understanding of the meaning of a concept, and instead turn it into a question that students can solve by following a procedure they've memorized. Typically this happens by giving the same type of problem repeatedly, allowing the students to notice the steps you follow to solve it. Then they can follow those same steps without really understanding where they came from or why they make sense.
Here's an example of this that occurred to me while writing questions for my final.
The Topic: What does the "end behavior" of a function refer to? Specifically, what does it mean to say (for example): as x->infinity, y-> 0; as x-> -infinity, y->0 ?
The original question: The target sort of question is to give students an equation, and have them describe its end-behavior. For example: Describe the end-behavior of f(x) = 1/x
I discovered that I had inadvertently proceduralized this kind of question for students. The procedure most of them know at this point is:
1). Plug in a very large x-value (or a few).
2). If the result is large, write: "As x->infinity, y->infinity". If it's close to 0, write: "As x->infinity, y->0", etc.
3). Plug in a large negative value for x, and repeat.
This process usually yields correct answers to questions like the target.
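(For concreteness, this is roughly what that procedure amounts to for f(x) = 1/x, written out as a little Python sketch of my own rather than anything the students actually do.)

```python
def f(x):
    return 1 / x

for x in (100, 10_000, 1_000_000):
    print(x, f(x))      # outputs shrink toward 0, so they write: as x -> infinity, y -> 0

for x in (-100, -10_000, -1_000_000):
    print(x, f(x))      # outputs also approach 0, so: as x -> -infinity, y -> 0
```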
Unfortunately, many of those same students were stumped when I asked the following:
Say that the function f(x) has the following end-behavior: As x->infinity, y -> 0. For each point, explain whether or not you think it's likely to be on the graph of f(x). [if you don't have enough information to know for sure, clearly explain why]
a). (0, 2) b). (99999, 0.9999) c). (9999, 9999) d). (9999, -0.00001)
It wasn't that students gave wrong answers (although many did). Many students didn't even know where to start. This is a clear indication that they don't really understand the /idea/ of end-behavior.
A similar phenomenon occurred when I asked them the following:
Say the function f(x) has a horizontal asymptote at y = 3. What is its end-behavior?
Many students didn't know where to start, because they had memorized procedures to /find/ the horizontal asymptote, without ever really thinking about what it means to be a horizontal asymptote, and how that idea is related to the idea of end-behavior.
So, to get back to mental models, what would the mental model(s) be for end-behavior? I think it's hard to capture what someone's mental model is exactly; it's whatever they use to think about what a question means.
When I see "as x -> infinity, y -> 0", I always say (out loud, in my head) "as the x-values get bigger and bigger, the y-values get closer and closer to 0."
I also always picture a graph in my head something like this...
As the x-values of my points go to the right, the y-values of my points will get closer to 0 (the x-axis). In other words, as I keep plotting points, their height (the red dotted lines) will get shorter and shorter. (Actually I imagine an animation with the dots appearing from left to right, as the x-values get bigger)
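(If it helps, here's a rough sketch of that picture, assuming matplotlib is available; the dots and shrinking dashed heights are drawn for f(x) = 1/x, which is just my stand-in example.)

```python
import matplotlib.pyplot as plt

xs = range(1, 21)
ys = [1 / x for x in xs]                              # stand-in function with y -> 0 as x -> infinity

plt.scatter(xs, ys)                                   # the dots, appearing left to right
for x, y in zip(xs, ys):
    plt.plot([x, x], [0, y], "r--", linewidth=0.8)    # the shrinking "heights" down to the x-axis
plt.axhline(0, color="black", linewidth=0.8)          # the x-axis the points get closer to
plt.title("As x -> infinity, y -> 0")
plt.show()
```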
Since the idea of end-behavior is a slight formalization of a very natural question to ask about a graph, I think a good way to introduce it would be to have students do their best to answer the natural question as best they can.
It could be a "describe the graph over the telephone"-type activity. Students could have a series of graphs, and their goal is to write a short (i.e. 1-2 sentence) statement explaining what the trend at the "ends" of the graph is, so that someone else could re-create it only from the description. After discussing student descriptions, you could unveil the sort of description mathematicians have agreed upon in a context that makes sense.
This would also be a good lesson for a party game I like. Take a vertical strip of paper and fold it into (say) 4 sections. The first person in the group draws a graph. The second person in the group looks at the graph and describes its end-behavior. The paper is folded so the 3rd group member can only see the 2nd member's writing. The 3rd member has to draw a graph that matches the end-behavior described by the 2nd member, and so on. It's like "telephone" using paper, and switching back and forth between representations. At the end, the group checks to see if they ended with the same kind of end-behavior they started with.
To emphasize to students that they should be learning what the concept means, it would be good to give them assignments that require them to express this understanding directly--not use it to get an answer to another sort of problem.
Since the concept of end-behavior has a natural interpretation in terms of a progression, I think a nice assignment would be to have students create a short multi-panel comic strip explaining or illustrating the idea of end-behavior for someone in a younger grade (like 9th).
In my experience, some students won't produce anything useful, but many students will produce clearer explanations than anything I was likely to give; so re-distributing some of the better comic strips as "notes" would be a good follow-up.
Monday, April 27, 2009
Lessons learned
Here are two problems I encountered while teaching problem-solving:
1). My students aren't attuned to what I'm trying to teach. When they see a problem, they're immediately filtering for "what are the steps for this problem type?" There's a lot of resistance to the idea that there is no algorithm to learn, and even more to the idea that we won't look at a single "type" of problem over and over again; instead each problem will look totally new every time.
2). Even when they get better at problem-solving, they feel adrift because there's nothing concrete they can point to and say "THIS is what I learned". As a result, they feel anxious and unsatisfied, because it's hard for them to know what they're doing or if they're getting better.
To address these problems, I'd like to give students a general algorithm for problem-solving. This will give them steps to follow when facing a new problem, but also be what I want them to learn. I can introduce it early and have it appear (to start) on quizzes as pure regurgitation. The goal is for the problem-solving steps to be so internalized as to become habits of mind (eventually). I can introduce heuristics a few at a time, and ideally have some accountability structure that will make students feel like they're making progress towards improvement on those heuristics.
Hopefully this will give them a cognitive reference point so they feel (and can explain to others) exactly what they're getting better at.
Thursday, April 9, 2009
Hypothetical v. Counterfactual
Jen told me a story about an episode of 30 Days she saw in which an atheist lived with a fundamentalist Christian. The atheist was trying to explain why she was uncomfortable with the phrase "In God we Trust" on our currency. The Christian kept saying things like, "Well, that's normal; that's just what our money says, what's the problem?" She attempted to explain using a counter-factual: "Imagine if all our money said: There is no God. How would that make you feel?" Apparently the man seemed unable (or maybe unwilling) to consider this possibility. He just kept repeating things like: "But that's not what our money says."
Of course, this misses the whole point. But I have definitely had my share of this same experience in all sorts of situations. I ask someone to consider a scenario to illustrate a point, and often get the reply back: "but things aren't that way" or "but things would never be like that".
This initially confused me; people should be good at this kind of reasoning, since we all use hypothetical reasoning all the time. We imagine doing this or that, imagine the consequences, and use that to guide our actions. So what's the difference?
My hypothesis is that the difference may lie in how well someone is able to imagine the alternative world. Hypothetical situations (what if I did X) may be generally easy for people to imagine. Counter-factuals that involve more drastic changes in the world (what if we had lost WWII? what if there were a God? what if there were no God?) are, for some reason, harder for some people to imagine.
It's difficult for me to separate ability from inclination here; are they really unable to imagine a counter-factual situation in detail, or just unwilling to expend the effort to imagine it (they don't see the point, maybe)? My gut feeling is that usually the scenario they're being asked to imagine (in an argument, say) is both difficult and _unpleasant_ to imagine. These may be enough to make most people resist the attempt ("but it wouldn't be like that"), rather than really try and grasp the point.
People I've discussed this with in the past often identify this as a failure of logic or rationality--people are too dumb to get it. I now think it's a failure of imagination and of trust--a difficulty and discomfort with imagining things one finds unpleasant.
If we were going to rectify this kind of failure in school, the solution probably isn't a focus on the logical structure of such arguments. It probably will have more to do with creating safe environments to use, experience, and appreciate the legitimacy of the technique.
Friday, April 3, 2009
"What's the next step?" v. "What do I need?"
Today I had an interesting experience that revealed a large gap between how my students and I think about the problems we've been doing. This difference corresponds to how you learn the algorithm for a specific skill vs how you tackle a problem you've never seen before.
If you're learning a specific skill (say, how to solve a quadratic equation), the process generally involves recognizing some cue that tells you what to do next, over and over again. The operative question is "What's the next step?"
If you have to solve a problem you've never seen before, you can't ask yourself "What's the next step?", because the whole point is that you don't know; maybe no one knows. So instead you have to ask yourself "What do I want to find out?", "What do I know already?", and "How can I start to connect the two?"
It's astonishing how few of the problems students get in school require them to ask themselves these three questions. Even if a problem starts out requiring these genuine problem solving questions, it's extremely easy for it to become "proceduralized"--either when I (or other students) give too many hints and reduce it to following-the-steps, or if we do the problem so many times that it becomes a "type" of problem the students recognize "the steps" for.
Students (usually) do a pretty good job listing what they know, and what they want to figure out; the whole trick is how to connect them together. You could "work forwards" by asking what else you can figure out with what you know. It might not seem directly relevant, but if you keep figuring out more and more things, eventually you may see a connection to what you want to know. You can also work backwards by identifying what key piece of information would let you solve your problem. This sets a new sub-goal to aim at.
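(To make "working forwards" concrete, here's a toy Python sketch of my own: keep applying known connections between facts until the goal shows up. The fact names and rules below are made up purely for illustration.)

```python
def work_forwards(known, rules, goal):
    """known: set of fact names; rules: list of (premises, conclusion) pairs; goal: fact name."""
    facts = set(known)
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:   # everything needed is already known
                facts.add(conclusion)
                changed = True
    return goal in facts, facts

# Made-up facts and connections, just to show the flow of "what else can I figure out?":
rules = [
    ({"equation of line 1"}, "slope of line 1"),
    ({"slope of line 1"}, "slope of line 2"),                        # perpendicular slopes
    ({"slope of line 2", "point on line 2"}, "equation of line 2"),
    ({"equation of line 1", "equation of line 2"}, "intersection point"),
]
found, facts = work_forwards({"equation of line 1", "point on line 2"}, rules, "intersection point")
print(found)   # True -- a chain of sub-goals connects the givens to the target
print(facts)
```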
My students seem to get this idea in general, but have trouble applying it. The specific phrasing of the questions I ask seems to make a big difference. To me, all these questions are equivalent:
- What could you figure out from here?
- What could you try to do next?
- What could you try to figure out next?
- What facts could you try and figure out about this situation?
- What information could you try and figure out next?
For struggling students, it was difficult to get them to think of more than one thing they could try. I'd given them a diagram with two perpendicular lines, the equation of one of the lines, and a point on the other. I was hoping they'd suggest things to try such as finding the equation of the other line, finding the point of intersection, or finding the x- or y-intercepts of either line. None of these seemed to present themselves as salient pieces of information. I tried to prompt them by asking what kinds of questions they'd been asked before about situations like this, or what skills they remembered from Algebra I or II about lines, points, or perpendicular lines, but without specific cues to trigger their memory, they weren't able to brainstorm a list effectively. At least not individually.
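(To be concrete about the kind of chain I was hoping for, here it is worked through in Python with made-up numbers; the actual equation and point from the class diagram aren't reproduced here.)

```python
# Hypothetical givens: line 1 is y = 2x + 1, and (4, 3) is the point on the perpendicular line 2.
m1, b1 = 2, 1
px, py = 4, 3

m2 = -1 / m1                       # slope of line 2: negative reciprocal of line 1's slope
b2 = py - m2 * px                  # so line 2 is y = m2*x + b2

x_int = (b2 - b1) / (m1 - m2)      # intersection: solve m1*x + b1 = m2*x + b2
y_int = m1 * x_int + b1

print(f"line 2: y = {m2}x + {b2}")                 # y = -0.5x + 5.0
print(f"intersection: ({x_int}, {y_int})")         # (1.6, 4.2)
print(f"x-intercepts: {-b1 / m1} and {-b2 / m2}")  # -0.5 and 10.0
```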
For my last block of the day we started as an entire class. I wrote the diagram on the board, and started a list with the two pieces of information we had (the equation of one line, and the coordinates of the point on the other), as well as the piece of information we were trying to find in the problem. Then we played a game where I would hand the pen to a student who would silently add a new "fact" about the diagram to our list of facts to figure out. They could add labels or additional points or lines to the diagram if they wanted. If they were really stuck, they could hand the pen to another student.
This seemed to work. Students were able to get a good list, extending or modifying ideas that were already in the list. It was interesting to notice that sometimes students would add a potentially relevant addition to the drawing (such as a line that made a promising-looking right triangle with existing lines), while a few would add a seemingly random line or point just for the sake of extending the list.
We had a discussion afterwards about which facts seemed more or less "helpful" in solving the problem. Again, as a class we were able to formulate a series of sub-goals to lead from the givens to the solution.
I think this general approach could work for small group problem solving. The main obstacle, I think, is actually getting the students to follow the process. As a whole class, they were happy to do it, but in small groups they tend to focus directly in on the problem, and seem to regard this sort of activity as long and circuitous.
Some ideas I'd like to try are these:
* Give them a situation and tell them to list, and find, as many things about the situation as possible (so there is no specific "problem" to solve).
* Scaffold the process by providing some of the steps. E.g. on the first problem, I provide the list of facts and they figure out how they connect together. Next time, we generate the list together first. After that, I prompt them to do it on their own. After that, I just give them the problem with no prompting to think about all the things they might try.
* Have them do the work on half-sized poster paper. Each sub-goal they formulate goes on a 3x5 card. They're responsible (as part of their graded work) for creating and sequencing the cards on the paper (to show their problem-solving plan). After they have their plan, they can actually show the calculations on the paper.
With respect to how each student understands the specific self-questions involved in working forwards or working backwards, I think it might be best to have students say the idea in their own words, and we can just create a small collection of student phrasings. In my prior experience, this works much better than me trying to figure out the clearest way to express it, since student phrasings tend to make sense to other students. Having a diversity of phrasings will also increase the probability that one of them will catch on.
Anyway, a lot of interesting issues to think about. More updates on the teaching of basic problem-solving later!
Wednesday, March 19, 2008
Rationalization
On NPR just now there's a novelist discussing his new novel. The main character is a reporter, reporting on "sex tours", who intends to remain detached, but ends up participating in the tours himself.
Terry Gross reads several excerpts from the novel in which the men on the tour give different justifications for their participation. One considers how many different services are considered legitimate: we pay people to carry our things, clean our houses, and massage our bodies; so why the special standard for sex? (Presumably the gentleman in question thinks there isn't an important difference.) Another considers the argument that it's dangerous for women and reasons that there are lots of risky jobs in society: is being a sex worker riskier than being a police officer or a fireman?
At this point the novelist said something that set me off; he said that this illustrates the problem with "words". He claimed that your moral sense has to be deeper than, and beyond, "words", because words can lead you astray (as they did his main character).
Let's re-phrase what he really seems to be saying: Often our reasons (those "words") are really rationalizations, and we can too-easily convince ourselves with spurious arguments. If your "moral compass" is immune to such reasoning, you can escape the danger of rationalizing.
This is perfectly true, but neglects the equally compelling danger of the alternative: having a moral sense that isn't sensitive to reason means you can never check whether your moral compass makes any sense. If rationalizing is pretending you don't see, dogmatism is never looking in the first place.
I don't want to overstate people's (myself included) ability to reason correctly--I think all the evidence is that it's pretty poor. But I think it's the only weapon we've got. The danger of rationalizing isn't that we've done too much reasoning; the problem is that we haven't done enough. Rationalizations, if such they are, shouldn't be able to withstand closer scrutiny. The problem is that we stop as soon as we get to an answer we like, and don't look at our reasons as closely as we really should.
I don't see many good practical prospects for improving our situation. How do you compel people to consider more deeply? The problem is even worse than it may seem, since there are a lot of people who admittedly continue to do things they've judged that they shouldn't. These, perhaps, are the more honest among us who recognize they won't change, but at least don't try to rationalize their choices. What's to be done?
I have no idea, but I do know that a promising way to deal with individual weakness is through the social bonds formed in groups; there are plenty of "keep-each-other-strong" organizations in other areas; so why not these? Religious groups are particularly well-positioned to do this since they already have the infrastructure in place, so to speak.
The foundation of such a group would be a collective recognition that no one probably knows the truth about things, and our best shot at figuring anything out is through challenging, though respectful, dialogue and that everyone needs help to become who they wish they were.
Wednesday, February 27, 2008
Argument and propaganda
Jen had posed two questions in an earlier post:
How is argument different from force, and why is it preferable?
Of course, argument can be seen as a kind of force; after all, you're trying to logically (instead of physically) compel someone to accept your position. But we think of argument as a legitimate means to convince someone, whereas pure rhetoric or propaganda represent an illegitimate means of "force" through psychological coercion. So, to re-phrase the question, what's the difference between convincing and coercing? What makes one legitimate and the other not?
The major difference between these has to do with the use of reasons. As a first pass, we could say that argument provides reasons, whereas propaganda attempts to bypass them. Ridicule (name-calling, stereotyping), for example, can be a way to get someone to dismiss a position without ever considering reasons for or against it. More generally, propaganda often functions by trying to associate a position directly with something that will be evaluated positively or negatively, in the hopes that these feelings will transfer.
This isn't quite sufficient, though. There are other forms of psychological coercion which don't fit this model. Lying, for example, involves providing false information in the hopes that it will lead someone else's reasoning in the direction you want. Deliberate oversimplification would be another example; it also aims to influence the other person by using their reason.
I think the real unifying feature of these illegitimate techniques is their attempt to manipulate. In this sense, legitimate argument is an attempt to work with (cooperate with) someone's capacity to reason. The alternatives try to "work against" it. Let's flesh this out a bit more.
Providing reasons isn't the distinctive feature of legitimate argument, since the alternatives do that, too. Legitimate argument involves helping someone else's reasoning system do for itself what they would wish it to be able to do anyway. We do this, for example, by providing accurate and relevant information, or by pointing out logical inferences. It's very much like helping someone do a math problem. You can point out things they didn't see, but would have wished they could see.
This analogy works because both activities involve trying to find the truth about something. Because arguments are so often framed as a debate, it's easy to think the purpose of argument is to convince someone. The real purpose of legitimate argument is to help them see the same truth you do.
Let's consider some difficulties:
#1 "What if their reasoning system is so faulty (or, "different", let's say) that "helping it do what they would want to do with it anyway" violates your own standards of reason?" For example, how do you legitimately convince a fundamentalist Christian who only believes in the literal word of scripture that evolution is right?
One option here would be to accept that sometimes you just can't get what you want with legitimate argument. I think this is dangerous. For the above case, I would seek principles of reason you both do accept and leverage those. It will probably be a long (maybe life-long) conversation, because before you can tackle evolution, you might need to convince them (based on shared principles of reason) to modify how they reason.
Those who are severely mentally abnormal are a much more difficult example. Patients whose perception of the world or faculties of reasoning are radically different may, in the end, be unreachable by a shared process of reasoning. Just because someone's reasoning process is different, however, doesn't mean that it shares no commonalities with your own, or that you can't build commonalities.
#2: "Your characterization implies that if there is no external truth, there is no possibility for legitimate argument, since such argument involves helping them come to see this truth." For example, when discussing issues of personal taste, aesthetics, and arguably, morality, the "truth" might just be a certain way of looking at the world; and no one way is fundamentally better than any other.
These are interesting cases. Let's consider something definitely subjective, such as whether "chocolate is more delicious than vanilla." It's definitely wrong to say that a legitimate argument for this position involves helping the other person discover the truth about this statement. But a legitimate argument for this claim might involve helping someone see if the truth _for them_ is the same as the truth _for you_. You do this by describing aspects of your own experience that are decisive in the hopes that they may discover they agree. This need not be manipulative.
This same idea seems as if it extends to other aesthetic questions fairly easily. What about moral questions?
This idea is similar to that of "framing" an issue. Linguists have long thought that the specific words you choose to describe something carry with them a set of assumptions that will partially determine what makes "common sense". George Lakoff discusses the use of framing for political/moral questions in his books _Moral Politics_ and _Don't Think of an Elephant_.
At first glance, framing can look a lot like illegitimate manipulation. After all, the words you choose can establish "hidden" assumptions that will influence the other person's reasoning system.
Certainly, frames can (and are) used manipulatively in this way. The question is, can they be used as part of legitimate argument? We all frame issues all the time anyway; we can't help it--it's the way our brains and language work.
So if frames are always manipulative, this might put a serious kink in any hopes of purity.
I said that a frame carries with it certain assumptions. What are these assumptions? They represent a particular way of understanding a situation; a way of looking at the world. In fact, they underlie how we experience the world. I think they also contain a mix of objective and subjective elements.
This makes them difficult to know what to do with.
If frames were purely my subjective way of looking at things, then using a frame might be like testing to see if someone else also finds that way of looking at the world natural. It would be directly analogous to the chocolate-versus-vanilla case above.
Too often, however, the assumptions a frame brings have elements that could be checked objectively. The fact that a frame hides these, however, means that typically they are not.
I will leave the issue of framing for consideration in another post.