Wednesday, February 27, 2008
argument and propaganda
How is argument different from force, and why is it preferable?
Of course, argument can be seen as a kind of force; after all, you're trying to logically (instead of physically) compel someone to accept your position. But we think of argument as a legitimate means to convince someone, whereas pure rhetoric or propaganda represents an illegitimate means of "force" through psychological coercion. So, to re-phrase the question: what's the difference between convincing and coercing? What makes one legitimate and the other not?
The major difference between these has to do with the use of reasons. As a first pass, we could say that argument provides reasons, whereas propaganda attempts to bypass them. Ridicule (name-calling, stereotyping), for example, can be a way to get someone to dismiss a position without ever considering reasons for or against it. More generally, propaganda often functions by trying to associate a position directly with something that will be evaluated positively or negatively, in the hopes that these feelings will transfer.
This isn't quite sufficient, though. There are other forms of psychological coercion which don't fit this model. Lying, for example, involves providing false information in the hopes that it will lead someone else's reasoning in the direction you want. Deliberate oversimplification would be another example; it also aims to influence the other person by using their reason.
I think the real unifying feature of these illegitimate techniques is their attempt to manipulate. In this sense, legitimate argument is an attempt to work with (cooperate with) someone's capacity to reason. The alternatives try to "work against" it. Let's flesh this out a bit more.
Providing reasons isn't the distinctive feature of legitimate argument, since the alternatives do that, too. Legitimate argument involves helping someone else's reasoning system do for itself what they would wish it to be able to do anyway. We do this, for example, by providing accurate and relevant information, or by pointing out logical inferences. It's very much like helping someone do a math problem. You can point out things they didn't see, but would have wished they could see.
This analogy works because both activities involve trying to find the truth about something. Because arguments are so often framed as a debate, it's easy to think the purpose of argument is to convince someone. The real purpose of legitimate argument is to help them see the same truth you do.
Let's consider some difficulties:
#1 "What if their reasoning system is so faulty (or, "different", let's say) that "helping it do what they would want to do with it anyway" violates your own standards of reason?" For example, how do you legitimately convince a fundamentalist Christian who only believes in the literal word of scripture that evolution is right?
One option here would be to accept that sometimes you just can't get what you want with legitimate argument. I think this is dangerous. For the above case, I would seek principles of reason you both do accept and leverage those. It will probably be a long (maybe life-long) conversation, because before you can tackle evolution, you might need to convince them (based on shared principles of reason) to modify how they reason.
Those with severe mental abnormalities are a much more difficult example. Patients whose perception of the world or faculties of reasoning are radically different may, in the end, be unreachable by a shared process of reasoning. Just because someone's reasoning process is different, however, doesn't mean that it shares no commonalities with your own, or that you can't build commonalities.
#2: "Your characterization implies that if there is no external truth, there is no possibility for legitimate argument, since such argument involves helping them come to see this truth." For example, when discussing issues of personal taste, aesthetics, and arguably, morality, the "truth" might just be a certain way of looking at the world; and no one way is fundamentally better than any other.
These are interesting cases. Let's consider something definitely subjective, such as whether "chocolate is more delicious than vanilla." It's definitely wrong to say that a legitimate argument for this position involves helping the other person discover the truth about this statement. But a legitimate argument for this claim might involve helping someone see if the truth _for them_ is the same as the truth _for you_. You do this by describing aspects of your own experience that are decisive in the hopes that they may discover they agree. This need not be manipulative.
This same idea seems to extend fairly easily to other aesthetic questions. What about moral questions?
This idea is similar to that of "framing" an issue. Linguists have long thought that the specific words you choose to describe something carry with them a set of assumptions that will partially determine what makes "common sense". George Lakoff discusses the use of framing for political/moral questions in his books _Moral Politics_ and _Don't Think of an Elephant_.
At first glance, framing can look a lot like illegitimate manipulation. After all, the words you choose can establish "hidden" assumptions that will influence the other person's reasoning system.
Certainly, frames can be (and are) used manipulatively in this way. The question is, can they be used as part of legitimate argument? We all frame issues all the time anyway; we can't help it--it's the way our brains and language work.
So if frames are always manipulative, this might put a serious kink in any hopes of purity.
I said that a frame carries with it certain assumptions. What are these assumptions? They represent a particular way of understanding a situation; a way of looking at the world. In fact, they underlie how we experience the world. I think they also contain a mix of objective and subjective elements.
This makes it difficult to know what to do with them.
If frames were purely my subjective way of looking at things, then using a frame might be like testing to see if someone else also finds that way of looking at the world natural. It would be directly analogous to the case of aesthetic taste above.
Often, however, the assumptions a frame brings have elements that could be checked objectively; the fact that a frame hides them means that typically they are not.
I will leave the issue of framing for consideration in another post.
Sunday, February 24, 2008
Small Rant
It has apparently been widely reported in conservative (and other) media that in a focus group conducted by Fox News, 25 Obama supporters couldn't name a single one of his legislative accomplishments. I heard a group of people on the train today recount this fact (“discuss” would be too strong a word), shaking their heads and clucking their tongues: “...You would think that his supporters would know something about him.” And thus, the subject was dismissed; which, of course, was the point of the “news” story.
Of course, it would be easy to perform the same stunt with any candidate. I would have bet money that I could have found 25 McCain supporters on the train who wouldn't have been able to name any of his legislative accomplishments; I doubt that's the way most Americans decide whom to support. Indeed, I would be surprised if the certainty of the McCain supporters in question was based on their extensive legislative knowledge.
How big a factor should it be? Elections are about the future. Knowledge of a candidate's past accomplishments (and failures) is valuable to the extent that it informs your interpretation of the candidate's current self-presentation and plans. As such, its utility will vary greatly; but in the end I view its role as a supportive (almost secondary) one: confirming or correcting what should be primary, namely what the candidate is presenting now.
But, as I alluded to above, these considerations give too much credit to a media stunt which doesn't even appear to try addressing such worthwhile questions.
For me, politics is about two things, neither of which (in my experience) come naturally to most people: hard research, and compromise. Doing a lot of hard thought and research is the only way to know what you should think about the difficult questions of politics (and they're pretty much all difficult). Compromise is the only way you make progress towards solving them.
If most people thought about politics this way, I think we would see more humility, curiosity, uncertainty, openness and goodwill. Take social security, for example. Most people are not economists, know none of the statistics that might be relevant, and haven't really thought very carefully about ideas like "personal responsibility" they're so ready to invoke; nonetheless, many people will confidently state a position, and caricature or ridicule those whose positions differ.
I have no idea what to think about social security; it's a question I haven't thought about yet, and which, quite frankly, seems pretty daunting. Far from being ready to dig in my heels, I'm actively searching for people who can make sense of their (or any!) position for me. Sadly, most people's views seem to bottom out in something they heard someone say on TV that they didn't think about very much.
When push comes to shove, I don't think most people are really interested in doing the work required to construct a reasoned view for themselves. I don't blame them; it's a lot of work. But their certainty and demeanor are wholly inappropriate to their understanding. It's like having strong feelings about different interpretations of quantum mechanics that you're ready to vigorously defend, even though, when it comes right down to it, you don't really know very much about physics.
Despite this, I'm amazed at what people do to avoid conceding a point to "the other side." Compromise is often seen as (if not openly declared to be) weakness. The attitude I sense from most people is that politics isn't about finding compromises we can agree on, but about winning enough power to impose your ideas on everyone else. I remember in 2004 when an NPR correspondent asked a member of a Republican think tank whether Bush had any responsibility to the Democratic 49% of the country, considering the narrow margin of his victory and the fact that his party now controlled the House, the Senate, and the executive branch. His answer was along the lines of: "We won. Why on earth would we give up power to the losers?"
This sort of attitude deepens divisions, clouds clear thinking, and, in my opinion, impedes the possibility of long-term progress. What kind of progress is it if it's just reversed 4 or 8 years later?
If politics should be about scholarship and compromise, what's gone wrong? I'm not totally sure, but the media seems to be fanning the flames. The way most stations cover politics differs little from the way they cover sports. It's about who's ahead, who's strong and who's weak, what strategies or tactics are successful. Politics is too difficult and important to be trivialized in this way.
Here's my advice for discussions of politics. Try to ask more questions than you answer. Try to figure out how your opponents' view makes sense to them. Find something you can agree with before finding something to disagree about. These dictates are harder to practice than preach, and I'd be the first to admit I'm not very good at them. But they're what I aspire to and would appreciate being reminded of as frequently as possible.
Friday, February 22, 2008
This might be too hard for me, pt. 1
Draw a 1x1 square. In a minute, I'm going to take a ruler and draw a straight line that will "cut" across some part of the square. It might go through the middle, or it might only cut off a small section of a corner; you don't know. You should draw anything you want inside the square so you can be certain that any line I draw must cross one of the lines in your drawing. The challenge is to do this with the shortest possible amount of drawing.

The first, and most obvious, suggestion would be this:


(Imagine here a weird scatter of little segments, with a few of the red lines it blocks drawn in. Of course, a scatter like this could consist of a huge number of almost point-like line segments... so, pretty hard to think about on a case-by-case basis.)
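Before getting to proofs, here's a quick way to sanity-check any candidate drawing numerically. This is just a rough sketch of my own--the segment format, the chord-sampling scheme, and the function names are all my choices, not part of the problem: sample lots of random lines across the square and see whether the drawing crosses every one. It can only ever falsify a candidate, never prove it correct, but it's handy for catching bad ideas early.

```python
import random

def boundary_point():
    """A random point on the boundary of the unit square, with its side index."""
    t = random.random() * 4
    side, s = int(t), t % 1
    return side, [(s, 0.0), (1.0, s), (1.0 - s, 1.0), (0.0, 1.0 - s)][side]

def segments_cross(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 properly cross each other."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def find_unblocked_chord(drawing, trials=200_000):
    """Monte Carlo: look for a chord of the square that `drawing`
    (a list of ((x1, y1), (x2, y2)) segments) fails to block."""
    for _ in range(trials):
        (s1, a), (s2, b) = boundary_point(), boundary_point()
        if s1 == s2:
            continue  # a "chord" lying along one side doesn't cut across the square
        if not any(segments_cross(a, b, p, q) for p, q in drawing):
            return (a, b)  # found a line the drawing misses
    return None  # no counterexample found

# The obvious first candidate: the "x" of both diagonals.
x_solution = [((0.0, 0.0), (1.0, 1.0)), ((0.0, 1.0), (1.0, 0.0))]
print(find_unblocked_chord(x_solution))  # expect None
```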
One friend of ours suggested the following proof (actually "proof" since it turned out to be wrong). It's an interesting and useful form of argument, however.
Idea #1: Find the shortest solution for only some of the lines
Any solution that blocks all possible lines will at least have to block all the diagonal lines (in both directions).

Let's divide the square into a bunch of parallel "slices"...


So, if that blue line is the shortest solution for a slice, we can put that same solution together for all the slices...


If we also want to block the diagonal lines in the other direction, we just repeat the procedure and overlay the results:

Since an actual solution (that blocks all the lines) will have to block those diagonal ones, and we know it has to be at least as long as this scatter to block the diagonal ones, the full solution will also have to be at least as long as this scatter.
But our "x" solution is exactly that long. It's easy to see why: instead of putting each little blue segment in a random spot, we line them all up along the diagonals.

So, the "x" is the shortest possible way to block all the diagonals, but it also blocks all the other lines lines! Therefore, it must be the shortest full solution, because if there was another, shorter solution, we'd know it couldn't be blocking all the diagonal lines.
Since we thought the "x" solution was probably the best, we weren't very critical of the "proof" and thought we'd done a pretty good job!
Here's the problem:

What went wrong with the original proof? Maybe lots of things, but one clear problem was how we broke the problem down into two smaller problems, and then re-combined them.
First we asked "what's the shortest way to block all the diagonal lines in one direction? Ok, let's keep that. Now let's do the same thing in the other direction to block all the other diagonal lines."
The problem is that one little bit of our "x" solution blocks lots of one type of diagonal (green), but none of the other type of diagonal (red)...


Idea #2: A Connected Solution is Always Shortest
The solution which beat the "x" is formed by adding two imaginary points so that, when you connect all the dots, they form 120-degree angles.
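For concreteness, here's how the two lengths compare. (This is my own back-of-the-envelope calculation, assuming the standard configuration: the two extra points sit on the square's horizontal center line at (1/(2√3), 1/2) and (1 − 1/(2√3), 1/2), each joined to two corners and to each other, with 120-degree angles everywhere.) The four outer edges each have length 1/√3 and the middle edge has length 1 − 1/√3, so the total is

$$\frac{4}{\sqrt{3}} + \left(1 - \frac{1}{\sqrt{3}}\right) = 1 + \sqrt{3} \approx 2.73,$$

compared with $2\sqrt{2} \approx 2.83$ for the "x".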

Of course, a shortest path connecting points is different than a shortest drawing that will block all possible lines, but it still seemed like a step in the right direction. Maybe we could connect the two problems together...
I hoped that this solution was, in fact, the shortest, and that we could prove it with an argument like this:
1). If there's a "disconnected solution" (like a scatter), there will be a "connected solution" (continuous lines with no breaks) that's the same length or shorter.
2). Any "connected solution" must connect the vertecies of the square with each other.
3). Therefore, because of the shortest-path theorem, our solution will also be the shortest blocking solution.
However, we didn't get very far along this path until we found...

Curious Student: "But wait! Isn't it possible that you could find an even shorter solution that is connected? Then you'd be right that the shortest solution is a connected one!"
Sadly, no. That theorem about the shortest connecting path implies that the shortest connected solution is the one we found. This disconnected solution is shorter. So if we find an even shorter solution than this, we know it won't be a connected one; it will have breaks in it.
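I can't reproduce the picture here, but for what it's worth, the usual disconnected candidate at this point (and I'm only assuming it's the one our picture showed) is a 120-degree network joining three of the corners, plus a separate segment running from the fourth corner to the center. Its total length is

$$\sqrt{2+\sqrt{3}} + \frac{\sqrt{2}}{2} = \sqrt{2} + \frac{\sqrt{6}}{2} \approx 2.64,$$

noticeably shorter than the connected $1 + \sqrt{3} \approx 2.73$ above.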
Idea #3: Try A Smaller Problem. Measure Blocking Efficiency.
Since we didn't have a good way to analyze the square, we decided to try the simplest case we could think of: the equilateral triangle.
Here is the shortest solution we found for the equilateral triangle:

Once again we have the problem of how to prove this is the shortest.
Nick was hoping to use an idea similar to the last proof that didn't work. Here's the idea:
Any solution that blocks all lines will at least have to block all lines perpendicular to the three sides of the triangle.
If I have a random tiny line segment, I can measure how much of this blocking it does by measuring how long a "shadow" it would cast on each side of the triangle if I shined a light at it from behind, perpendicular to that side, and then adding the three shadows up.

The hope was that the orientation of the lines in our solution gives the maximum amount of blocking possible. That would show that it was the shortest solution for lines perpendicular to the three sides. Since our solution also blocks the other lines, it would be the shortest overall solution (by an argument similar to our Idea #1).
Unfortunately, we found that a small piece (in isolation) blocks the most when it's parallel to one of the sides; the pieces in our solution are all perpendicular.
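Here's the little calculation behind that, taking the three side directions of the equilateral triangle to be 0, 60, and 120 degrees (my own parametrization). The shadow a unit segment at angle θ casts on a given side, with the light perpendicular to that side, is just the absolute value of its projection onto that side's direction, so the total shadow is

$$f(\theta) = |\cos\theta| + |\cos(\theta - 60^\circ)| + |\cos(\theta - 120^\circ)|,$$

which is $2$ when the segment is parallel to a side ($\theta = 0^\circ$ gives $1 + \tfrac{1}{2} + \tfrac{1}{2}$) but only $\sqrt{3} \approx 1.73$ when it's perpendicular to one ($\theta = 90^\circ$ gives $0 + \tfrac{\sqrt{3}}{2} + \tfrac{\sqrt{3}}{2}$).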
This gave me a new idea, however. How much blocking a solution does depends on two things: how much blocking each little piece does, and whether or not any pieces are blocking the same lines (how much redundant blocking there is).
If you want to make your solution shorter, you could either try to change the orientation of your lines so that they block more, or you could move them around so that their blocking isn't redundant.
This led us to...
Idea #4: Measure Redundant Blocking
Here was my idea...
1). Find a measure of how much total blocking needs to be done.
2). Measure the maximum amount of blocking a given tiny line segment can do
3). This will let us calculate a theoretical shortest solution. If we assume no redundant blocking, the shortest possible solution will be...
(total blocking needed) / (maximum blocking per unit length of solution) = lower bound on the solution's total length. (A toy example follows this list.)
4). Calculate the amount of redundant blocking in our proposed shortest solution.
5). Show (somehow) that exactly that much redundancy is required for any 100% blocking solution.
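Just to make step 3 concrete with made-up numbers (purely hypothetical--not measurements of anything): if the total blocking needed came out to 12 units, and each unit of length of drawing could contribute at most 4 units of blocking, then no solution could be shorter than $12 / 4 = 3$, and a solution of exactly that length would have to have zero redundant blocking.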
The really hard part of this is step 5.
But steps 1 and 2 are also tricky because they require us to measure blocking in some way. Before, we were only measuring how many lines of certain types were being blocked. I wanted a way to measure all the lines being blocked.
How can we do it? Here was our idea...
...inscribe our shape in a circle. Now, the lines you need to block can be seen as all the lines going from one arc of the circle to another.

Here's a picture of what I mean. The three arcs are in different colors. Any line starting in one arc and ending in another is a line we need to block.
This gives us a possible way to measure how much blocking a little line segment does: Imagine a light on the circle, shining onto the segment. Measure the length of the "shadow" it casts on the other side. This is a measure of how many lines it blocks from that point.

Mathematically speaking, if we have a function for that arc length and integrate it from 0 to 360 degrees (as the light source moves around the circle), this should give us the total blocking a segment does. We haven't set up this integral yet because it seemed hard, and we're hoping that if we're clever enough we won't have to.
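In symbols (just restating the idea above; the name $s$ is mine): if $s(\varphi)$ is the arc length of the shadow the segment casts when the light sits at angle $\varphi$ on the circle, the total blocking of the segment would be

$$B = \int_0^{2\pi} s(\varphi)\, d\varphi.$$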
My great hope was that the total amount of blocking of a segment depends only on its length, not on its angle or location within the triangle. The reason this would be good is that a certain length of solution would directly yield a certain total amount of blocking. Then, to find the shortest solution, all we would have to do is find an arrangement which minimizes redundant blocking. So, instead of minimizing the length of the solution, we'd be minimizing its redundancy.
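Here's one way we could probe that hope numerically without doing the integral by hand--a rough Python sketch of my own (the particular test segments, the step count, and the function names are all just illustrative choices). It approximates the shadow integral for two segments of the same length placed differently inside the circle; if the totals come out different, total blocking can't depend on length alone.

```python
import math

def shadow_arc(phi, A, B):
    """Arc length of the shadow segment A-B casts on the unit circle
    when a point light sits on the circle at angle phi."""
    P = (math.cos(phi), math.sin(phi))

    def far_hit(X):
        # Second intersection of the ray from P through X with the unit circle.
        dx, dy = X[0] - P[0], X[1] - P[1]
        n = math.hypot(dx, dy)
        dx, dy = dx / n, dy / n
        t = -2.0 * (P[0] * dx + P[1] * dy)  # from |P + t*d| = 1 with |P| = 1
        return math.atan2(P[1] + t * dy, P[0] + t * dx)

    a, b = far_hit(A), far_hit(B)
    gap = (b - a) % (2 * math.pi)                  # arc from a counterclockwise to b
    light_in_gap = (phi - a) % (2 * math.pi) <= gap
    return (2 * math.pi - gap) if light_in_gap else gap  # the arc NOT containing the light

def total_blocking(A, B, steps=20_000):
    """Numerically integrate the shadow length over all light positions."""
    dphi = 2 * math.pi / steps
    return sum(shadow_arc((k + 0.5) * dphi, A, B) for k in range(steps)) * dphi

# Two segments of the same length, one centered and one pushed toward the rim.
print(total_blocking((-0.1, 0.0), (0.1, 0.0)))
print(total_blocking((0.7, 0.0), (0.9, 0.0)))
```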
Unfortunately (for reasons I won't go into), it doesn't seem like that's going to work, though I'm not 100% convinced yet.
That's all I'm going to write for now....perhaps more later...
But first:
What's making this problem hard?
The difficulty we've been having in this problem is that we can't break it down into simple sub-problems we can solve independently.
A short solution depends on two things: the "blocking efficiency" of each little piece, and the amount of redundant blocking in the arrangement. The blocking efficiency seems to depend on the angle and location of each little piece, and the "blocking" of the pieces needs to be balanced between all the directions that require blocking. The amount of redundant blocking depends on how each piece is positioned with respect to every other piece; it's a global property of the solution. And so far we can't think of an easy way to divide all the possibilities into cases we can treat separately.
If anyone has new ideas, we'd love to hear them!
Friday, February 15, 2008
What we expect of morality
Here are some examples:
- Morality should be a simply-describable function. The inputs to the function will be actions (or perhaps biographies), and the outputs will be evaluations. The function itself will be describable by a fairly short list of rules.
- Morality involves publicly-expressed reasons. If we judge a certain thing morally good, then we must be able to give an intelligible account of why that thing is good, and what it would take to change that good thing into a bad thing. If we have moral judgments that we can't back up with reasons, then those are not real moral judgments at all, but rather psychological biases or distortions. (I take this to be more or less what Peter Unger thinks, after reading parts of Living High and Letting Die.)
- Morality should judge my individual confrontation with possibilities in the world. If it is right for me to act a certain way, a change in the behavior of those around me can't make it wrong for me to act that way. (This seems absurd to me, but I hear it suggested by classmates, and see it expressed in a more limited domain as the "Compliance Condition" in Liam Murphy's "The Demands of Beneficence".)
- Morality applies only when our actions affect other people.
Probably there are more -- I'll keep collecting them here as I find them. The next questions are which of these make sense to include in our conception of morality, and how we would go about deciding that.
Wednesday, February 6, 2008
Biting the Bullet
Those who I considered weaker and less intelligent were often uncertain what to believe, would believe things because they hoped them to be true, or, worst of all, didn't seem much interested in examining their beliefs. I imagined the philosopher as the exact opposite sort of person--someone who is constantly scrutinizing their beliefs to ensure they're held for compelling reasons, or not at all. Naturally, I saw myself as this latter sort of intelligent, tough-minded person.
Nothing let me publicly declare this allegiance as strongly as "biting the bullet." My earliest memory of biting the bullet was my far too joyful rejection of free will. I had heard arguments that free will was incompatible with determinism, which was a very difficult conclusion for most people I knew to accept. Part of my certainty that we didn't have free will came from the arguments themselves, but I think a large part of it came from how it let me set myself up as the sort of person I wanted to be. If someone didn't immediately embrace the unpalatable conclusion, it was easy for me to see them as either too stupid to grasp the arguments, or too weak to accept the conclusion. And, of course, my certainty was a clear indicator of my own superior intelligence and commitment to reason.
Now, I tend to lean in the opposite direction--I usually refuse to bite the bullet of a difficult conclusion because (or so I tell myself), its difficulty usually indicates that it represents something important, not to be overlooked or rejected. But I think this can also be seen as a sort-of biting the bullet. The psychological mechanisms are the same; I just have a new concept of what it really is to be tough-minded.
As my above descriptions may have revealed, I consider the sort of absolutist thinking involved in biting the bullet rather juvenile. Biting the bullet isn't bravely following reason wherever it may lead, but desperately clinging to easy answers that might make you feel smart, or certain, but at the cost of ignoring important aspects of reality. A really tough-minded individual won't be seduced by too clean an argument, but will steadfastly accept logical tensions and conflicts because they reflect how reality really is. Once again, this kind of set-up reinforces my feeling of certainty because it gives me a way to understand my conclusion as the one a really intelligent and tough-minded person would reach. Biting the bullet is what those who are intellectually narrow or insecure do.
Having realized this, one response might be to re-double my efforts to believe things on the strength of the arguments themselves, rather than on how their structure lets me interpret myself. Knowing these twin pitfalls, I might re-examine my conclusions to look specifically for their biasing effects. For whatever reason, I haven't felt like this is the right way to look at things at all.
Instead, I've increasingly started to see philosophy as a normative activity. Accepting or rejecting free will, for example, isn't a matter of looking to the arguments to see what's most likely the truth. It's simply a choice to see the world one way or another; and each way has its own focus, blind-spots, dilemmas, and consequences. This leaves for me the question of how philosophy should be related to other disciplines (including those that I do see as pursuing specific forms of factual truth) and our everyday lives.
Monday, February 4, 2008
Moral Relativism
Most people think it would be wrong to walk by a baby drowning in a puddle without doing anything. So, what about babies elsewhere in the world whose deaths we (rich blog-readers in the West) could easily prevent by giving a small amount of our resources? It doesn't seem like it should matter whether you actually walked past the baby or not; in both cases, you are easily able to prevent the death of an innocent. If it's wrong for you not to act in one case, it's wrong in the other case also.
So why aren't you sending more money in foreign aid??
In my experience, if someone disagrees, they object that the situations are actually different: of course they should save the baby from the puddle, but the other case is importantly different somehow, so it's okay for them to keep their money. Then there ensues a back-and-forth about whether the differences between the two situations are really morally important differences.
Once my moral intuitions conflict with those of others, it's easy for doubts about the whole process to creep in:
I have one vague feeling, you have another. How can we really decide who's right? In fact, is there really a right answer, or is it just a matter of opinion?
Most people I talk to are very quick to give up the fight for a real truth-of-the-matter in moral questions. They agree (and even vigorously argue) that morality is totally subjective. It's a mistake to try and find out what's really right and wrong. That would be like trying to find out what's really more delicious: chocolate or vanilla. There is no Truth independent of what people think it is. If I think chocolate is better, that's true for me. If you think vanilla is better, that's true for you. If you talk with me and convince me that vanilla is really better, then that also becomes true for me.
But, of course, most people don't really act in their everyday lives as if morality works like this. They may deny that anything is objectively right or wrong when they're arguing with you, and then try to convince other people to buy "cruelty-free" meat (or go vegetarian). Why? Because it's wrong to inflict suffering on animals, of course. Or they may try to convince their public representatives that it's unfair for them to be required to pay taxes to help the poor; that's money they earned, so they should be able to decide how to spend it. If neither of these examples moves you, think for a minute about what cause really motivates you; I think we all have them.
There's more going on in our championing of a cause than simply wanting the world to be a certain way and trying to convince other people. We try to convince each other, not just to get our way, but because we think our way is the right way, and that other people shouldn't just do what we want, but should agree that we're right. If I think about why other people act against my moral causes, I don't think it's because they have different opinions which they have every right to; I think they're too confused or ignorant to see the truth, or too lazy and self-deluded to admit it!
This is just to point out that we treat moral questions differently than we treat questions of taste (like what the best ice cream is). I can't stand Heath ice cream. My wife loves it. I might think that's strange or funny, but I don't think she's confused or wrong about what's good; she's just got a different idea than I do. If I think water-boarding is an appalling practice and you don't, it's very hard for me to adopt the attitude that you just have a different preference than I do. I think that, for some reason, you cannot see or admit the real truth about the practice.
So, if we don't stop to ask ourselves, but covertly watch ourselves in action (to catch ourselves unaware, in our natural habitat), we don't look like we really believe that our moral intuitions are just our opinions. We believe they're really right and that other people should think so too!
Let's look at what research in moral psychology has to say about this. Several psychologists define moral judgements as judgements that we universally hold others accountable to. This is what distinguishes our thinking something's (morally) wrong from just thinking that it's unwise, imprudent, distasteful, etc. So they, too, recognize this fact about our everyday behavior.
Philosophers take this fact and ask "but are we right that there are moral laws that are really universal the way we seem to treat them?"
Psychologists instead recognize that we naturally seem to universalize certain of our judgements. This is different from thinking hard and deciding that there are certain universal principles that we should hold each other accountable to. Instead, certain judgments just strike us as being universally right or wrong.
Noticing this difference, psychologists ask "why is this a natural part of our psychology?" If it's not the result of a considered decision on each of our parts, the answer must lie deeper in a psychology we all share. One answer they've suggested is that we evolved to have this type of judgment because they are effective at helping us create and maintain stable social groups, which improved our chances of survival.
What makes our moral judgments effective in this capacity is merely the fact that we think they apply universally; it seems that their content--what we actually think is right and wrong--could vary widely. In other words, you can form just as stable a community with one set of universal principles as with another; it's the fact of their being treated as universal that's important for the social effects.
And, indeed, there not only seems to be disagreement among cultures as to what kinds of behaviors are right or wrong, but also disagreement about what kinds of behaviors should be considered through a moral lens at all. For example, academic discussion of morality tends to focus on issues of fairness and harm, but in many cultures around the world there are strong moral judgements associated with eating, one's relationship to tradition or authority, menstruation, sexual practices, and cultural taboos.
What should we make of this disagreement? I am strongly inclined to think that most traditional cultures are simply mistaken in much of what they consider to be moral questions; but am I really right in this, or is it just an opinion? If menstruating women are prevented from entering the workplace in India, can I rightly condemn that practice as morally wrong?