Wednesday, May 21, 2014

Of argument and ethics

How are argument and ethics linked? I’ve already touched on this question previously in the blog, exploring the relations between moral philosophy and actual ethical conduct. It’s clearly a question a lot of people like to think about: more people read that post than any other on this website!

Here, I want to consider a different way philosophy and ethics might intertwine, namely in the common ground between norms of argument and norms of ethics. The thought is this: Arguing well, in the philosophical sense, involves taking seriously what people say. Taking seriously what people say is one way of treating them with respect. As such, teaching people to argue well, and to do so naturally and instinctively, helps them act morally.

I first started to really consider this issue when I found myself mired in the comments section of an online website. In my case, it was the academic-journalism website The Conversation, but I think what I say will resonate with anyone who has waded into the to-and-fro of dialogue on just about any online discussion board or comments section, or even on Facebook or Twitter, at least when the debate touches on moral and political views. If anything, we might expect discussion on websites like The Conversation to be of a relatively high standard. Not only are the articles there written by academics, and so usually well-informed and bolstered with evidence, but moderators patrol the comments section, and (to stymie anonymous trolling) everyone must use their real name.

Yet even with such measures in place, the standard of argument leaves much to be desired. I’m not speaking here of ‘trolls’, who just leave nasty comments to upset their victims, but rather about many ordinary people who (it seems to me) genuinely want to contribute to a discussion but succeed only in destroying it.

In my experience, once responders ascertain that a contribution (either the original article, or an earlier comment) maintains a position that differs from their own moral or political view, they will typically engage in one or more of the following four modes of response (let’s call the contribution they are responding to ‘X’). Responders will tend to:
1. Interpret X in the most extreme and unqualified way possible.
2. Insist that implementing X would inevitably wind up creating a morally catastrophic situation, and that the author of X either endorses this outcome or recklessly fails to acknowledge its inevitability.
3. Insist that the assertion of X must be driven by the most extreme and unpalatable moral principle imaginable.
4. Insist that someone could only hold that moral principle if they were utterly evil, irrational, ignorant or ideologically duped.

Sometimes, opening with innocent-seeming phrases like ‘So basically you’re saying that…’, a single response can manage to work its way through all four modes of response. Such tactics, moreover, are not limited to one side or another of the political divide. Progressives and conservatives, left and right, employ them lavishly. The prevalence of these modes of response helps explain the oft-invoked Godwin’s Law: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.” Response 1 can invoke the comparison by implying that X resembles some position or policy employed by the Nazis. Response 2 says implementing X would lead to Nazism; Response 3 says X’s underlying principle accords with Nazism, while Response 4 says that the reasons for accepting X’s underlying principle would be endorsed by Nazis.

Some readers will have immediately recognized various sorts of informal logic fallacies (slippery slope, ad hominem) in the modes of response listed above, but I think the root cause of them all is the ‘straw-person’ fallacy; namely, caricaturing an opponent’s position by interpreting what they have said, and their reasons for saying it, in the most uncharitable way possible. The straw-person fallacy works by entirely avoiding the actual argument that has been presented, and in its place erecting a quite new argument (the ‘straw-person’ or ‘straw-man’) that is much easier to defeat. Constructing a straw-person represents an improper maneuver according to the standards of philosophical argument because it is a non sequitur—it ‘does not follow’ from what the opponent has said. Rather than responding to the argument at hand, the straw-man comment responds to some other argument entirely. On a logical level, the straw-person-response at best proves irrelevant to the issue at hand. More usually however, it serves to distract attention away from the actual position someone has proposed, and makes it appear that the defeat of the caricatured argument represents a defeat of the proposed position itself.

All this establishes, I hope, that these four modes of response fail logically and philosophically. But are they also a moral failing? And even if they are, does this sort of moral failure really matter? Is it worth worrying about?

I think the prevalence of such responses does matter: they fail to respect others with opposing views and they contribute to an unhealthy political environment.

First, these responses inflict an immediate harm. The original author who has been dealt with in this way normally either flees the discussion or retaliates angrily. Even if they respond constructively, trying to clarify their position, a second wave of the same straw-person-ing responses typically drives them into frustration. The four responses demean their victim, denying the possibility that the author is a reasonable and reflective person who could make a contribution to the dialogue. Instead, the author retreats, wounded and insulted.

Such responses also (and this is the second worry) undermine the potential of these domains to play a genuine role in the participatory side of democracy—in people being exposed to and engaging with other citizens who hold opposing views. As well as sundering this potentially promising mode of democratic participation, straw-person responses can impact upon people’s overall judgments about political legitimacy. In a democracy, legitimacy hinges on accepting that we have reason to comply with democratically chosen policies and laws (except in extreme cases), even if we morally oppose them, voted against them and plan to vote against them in the future. The more we view our fellow citizens as reasonable people holding morally defensible views, the more we will apprehend democratic processes and legislation as legitimate, even if we remain personally opposed to any given result. However, the more we conceive all citizens who oppose us as rapacious ideologues, immune to constructive discussion, the less likely we are to endorse a democratic process where they hold a majority. And, naturally, by foreclosing all their attempts to engage in rational discussion with us, we enhance the possibility that they will see us in precisely this way: as ideologically-driven dogmatists incapable of rational thought.

(Of course, I can’t deny that sometimes one’s political opponents will really prove to be morally beyond the pale. It may turn out that their reasons for advancing some policy actually are intolerant, racist or totalitarian. But this judgment can only happen at the end of the discussion, not the beginning.)

If rampant straw-person-ing yields these morally worrisome impacts, then why do so many responders engage in it? And when the responders do it, why are their tactics so often endorsed by those who share their political allegiance? Do the responders think they actually have a chance of changing the original author’s mind by using these tactics? Surely not. If one wants to persuade someone of the errors of their view, then the necessary first step must be to engage with the author’s actual views, and not some other views. Scorning another person is probably the worst imaginable manner of changing that person's mind about something.

Indeed, I doubt these responses even have much to be said for them from a strategic viewpoint—that is, from the position of working out what will best promote the power and importance of one’s own faction’s agenda. In democracies, the best way of getting contested policy enacted is almost always to convince the center to change their mind—to bring over onto your side those precious swing voters in the middle. Taking seriously the views of those who oppose you is the crucial first step towards teasing out whether they possess extreme views on the topic, or hold a perspective not so different from your own. In contrast, using the above responses to treat moderate and centrist voters as if they hold extreme views simply pushes them from the center to the opposing extreme—exactly the last thing you would want to do if you really want to see some positive change made in the world.

I confess I do not know why the practice is so rife in online commentary. I could darkly speculate that responders draw on these tactics unconsciously in order to cement a pleasing worldview where their opponents are obviously wrong and immoral. This makes for a neat world where they can wallow in self-righteous outrage at anyone who opposes them. But this is mere speculation on my part (and probably involves my own collapse into Response 4 above).

One other possibility, though, is that responders comment in these ways because they have never learned any other way. After all, we are not born knowing the norms of philosophical argument. It takes effort, patience, concentration and empathy to understand what a person is really saying, as distinct from what we presume they are likely to say. Such virtues can be difficult to muster when a person opposes our views, and the instinctual reaction is to defend ourselves.

If that is right, it underscores why teaching philosophy (especially in ‘critical reasoning’ and ‘informal logic’ courses and elements of courses) possesses real ethical value. In teaching the norms of argument in schools and universities, we provide learners with tools and instincts that allow them to do something that proves notoriously difficult to do: to genuinely listen to what people from other perspectives say, and to understand their reasons for saying it. True, giving people the tools to act rightly does not guarantee they will be motivated to do so, but it does at least open the possibility of their doing so. And often being empowered to act in a particular way, to live up to a particular standard (in this case of philosophical excellence), actually does count as a reason for behaving in that way.

And as Gibbs noted in his 2010 Moral Development and Reality, the capacity and practice of trying to see things from another person’s perspective, especially in the course of argument, yields impressive results in terms of moral development. Philosophy itself, done properly, makes us better people.

[This blogpost was originally published as an article in Australian Ethics, May 2014.]


Michael Cowley said...

Reading the likes of Daniel Kahneman, would you not say it's likely that the "reason" people treat their opponent's arguments unfairly is that they are using the same shortcuts in reasoning as they probably did when assembling their own thoughts?

I don't know where ideas like this leave moral philosophy. If there are biological weaknesses in thought (I don't claim that's exactly what Kahneman says, I'm just hypothesizing) that affect all of us to some degree, can you still hold someone fully morally responsible for actions that are based on faulty reasons?

Or am I just rehashing yet another free will argument that someone demolished thousands of years ago?

Hugh Breakey said...

Hi Michael,
That's a very interesting thought. Have I grasped it right if I put it like this:
People assemble their own thoughts poorly. Then when they consider others’ views, they use the same shoddy thinking—but in this case it *looks* like a moral failing because it fails to respect the views of that other person. But really it isn’t a moral failing, as they are treating the other person’s thought processes no worse than they treat their own. They respond to both thought processes clumsily and unreflectively—and can hardly do any better. They can’t properly assess the cogency of others’ views—because they cannot even properly assess the cogency of their own.
Perhaps we could go further, and say that they are psychologically compelled to commend their own ideas. They don’t make a morally-loaded decision to privilege their own views over those of others. Rather, (blinded to their own shortcomings) they inevitably endorse their own standpoint—that is what it means for them to have a belief, after all. Then, when confronted with another person’s views, their limited capacities force them to see only a caricature of it. They commit no moral failing—just a cognitive shortfall that applies differently to their own beliefs versus others’ beliefs.
It’s possible this is true, but for my own part I think it is overly pessimistic. Having been both a student and teacher of philosophical argument and critical thinking, in my experience people really can improve their capacity for clear thinking and argument. And in a classroom situation at least, when they are being ranked on their ability to do so and placed in a social context where it is expected, students really can develop this capacity.
So I’m still a little optimistic.

Michael Cowley said...

Hi Hugh

I think you have represented my thoughts fairly up to a point, although I agree with you that people can improve their capacity for clear thinking.

I have the impression from my reading lately that this "improvement" might be better thought of as having an awareness of the weaknesses and cognitive biases inherent in both the biological brain and the developed personality (to the extent that they are separate concepts), and developing coping mechanisms and strategies to minimise the damage. Having had no formal education in critical thinking other than what little we did on the subject in high school (not much, in the 80s at least), I have no idea to what extent educators and philosophers explicitly use this way of thinking.

Anyway, I don't think I'd want to go as far as saying that people "commit no moral failing" when they fail to look beyond a caricature of others' beliefs. I think I want to say the moral failing is perhaps a step further removed: there is perhaps a moral duty to examine your habits of mind and behaviour and improve them as best you can (isn't that Aristotle?). But to the extent that someone has not had the opportunity or motivation to do so (due to upbringing, circumstances, innate "intelligence" whatever that is, etc.), then perhaps our judgement of those people should be less harsh.