Can we learn about moral decision-making
from the psychological literature on human error – that is, the study of how
and in what ways human beings are prone to mistakes, slips and lapses? In this
blogpost I offer some brief – but I hope enticing – speculations on this
possibility.
At the outset it is worth emphasizing that
there are some real difficulties with linking error psychology with moral
decision-making. One obvious point of dis-analogy between ethics and error is
that one cannot deliberately make mistakes. Not real mistakes. One can fake it,
of course, but the very practice of faking it implies that – from the point of
view of the actor – what is happening is not at all a mistake, but something
deliberately chosen. But it seems to be a part of everyday experience that one
can deliberately choose to do the wrong thing.
However, the most significant dis-analogy
between moral psychology and error psychology is that in most studies of the
psychology of error, there is no question whatever about what counts as an
error. In laboratory studies the test-subjects are asked questions where there
are plainly right and wrong answers – or rational and irrational responses.
Equally, in studies of major accidents, the presence of errors is pretty much
unequivocal – if there is a meltdown at a nuclear reactor, or if the ferry
sinks after crashing – then it is clear that something has gone wrong
somewhere.
In ethical decision-making, on the other
hand, whether a judgement or an action is ‘in error’ – if this is supposed to
mean ‘morally wrongful’ – is often very much in dispute. So someone who judges
that euthanasia is wrong, say, cannot be subject to the same error analysis as someone
who contributes to a nuclear disaster. At least, not without begging some very
serious questions.
The way I aim to proceed is to think about
those cases where the person themselves comes to believe they made a moral
mistake. The rough idea is that a person can behave in a particular way,
perhaps thoughtlessly or perhaps after much consideration, but later decide
that they got it wrong. Maybe this later judgement occurs when they are lying in
bed at night and their conscience starts to bite. Maybe it happens when they
see the fallout of their action and the harm it caused others. Maybe it happens
when someone does the same act back to them, and they suddenly realise what it
looks like from the receiving end. Or maybe their local society and peers react
against what they have done, and the person comes to accept their society’s judgement as the better one.
One reason moral psychology might be able
to learn from error psychology is in the way that error psychology draws on
different modes of action and decision-making and welds them all into one unified
process. Interestingly, the different modes error psychology uses parallel the distinctions
made in moral psychology between virtue, rule-following (deontology) and
consequentialism (utilitarianism).
GEMS
I’ll use here the account relayed in James
Reason’s excellent 1990 book, Human Error.
Drawing on almost a century of psychological study on the subject, Reason puts
forward what he calls the ‘Generic error-modelling system’ or GEMS. GEMS is
divided into three modes of human action, in which different sorts of errors
can arise. (What follows is my understanding of GEMS, perhaps infected by some
of my own thoughts – keep in mind I am surveying a theory that is rather
outside my realm of expertise, so I make no great claims to getting it exactly
right.)
Skill-based action
The first mode of action is ‘skill-based’.
This is the ordinary way human beings spend most of their time operating. It is
largely run below the level of conscious thought, on ingrained and habitual processes.
We decide to make a coffee, or drive to the store, but we do not make executive
decisions about each and every one of the little actions we perform in doing
these tasks. Rather than micro-managing every tiny action, we allow our
unconscious skills to take over and do the job. This mode of action draws
heavily on psychological ‘schemas’ – these are roughly speaking processes of
thought or models of action that we apply to (what we take to be) a
stereotypical situation in order to navigate it appropriately. The context
triggers the schema in our minds, or we deliberately invoke a schema to manage
some task, and then our conscious minds sit back (daydream, plan something
else, think about football, etc) as we proceed through the schema on automatic
pilot. Schemas are created by prior practice and habituation, and the more
expert we become on a particular area, the more of it we can do without
thinking about every part. To give an example: when a person is first trained
in martial arts, they need to consciously keep in mind a fair few things just to throw a
single punch correctly. After a while, getting the punch right is entirely
automatic, and the person moves to concentrate on combinations, then katas, and
so on. More and more of the actions become rapid and instinctual, leaving the
conscious mind to focus on more sophisticated things – gaps in an opponent’s
defences, their errors of footwork, and so on.
Errors usually occur at this skill-based level
when (a) the situation is not a stereotypical one, and faithfully following
the schema does not create the desired result, or (b) we need to depart from
the schema at some point (‘make sure you turn off the highway to the library,
and don’t continue driving to work like a normal day’) but fail to do so.
Rule-based thinking
The second mode of action is ‘rule-based’.
This arises when the schema and the automatic pilot have come unstuck. When
operating at the ‘skill-based’ level described above we were unaware of any
major problem; the context was ‘business as usual’. Rule-based action emerges
when something has come unglued and a response is required to rectify a
situation or deflect a looming problem. In such cases, GEMS holds, we do not
immediately proceed to reason from first principles. Rather, we employ very
basic rules that have served us well in the past; mainly ‘if-then’ rules like:
‘If the car doesn’t start, check the battery terminals are on tight’; ‘if the
computer isn’t working, try turning it off and on again’.
Mistakes occur at this level if we apply a
bad rule (one that we erroneously think is a good rule) or we apply a good
rule, but not in its proper context. We can also forget which rules we have
already applied, and so replicate or omit actions as we work through the
available strategies for resolving the issue.
Knowledge-based thinking
The third and final mode of action is
‘knowledge-based’. Knowledge-based thinking requires returning to
first principles and working from the ground up to find a solution. At this level
an actor might have to try and calculate the rational response to the risks and
rewards the situation presents, and come up with solutions ‘outside the box’. It
is at this level that a person’s reasoning begins to parallel ‘rational’ thinking
in the decision-theoretic or economic sense: it is in this mode that a person really tries
to ‘maximise utility’ (or wealth).
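To make this decision-theoretic idea concrete (this is my gloss, not part of Reason’s presentation of GEMS): if an agent can choose among actions $a$, believes each possible outcome $s$ will follow a given action with probability $P(s \mid a)$, and values outcomes with a utility function $U(s)$, then the ‘rational’ choice in this technical sense is the action with the highest expected utility:

$$a^{*} = \arg\max_{a} \sum_{s} P(s \mid a)\, U(s).$$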
GEMS says that human beings do not like
operating at the knowledge-based level; it takes not only concentration but
also real mental effort. For the most part, effort is not enjoyable, and we
only move to knowledge-based decision-making when we have reluctantly accepted
that rule-based action has no more to offer us – we have tried every rule which
might be workable in the situation, and have failed to resolve it
appropriately. We are dragged kicking and screaming to the point where we have
to think it out for ourselves.
James Reason argues in his presentation of
GEMS that human beings are not particularly good at this type of knowledge-based
thinking. Now my first thought on reading this was: ‘not good compared to who?’
It’s not like monkeys or dolphins excel at locating Nash equilibria in
game-theoretic situations. Compared to every other animal on this planet, human beings
are nothing short of brilliant at knowledge-based thinking. But Reason was not
comparing humans to other animals, but rather knowledge-based thinking to
rule-based thinking. He argues that, comparatively, human beings are far more
likely to get it right when operating on the basis of rules. Once the rules
render up no viable solutions, we are in trouble. We can think the issue
through for ourselves from the ground up, but we are likely to make real errors
in doing so.
The opportunity for error at this stage,
therefore, is widespread. Human beings can read the situation wrongly, be
mistaken about the causal mechanisms at work, make poor predictions about
likely consequences of actions, and be unaware of side-effects and unwanted
ramifications. This isn’t to say knowledge-based decision-making is impossible,
of course, just that the scope for unnoticed errors is very large.
A full-blown theory of human action
Ultimately, in aiming to give a
comprehensive theory of how human beings make errors, this realm of psychology
has developed a full-blown account of human action in general. Most of the time
we cruise through life operating on the basis of schemas and skills. When a
problem arises, we reach for a toolbox of rules we carry around – rules that
have worked in previous situations. We find the rule that looks most applicable
to the current context, and act on its basis. If it fails to resolve the
situation, we turn to another likely-looking rule. Only after we despair of all
our handy rules-of-thumb resolving the situation are we forced to do the hard thinking
ourselves, and engage in knowledge-based decision-making – which is effortful
and fraught with risk.
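For readers who find it easier to see a process laid out explicitly, here is a minimal sketch of that escalation in Python. It is purely illustrative: the function and method names (schema.matches, rule.apply and so on) are my own invention, not anything from Reason’s book, and the point is only to show the order in which the three modes are tried.

# Illustrative sketch only: a toy rendering of the GEMS escalation described
# above. The structure and names here are my own, not Reason's.

def act(situation, schemas, rules, reason_from_first_principles):
    # 1. Skill-based: if the situation looks stereotypical, run the matching
    #    schema on 'automatic pilot', with no further deliberation.
    for schema in schemas:
        if schema.matches(situation):
            return schema.run(situation)

    # 2. Rule-based: something has come unstuck, so work through familiar
    #    if-then rules, trying the most applicable-looking ones first.
    for rule in sorted(rules, key=lambda r: r.applicability(situation), reverse=True):
        outcome = rule.apply(situation)
        if outcome.resolved:
            return outcome

    # 3. Knowledge-based: only when every handy rule has failed do we
    #    reluctantly reason the problem out from first principles.
    return reason_from_first_principles(situation)

On the moral reading suggested below, the schemas play the role of the virtues, the if-then rules the role of deontological rules, and the first-principles fallback the role of utilitarian calculation.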
(Now one can have worries about this
picture. In particular, psychologists of error seem to me to work from a highly
selective sample of contexts. Their interest focuses on cases where errors are
easily recognizable, such as in artificial laboratory situations and slips of
tongue, or where the errors cry out for attention, such as in piloting mishaps
and nuclear meltdowns.)
What’s interesting about GEMS, from a moral
theory perspective, is how it aligns with the three main theories of moral
action: virtue theory, duty-based theories and consequentialist theories. In what
follows, I’m going to insert these three theories of moral reasoning into the
three categories of human action put forward by the GEMS process, and see what
the result looks like.
Virtue and skill-based action
Virtue theory has deep parallels with the
schemas of skill-based reasoning. Virtues are emotional dispositions like
courage and truthfulness. When operating on the basis of the virtues, one isn’t
focusing on particular rules, or on getting the best consequences. Instead, the
point is to have the correct emotional response to each situation. These steadfast
emotional dispositions – the virtues – will then guide the appropriate
behaviour.
Now, to be sure, it is mistaken to view Aristotle (the first and greatest virtue-theorist) or contemporary virtue-ethicists
as basing all moral behaviour on habit and habituation, especially if this is
taken to imply not actively engaging one’s mind (Aristotle’s over-arching
virtue was practical wisdom). But the formation of appropriate habits, and learning
through practice and experience to observe and respond to the appropriate
features of a particular situation is a hallmark of this way of thinking about
morality. (Indeed, it is reflected in the roots of the words themselves: our English word ‘ethics’ comes from the Greek term meaning ‘custom’ or ‘habit’ – and ‘morality’
comes from the Latin word for the same thing.)
Paralleling the psychology of error, we
might say that the primary and most usual way of being moral is to be correctly
habituated to have the right emotions and on their basis to do certain actions
in particular contexts. We need to be exposed to those contexts, and practice
doing the virtuous thing in that type of situation until we develop a schema
for it – until the proper response is so engrained as to become second nature
to us. Operating in this mode, we don’t consciously think about the rules at
all, much less have temptations to breach them. Most good-hearted people don’t
even think about stealing from their friends, for instance. It isn’t that an
opportunity for theft crosses their mind, and then they bring to bear the rule
on not stealing. Rather, they don’t even notice the opportunity at all.
Sometimes, though, problems arise. Even if
we have been habituated and socialized to respond in a particular way, we might
find a case where our emotionally fitting response doesn’t seem to provide us
with what we think (perhaps in retrospect) are good answers. This may be
because the habit was a wrongful one. Schemas are built around stereotypes –
and it is easy to acquire views and responses based on stereotypes that wind up
harming others, or having bad consequences. Equally, if we are not paying
attention to the situation, we may be on emotional autopilot, and not pay heed
to the ways in which we need to modify our instinctual or habitual response in
light of the differences between this situation and a more stereotypical
one.
Deontology and Rule-based thinking
At the points where our instinctive emotional responses lead us awry (if we follow the analogy
to the GEMS psychology of error), we will look to rules that have served us well in
the past. In ethics these rule-based systems are called deontological theories. Deontology says that the point is not to
have the right emotions, or achieve the best consequences, but to follow the
proper rule in the circumstances. So when we are jerked out of habitual
response by some moral challenge (unexpected harm to someone else, etc), the first thing
we do is to scout around for rules to resolve the situation – rules that have
served us well in the past. These might be very general rules: ‘What
would happen if everyone did what I am thinking of doing?’ or ‘What would
everyone think of me if they knew I was doing this?’
Or the rules we appeal to might be quite
specific. In the GEMS system, we have a variety of options for rules to select,
and we try to gauge the most appropriate one for the situation we are in. In
ordinary moral thought, this process happens all the time. In fact,
there is a technical word for it (and a long history behind it): casuistry. When one reasons
casuistically, one analogizes to other, closely related situations, and uses
the rule from that situation. For instance, if we are unsure if first-term
abortion is morally acceptable, we might first try analogizing to the murder of a
child, which has a clear rule of ‘thou shalt not kill’. But as we think about
it, we might decide that the dis-analogies here are very strong, and perhaps a
closer analogy is one of contraception, with which (let us suppose) we accept a
rule that contraception is legitimate. Or we might analogize to self-defence,
especially in cases where the mother’s life is in danger. In attending to the
relevant features of the situation, we select what seems to be the most
appropriate rule to use. Sometimes we draw on multiple rules at once, developing a highly sophisticated and qualified rule for the specific situation.
Utilitarianism and knowledge-based thinking
But what happens when this doesn’t seem to
resolve the issue, or we feel torn between two very different rules (as might
have occurred in the above abortion scenario)? At this point the third mode,
knowledge-based reasoning, comes online. We must return all the way to
first principles. In GEMS one way this can occur is through means-end
rationality, where we take into account how much we want each outcome, and what
the chances of each outcome are – given a particular action of ours. We then
choose the action that has the best mix of likelihood and good consequences; we
‘maximise expected utility’, as the point is put technically.
And of course there is a moral theory that
requires exactly this of us, except that rather than maximizing our own
personal happiness, we are directed to maximise the happiness of everybody,
summed together. This is the ethical theory of utilitarianism, which requires
(roughly speaking) that we create the greatest good for the greatest number of
people (or sentient creatures more widely). It is here that we have really hard thinking to
do about the likely costs and benefits to others of our action. We weigh them
up and then act accordingly.
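In the same notation used earlier for expected utility (again my own gloss rather than a formula from the utilitarian tradition), where the self-interested agent maximises expected utility measured by their own utility function, the utilitarian is asked to choose

$$a^{*} = \arg\max_{a} \sum_{i} \sum_{s} P(s \mid a)\, U_{i}(s),$$

where $U_{i}$ is the utility (happiness, well-being) of each affected individual $i$, so that everyone’s interests are summed and weighted alike.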
Large-scale pluralist theory of moral action
Is this a plausible over-arching model of
what moral thinking looks like? I think it has some merits. It is true that
most of what we do, we do without a lot of conscious thought, on the basis of
engrained habits and customs. We don’t go through life forever applying rules
or calculating consequences; we hope that our habits of action, thought and
emotion will generally push us in the right direction more often than not. And
this might be particularly true when we are interacting with friends and lovers
and family, where following ‘rules’ or coolly calculating risks and rewards
might seem quite inapt.
But sometimes we are confronted with
ethical ‘issues’ or ‘challenges’. We encounter a situation where habit is no
longer an appropriate guide. It sounds right to me that in these situations we
do cast about for rules-of-thumb to resolve the problem. We think about what
worked before in similar situations, and go with that.
In some cases, though, there can seem no
‘right’ rule; no rule that will work fine if everyone does it, or no rule that
does not clash with what looks to be an equally fitting rule. Such cases will often
arise in novel situations, rather than everyday encounters. And if they are
worth thinking about, then it may be that the stakes in them are rather high –
they might be the decisions of leaders, generals, or diplomats. In such cases,
really weighing up the possible pros and cons – and trying to quantify the
merits and demerits – of each approach seems appropriate.
Note also that some of the major objections
to each theory might be managed by this pluralist, sequential account, where we
proceed from virtue, to duty, to utilitarianism. For instance, utilitarianism
can be philosophically disputed because the pursuit of sum-total happiness may
lead us to sacrifice the rules of justice – or even force us to give up on our
deepest personal commitments. But on the approach here this wouldn’t happen. On
everyday matters of personal commitment, we operate on the basis of our
emotional dispositions and mental schemas and habits. When there is a clear
rule of justice at stake, we accept its rule-based authority. But in cases where neither
works – and only in those cases – we
look to the ultimate consequences of our actions as a guide to behaviour. And
even at this level we still have constraints based on our emotional habits and
rules-of-thumb. The utilitarian decision-making operates within the
psychological space carved out by these two prior modes of judgement.
Of course, as with any such speculations, I am
sure I have raised more questions than I have answered. But it is striking the way
that the three modes of decision-making that occur in the psychology of error
map onto the three ways of theorizing about ethics, and that the process
developed whereby a decision-maker moves from one mode to another does have
prima facie plausibility as a description of the way we go about making moral decisions.
2 comments:
Hugh, you say that your model might address some objections to the major ethical theories but you only go into some more detail for utilitarianism.
Were you thinking of using a utilitarian framework to derive the virtues and rules to be followed and then falling back to directly considering the exceptional cases in the utilitarian framework, or only that you have your virtues and rules deriving from some other framework and using the utilitarian framework when you have no other ideas?
I would have thought the major objections to both virtue ethics and deontology were around how the virtues or rules were defined in the first place (assuming no god or similar dodge), but then I am not a philosopher.
Hi Mike,
That's a good question.
You're absolutely right that some decent account would have to be given defining the virtues and the rules. My thought wasn't to use utilitarianism as an overarching theory defining and justifying these, though I agree that would be theoretically elegant and is a real option here. However, I'm pretty happy to let the virtues, rules and utilitarianism operate in their own way, each ceding to the next in line when they confront too-hard cases.
From a philosopher's perspective, I think it's fair to say the problem isn't with the justification for virtue or deontology - the philosophical justifications for these developed by Aristotle and Kant respectively are about as good (or, if you rather, about as bad!) as anything put forward by utilitarians like Bentham or Mill. I suspect the glaring philosophical problem the above account blunders into, rather, is using all three justifications at once. For if Mill was right, then Kant and Aristotle were definitely wrong, and vice versa.
Still, an ethical theory does get some level of justification from its 'fit' with people's moral intuitions. If it did turn out that people actually reasoned in the fashion above - moving from virtues to rules to utility depending on the difficulties of the context (just as they do with respect to other domains of action, according to GEMS) - then that would still be an interesting result.