Monday, December 24, 2007

Choice Blindness

Some colleagues of mine in Lund made quite a splash a couple of years ago with a fun experiment on "choice blindness". Petter went to Tokyo University for a post-doc, and now Asahi Shinbun has a good write-up of the experiment in their weekend section. It is available online (for now) here.

The premise is really simple. You show a volunteer two pictures of women. The volunteer looks at the pictures and points at the one they like better. Both pictures are put face-down on the table, and the chosen picture is handed to the volunteer. They look at it and explain why they picked that picture. In other words, while giving their answer they see only the picture they chose.

But there's a trick to it: sometimes the researcher does not hand over the chosen picture but the other, less liked, one. The volunteer is actually looking at the picture they didn't pick, and is asked why they picked it.

So do the subjects get confused, perhaps angry, wondering why the picture changed? Nope. Usually people don't realize that anything has happened. Most subjects happily describe why they picked that picture - the one they didn't pick, remember - with no hesitation. It didn't matter whether the two faces were fairly similar or quite dissimilar, and changing the time allotted to make the choice only affected the result a little.

When the researchers look at how people explain their choice, there is little difference between switched pictures and unswitched ones. People are just as confident about their reasons, and describe them in just as much detail. When asked outright whether they would have noticed if the picture had been changed, most people (84%) said that they most certainly would - even as they were sitting with a changed picture in their hand. Only once a participant had detected a change did they become more vigilant and harder to mislead.

Change Blindness

This experiment makes use of a phenomenon called change blindness, an inability to detect visual change in some situations. Basically, if a change is somehow masked so that we don't see the actual change take place, we can miss it completely even if the change is substantial. One way to hide the change is to have it occur very slowly, so we don't notice a difference from one moment to the next; another is to distract the viewer at the critical moment (a favourite tactic of stage magicians). A third is to "mask" the change - instead of switching directly from one image to the changed one, we blank the whole view for a moment so that everything changes at once.

Briefly, change blindness works because we do not keep every single detail out there in memory. We have neither the visual nor the memory capacity to do so - and besides, it would normally be quite useless, since we can just take a look at any detail whenever we want. Instead we seem to create a rough sketch of the situation, and then depend on detecting and taking note of any changes as they happen. When we miss a change, we never update our sketch. There's an entertaining example from British TV on Youtube here, and of course you can search the site for lots of other examples.

I Am (not) The Decider

OK, so we're lousy at detecting change. But what about that choice blindness we started with? That wasn't just a failure to detect change - indeed, the participants were specifically asked about the very details that they had noticed in the original picture, and that changed in the subsequent one. We don't have the whole answer - this goes deep into questions about intentionality and (gulp) consciousness - but it lends weight to the view that our deciding bits and our self-aware bits are somewhat independent, with our conscious selves mostly along for the ride.

The "real" us - all our perception, context, and evaluation systems - chose a picture, the picture got switched, and then our consciousness makes up a story about why we chose that picture. A story that is patently, obviously untrue, but sincerely believed by the participant. And since there is little difference in the explanations when the picture is changed and when it is not, the perhaps uncomfortable conclusion is that we always lie to ourselves. We have no idea about why we do what we do - we have no direct access to the real, lower-level systems; instead we observe our own behaviour, then make up some post-hoc explanation for it. This is not the only experiment that points in this direction; there are data from many different sources, including split-brain patient experiments, that confirms this view.

This view has a number of interesting consequences. With Christmas coming up (it is a couples' holiday here), let's consider what it can tell us about scary movies and dating. Everybody knows that going to a scary movie with your date is often a good idea, and this can explain why. We sit there and see scary events unfolding. Our minds, which aren't all that sophisticated, believe the scenes to be real and react with fear and excitement - the pulse races, blood pressure changes, our hair might even stand on end and we might break out in a cold sweat. Our conscious minds, however, know this is just a movie and not for real. There is really nothing to be upset about. But we are upset or excited about something, and since it can't be the movie it must be something else. Like that really very attractive person sitting right next to us, perhaps - we seem to like them a lot better than we thought we did.

2 comments:

  1. Yanne -

    I do not see this as an example of lying to oneself. It seems more of a rationalized conservation of energy--an individual minimizing the number of value tags he or she assigns to an action.

    For me, an interesting expansion of the test would be to tease out the correlation between persons who are able to easily describe what they like about a certain picture and that individual's ability to avoid being bewildered by choice--as indicated perhaps by the amount of time he or she takes to select a meal from a long list of choices. My guess is one would find a direct correlation between an inability to tell the picture had been switched (because one had internalized the reasons for one's choice and then moved on) and the brevity of an individual's consideration of a menu.

  2. Well, it's not lying - it never is. It is simply a case of one subsystem - our "stream of thought" - not having direct access to the inner workings of other subsystems, and instead inferring them when needed from incomplete, brief data. And normally it doesn't need to, of course; the "stream of thought" system isn't in charge of the moment-to-moment decision making but (speculatively) is engaged in long-term planning and prediction. Both these systems are "us", of course. We (and all animals) are a somewhat ad-hoc conglomeration of various more or less idiosyncratic subsystems that all try to do their part. It usually works fine, but sometimes - especially when we deliberately try to tease this structure apart - they can get fooled and work at cross-purposes.

    If you go to the linked paper you'll find a link to "supporting online materials" on the left; there you have more data and a much more detailed analysis. Note first that the vast majority of subjects never detected even a single switched picture. I can also say that they could find no statistical difference in the type, length or specificity of utterances between those who eventually realized an image had been switched and those who didn't (I asked them about it at one point). It does seem somewhat "random", for lack of a better word, whether you discover it or not. Once people have discovered a switch, they're more likely to catch the next one - but the chances of doing so are _still_ not great, even if they've been primed to the possibility.

    In short, no, I doubt you would find the correlation you're asking for. It might be worth looking at, but don't be surprised if it yields no positive result.

