Thinking



Session Overview

Photo of Rodin's "The Thinker," a statue of a seated man, chin resting on hand and deep in thought.

How do we make decisions about the situations we experience every day? In this session, we'll use brain teasers and word problems to highlight some of the mechanisms that drive human thinking — e.g. functional fixedness, heuristics, and framing. The lecture also touches briefly on the role of the brain's frontal lobes in problem solving and emotions.

Keywords: thinking, functional fixedness, heuristics, anchoring, adjustment, framing, frontal lobes, risk taking, psychopathology

Image courtesy of marttj on Flickr.

Session Activities

Readings

Read the following before watching the lecture video.

  • [Sacks] Chapter 13, "Yes, Father-Sister" (pp. 116-119)
  • One of the following textbook selections:

Lecture Videos

View Full Video

View by Chapter

Video Resources

  • Clips removed from lecture video due to copyright restrictions:
    • Two segments, "Math Problems" and "Square Feet?", from Candid Camera Classics for Introductory Psychology. DVD/VHS. Candid Camera, Inc., and McGraw Hill. 1993. [Find in a library via WorldCat]
  • Lecture Slides (PDF - 1.3MB)

Discussion

Some discussion content on this topic is provided within the next session on Intelligence.

Check Yourself

Short Essay Questions

1) Two approaches to problem solving are algorithms and heuristics. How do these two approaches differ? Give two real-life examples of problems and, for each, explain how it might be approached using an algorithm and using a heuristic. Given the differences between algorithms and heuristics, under what circumstances might it be preferable to use one or the other?

Sample Answer

Algorithms are "sets of steps that, if followed methodically, will guarantee the correct solution to a problem", whereas heuristics are "rule-of-thumb strategies that do not guarantee the correct solution to a problem but offer likely shortcuts to it." The principal difference is that an algorithm guarantees a solution to the problem, whereas a heuristic may sometimes fail.

For example, if you want to figure out which student in your class is the oldest, you might go around and ask everyone their birth date (an algorithm). Alternatively, you could look to see who is the tallest, who acts the most mature, who has the most gray hair, etc., as these all tend to be associated with age (a heuristic).

As another example, if you want to figure out how many jellybeans there are in a jar, you can arrive at exactly the right number by counting every single jellybean (an algorithm). However, that is a long and tedious process. A faster way would be to calculate the volume of the jar (pi times the squared diameter, times height, divided by 4) and divide this by the volume of a single jellybean (a heuristic). You might be surprised to see that a heuristic could be more "mathematical" than the algorithm, but notice that this heuristic is still inexact and will only approximate the solution.
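To make the contrast concrete, here is a minimal Python sketch of the two approaches. The function names, jar dimensions, and bean volume are hypothetical, and the heuristic inherits the inexactness described above: among other things, it ignores the air gaps between beans.

import math

def count_exactly(jellybeans):
    # Algorithm: count every single jellybean.
    # Tedious, but guaranteed to give exactly the right answer.
    total = 0
    for _ in jellybeans:
        total += 1
    return total

def estimate_by_volume(diameter_cm, height_cm, bean_volume_cm3):
    # Heuristic: volume of a cylindrical jar (pi * d**2 * h / 4)
    # divided by the volume of one bean. Fast, but only approximate.
    jar_volume_cm3 = math.pi * diameter_cm ** 2 * height_cm / 4
    return round(jar_volume_cm3 / bean_volume_cm3)

# For a hypothetical jar 10 cm wide and 20 cm tall, with ~1.5 cm^3 beans:
# estimate_by_volume(10, 20, 1.5)  ->  1047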

Because they are exact, algorithms are preferable when you need to arrive at exactly the right answer every time. Heuristics, which tend to be faster and less effortful, are appropriate when a solution can be just "good enough", when it needs to be achieved faster than the appropriate algorithm will allow, or when an algorithmic solution is unknown.


2) There are many examples of how problem solving can go awry. Some involve logical errors; others involve heuristics that bias you toward the wrong conclusion. Give four examples of different logical errors or troublesome heuristics, and explain how each leads to the wrong conclusion.

Sample Answer

Affirming the consequent: "When you assume a specific cause is present because a particular result has occurred." For example, you come outside and notice the sidewalk is all wet. "It must have rained," you think, "because the sidewalk is wet when it rains." However, the sidewalk could also be wet because the sprinklers came on, or because someone was washing their car, etc. Just knowing the sidewalk is wet is not enough to be sure that it rained.

Confirmation bias: The bias to seek information that confirms a rule, rather than information that would disprove it. This is the logical error that scientists must avoid when testing their hypotheses! For example, let's imagine a scientist who has hypothesized that "Only people with the O+ blood type can catch North American Speckled Fever*." If this scientist only examines people with the O+ blood type to see if they have the fever, he is demonstrating confirmation bias. To successfully test the hypothesis (that is, to look for evidence that could disprove it), he must also examine people with other blood types to see whether they have caught the fever. If someone with the A- blood type catches the fever, then his hypothesis is false; but studying only people with O+ blood can never disprove the hypothesis.

*(a disease just as imaginary as the rest of the example)
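The logic of falsification can be made concrete in a short Python sketch (the patient records and function name are hypothetical): the hypothesis is only ever disproved by a counterexample, which is exactly what a confirmation-biased search of O+ patients alone can never produce.

def hypothesis_survives(patients):
    # Hypothesis: only people with O+ blood can catch the fever.
    # A proper test searches for a counterexample: someone who has
    # the fever but is NOT O+. Examining only O+ patients
    # (confirmation bias) can never turn up such a case.
    for blood_type, has_fever in patients:
        if has_fever and blood_type != "O+":
            return False  # disproved by a single counterexample
    return True  # not disproved (which is not the same as proved)

# e.g. hypothesis_survives([("O+", True), ("A-", True)])  ->  False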


Representativeness heuristic: People tend to "assume that the more similar something is to a prototype stored in our memory, the more likely that thing is to be a member of a category." Let's return to our example of North American Speckled Fever. The symptoms of this imaginary disease include feeling tired in the morning, mild indigestion, and headaches. However, North American Speckled Fever is very rare, and only about 1 in 100,000 people will ever catch it. Several patients heard about the symptoms of this disease on the news and then went to their doctors insisting they had the Fever because they, too, were tired in the morning and sometimes had indigestion and headaches. However, these patients are biased by the representativeness heuristic: just because they have the symptoms described doesn't necessarily mean they have the Fever. Many other things are far more likely to cause morning tiredness (not enough sleep?), indigestion (too much greasy food?), or headaches than this disorder, which, you'll remember, is very rare.

Availability heuristic: People tend to "judge objects or events as more likely, common, or frequent if they are easier to retrieve from memory." For example, people often have friends who have the same political views they do. When asked how many people are likely to support a new political initiative, you might think of all your friends and whether they would support the initiative. If most of them would support the initiative, then you might believe that people in general are also very likely to support it. However, because the people you know are a biased selection of the population in general, you will likely overstate support for the initiative simply because you aren't friends with the types of people who would be opposed to it (and therefore can't easily bring to mind how many people that might be).


Further Study

These optional resources are provided for students who wish to explore this topic more fully.

TYPE | CONTENT | CONTEXT
Supplemental reading | Seabrook, John. "Suffering Souls." The New Yorker, November 10, 2008. | Story about Dr. Kent Kiehl, mentioned by Prof. Gabrieli at the end of the lecture. Dr. Kiehl studies psychopathy with a truck-mounted MRI scanner that he brings into prisons.
Related research | Delude, C. M. "Culture influences brain function, study shows." MIT News, January 11, 2008. | Short news article about research led by Prof. Gabrieli, which identified cultural influences on making quick judgments.
Textbook supplement | Study materials for Ch. 8, "Language and Thinking: What Humans Do Best." In Kosslyn & Rosenberg, Psychology in Context, 3/e (Pearson, 2007). | Practice test questions, flashcards, and media for a related textbook.

