Book - "Priceless: The Myth of Fair Value" by William Poundstone

Mal7

Dagobah Resident
I am halfway through the book Priceless: The Myth of Fair Value (and How to Take Advantage of It) by William Poundstone (New York: Hill and Wang, 2010).

One of the main concepts in the book is “anchoring”: how a suggested number can influence how we answer a subsequent question. This has been shown to happen in many psychological experiments, even when it is clear that the suggested number should have no relevance to the question that follows. For example, in one experiment people had to write down the last two digits of their Social Security number, and they were then asked to place bids in an auction, with their own money, for various items like a box of chocolates or a bottle of wine. People with higher two-digit numbers, e.g. 81, tended to bid higher amounts, while people with lower numbers, e.g. 23, bid less.

There is also a chapter on how these kinds of results show up in the sums chosen by juries as compensation payouts. In one experiment, participants were asked to say what they thought a fair payout would be in a mock case, as if they were on the jury. All details of the case were the same for every participant, except for the initial demand made by the plaintiff. The results were:

Plaintiff's demand    Average award
$100                  $990
$20,000               $36,000
$5,000,000            $440,000
$1 billion            $490,000

The book outlines the debate, or battle, between mid-twentieth-century figures like the mathematician John von Neumann, who held that people’s choices are a rational reflection of the perceived utility or value of their options, and the psychologists who found evidence that we are often not so rational. Sometimes our valuations of the same thing can end up contradicting each other: someone might think $400 is too high a price for a concert ticket, yet if they won a free ticket, they might refuse to sell it even for $1,200.

There is also a section on how strategies can change dramatically when one option is a certainty, rather than just a near-certainty. The differences between risk-averse behavior and risk-seeking behavior are described, with examples of when one might switch from one kind of behavior to the other.

The book is easy to read and written for a popular audience. It has a seven-page bibliography of mostly psychological and economic research papers, including a couple co-authored by Timothy D. Wilson.
There is nothing in the index on psychopaths, but I thought the following quote from the book (although it is more an anecdote than a statistical study) could indicate why lying can be such a successful strategy for psychopaths:

A rumor once went around that McDonald’s used ground earthworms in its hamburgers. Sales plummeted as much as 30 percent in some areas. Practically nobody believed the rumor. Certainly 30 percent of the public did not believe that a big corporation would risk its billion-dollar brand in order to save a few dollars on beef. The point is, things no one believes still affect behavior.
- page 206.

So a psychopath could tell lies or spread rumors about a person, and even though we might think “That is not true, I don’t believe that for one moment”, those rumors could have an anchoring effect: the way we subsequently think of or act towards that person could be drawn towards how we would think or act if the rumors actually were true.
 
Mal7 said:
So a psychopath could tell lies or spread rumors about a person, and even though we might think “That is not true, I don’t believe that for one moment”, those rumors could have an anchoring effect: the way we subsequently think of or act towards that person could be drawn towards how we would think or act if the rumors actually were true.

This is related to how we process information. In "Thinking, Fast and Slow", Kahneman referred to the work of Daniel Gilbert, who wrote a paper named "How Mental Systems Believe". In this paper, Gilbert analyzed the mechanics of belief, comparing the established Cartesian model with an alternative hypothesis put forward by the philosopher Baruch Spinoza. The Cartesian model states that comprehension precedes, and is separate from, assessment of an idea. In this model, comprehension is a passive, automatic activity, whereas assessment involves conscious and effortful activity, as a result of which we accept or reject the idea. This separation and sequencing of comprehension and assessment is taken as logical and self-evident, and it is how we have programmed computers to operate.

The alternative hypothesis put forward by Spinoza was that in order to comprehend an idea, it is necessary to initially accept the idea as true. Thus the initial steps of comprehension and acceptance go hand in hand and cannot be separated. Only later can one assess the idea and reject it, or "unaccept" it, based on various criteria: e.g., if the accepted idea conflicts with previously stored ideas, one will reject it.

The breakdown of the two models is as follows:
In the Cartesian model, the first stage is comprehension, followed by assessment, which leads to acceptance or rejection.

In the Spinozan model, the first stage is comprehension together with acceptance, followed by assessment, which leads to certification or unacceptance.

Gilbert presents the analogy of a library where books are classified into the categories of fiction and non-fiction. In the Cartesian model, a book gets a red tag to indicate fiction and a blue tag to indicate non-fiction. A newly arrived book that has not yet been assessed has no tag and is thus uncategorized.
In the Spinozan model, the library uses a tagged/untagged system: only fiction is tagged, and all untagged books, including newly arrived ones, count as non-fiction.

Despite the surface similarity of the two models, there are important distinctions with regard to processing efficiency as well as consequences. Suppose the librarian is stressed, overworked, distracted or just plain lazy. In the Cartesian library, many books would remain untagged and therefore in an unassessed state: the Cartesian librarian cannot tell whether a book is fiction or non-fiction without reading and tagging it. The Spinozan library in the same state would instead be biased towards non-fiction, since untagged books are regarded as non-fiction.
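A minimal sketch of the two tagging schemes (in Python; the function names and the default handling are my own illustration, not anything from Gilbert's paper) makes the asymmetry concrete: skipping assessment leaves the Cartesian library agnostic, but tips the Spinozan library towards "non-fiction", i.e. towards "true".

[code]
# Illustrative sketch (my own, not Gilbert's) of the two library models.
# A tag, when present, records the result of an explicit assessment.

def cartesian_classify(tag):
    # Cartesian library: "fiction" gets a red tag, "non-fiction" a blue one.
    # A book with no tag has simply never been assessed, so it has no category.
    if tag is None:
        return "uncategorized"
    return tag

def spinozan_classify(has_fiction_tag):
    # Spinozan library: only fiction is ever tagged. Anything untagged,
    # including a brand-new, never-assessed arrival, defaults to non-fiction.
    return "fiction" if has_fiction_tag else "non-fiction"

# A new book arrives and the overworked librarian never assesses it:
print(cartesian_classify(None))   # -> uncategorized: no belief either way
print(spinozan_classify(False))   # -> non-fiction: treated as "true" by default
[/code]

The design choice that matters is the default: in the Spinozan scheme, "not yet assessed" and "accepted as true" are the same state, which is exactly the bias Gilbert argues the mind shows.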

In other words, if the veracity of information has not been checked to a satisfactory degree, the Spinozan model is biased towards accepting the information as true. Gilbert argues that, based on empirical results from cognitive psychology, the human mind is closer to the Spinozan model. Observations of children show that they are very suggestible: disbelief is harder for them, and accepting what is offered is more natural. For adults, a "resource depleted" condition, e.g. making them perform other tasks while simultaneously presenting them with doubtful propositions, leads to a consistent bias towards accepting the information. Prisoners subjected to indoctrination are put under stress through methods like sleep deprivation precisely to facilitate acceptance of the offered information.

Kahneman tied these findings to his System 1/System 2 model.

[quote author=Thinking, Fast and Slow]
Gilbert proposed that understanding a statement must begin with an attempt to believe it; you must first know what the idea would mean if it were true. Only then can you decide whether or not to unbelieve it. The initial attempt to believe is an automatic operation of System 1, which involves the construction of the best possible interpretation of the situation. Even a nonsensical statement, Gilbert argues, will evoke initial belief. Try his example: "whitefish eat candy". You probably were aware of vague impressions of fish and candy as an automatic process of associative memory searched for links between the two ideas that would make sense of the nonsense.

Gilbert sees unbelieving as an operation of System 2, and he reported an elegant experiment to make his point. The participants saw nonsensical assertions, such as "a dinca is a flame", followed after a few seconds by a single word, "true" or "false". They were later tested for their memory of which sentences had been labeled "true". In one condition of the experiment, the subjects were required to hold digits in memory during the task. The disruption (through loading) of System 2 had a selective effect: it made it difficult for people to "unbelieve" false sentences. In a later test of memory, the depleted participants ended up thinking that many of the false sentences were true. The moral is significant: when System 2 is otherwise engaged, we will believe almost anything. System 1 is gullible and biased to believe, System 2 is in charge of doubting and unbelieving, but System 2 is sometimes busy, and often lazy.
[/quote]

So how does someone react when provided with invalid information? Experiments were conducted in which invalid information was presented first and the subjects were then told that the information was invalid. Even after learning this, subjects still showed a tendency to believe the invalid information. In some cases, subjects were told upfront that they would receive invalid information and were then given it. Even being forewarned did not take away the tendency to believe it. The conclusion Gilbert draws from the data is that "subjects were unable to represent statements in a truth-neutral fashion, even when directly motivated to do so." Thus, as Spinoza predicted, comprehension of an idea goes hand in hand with accepting it.
 