This is a sequel to “The Power of Rearticulating Insights in Your Own Words.” If that post convinced you, this post is a way to turn all that up to eleven. Also conceivably related is our latest autodata integration, Postminder, for beeminding forum posts. The extremely curious can check out the forum thread last year where these ideas originated.
You know how if you just passively read a book you eventually forget literally everything you briefly learned? Possibly including the fact that you ever read the book at all, after enough years? Book brigades aim to solve that. A book brigade is a small group of very like-minded people collaborating on getting a book read and understood by taking turns reading sections of it and recapping for the others. It’s powerful because:
- you can get a book loaded into your head from only reading a fraction of it,
- the part where you have to explain it to others gets it cemented in your own head,
- the part where someone similar to you explains it in their own words means skipping all the fluff and obvious parts, focusing on the new information, and
- you end up with a nice Cliff’s notes of the book from all the summaries (see the appendices below for examples).
Also social accountability! Of course we’re making a few assumptions here:
- The book isn’t fiction, where the point is to get absorbed in the story — you just want the information transferred from the book to your brain
- The others in the book brigade are similar enough to you to know what to highlight in their summaries (and what’s safe to skip)
- Everyone is committed and diligent (Beeminder to the rescue here)
I’ve found it a nice collaborative exercise that leads to good discussions, like a book club, but better.
Last year I formed a group of 9 other Beeminder superusers to read Thinking in Bets, by Annie Duke. Then Bee and I did a two-person book brigade for the ur-self-help book, How To Win Friends and Influence People, by Dale Carnegie. You can see the notes we generated for both of those below.
But first let me tell you how we set these up. The central piece is creating a group goal on Beeminder. That’s a single graph that a group of people can all add data to. Also if the goal derails, everyone gets stung (charged money).
That also means the group has to pick how many pages per day to commit to reading. We voted on that like so:
Bee and I read Carnegie somewhat slower — just a page a day, so it took the better part of a year. We called the goal for the Annie Duke book “dukeitout” and it looked like this at the end:
We called the goal for the Dale Carnegie book “dailydale” and it ended up looking like this:
No derailments at all on dukeitout (so much social accountability!) but Bee and I derailed dailydale once. Total cost of finally getting that book read: $5 (each). Cheap!
Other logistics we settled on:
- We used Odometer goals on Beeminder, so you can enter your current page number rather than the number of pages you read (see the API sketch just after this list).
- No weekends-off, just get into the green by Friday if you want that.
- When it’s your turn, read till a natural stopping point. This was the end of a section for Duke and the end of a chapter for Carnegie.
- Tell Beeminder the last page you finished — both reading and summarizing — before passing the baton.
- When in doubt about how far to read, or anything else, talk to the person you’re passing the baton to.
- No extra time for summarizing/writing — plan ahead to leave time for that when you approach the end of your section.
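For the automation-inclined, here's a minimal sketch of what entering a datapoint by API could look like when you pass the baton. This is just an illustration, not how we actually entered data; the username, token, page number, and comment are placeholders, and the goal slug is borrowed from our dukeitout example.

```python
import requests

# Minimal sketch: log your current page number to a Beeminder group goal.
# Username, token, value, and comment below are placeholders.
USER = "yourusername"
GOAL = "dukeitout"        # the group goal's slug
AUTH_TOKEN = "XXXXXXXX"   # from https://www.beeminder.com/api/v1/auth_token.json

resp = requests.post(
    f"https://www.beeminder.com/api/v1/users/{USER}/goals/{GOAL}/datapoints.json",
    data={
        "auth_token": AUTH_TOKEN,
        "value": 187,  # odometer goal: current page number, not pages read
        "comment": "finished my section, summary posted, baton passed",
    },
)
resp.raise_for_status()
print(resp.json())  # the API echoes back the datapoint it created
```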
I think the biggest lesson was to be less thorough than you’re inclined to be, at least among the hyper-conscientious group we had. The temptation for us was to describe everything the author said, point by point. The other extreme would be just giving a gestalt impression of a chapter or describing what happened to stick out in your memory. Instead, aim to compose something tailored to the audience (i.e., each other). Consider each point the author makes and deliberately include it or not according to your own assessment of whether the group will benefit from hearing it.
If you’re inspired to try this and looking for book ideas, here’s the voting we did for which book to brigade:
Appendix: Book Brigade notes for Thinking in Bets by Annie Duke
NB: These notes are by 9 different people, all concatenated with no indication of who wrote what. The best part is all the discussion in between the summaries but I’ve omitted most of that for fear of this ending up longer than the book itself.
Introduction
Duke made millions playing professional poker and thinks it’s a nice microcosm for studying human decision making. All decisions can be (wait for it) thought of as bets.
Here are the goals of the book in, as Dynomight says, superior list form:
- To avoid common decision traps, learn more effectively from errors, and keep emotions out of it. (This is standard rationality community fare. Except the last one, ironically. The rationality canon emphasizes the value emotions can have for optimal decision-making. Maybe this will come up later.)
- To “build and maintain pods of fellow truthseekers to improve our decision process”. Hi friends!
- To “recruit our past and future selves to make fewer emotional decisions”. No idea what that means yet but bee-dar is buzzing.
Duke emphasizes that we’ll still make mistakes, succumb to emotion, and lose – since we’re humans. Just that we can do that less and less, and that our improvement compounds. (This is the inspiration for the name Less Wrong.)
Her last point is that everything comes down to exactly two things: the quality of our decisions and luck. Thinking in bets means recognizing the difference.
Chapter 1: Life Is Poker, Not Chess
Pete Carroll and the Monday Morning Quarterbacks
Duke tells a sports story about a football game to illustrate “Monday Morning Quarterbacking,” an American metaphor for second-guessing a decision after it’s been made.
This (American/gridiron) football example might be confusing to those unfamiliar with the game. Pete Carroll, the coach of the NFL team Seattle Seahawks, made a controversial decision in the Super Bowl, the annual championship game. They were very close to scoring a touchdown, which could win them the game. Carroll decided to have the quarterback, the player who executes the coach’s plan when play begins, pass (throw) the ball to a teammate downfield instead of handing it off to a runner.
The result of this decision was very unlikely: the pass was intercepted (caught by the other team). According to Duke, in the previous fifteen seasons, interceptions in that situation only happened 2% of the time.
The press and most viewers thought Carroll’s decision was very bad, although contrarian journalists who think about statistics argued that it was sound. Carroll also defended his choice. Duke uses this case as an example of “resulting,” using the outcome of a decision to influence one’s judgment of whether it was good or bad. She says that making this kind of error is a very common habit, calling it “our tendency” and implying that it’s basically a universal human failing.
The last paragraph of the section is a bunch of questions, promising answers to why people tend to make this kind of mistake and how to avoid it.
The 2% statistic had me wondering if interceptions would really be that unlikely in that particular condition: are they more likely than 2% when the opposing team is fired up to win the Super Bowl?
DIALOG:
- Ooh, I didn’t know this verb — “resulting” — and am instantly in love. Sounds like it’s Duke’s coinage. Another existing term is outcome bias.
- She says it’s a term poker players use, so probably not her coinage? I prefer “resulting” to “outcome bias,” because the meaning is more obvious.
- Another term I have heard, from somewhere in the poker lineage, is “results-oriented thinking”
The hazards of resulting
People basically always judge their decisions by the results rather than what went into the decision. They don’t recognize that a lucky good result can come from a bad decision, or vice versa.
She asks CEOs and business owners to describe their best and worst decisions of the previous year. Without exception, they present their best and worst results instead.
Specific example: a CEO identified his worst decision as firing a president who, it turned out, could not be replaced by anyone even of the same caliber. He felt he had made a grave mistake, and his decision making after that incident was negatively impacted.
When the group investigated his decision process, they found he had considered and tried enough things that the business leaders all thought he had been right to believe he could likely find someone better, and that it was not a bad decision. (The main steps of the process were concluding that poor company performance was due to poor leadership, coaching the president to try to improve his leadership skills, and considering the company’s past experience hiring at high levels as well as the talent expected to be currently available.) The CEO was merely experiencing hindsight bias: the tendency to believe that known outcomes could have been predicted beforehand.
Analogy: if someone drives home drunk and it works out, they don’t conclude that driving drunk is a good idea. No one would make that mistake, yet people make all kinds of similar mistakes, as in the CEO’s case.
DIALOG:
This is going beautifully. Thanks, y’all!
I see the book does not contain the terms ex ante and ex post (Latin for “from [the perspective of] before” and “from [the perspective of] after”). I find these highly useful for avoiding resulting. Like for the CEO anecdote you can clarify that the firing was a good decision ex ante but a bad decision ex post. Of course you can’t make decisions ex post so that just means it was a good decision, full stop.
I guess the widespread confusion about this is why it’s worth being technically redundant and saying “good decision ex ante”.
Another way I like to drive this point home: Wearing a seatbelt has so far been a total waste of time. We’ve never crashed! But ex ante it’s been prudent and we should keep doing it.
Quick or dead: our brains weren’t meant for rationality
The author begins the section by citing “a number of excellent books” including Dan Ariely’s Predictably Irrational, so not a great start. But then again, the citation is for nothing more than the general proposition that humans don’t always make perfectly rational decisions, so sure, I guess.
Type I errors were less costly to our ancestors than type II errors, so we evolved to overreact. Better to make the sort of mistake where you run away from a lion that doesn’t actually exist than to mistakenly not run away from a lion that does.
Our brains can be modeled as having two subsystems, System 1 and System 2, the reflexive mind and the deliberative mind. System 1 is more powerful but less flexible, so while we can’t use it for everything, we should use it for everything it’s good at. It’s a mistake to use the expensive and slow System 2 for stuff we can do with System 1.
System 1 functions on the basis of “the perfect is the enemy of the good”. The point is to be cheap and efficient. Perfection is unnecessarily expensive, better to use heuristics which are mostly right. Being sometimes wrong is not the end of the world.
One heuristic is to round off probabilities to certainties. If you calculate that there’s an 80% chance you are about to be ambushed by a lion, you’ve done something wrong—the extra effort involved in figuring out the exact probability is wasted, as whether the chance is 80% or 100% or 30% you’re going to run away just the same.
Point is: Heuristics are awesome, cheap, and effective, so System 1 has a huge arsenal of them, and uses them freely. This is a good thing; life wouldn’t be possible otherwise.
But this means we’re not in the habit of thinking in probabilities, which means that we’re out of practice, as it were, when we try to do so. Poker is a good way to get that practice.
DIALOG:
[endless discussion of how much of a charlatan and fraud Dan Ariely is, which I’m tempted to include but it’s just too much so I’m mostly omitting discussions from here on]
Two Minute Warning
- In poker, speed is of the essence - you have to make multiple decisions, all with significant consequences, in a short time frame. The rules of the game mean you can’t slow down and think about your decisions – taking too much time will result in another player “calling the clock” on you.
- The goal is to get our “reflexive” minds to execute on our “deliberative” minds’ best intentions.
- Solving the problem of “how to execute” is the most important thing. “How to execute,” meaning, how to execute on the intention of the deliberative, (more) rational mind, while playing in a high-stress environment, at the speed expected. Learning to get better at poker will involve doing post-mortems of games, separating the signal from the noise, analyzing decisions, and guarding against resulting.
Dr Strangelove
- This section talks about John von Neumann (apparently also a poker player), one of the inspirations for Stanley Kubrick’s Dr. Strangelove.
- John von Neumann, in addition to immense contributions to mathematics and a whole lot of other achievements, was the father of game theory. He co-authored Theory of Games and Economic Behavior, a classic book on the subject. Game theory, of course, is hugely influential, revolutionizing economics and influencing other fields like the behavioural sciences. Game theory forms the basis for the study of decision making, including the challenges of hidden information, luck, and other variables.
- The author says John von Neumann modeled game theory on a basic version of poker.
Poker vs. chess
- John von Neumann, father of game theory, was asked by Jacob Bronowski (notable intellectual and author of “The Ascent of Man”) if game theory is like chess. Von Neumann replied:
“Chess is not a game. Chess is a well-defined form of computation … In theory there must be a solution, a right procedure in any position … Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.”
- “Real games”, therefore, are played under conditions of uncertainty, risk, and occasional deception. Chess contains no hidden information and very little luck. Poker, in contrast, is a game of incomplete information. It is a game of decision making under conditions of uncertainty over time, where valuable information remains hidden, and there is an element of luck in any outcome.
- Life is much more like poker than chess. You can make the smartest, most careful decision, and still have it blow up in your face.
- Incomplete information is a challenge not just for decision-making, but also for learning from past decisions. But if we want to improve in anything, we have to learn from the results of our decisions. In chess, you can’t legitimately say “I played perfectly, but caught some terrible breaks!”; in poker, you hear that a lot.
- This is why Von Neumann and Morgenstern based game theory on poker: uncertainty is key, and making - and learning from - our decisions starts with deeply understanding this.
A lethal battle of wits
- Consider the following situation. Someone walks up to you and asks “I flipped a coin and it landed heads four times in a row. How likely is that to occur?”
- You may be tempted to consider the probability of this observation. The probability of a coin landing heads is 1/2. The probability of this happening four times in a row is 1/2 ^ 4 = 1/16.
- While this line of reasoning seems very reasonable, it also showcases the perils of underestimating the amount and the effect of what we don’t know.
- In this case, the question we received didn’t specify many important details, which can undermine our calculations. Did this coin really have a 1/2 probability of landing tails? (Maybe it has heads on both faces. It could also be an unfair coin.) Is the person throwing the coin skilled in influencing the probability of the outcome? (See the sketch just after this list.)
- (There could be incidental factors that could help us. For instance, a much larger number of flips could help us evaluate the unknowns regarding the coin itself.)
- Life puts us in many situations like this question when it comes to learning lessons from our experience. We get to make (and evaluate) big decisions like buying a house very rarely.
- In such situations, realising we don’t know enough to be confident about our conclusions is valuable. There’s a correct answer to the question about the four coin flips: “I’m not sure how likely this is.”
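To make the point concrete, here’s a tiny sketch (my own illustration, not from the book) of how much the naive 1/16 answer leans on the unstated assumption of a fair coin:

```python
# How likely are four heads in a row? It depends entirely on what kind of coin it is.
for p in (0.5, 0.7, 0.9, 1.0):  # 1.0 = a two-headed coin
    print(f"P(heads) = {p:.1f}  ->  P(four heads in a row) = {p**4:.4f}")
# A fair coin gives 0.0625 (the naive 1/16); a two-headed coin gives 1.0.
```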
The title of the section is about a scene from The Princess Bride. Two characters play a game of life and death. Character A leads the other to make a grave mistake: assuming that they have equal probabilities of dying in this game and that it is 50%. Under these assumptions, character B proudly “outwits” character A. Only to pay the price of his wrong assumptions with his life.
“I’m not sure”: Using Uncertainty to Our Advantage
- Society teaches us that saying “I don’t know” is bad. In school, we get zero points if it’s our answer to a question, and in a business context, it could be interpreted as unhelpful or even evasive.
- As we have learned, a great decision is not defined by its outcome (that would be resulting). Instead, a great decision is the result of a good process, which must include an attempt to accurately judge our own knowledge.
- Therefore, to make good decisions, we need to embrace saying “I don’t know” and “I am not sure.” (“Our strength as rationalists is our ability to notice our own confusion.”)
- Good poker players and decision-makers embrace uncertainty. They make decisions based on the information they have, their experience, and their best guesses about the probability of different outcomes.
- Sometimes, even the best choice might have a low probability of a good outcome.
- All this doesn’t mean there is no such thing as objective truth. Our brains don’t perceive the world objectively, but our goal should be to perceive it as objectively as possible.
Redefining wrong
- Most people are idiots.
- If you say X has a 75% chance to happen and not-X happens, they will say you were wrong.
- In addition to poker, she gives Brexit predictions and the polls before Donald Trump’s election as examples of this. Yeah, I was there, I remember being pissed off at the public discourse about this too.
I can’t emphasize enough how this section is really just trying to get the meaning of “don’t result” through your skull. If you already completely accept it, then you can indeed just skip reading it as you are doing. If you aren’t 100% sure you’ve drunk the kool-aid, it’s probably worth reading through the chapter and letting it sink in.
So yes, just because things go badly doesn’t mean you were necessarily wrong:
- Maybe we made a decision out of many bad choices, none of which were likely to go well
- Maybe we were missing crucial information, so we had no hope.
- Maybe we took a justifiable risk for a huge payout, and just didn’t get lucky.
- Maybe we were simply unlucky.
The poker example is easy. In real life, it is hard to “redefine wrong” because we will never actually know for sure whether we had assigned correct probabilities to events.
Finally, we are implored to also give up on feeling “right” when things happen to go our way.
Chapter 2
Thirty days in Des Moines
This is purely a fun anecdote, about the time Las Vegas legend John Hennigan bet his friends $30k he could last for a month in Des Moines, Iowa – chosen as the diametric opposite of Las Vegas. He lasted about 2 days, and paid up. Apparently Des Moines is very boring. But it wasn’t just a stunt. He was genuinely unsure if he might prefer a folksy town and seeing actual sunlight.
We’ve all been to Des Moines
Duke’s point is that any time you’re making a big life decision, you’re in a pretty similar situation to the protagonist of that anecdote. Say you’re deciding whether to move across the country for a new job. Unlike Johnny World (as he’s known, because he’ll bet on anything in the world), you don’t have $30k in cash riding on the outcome, but that’s actually small potatoes compared to the rest of your stakes. Like how much money you’ll make at your new job. To say nothing of the bet you’re making about how you’ll fit in to a new community, how much you’ll turn out to like or hate your new commute, etc etc.
(I am very much the preached-to choir here, in terms of thinking in bets.)
Or as an employer offering a job, you’re facing similar tradeoffs: making a high enough offer vs spending too much money. So, yes, all decisions under uncertainty are fairly characterized as bets.
Also betting is a tax on bullshit. Duke doesn’t use that phrase but expresses the same lovely sentiment. Put your money where your mouth is. Make the uncertainty explicit and explicitly factor it into the decision-making.
The central claim of the book is that we can make better decisions, including mitigating human biases, by explicitly casting those decisions as bets.
All decisions are bets
In this section, Duke presents a thesis about human decision-making. According to Duke’s definition, every time we make a decision, we are making a bet. She encourages us to widen our definition of a bet to include every instance of taking a risk in uncertain conditions.
Laying out the qualities that she believes make something a bet, Duke comes up with the following criteria: choice, probability, risk, decision, and belief, and she proceeds to argue that all decisions are bets because they share these aspects. I feel as though it’s kind of gimmicky to shoehorn all decisions into “bets” - if every decision, and we constantly make decisions, is a bet, isn’t “bet” kind of meaningless? Nevertheless, this is Duke’s contention. According to her thesis, even long-term decisions made in relatively low-pressure environments, like parenting choices, are bets because they share these aspects. She closes the section with the sentence, “Everything is a bet.” If everything is a bet, is anything a bet?
Most bets are bets against ourselves
With the title of this section, Duke bolsters the argument that everything (every decision) is a bet, anticipating that the reader might object that a bet requires two participants. Even with one individual participating, there can be a second participant: the same individual separated by time and choice. Duke asserts that the second party to a bet against oneself is “all of the future versions of ourselves that we are not choosing.” This passage sounds a little like alternate-reality speculative fiction: we’re making bets against our hypothetical future selves. To ground this idea a little more, we can think not of imaginary selves but of missed opportunities. Like John Hennigan, with every decision, we are giving up opportunities that are closed off by that decision.
When we make any decision, we confront uncertainty about whether the choice we’re making is the best one. At the end of the section, Duke returns to talking about poker. I have the idea that we’re heading toward a call to action. In returning to poker, Duke may be heading towards introducing a way to use the “everything is a bet” framework in everyday life. Good poker players have a strong awareness both of the probability that their decision will result in a good outcome and of the stakes of their bets. I’m envisioning Duke explaining that we can apply probability to ordinary decisions and that this application will help us get better at making those decisions.
Our bets are only as good as our beliefs
This opens with a fictional scene from a sitcom where someone tries to run a turkey giveaway promotion by dropping live turkeys out of a helicopter. It is a disaster. The punchline is that he believed that turkeys could fly. We then revisit the anecdotes we have covered, and note that the bets that people made were based on their beliefs. The takeaway is that we need to (1) improve the accuracy of our beliefs by taking in information and experience and (2) notice what thinking patterns take us astray and develop strategies to be more open-minded, objective, and accurate.
Hearing is believing
Short summary:
We default to believing things we hear, only occasionally verifying after forming the belief. This makes sense evolutionarily; prior to humans, beliefs were formed based on direct sensory evidence. Worse, even when we receive information to the contrary, we tend to continue to believe what we originally heard.
Long summary:
People think that they vet and think about something they have heard before believing it, but in reality they believe it by default and sometimes think and vet later if they have the time and inclination.
Evidence that people have a strong tendency to believe what they hear includes (1) the strength of myths such as baldness coming from the maternal grandfather and dog years x 7 = human years, and (2) an experiment where students were given color-coded true and false statements under time pressure and distraction. The mistakes they made were to treat false statements as true rather than true statements as false.
Abstract belief formation is new evolutionarily. It does make sense that we should not waste time and energy doubting our senses when forming perceptual beliefs (beliefs about our surroundings based on sensory input, such as believing a tree is there because we see it). Believing and then possibly verifying later is the correct thing to do when you see something that might be a lion.
Example from poker: In the variant Texas Hold’em, each player has two hidden cards and additional face up cards that count towards everyone’s hand are dealt after each round of betting. Two consecutive cards of the same suit are playable in some specific situations, but the subtleties were lost as this idea was repeated and it turned into “win big or lose small” and people at poker seminars she teaches tend to think that this is a good starting hand without having verified it. When she tells them to track their profit/loss with it, they find that they are losing.
Example from life: Based on some studies that were secretly funded by the sugar industry, Americans cut calories from fat by 25% in one generation, replacing the fat with carbs. It had the opposite of the desired effect.
The problem isn’t just that we default to true, it’s that we don’t easily update to false. There was a study where subjects read messages including a retraction of part of an earlier one, and when asked questions later they often answered as if they had not seen the retraction.
“They saw a game”
This section is first and foremost about a 1954 paper. 1954! Way before the replication crisis—I give it about as much credence as a random overheard anecdote, and you should too. That doesn’t mean it’s wrong—but it’s not remotely a source to be relied on.
If I seem annoyed about this, I am. So far my experience with this book has been the author citing sources that range from outright frauds to completely unreliable. That doesn’t mean anything in particular is incorrect; but inasmuch as the point of citation is to build trust with the reader, this is doing very much the opposite for me. It’s not even like in the Ariely case, where you can say she was bamboozled by him; at the time this book was written the sorry state that social science was in pre-replication crisis (and honestly nowhere near completely cured since) was well-known.
Be that as it may: there was a hotly contested college football game between Princeton and Dartmouth, and when college students at each university were shown a video of the game, they focused on different things in it. Specifically, the students from the different colleges had different views on the number of penalties each side committed.
A 2012 study called “They saw a protest” in reference to the earlier study’s name (“They saw a game”) showed subjects a video of a police action halting a political demonstration. One group were told this was a protest outside an abortion clinic, and the other were told it was outside a college career-placement facility at which the military were recruiting (with the protest being against the military’s “don’t ask, don’t tell” policy.)
In theory the video was cleverly edited to not show the actual topic of the protest. I’m skeptical. Yes, you can blur out signs and so forth, but if those different potential protests might have for instance different demographics you’re not going to be able to hide that. This too is an example of how one should read all social science, even post-replication-crisis, with a very critical eye; humans are complicated and multi-dimensional, and no, you can’t “just” control for traits or likewise flatten people out in any reasonable way.
In any case: the people who saw a video that was alleged to be about a protest for a cause they said up front they favored saw the police being more aggressive and intimidating to the protesters, despite the fact that everyone saw the same video.
I’d take this book a lot more seriously if it said the same things but without the thin veneer of social science. If it just said “hey, you know how when people see the police disrupting a protest that aligns with their own political opinions they think the police are being more aggressive than when the protest doesn’t align with their own political opinions?” then I’d say “yeah, that tracks my own experience, I guess”. That’s all you can say even with the book as it is, trusting it insofar as it makes intuitive sense and aligns with your own experience. The facade of social science is meant to make it seem more authoritative than that, and that the author thinks that it makes the book in any way better is deeply worrying.
DIALOG:
Ha, [redacted], you’re pooping on poor Annie Duke so hard that I went and read “They saw a game” out of morbid curiosity. I’m normally similarly critical of this kind of thing (see my review of Katy Milkman’s behavior change book) but didn’t feel like it was particularly egregious here. In fact, I think she pretty much did treat the two studies as anecdotes, and they seemed like good ones. I may be being too credulous. Or just excessively benefit-of-the-doubt-giving. But I guess I think we should reserve our ire for when she says something actually wrong, which I’m not sure she has so far.
I’ll at least agree that acknowledgment of the replication crisis is important when citing social science studies. Just that these particular papers are transparently so much better than “power pose” style papers that the replication crisis mention would’ve been pretty perfunctory (like “obviously don’t believe social science studies, especially before the shit hit the fan on the replication crisis, but this one’s perfectly commonsensical”). So I guess the omission isn’t bothering me too much here. We’ll see if it gets worse as the book continues!
(The thing that galled me in Milkman’s book was when she went out of her way to say that a study from the 1800s replicated but just credulously cited a million other more recent things, including more than one power-pose-style thing.)
We started talking about this in the Discord a bit and it reminded me that I have a super dorky version of thinking in bets, in a sense, that I call “Danny’s Agonizing Guide to Agonizing Decisions”. I’m, ironically, torn about how tongue-in-cheek I mean it to be taken as.
- I thought power poses were good, actually.
Oh ho! I stand corrected. And duly unpowerfully. It seems I read about power posing not replicating and just blindly believed it. Silly human brains.
It seems there’s a study showing that power posing has a real, albeit small and inconsistent, effect after all. Of course that itself is just one study. So, let us embrace our uncertainty!
The Stubbornness of Beliefs
- Once you believe something, it’s difficult to dislodge that belief. The mind starts noticing things that confirm the belief, and actively ignoring, and even discrediting, information that contradicts the belief. (Basically confirmation bias)
- When we encounter new information, the urge to protect what we already believe guides how we treat the new info - whether we accept or reject it. This irrational, circular information-processing pattern is called motivated reasoning.
- This explains the rise of fake news and disinformation. The author draws a distinction here: fake news is an intentionally false story. It’s not meant to change anybody’s mind at all – it’s aimed specifically at entrenching and confirming the beliefs the target audience already has. ‘Disinformation’, unlike fake news, has some true elements to it, which is powerful (and dangerous) because it lends an air of credibility to the narrative.
- On top of our propensity to seek sources that confirm our beliefs, things are made worse on the internet by algorithms that learn what we like and show us more of that, trapping most of us in a “filter bubble.”
- Why is it so hard to accept new information that contradicts our beliefs? Because most of us don’t want to be “wrong” - it feels bad to be wrong and it’s tied into our identity. It’s easier to ignore or discredit the new information than shift our opinion of ourselves from being “right” to being “wrong.”
Being smart makes it worse
Duke says (without offering any particular evidence) that the popular wisdom is that the smarter you are, the less susceptible you are to fake news or disinformation. She supports this by saying that part of being smart is about being able to process information well, understanding the quality of arguments, etc, so logically this should make you better at spotting the bad stuff.
Surprise! Turns out this isn’t true: smarter people are also better at producing more plausible reasons as to why their side is right, and the other side is wrong. In short, they are better at motivated reasoning.
Duke references several studies that indicate smart people may even be more susceptible to this, and (from my 30 minutes of research) these studies seem to replicate pretty well.
Duke also references studies showing that more numerate people made more mistakes interpreting data on emotionally charged topics than less numerate subjects.
However, this “motivated numeracy” replicates less well: see “A preregistered replication of motivated numeracy” for a good survey and a clean replication with 3154 MTurk participants. This showed some effects, but not as clear-cut as originally claimed, and the paper comments that: “An emerging conclusion in this literature is that motivated numeracy, or the reasoning account of identity-protective cognition more broadly, seems unlikely to generalize beyond a relatively narrow set of conditions”. It may be, for example, that more numerate people have stronger priors, and mix those in with their interpretations of the test questions in the replications.
Overall, I’m comfortable that Duke’s main point stands: motivated reasoning is a real thing, and simply being smart doesn’t inoculate you. However, motivated numeracy - that is, being more affected by being more numerate - is on much shakier ground.
The section concludes by saying this is perhaps just how evolution built us: we protect our beliefs, and being smart and aware of our capacity for irrationality alone doesn’t help us. I’m sure we’re about to discover that thinking in bets, however, could help - over to you @aad to show us how, with “Wanna bet?”
Wanna bet?
- Being challenged to bet on a belief is a signal for us: Our challenger is confident that our belief is inaccurate.
- There’s an ideal response to this signal: revisiting the evidence we have for our belief.
- This is a very valuable response, since it moves us to a step in the abstract belief formation process that was optional up to this point: vetting the reasons we have for the belief.
- (There’s a reference to the “hearing is believing” section. [redacted]’s short and long summaries captured the relevant points already. Still, I think it’s useful to lay this idea out more explicitly now. The claim is that we form abstract beliefs in 3 stages: (1) hear something, (2) believe it, and (3) later, and only maybe, actually vet the information.)
- If we could go around betting on everything all the time, we’d benefit from it precisely because of this. We’d be more likely to “temper our statements.”
- This is not practical.
- Instead, let’s train ourselves to view the world from this lens: We are already betting on our beliefs. We bet with our happiness, attention, money, time.
Redefining confidence
- We should question our beliefs because even “scientific facts” are updated, revised, and reversed over time.
- Instead of binary thinking (all-or-nothing; black or white; “I’m confident” or “I’m not confident”), we can specify how confident we are in a belief.
- Confidence can be specified on a zero-to-ten scale (or from 0% to 100%), where zero means we are certain our belief is not true and ten means we are certain our belief is true.
- The confidence number can reflect uncertainty about our knowledge (e.g., “I am 60% sure this movie won an Oscar”) or uncertainty about a mix of our knowledge and chance (e.g., “I am 30% sure this movie will win an Oscar”).
- We can also express confidence by declaring a range of plausible alternatives (e.g., “I think this movie will win zero to five Oscars”). Our range becomes tighter the more we know about a topic and the less luck is involved in the outcome. [1]
- Making our uncertainty explicit has many benefits:
- We change how we view the world by moving away from black-and-white thinking.
- It becomes easier to update our beliefs and be open-minded because “I was 58% but now I’m 46%” feels much less threatening than “I was right but now I am wrong.”
- It makes us more credible communicators because sharing a confidence number signals that we are thoughtful and self-aware, which makes it more likely that people will trust us. (I am not sure if this is how persuasion works.)
- It helps our listeners when updating their beliefs.
- It invites our listeners to share their beliefs, which in turn gives us the opportunity to update ours.
- Scientists publish results of experiments with p-values (akin to the confidence number) and confidence intervals (akin to the confidence range).
- Now that we have identified the target (getting comfortable with uncertainty, specifying our confidence, acknowledging that decisions are bets), thinking in bets is a tool to hit it somewhat better. This section concludes the chapter.
[1] We could obviously specify a confidence number for our range, but the paragraph reads like it is an either-or thing, so I assume Annie means specifying a range for which we have 9-10 confidence that it is accurate. Since ranges are important in Poker, maybe she will explain what exactly she means in more detail later.
Nick the Greek, and other lessons from the Crystal Lounge
- Classically, learning should occur when feedback is provided close in time to decisions and actions. But some people do not update their strategies or beliefs even when presented ample evidence of their deficiencies.
- Thus, our goal in this section is to lay out some of the obstacles that might exist to learning from our experiences.
Outcomes are feedback
- Experiences alone do not teach us anything—we need to actually identify what we should be learning from them
- The trouble is that nothing in life is done with complete information
- When we see the result of our actions, we in effect make another bet—we try to figure out what causes might have led to the outcome we observed, and update our beliefs accordingly.
Luck vs. skill: fielding outcomes
- Our first task when we see the result of our actions—whether they went well or poorly—is to decide whether that was because of something we decided to do (skill) or something outside our control (luck)
- This initial triage is a bet: do we ascribe the outcome to luck and ignore it, or ascribe it to a success or failure of our initial bet and thus update our beliefs?
Working backward is hard
(Recall that every decision can be characterized as starting with a belief, making a bet, and seeing the outcome unfold. What to learn from that outcome is another bet. If you decide the outcome was a result of your decision, not luck, then update your beliefs. That’s a Learning Loop.)
Wavy flashback lines to the 1990s, when conventional wisdom was that dietary fat was public health villain number one, I guess. Eating fat would clog your arteries and give you a heart attack. If you take, say, yogurt and add a bunch of sugar then the percentage of calories from fat goes down and you can call it LOW FAT and that sounds healthier. Oy.
One company (or brand of Nabisco I guess) that cynically exploited this confusion was SnackWell’s, which made sugary treats marketed as health food because they were FAT FREE.
SnackWell’s Phenomenon refers to people eating more of something because it has less of something they think is bad. If you’re gorging on SnackWells and gaining weight, it’s not obvious why. How do you know whether to ascribe the weight gain to the SnackWells? If you thought of the SnackWells as one of the healthiest things you were eating, you might blame other things.
Point being, outcomes don’t tell us what’s our fault or what we can take credit for. Life is a game of incomplete information, to use the game theory term.
Rat anecdote (literal rats, not rationalists): If you give a rat a lever to press to get a food pellet, even if they have to press it 10 times for 1 pellet, they’ll figure that out quick enough. If you disconnect the lever from the food, they’ll figure that out too, and give up on the lever. But add randomness (intermittent reinforcement) and it’s way harder to learn to press the lever (and, famously, especially hard to learn to stop). Of course humans are exactly like this with, say, slot machines in casinos.
PS: Refresher on Nick the Greek since the author keeps referencing him. He had wrong ideas about the value of unpredictable play in poker. Since the feedback loop in poker is imperfect, confirmation bias kept poor Nick from ever disabusing himself of his terrible poker strategies. If he won, he’d feel vindicated. If he lost, he’d write it off as bad luck. He failed to ever actually learn from experience.
Section: “If it weren’t for luck, I’d win every one”
Duke discusses the concept of self-serving bias in this section, exploring how we tend to attribute our successes to skill and our failures to luck. She uses several examples to illustrate this psychological phenomenon:
- Car Accidents: Duke references a humorous list of reasons for car crashes, originally compiled by her father, Richard Lederer. This list showcases how people distort reality to suit their purposes. Interestingly, statistics show that even in single-vehicle accidents, 37% of drivers blame someone else.
- Poker Players: The section’s title comes from a quote by Phil Hellmuth, a successful poker player. Duke uses this to demonstrate how even highly skilled individuals can fall prey to self-serving bias.
- Politicians and Students: Duke continues the discussion of self-serving bias, showing how it manifests itself in various contexts, from politics to academics.
Personal Reflections
- I think Fritz Heider’s description of people as “naive scientists” is apt in this context.
- Regarding driving ability, I offer an alternative explanation for why most people think they’re better than average drivers. It’s possible that people divide drivers into two groups (fine and terrible) and, knowing they’re not in the terrible group, assume they must be above average.
- On politics, I agree that voters tend to oversimplify what politicians can control. For example, I believe we should neither credit nor blame politicians entirely for macroeconomic indicators like inflation or unemployment rates.
Section: “All-or-nothing thinking rears its head again”
Duke attributes our self-serving thinking to a black-and-white mentality, where we see outcomes as either 100% right or 100% wrong. She emphasizes the importance of recognizing that most outcomes result from a combination of skill and luck.
Key Takeaways
- Assessing causes more accurately helps us make better decisions in the future.
- We should strive to be more self-confident while admitting that our successes are partly due to luck and our failures partly due to skill-related reasons.
- Duke suggests feeling good about our efforts to seek the truth rather than inflating our sense of skill.
Personal Insights
- I reflected on my experience with daily bridge tournaments, noting that I tend to attribute my rare successes to luck rather than skill. Duke’s perspective encourages me to consider a more balanced view.
- The book briefly mentions self-deprecating bias, which can be just as detrimental as self-serving bias in decision-making.
Duke wants us to be more self-confident and admit that we fail partially because of skill-related reasons. She says we should also admit that we succeed partly due to luck. She advocates feeling good about ourselves for trying to seek the truth instead of for hallucinated skill. Admitting that overcoming this natural bias is challenging, Duke hints at a workaround at the end of the section.
People Watching
If we’re so biased about our own results, perhaps we could learn from other people’s results instead? Learning from other people’s results is valuable because it gives us a lot more information than relying on our own experiences and is free or nearly free.
However, it does not help with bias. While we are biased to view our own good outcomes as skill rather than luck, we do the opposite with our peers, considering their bad outcomes as due to their actions and their good outcomes as luck.
Our biases in how we judge others also affect our compassion for them.
Anecdote: The Bartman play: A number of fans tried to catch a foul ball at the same time as a Cubs player tried to catch it, which would have been an out against the opposing team. One fan, Bartman, touched the ball and knocked it away, preventing the out. Everyone blamed Bartman for the Cubs not making the World Series, rather than considering the luck involved in the other fans not reaching the ball before him and everything that went into losing that game (despite their 3-run lead at the time) and the next one.
Anecdote: When Annie first learned to play poker, her brother gave her a list of good hands to play. When people won with hands not on the list, she assumed that they were just lucky rather than realizing that there were situations where things not on the list should be played, and she missed out on a lot of opportunities to learn by observation. Her bias was so strong that she didn’t think to ask her brother about why they might be playing those hands. (Eventually she learned that the list was just a good starting point for a novice.)
Other people’s outcomes reflect on us
Poker is zero-sum: anything one player loses another gains, and vice versa. So in poker, thinking of our own successes as a matter of skill means thinking of our opponents’ failures as a matter of skill, since our successes are our opponents’ failures. Likewise, for our failures to be a matter of luck, our opponents’ successes must be a matter of luck.
But we humans default to thinking of everything this way, as a zero-sum game. If someone else is winning, we must be losing, the thought goes. We judge happiness in relative terms, comparing ourselves to those around us.
When you ask people if they’d prefer to earn $70,000 a year in 1900 (when the average yearly income was about $450) or $70,000 a year nowadays, the author claims that “a significant number” would choose the first option. Those who’d make that choice would prefer to be fabulously rich relative to those around them, even though that means missing out on Novocain, antibiotics, refrigerators, air conditioning, and smartphones.
This zero-sum thinking is a bad habit, and—just like the bad habit of decrying one’s terrible luck when one loses—is one we should aspire to change in ourselves. We should objectively judge luck and skill, giving others credit when it is due, without being held back by the instinctive notion that another’s success is our failure.
Reshaping Habit
- Habits operate in a neurological loop consisting of 3 parts: the cue, the routine, and the reward. [This cue-routine-reward loop in habit formation is featured in other habit books like James Clear’s Atomic Habits as well, of course.] For example, the cue could be hunger, the routine could be going in search of a cookie, and the reward the resultant sugar high. Combined, they form the habit of eating cookies when hungry. In poker, the cue might be winning a hand, the routine taking credit for this outcome, and the reward a boost to our ego.
- However, as we’ve seen in previous sections, bias operates here – when we get a good outcome, we’re pleased to credit our own brilliant decision-making skills – and when we don’t, we blame luck. We need to break this habit.
- Charles Duhigg, in The Power of Habit, suggests that the best way to change a habit is to respect the habit loop – keep the old cue, deliver the old reward, but insert a new routine. After all, our brains are built both to crave the reward of positive self-image updates and to be in competition with our peers. It’s better to work with this instead of against it: keep both the cue and reward, and change the part that’s more plastic - the routine of what gives us the good feeling.
- So we should try to get the reward (of feeling good about ourselves) from actions that benefit our long-term goals and help build our decision-making skills – like looking critically at outcomes. Annie Duke talks about Phil Ivey, a poker player who, after he won a game, would analyze his moves and discuss where he could have done better, instead of revelling in his victory, as you might expect him to do. This is a successful example of someone who gets their “reward” from learning, trying to find mistakes in even good outcomes, improving their game. This is how you avoid bias (the tendency to interpret a good outcome as merely the inevitable result of your excellent decision-making).
- Top performers of various fields work to avoid the self-serving bias that interferes with learning, and in place of the instinct to seek credit and avoid blame, get their “reward” from a routine of truthseeking instead.
- We can use the fact that we’re inherently competitive by changing the features by which we compare ourselves: more willing than others to admit mistakes, more willing to explore possible reasons for an outcome with an open mind. For example, being willing – even eager – to spot mistakes in your own play is something hard that you’re doing, and others probably aren’t – a good reason to feel good about ourselves.
Overall: We need to break the habit of accepting our successful outcomes as the result of our awesome decision-making, and throwing the blame of unsuccessful outcomes on luck. The best way to do this is to develop the habit of critically examining all outcomes. We’re fighting a little bit against nature here, so, to successfully make this a habit, it’s a good idea to try to get the reward (feeling good about yourself) from actions that actually benefit us, like examining the outcome dispassionately, and trying to see if there are mistakes you made (even if you won despite them). Once you reset the habit to feel the reward from this critical exercise, you’re better placed to avoid bias and actually learn from experience.
“Wanna bet?” redux
The last section ended by talking about the mindset shift we want to make: changing our habit loop so we feel we are doing better than others because we are identifying learning opportunities that others are missing. It’s too easy to look at our win and claim it as pure skill, rather than noticing all the other elements that were involved.
Treating outcome fielding [Remember, “outcome fielding” is an analogy to outfielders making a decision about where to throw the ball: here, we’re deciding which bucket to put an outcome into - was this skill, or luck?] as a bet can accomplish this mindset shift. There are a lot of possible contributors and alternatives that contribute to an outcome. If we imagine having to place bets across these, it pushes us into a more open-minded exploration of alternatives, as well as allocating them more objectively into the appropriate buckets.
One good way to do this is to take different perspectives: we tend to discount the success of peers, so a strategy for figuring out which way we would bet is to imagine one of our outcomes happening to a peer, or a competitor. It allows us to examine that great result more objectively, identifying elements over which we had little or no control, and finding the things we could have done even better.
There is a downside: you lose the high of claiming good outcomes as 100% skill. But that’s a trade we should take - we are in a better place when we don’t have to live in a black and white world. We lose some of the sting of a loss when we treat it as an opportunity to learn. We also become more compassionate toward others, noticing that bad things aren’t always their fault and good things aren’t always just luck.
The Hard Way
Adopting the habit of thinking in bets is challenging, especially when life throws unexpected, frustrating obstacles our way. Initially, it will be very difficult, but over time, it will become easier. It isn’t all that different than other habit changes. Despite the early struggles, this effort is worthwhile.
Of course, this approach won’t fix everything or completely eliminate issues like self-serving bias or motivated reasoning. These biases may still surface, but we’ll handle them better. Duke argues that even slight improvements in managing these biases can significantly enhance our learning experiences.
Poker serves as a good example. Each game offers numerous learning opportunities. If we capitalize on even 10% of them, we’ll be far ahead of the “Nick the Greeks” of the world. Even when facing a superior opponent, we could surpass them while still not utilizing all available learning opportunities.
Duke also suggests that this effect compounds. Recognizing one learning opportunity increases the likelihood of identifying similar ones in the future. It’s like using a navigation system that introduces a tiny error each time we consult it; over a journey, these errors accumulate, and our path deviates significantly from our intended course. Thinking in bets is akin to eliminating that small error.
So far, we’ve focused on the habits we want to reshape and how to reshape them. Now, we’ll explore how to make this process easier by seeking help.
Chapter 4: “Buddy system”
“Maybe you’re the problem, do you think?”
Annie Duke relates a media sensation from the “Late Show with David Letterman.” The guest shares stories from a section of her life, essentially a chain of events with a lot of drama going on.
Duke notes that the guest’s narrative characterizes the drama as out of her control. In the context of the book, this is ‘fielding the drama into the luck bucket’.
Letterman then challenges his guest’s perspective: the guest is the only constant in a long stretch of drama, so maybe she is the problem? So he’s suggesting that at least parts of the drama could be ‘fielded into the skill bucket.’
The interview then devolves into an unpleasant exchange, making its way into magazine headlines and news outlets.
Duke’s point is that while Letterman was perceptive and offered a potentially useful alternative, he violated the social contract of their exchange by doing so. This led to the interview becoming an unpleasant exchange, and rendered Letterman’s otherwise potentially useful comment utterly ineffective.
Duke’s ultimate message is a reminder: Not everyone is interested in truth seeking in the sense we’re discussing, and perhaps it’s best not to pursue it indiscriminately, regardless of the situation we’re in.
When others are involved, mutual consent is key to effectiveness of our efforts in this regard.
The red pill or the blue pill?
- The decision to think in bets, focusing on what we can control (our decisions) and letting go of what we can’t (our luck), must be voluntary to be sustainable. (Like Neo in The Matrix, who consciously decides to unplug from the matrix by taking the red pill.)
- A group of like-minded people, a decision or learning pod, can help us see the world more objectively and make better decisions. Others can often spot our errors and blind spot biases better than we can.
- Annie was introduced to such a group by her brother early in her poker career. They taught her to focus on analyzing her decisions instead of complaining about bad luck or her opponents.
- Within a decision pod, we consciously change the social contract to be open-minded to disagreement, to take responsibility, and to admit when we are wrong, even if it makes us uncomfortable.
- A decision pod doesn’t have to be very big, and we certainly don’t want to force anyone to join. As long as there are three people (two to disagree and one to referee), the group can be stable and productive.
- The rest of the chapter discusses features of the agreement that make a decision-making pod productive.
- For example, after being eliminated from a tournament, it was still okay for Annie to complain about bad luck once in a while (with the implicit understanding that the objective analysis focusing on decisions will follow).
Not all groups are created equal
- A group can help us change habits, but groups can also create an “echo chamber” which will amplify confirmation bias
- For a group charter, Annie thus suggests three policies: (1) a focus on accuracy over confirmation, rewarding truthseeking and open-mindedness; (2) accountability; and (3) openness to a diversity of ideas.
- Expanding on the “accountability” point: the idea is that people should keep in mind that they need to be able to justify what they’re saying to a (nonexistent) outsider. That way the desire for approval gets leveraged as motivation toward truthseeking rather than toward blindly agreeing with what the other people in the group said.
- Further, we want to create a group where you feel approval when you admit mistakes, give credit only where it is due, find mistakes in good outcomes, etc.
“One Hundred White Castles… and a large chocolate shake”: how accountability improves decision-making
It sure was nice of y’all to get me cued up for the accountability section!
Ok, we have another wacky poker-buddies story about David Grey and Ira the Whale. They’re wagering whether Ira can eat 100 White Castle mini burgers. The odds are against it but he pulls it off. (If there’s a point to this story, we don’t find it out in this section.)
Pause to mention that betting is all about accountability: putting your money where your mouth is. Yay! Having lots of bets keeps you grounded in reality and reduces motivated reasoning. Truth-seeking! Scout Mindset! Litany of Tarski!
Here’s some more specific accountability for poker: precommit to a loss limit. If you lose that much, walk away for the night. The reason is that you might be playing poorly but deceive yourself that you’re just on a bad-luck streak you can play through. In Annie Duke’s case, just knowing she’d have to explain herself to her decision pod was enough to stick to the predecided loss limit.
The group ideally exposes us to a diversity of viewpoints
Claim: Diversity is the foundation of productive group decision-making. Apparently John Stuart Mill (of Utilitarianism fame) was big on this in his second most influential book, On Liberty, though Wikipedia doesn’t seem to mention it.
(Lots of platitudes about diversity redacted, you’re welcome. Different viewpoints improve accuracy, etc.)
There’s a whole list of questions to ask yourself to improve your accuracy but they all seem so obvious. If you’re thinking at all about improving your accuracy you’re naturally going to ask questions like how you might be wrong, right?
Hmm, might I be wrong about that? Just in case, I’ll now repeat the list in my own words:
- Why/how could my belief be false?
- What other evidence for/against my belief might be out there?
- What other areas could inform a prior on this?
- What info might I have missed or minimized?
- If someone had a different belief, why would they and why might they be right?
- What are other perspectives/narratives for what has happened so far?
Maybe I like those after all? Book-brigading sure is great.
But just asking yourself those questions or imagining what someone with a different perspective would say only gets you so far. You’re swimming upstream against so many biases, like preserving a narrative.
So you need actual diversity in your decision pod. (None of this is about DEI-style diversity, btw, but I suppose that version is a proxy for the kind we do care about: diversity of opinion and perspective. Duke seems to not touch that question with a 10-foot pole, which is great. Maybe pretend this parenthetical doesn’t exist.)
Forming a decision pod from a group of poker players is great because it’s naturally diverse, being selected by liking poker rather than, say, political ideology. And of course the willingness of poker players to bet is helpful.
Here are some instances of trying to engineer diversity of opinion and dissent:
- The US State Department has a formal Dissent Channel, which is credited with at least one policy change
- The American Foreign Service Association has four separate awards to recognize constructive dissent and risk-taking
- The CIA has red teams
The final point in this section is that diversity in a group is hard to maintain. Duke didn’t say it but sociologists call this homophily: people gravitate towards those who are similar to themselves. It feels good to have our ideas echoed back to us. Echo chambers are so comfy.
Federal judges: drift happens
In this section, author Annie Duke explores ideological diversity in federal judicial panels. She refers to a large-scale study led by Cass Sunstein that examined thousands of appeals and even more votes. The study revealed some intriguing patterns:
- Democratic appointees voted with the plaintiff 43% of the time overall but only 10% when seated with two Republican appointees.
- Republican appointees voted with the plaintiff 20% of the time overall but 42% when seated with two Democratic appointees.
These statistics suggest that judges’ voting patterns can significantly shift based on the composition of the panel they serve on.
Personal Reflections
- I found it surprising that Democratic appointees voted even more conservatively than Republican appointees when outnumbered and nearly vice versa.
- I wondered about the nature of the cases, given that Democratic appointees were more likely to side with plaintiffs overall. This trend seems to contradict the party’s reputation for being “soft on crime.”
- The study concludes that diversity on these panels leads to better decisions, but I have some reservations about the interpretation of these numbers. In particular, I was suspicious that it looked like each side more or less flipped when outnumbered two to one by appointees of the other party.
Duke also mentions that in decades past, Supreme Court justices hired clerks from the opposite party, a practice that has largely disappeared since 2005. She discusses the “filter bubble” in modern media and how people tend to believe they’re rational while their opponents consume media in a metaphorical echo chamber.
Social psychologists: confirmatory drift and Heterodox Academy
Duke references Jon Haidt’s work on the lack of ideological diversity in social psychology and sociology. Key points include:
- Haidt found only one widely recognized conservative social psychologist in the field.
- Haidt founded the Heterodox Academy to combat ideological homogeneity in these fields.
- A paper by Heterodox Academy researchers, published in Behavioral and Brain Sciences (BBS), documented research into this trend.
Duke argues that ideological balance is crucial in social psychology because the field covers topics people of different ideologies disagree strongly about, like “racism, sexism, stereotypes, and responses to power and authority.” She asserts that research quality suffers if it comes from only one side of the political spectrum.
The BBS paper argues that even well-meaning academics tend to be biased towards their ideological side, favoring papers that align with their views.
Personal Insights
- Duke’s suggestion to follow people from the opposite side of the political spectrum on social media is something I’ve tried but found challenging and unenjoyable. I may be doing it wrong.
Duke concludes by emphasizing that the biases observed in judges and academics are universal. She implies that if these truth-seeking professionals are susceptible to such biases, the rest of us are likely even more prone to them.
Wanna bet (on science)?
Betting can be a tool to motivate people to seek correctness and express contrary opinions, because it makes their goal winning the bet rather than fitting in with the group.
A study involving asking scientists whether they thought papers would replicate found that they were right 58% of the time when they performed traditional peer review and 71% of the time when they participated in a betting market.
Chapter 5: Dissent to Win
CUDOS to a magician
Robert K. Merton, known as the founding father of modern sociology, was very interested in the institutional influences on science, such as how geopolitical influences can spur scientific advancement and how science struggles to remain independent from those influences.
He wrote a paper in 1942 (and continued to perfect it until the final version in 1973) about the normative structure of science. (This paper was referenced in the BBS paper.) He laid out rules with the acronym CUDOS:
- Communism: data belongs to the group
- Universalism: apply uniform standards to claims and evidence, regardless of where they came from
- Disinterestedness: vigilance against potential conflicts that can influence the group’s evaluation
- Organized Skepticism: discussion among the group to encourage engagement and dissent.
Duke finds these norms broadly applicable to any truthseeking group, and observes that when people start pursuing confirmation rather than accuracy, it’s often because one of them is lacking.
Mertonian communism: more is more
Science has a norm (though often honored in the breach) that researchers are supposed to publish all their data. Feynman memorably espoused a very strong version of this: “a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results…”
That’s tough, but it’s an aspirational ideal. Within “decision pods” it’s a good idea to get all the possibly relevant information out there, using a very broad definition of “possibly relevant.” Often it’s the very details we have an urge to leave out, because they make us uncomfortable, that turn out to be the crucial ones.
The Freedom of Information Act is a similar aspirational information-sharing mechanism: it lets citizens know what the government is doing, so they can make up their minds about whether it’s doing a good job. The First Amendment freedoms of speech and of the press are primarily about the importance of self-expression, but they also similarly prevent censorship of information about what the government is doing and how well it’s doing it.
All truthseeking is in large part done by agreement. Researchers agree to share their data, the government legislated FOIA to cast light on its own doings, and likewise within these decision pods we can voluntarily decide to share information. Keeping secrets isn’t intrinsically bad and in some situations is important, but choosing to share information in the specific context of a decision pod helps with productive truthseeking.
In group discussions, if we’re missing some details, it could be because someone didn’t think they were relevant, or because they have a bias toward steering the narrative in a flattering direction.
As Duke says Jonathan Haidt points out, and as we’ve been discussing recently, “we are all our own best PR agents, spinning a narrative that shines the most flattering light on us.”
Different people often give completely different accounts of the same event, each shaped by their own perspective, as demonstrated in the film Rashomon, in which four people who all observed the same scene describe it in irreconcilable ways.
No one version of a story will ever be completely accurate; one person’s telling will always be incomplete. So we should strive to be as complete as possible in our sharing within the decision pod, and when hearing others’ stories we should understand that we’re still not getting all the information and try to dig deeper.
The earlier story about the CEO who blamed his company’s problems on firing the president is an example of this: only when Duke dug deep with follow-up questions did she unearth the facts that led to the conclusion that the firing had actually been quite reasonable.
Sharing lots of details gives people the opportunity to ask you good questions. Poker players describe hands to each other in what seems like irrelevant, nitpicky detail, but all that detail actually helps them. And because they share all the details every time, they avoid leading the listener to a desired conclusion via partial and thus misleading information.
The author then tells an anecdote about Vince Lombardi, who is apparently a famous football coach, once holding an audience of other football coaches spellbound for eight hours while describing a single football play in great detail.
We can encourage others within our decision pods to share information they’d otherwise be reluctant to by giving positive feedback every time someone shares something that might cast them in a bad light.
Universalism: Don’t Shoot the Message
- “Don’t shoot the messenger” means not to take out your frustration with the message’s content on the person delivering it. An early example is from Plutarch’s Lives, about the King of Armenia (literally) shooting the messenger. From Wikipedia: “The first messenger, that gave notice of Lucullus’ coming was so far from pleasing Tigranes that, he had his head cut off for his pains; and no man dared to bring further information. Without any intelligence at all, Tigranes sat while war was already blazing around him, giving ear only to those who flattered him.”
- Universalism extends this principle to the message itself: don’t shoot the “message” either, i.e., don’t react to an idea differently depending on whether or not you like the messenger.
- When we have a negative opinion of someone, we close our mind to what they’re saying, and likely miss out on a lot of learning opportunities. Similarly, when we like the “messenger,” we tend not to examine what they’re saying too rigorously and accept it too easily. Both are bad. Acceptance or rejection of an idea shouldn’t depend on who or where it comes from; it should instead be judged against pre-established, impersonal criteria.
- The author talks about her personal experience in poker: when she started out, she was given a list of “good beginner moves” by her brother. She treated these as absolute truth (another example of shooting the message?) and mentally dismissed players who deviated from these rules as “bad” players, which meant she shut her mind to potentially learning from them.
- Once she realized she was doing this, she came up with a way to practice and reinforce universalism. When she had the impulse to label someone as a “bad” player, she’d force herself to find something they did well. This made her think more deeply about their strategy instead of dismissing it altogether and thereby losing out on learning from them.
- Another exercise to reinforce universalism is to deliberately seek out the opinions and arguments of people on the opposite side of what you believe, and then find things you actually agree with them on.
- Yet another way to separate the message from the messenger is to imagine the same message coming from someone you value more, or value less, or to omit where you heard the idea, to make sure it gets examined critically and without bias.
Disinterestedness: we all have a conflict of interest, and it’s contagious
- Through the 1960s, scientists were at odds as to whether fat or sugar was the culprit in the increasing rates of heart disease. In 1967, a comprehensive survey by three Harvard scientists, published in the prestigious New England Journal of Medicine, firmly pointed the finger at fat. This shifted the diets of hundreds of millions of people for decades. But it turns out, according to an article in JAMA Internal Medicine in September 2016, that the three Harvard scientists were paid by a trade group representing the sugar industry. Science relies on an assumption of disinterestedness: conflicts of interest are important data that we need to know about.
- But our brains have built-in conflicts of interest: a natural desire to avoid admitting error and blame bad luck, or to take credit for good results as being the consequence of decisions we made.
- Feynman recognised that even physics, the hardest of sciences, has a demonstrable outcome bias: knowing the hypothesis under test can unconsciously bias the analysis. Techniques such as using a random variable to record the results allow “outcome-blind” analysis (e.g., introduce a random variable X, range 1–5, to code the outcome, and don’t tell those doing the analysis whether X=1 or X=5 corresponds to an improvement in patient outcomes). See the sketch after this list for a toy illustration.
- We can apply this idea as we workshop decisions: sales teams can evaluate their strategy before they know if they won the deal. Traders can vet processes prior to options expiring. Even with chess games (my drug of choice), you can evaluate a series of moves before knowing who won.
- Beliefs, too, are contagious: if our listeners know what we believe to be true, they might unconsciously work harder to justify our beliefs. So we can try not sharing our personal opinion as we seek that of the group.
- Key to a successful group is reinforcing habits that are now perhaps associated with the Rationalist community, such as being able to successfully articulate and steel-man an opponent’s point of view. This can in turn give us a deeper understanding of our own position.
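To make the outcome-blind idea concrete, here’s a tiny sketch (not from the book; the group names and numbers are made up) of how you might blind an analysis so the analyst only ever sees anonymous codes:

```python
import random

# Toy data: measured outcomes for two groups. Names and numbers are invented
# purely for illustration.
results = {
    "treatment": [7.2, 6.8, 7.9, 7.1],
    "control":   [6.9, 6.5, 7.0, 6.7],
}

# Blind the outcomes: map each real group to a random anonymous code.
codes = ["X1", "X2"]
random.shuffle(codes)
blinding_key = dict(zip(results.keys(), codes))  # sealed away until the analysis is locked in
blinded = {blinding_key[group]: values for group, values in results.items()}

# The analyst works only with the anonymous codes, so they can't (consciously
# or not) nudge the analysis toward the hoped-for result.
for code, values in sorted(blinded.items()):
    print(code, sum(values) / len(values))

# Only after the write-up is final do you unseal blinding_key.
```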
Organized Skepticism: Real Skeptics Make Arguments and Friends
- Duke emphasizes that skepticism involves recognizing that some of our beliefs are likely untrue. True skeptics actively seek out these potential inaccuracies.
- When we are precise about the confidence we have in our beliefs, we create an environment that reduces confrontational dissent. Instead, skepticism can foster cooperative exploration, where people express both their beliefs and the uncertainty surrounding them.
- Contrary to common associations with skepticism—such as disagreeability or cynicism—true skepticism often correlates with good manners, civil discourse, and friendliness.
- Duke suggests that skepticism should be encouraged and even institutionalized through practices like devil’s advocacy or red teams. These methods help create structures for challenging assumptions productively.
- On an individual level, inviting others to challenge our beliefs can be helpful. Doing this proactively ensures others don’t feel like they are being discouraging or negative.
- Engaging in skepticism, recognizing our own inaccuracies, and improving them is inherently difficult. We should make this process as easy as possible, ensuring that unproductive behaviors—such as cynicism or dismissiveness—don’t hinder constructive, collaborative exploration.
- It’s important to remember that even with a cooperative, exploratory attitude, skepticism can still feel confrontational to those who are not part of the skeptical effort. The David Letterman example we read about earlier illustrates this.
Communicating with the world beyond our group
To apply the lessons outside a truth-seeking group, there are ways to avoid Letterman-like situations [1]:
- Lead by example, i.e., express uncertainty about your own views.
- Lead with assent, meaning you point out areas where you agree, hoping that the other party sees you as a good-faith actor when you later disagree.
- Ask for temporary agreement to engage in truth-seeking, i.e., explain the general framework and then ask for permission to pursue truth-seeking in that context.
- Avoid saying “no” or “but,” and adopt a “yes, and” mindset.
- Focus on how to improve the future instead of making judgments about the past. That will make people more agreeable.
That concludes the section and chapter. The next chapter is about time travel (or something).
[1] These points could appear like that in any random book about communication.
“Let Marty McFly…” and “Night Jerry”
- Annie suggests we combat akrasia by choosing to remember the past and imagine the future.
“Moving regret in front of our decisions”
- Regret is a strong emotion. But we wallow in regret after things go wrong, when it is too late to change them. Better to try to imagine them going wrong, and let this push us to consider the future when deciding on our actions.
- A concrete suggestion: before a decision, ask yourself what the consequences of each choice will be “in 10 minutes? in 10 months? in 10 years?”
A flat tire, the ticker, and a zoom lens
Anecdote! You’re changing a flat tire at night in the freezing cold and it suuuuucks. In the moment it feels like massive suffering. But zoom out. How much worse did your life get because of this flat tire? Maybe literally less than zero? You lost an hour or something and have a cool story to tell.
Similarly if you have your retirement fund in the stock market and you watch its value bouncing up and down each day (“watching the ticker”). Zoom the heck out and look at the big picture. It’s a retirement fund – the right granularity to look at it is more like decade-level.
So when you make decisions, zoom out to the right level when assessing them.
Review: A 10-10-10 strategy means considering what the consequences will be in 10 minutes, 10 months, and 10 years. And also looking backwards: How would I feel today if I had made this decision 10 minutes ago? 10 months ago? 10 years ago? (Related idea: outside view vs inside view.)
Doing that discourages overreacting to the immediate moment, where you focus on offloading negative emotions (like by blaming them on luck) or sustaining positive ones (by taking credit for something that might be luck). Again, zoom out.
“Yeah, but what have you done for me lately?”
Abstract/Executive summary
People tend to focus on the latest result, and that’s irrational. If we’re gambling, we feel good when we’re up in the moment, even if we’re down overall. We should avoid this recency bias and focus on best practices, whether we’re doing well at the moment or not.
Detailed discussion
Duke explores how recent successes and failures influence our emotions and cloud our decision-making abilities. She introduces the idea of “watching the ticker,” a form of “resulting,” where we focus on the most recent results.
Duke describes two gambling scenarios: both end with breaking even, but one starts well and ends poorly, while the other begins badly but finishes strong. Despite identical net outcomes, Duke surmises that people typically feel better about the second scenario. She argues this pattern persists even when the first scenario has a better net result.
This pattern reminds me of Dan Ariely’s observations in “Predictably Irrational” about his burn treatment experiences: he remembered his treatment more positively when it ended gently, regardless of the overall intensity or duration of the experience.
Duke argues that this recency bias is irrational. Feeling good when things are going poorly overall (or vice versa) just because of recent results can lead to poor decision-making. She suggests that skilled decision-makers actively work to overcome these emotional responses to recent outcomes. A solution? Duke suggests trying to view your situation as if you’re observing yourself from one year in the future, a perspective I appreciate. I like the idea that in a year, today’s bad experience will be a good story.
Duke notes that “Poker players think about this a lot.” This focus makes sense. I imagine that poker players enjoy the emotional highs and lows of the game, making them susceptible to recency bias. Nevertheless, it occurs to me that mastering these emotions could become part of the challenge and appeal of the game itself.
I suspect the following sections will try to bridge the gap between professional gambling and everyday decision-making. Just as poker players try to distance themselves from momentary successes and failures, I anticipate Duke arguing that their techniques can work in our professional and personal lives.
Tilt
Abstract/Executive summary
When emotions from bad results lead to bad decisions, you’re “on tilt.” This feedback loop can occur in any context, not just poker. To avoid or recover from tilt, try taking the long view. Will this matter in the long run? If self-talk doesn’t work, enlist friends to help you maintain perspective.
Detailed discussion
Duke begins this section by discussing surfers and their extensive vocabulary for waves. This illustration reminds me of the “Rain God” passage from Douglas Adams’s So Long, and Thanks for All the Fish. She uses this exposition to characterize how specialists develop precise jargon to describe nuances that outsiders might miss. Surfers need specific terms for different types of waves so they can develop and discuss strategies for handling them; poker players have developed a vocabulary around the intersection of emotions and decision-making.
One important piece of poker player jargon is the word tilt, a term I encountered recently through Nate Silver, the poll aggregator and poker player. The word originates from pinball machines, which, since 1935, have had mechanisms to detect when players violently jostled them. When the sensors are triggered, these machines display “TILT” and shut down. In poker, being “on tilt” means you’ve become emotionally overwhelmed by recent outcomes, leading to poor decision-making.
Duke emphasizes that tilt is “the poker player’s worst enemy.” It creates a dangerous feedback loop: bad results lead to emotional decisions, which lead to worse results. The signs of tilt are both behavioral (yelling, cursing) and physiological (flushed cheeks, racing heart, rapid breathing). Although Duke uses poker, where the term originated, as her primary example, she emphasizes that tilt can occur in any context in which emotions interfere with decision-making.
To alleviate tilt, Duke suggests developing habits to recognize its signs and pre-committing to exit situations when you notice them. She offers various self-talk strategies, although I’m skeptical about their effectiveness when someone is already emotionally flooded. Perhaps the better approach is avoiding tilt altogether. Duke suggests enlisting a “truthseeking pod,” people who can help talk you down when you’re on tilt. They might help you take the long view: “Will this matter in the long run?”
Duke concludes the section by advocating a long-term perspective, noting, “It’s all just one long poker game.” This approach to decision-making reminds me of index fund investing, in which a successful strategy ignores short-term market fluctuations. Just as long-term investors outperform market timers, perhaps decision-makers who maintain emotional equilibrium outperform those who react to every fluctuation. Nevertheless, there’s no such thing as “index poker playing.” The gambling environment requires making many fast decisions and observing the results, increasing vulnerability to tilt.
Ulysses contracts: time traveling to precommit
We can do things to make it more likely that we’ll stick to decisions made with a longer-term perspective. None of them are impossible to circumvent in the moment, and we can’t avoid every emotional, irrational decision, but even very simple measures make such decisions less frequent.
- Even just articulating beforehand the circumstances under which we would act triggers deliberate thought about the decision in the moment. (For example, investment advisors determine in advance the situations in which they’d want to buy or sell, rather than reacting emotionally to a sudden change in an investment’s value.)
- You can do things that make the bad decision harder. (Such as when Odysseus, whom the Romans called Ulysses, had himself bound to the mast to avoid acting on the Sirens’ song. Hence the term “Ulysses contract” for this kind of thing.)
- You can do things that make the good decision easier. (For example if you are going to be waiting for a friend in a food court and want to avoid eating junk food, you can pack healthy snacks).
Decision swear jar
This is what we’ve all been waiting for, I guess, the commitment-contract chapter. The author gives ideas for things one can commit to not doing. This is framed in terms of a “swear jar”, which serves here as an ad-hoc version of Beeminder. What any of us would get out of this chapter is the list of ideas of what to commit to. These are all framed negatively—things to pay up for when you do, what in Beeminder-land we’d call a do-less goal.
The framing here is that you’d put some money in the “swear jar” whenever you find yourself expressing certain words, phrases, or thoughts; the idea being to bind the payment to concrete and distinct potential mistakes of thinking. This also helps you catch yourself in the moment and reflect—toss a dollar into the swear jar (or enter a Beeminder data point), and then rethink that last thought in light of what you know about that kind of thinking having the potential to lead you astray.
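If you’re doing the Beeminder version rather than a literal jar, logging the datapoint can be automated. Here’s a minimal sketch assuming the standard Beeminder datapoints endpoint; the username, goal slug, and token below are placeholders, and the goal itself would be a do-less goal you’ve already created:

```python
import requests

USERNAME = "alice"         # placeholder Beeminder username
GOAL = "swearjar"          # placeholder do-less goal for thinking errors
AUTH_TOKEN = "your-token"  # placeholder personal auth token

def log_thinking_error(comment):
    """Add a +1 datapoint to the do-less goal, annotated with what you caught yourself saying."""
    url = f"https://www.beeminder.com/api/v1/users/{USERNAME}/goals/{GOAL}/datapoints.json"
    resp = requests.post(url, data={"auth_token": AUTH_TOKEN, "value": 1, "comment": comment})
    resp.raise_for_status()

# e.g.: log_thinking_error("said 'there's no way that's true' in a meeting")
```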
There is a good long list of things you might want to commit to do less of. I’ll give a few samples from the chapter, but I’m not going to copy them all down, so I recommend that anyone thinking of making Beeminder goals based off this book read through the full list.
- Expressing the illusion of certainty: pay up whenever you find yourself saying something like “I know”, “I’m certain”, “There’s no way that’s true”, etc, or expressing a numerical probability of 0 or 1 (0% or 100%.)
- Pay into the swear jar whenever you use the word “wrong” when in exploratory discussion; “wrong” is a conclusion, not a rationale.
- Pay up whenever you find yourself moaning about bad luck.
- Discouraging others from expressing their opinions: pay up when you respond to an opinion you asked for with “no” or “but…” instead of building on it with “yes, and…”
- Lack of self-compassion: pay into the swear jar whenever you say “how could I have been so stupid” or similar.
Reconnaissance: Mapping the Future
Just as a military mission relies on reconnaissance, refusing to begin the operation until every detail about the terrain and situation is known and every possibility that could result from variables like weather is mapped out, we shouldn’t make decisions without considering the range of futures that could result from each option we’re weighing.
First, we imagine the “range of potential futures”, identifying as many of the possible outcomes as we can – known as scenario planning. (i.e. what are all the scenarios that could possibly occur?)
Then we make our best guess at the probability of each of those futures occurring. People hesitate to do this because they can’t be sure of the probability – but that’s the point. It’s not about approaching future predictions from a point of perfection, but about acknowledging that we’re making a guess. By doing so, we move away from a dangerous 100% certainty that it will turn out this way or that.
Poker players do this kind of scenario planning all the time: they consider each of the opponent’s possible responses, and estimate the likelihood of each, before making their move.
Scenario planning is great when done with a group, since diverse viewpoints help identify more scenarios, deeper into the tree, and help the group estimate the probability of their occurrence better.
Scenario planning has many benefits:
- It makes explicit the fact that the future is inherently uncertain, which leads to a more realistic view of the world.
- It helps us plan our moves in advance, which means we can avoid being reactive. If we’ve identified a scenario where we’re susceptible to irrationality, we can try to address it preemptively with a Ulysses contract, for example.
- Anticipating the possible futures keeps us from overreacting emotionally, with unproductive regret or undeserved euphoria, when a certain future happens.
- It makes us less likely to fall prey to both resulting and hindsight bias.
Scenario planning in practice
Expected Value
This section starts by introducing the concept of Expected Value [as a reminder, the EV of a decision is the sum, over possible outcomes, of (probability of outcome) x (payoff if that outcome happens)], which will be well known to many of us here. Duke gives an example of an organisation that was applying for grants: initially, they would focus efforts on the grants with the highest headline value. But many of those had a low probability of success, so their EV was worse than grants lower down the potential-value list. By refocusing on EV rather than on headline value, they had more realistic budgets, could focus on ways of increasing probabilities, and were less likely to fall prey to hindsight bias or resulting, because they were thinking in terms of success probabilities all along.
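For concreteness, here’s what that grant reprioritization might look like with made-up numbers (the amounts and probabilities below are invented, not from the book):

```python
# Hypothetical grant opportunities: (name, headline value in dollars, estimated win probability)
grants = [
    ("Big flashy grant",  500_000, 0.05),
    ("Mid-size grant",    150_000, 0.40),
    ("Small local grant",  40_000, 0.80),
]

# EV of each grant = probability of winning times the payoff if you win.
ranked = sorted(grants, key=lambda g: g[1] * g[2], reverse=True)

for name, value, prob in ranked:
    print(f"{name}: headline ${value:,} x {prob:.0%} = EV ${value * prob:,.0f}")

# Ranked by EV, the mid-size and small grants come out ahead of the big flashy
# one, which is exactly Duke's point about headline value vs expected value.
```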
The importance of reconnaissance of the future
More sophisticated scenario planning goes deeper into the decision tree: in the example that opened the book, of the Seahawks coach’s Super Bowl decision, the two initial choices for Pete Carroll were run or pass. Each of those in turn could lead to different future outcomes. Importantly, calling a pass was likely to give Seattle three plays to score, since for example an incomplete pass or a penalty would stop the clock [As a Brit, it’s not completely clear to me if a sack in the last 2 minutes of a game would stop the clock. I think an intentional grounding would, though, which is another possible outcome of a pass decision.]. Only by looking deeper into the decision tree do these important (and not unlikely) outcomes become visible.
Backcasting: working backward from a positive future
Directly trying to forecast the future, starting from now and moving forward, can distort our view. We have a sharp, detailed image of what’s in front of us and a blurry, low-resolution image of the future.
This can lead us to give disproportionate attention to the problems of today instead of preparing for the problems of the future. Combined with an implicit (and tempting) assumption like “things are like this now, and they will stay this way,” this can be hazardous.
This section talks about an idea that I’d bet will be familiar to many of the brigadiers, which is supposed to remedy that.
Imagine you want to go on a huge journey. Instead of planning from the first step forward, start by picturing the journey completed and work backward from there.
Duke focuses on one approach to “working backwards”, known as “backcasting.” In this approach we imagine we realised the desired outcome, and we think about how we got there.
Here’s why it can be effective:
- A Deeper Look into the Decision Tree: Starting at the end lets us focus on the big picture, then work toward today. This approach helps us form a deeper decision tree.
- Spotting Key Challenges: Backcasting helps identify low-probability events that could impact success. We can then decide whether to (try and) increase the chances of these events happening, or recognize that our goal is ambitious.
- Preparing for Setbacks: By mapping out key steps, we can prepare responses for potential setbacks and commit to how we’ll handle them.
Example: Weight Management Goals
Let’s say you’ve hit your weight management goals after six months. To understand how you got there, you’d ask questions like: What actions helped you reach your goal? How did you avoid unwanted habits? How did you adjust physical activity? And how did you stick to your food plan?
Premortems: working backward from a negative future
If backcasting is the process of imagining a positive future and the paths to it, then a premortem is the opposite, as it involves imagining a negative future and the paths leading to it. A premortem is an implementation of the Mertonian norm of organized skepticism.
Only by considering both the negative and positive event spaces can we see the world objectively, which enables us to make better decisions in the long run.
Returning to the weight loss example, we might anticipate scenarios such as feeling obligated to eat cake at a birthday party, encountering free cookies and bagels at a conference, and lacking time and motivation to go to the gym. We can then precommit to positive actions in such scenarios. (This reminds me of another self-help book where the author recommended a trigger-action plan, such as “whenever the waiter asks me if I want dessert, I order unsweetened tea instead.”)
Popular self-help literature often focuses on the benefits of positive visualizations. However, studies by Gabriele Oettingen, author of “Rethinking Positive Thinking,” have shown that study participants who imagined negative outcomes for weight-loss goals lost 24 pounds more than participants who only engaged in positive visualizations.
Being a “heckler” and pointing out ways in which things can go wrong might go against a “positive attitude” culture in professional settings. Therefore, it is important to clearly set expectations and explain how a premortem is helpful in reaching positive goals by illuminating the negative event space.
Dendrology and hindsight bias
When we look to the future, we hopefully see a branching tree of possibilities—a series of points where one thing might happen with high probability, or perhaps another thing will happen with low probability. In the end, we will live out one of these futures.
When we look back at the past, it’s very easy to forget that the present we live in was just one of these tiny branches, and to feel like it was somehow inevitable. After all, we can see the chain of causation leading to it, and it all makes sense—never mind that each of those things could have gone differently, and we would have gotten a different chain of events that also makes sense.
The book concludes by imploring us to try to always keep this in mind. When something bad happens, it’s always going to seem like it was obvious in retrospect—we can see the chain of events that led to it. But the only way we’ll be able to learn from our mistakes—and reach peace with things when we didn’t really make a mistake at all—is by thinking back to each decision we made and asking: “is it really true that I should have known things would turn out this way”?
Appendix: Book Brigade notes for How To Win Friends and Influence People by Dale Carnegie
Notes by Daniel Reeves and Bethany Soule, written as a dialog with each other
Chapter 1: “If You Want to Gather Honey, Don’t Kick Over the Beehive”
We start with fun anecdotes about vicious criminals who thought of themselves as the good guys. The point being, no one ever thinks they’re wrong and bad. Criticizing people backfires and makes them defend themselves and double down. All you do is incur resentment. There are more fun anecdotes of former presidents and Benjamin Franklin and Mark Twain who all figured this out. One of Lincoln’s favorite quotes was “Judge not, that ye be not judged.” Lincoln would write scathing letters to his generals or whoever and then intentionally not send them. Carnegie suggests a motto of “What Would Lincoln Do” and that whenever you’re tempted to admonish someone, pull a five-dollar bill out of your wallet and meditate on Lincoln’s picture.
“Any fool can criticize, condemn, and complain – and most fools do. But it takes character and self-control to be understanding and forgiving.”
The chapter ends with a parenting story that was apparently wildly popular called “Father Forgets”. I guess it’s still popular. It’s from the perspective of a dad who is just awful: yelling at his kid all the time, embarrassing him in front of his friends, total lack of affection, etc. But then the kid spontaneously hugs him good night and he has a come-to-Jesus moment and resolves to be non-awful.
Now I’ve probably made you curious to read it, though my prediction is that it’s not worth your time.
PS: One last anecdote from the author’s own life that might be interesting. He once saw a letter that had “dictated but not read” at the bottom and as a young person was very impressed by how Busy and Important someone must be to send a letter that way. He later had reason to write a letter to a bigshot in his field and thought he’d seem impressive if he put “dictated but not read” at the bottom of his letter. The bigshot replied with a sentence like “you are a jackass and I’m not answering your letter”. Carnegie 100% agreed that it was a jackass move but at the time it just made him despise the bigshot. To the point that when the bigshot died like a decade later, all he could think about was his resentment at the rebuke.
In conclusion, don’t criticize, condemn, or complain.
Chapter 2: The Big Secret of Dealing With People
Thesis: The only way to get someone else to do something is to make them want to do it.
So what do people want? According to Freud – sex and greatness. According to philosopher John Dewey – feeling important. How you get your feeling of importance determines your character.
Then he spends a couple pages discussing different ways that people get a feeling of importance. Anecdotes include his father deriving importance from his prize pigs (the pigs did not care), criminals deriving importance from infamy, and rich people doing philanthropy (to get their name attached to things). Then he spends some time dwelling on invalids and the mentally ill. “Some people became invalids in order to win sympathy and attention…”, with an anecdote about Mrs. McKinley, and another anecdote about a woman in a mental hospital who was dissatisfied with her life but got all of her wishes in the delusions of her psychotic break.
There are interesting assumptions about “insanity” in this, the kind of thing an enterprising comp lit or, like, social anthropology student could write a whole thesis on: “If some people are so hungry for a feeling of importance that they actually go insane to get it, imagine what miracle you and I can achieve by giving people honest appreciation this side of insanity.” But I’ll skip that.
Okay, so the secret to motivating people? Praise. We go into listing a bunch of the titans of the industrial revolution who were lavish in their praise of their underlings and associates. We get anecdotes about Charles Schwab (first president of Andrew Carnegie’s US Steel Company, then went and fixed up / rebuilt Bethlehem Steel as well), Andrew Carnegie himself, the first John D. Rockefeller, Florenz Ziegfeld (of the Ziegfeld Follies), Stevie Wonder, Queen Victoria / Benjamin Disraeli, and others. Some of these are stories of the famous person being encouraged (to greatness?) by praise from superiors, teachers, etc.
I’ll include this quote from Schwab which (Dale) Carnegie claims will “all but transform your life and mine if we will only live them.”
I consider my ability to arouse enthusiasm among my people the greatest asset I possess, and the way to develop the best that is in a person is by appreciation and encouragement.
There is nothing else that so kills the ambitions of a person as criticisms from superiors. I never criticize anyone. I believe in giving a person incentive to work. So I am anxious to praise but loathe to find fault. If I like anything, I am hearty in my approbation and lavish in my praise.
– Charles Schwab
There’s a caution in here to distinguish between flattery and appreciation. Flattery is insincere praise. Praise only works (in the long run?) if your appreciation is sincere. I feel like there’s a mindfulness tie-in here. What he’s suggesting in this chapter is about more than the praise; it’s about actually finding appreciation for stuff that other people are doing. Like the anecdote about Rockefeller, whose business partner lost a crap-ton of money in a poor investment. Rockefeller didn’t berate him for the loss (it was a literally sunk cost at that point) but instead praised the guy for making good decisions about cutting losses and only losing 40% of the original investment.
I’m now also thinking of the How To Talk thing about praise – be concrete and specific. Which I think is to a similar purpose.
So in summary: basically be an optimist? Notice the things that the people around you are doing that improve your life, and then express appreciation for it.
Principle Number Two:
Give honest and sincere appreciation.
Chapter 3: “He Who Can Do This Has the Whole World with Him. He Who Cannot Walks a Lonely Way”
- Bait the hook to suit the fish. Or a less Slytherin-sounding version: focus on what other people want. Or to ramp up the Slytherin again: “First, arouse in the other person an eager want.”
- Example (I wonder if this was in the original 1936 edition): Instead of preaching to your kids not to smoke, talk about how cigarettes would keep them from making the basketball team or winning the hundred-yard dash.
- Cute story of Andrew Carnegie wagering that he could get his college-aged nephews to reply to a letter (their mother couldn’t get them to reply no matter how frantically she wrote to them): He wrote a chatty letter that casually mentioned he was enclosing a five-dollar bill, which was not in fact enclosed. Back came the replies, “Dear Uncle Andrew, thank you for the kind note and…” – you can guess the rest.
- Another story about a kid throwing a fit not wanting to start kindergarten the next day. The family started fingerpainting and told the kid he couldn’t join them without going to kindergarten to learn how. Tada, the kid woke up early the next morning anxious about being late. Is this compatible with how-to-talk?
- In conclusion, before trying to persuade someone to do something, think about how to make them want to do it.
- Savvy-sounding strategy that Carnegie used to argue against a price hike that a company tried to spring on him: He told the manager that it made total sense to raise prices and “I’d do the same if I were you.” Then he said, let’s make a list of the advantages and disadvantages of the price hike. This was all 100% from the manager’s perspective, but the real arguments were “you’ll lose my business” plus reasons that Carnegie was especially valuable as a customer.
- How Henry Ford put it: “If there is any one secret of success, it lies in the ability to get the other person’s point of view and see things from that person’s angle as well as from your own.”
- I just remembered that this is conventional startup wisdom for webcopy. Don’t list features, talk about what the customer can achieve with your product.
- Ironic thing reminiscent of the last paragraph of [redacted link to doc] about whether the money-hating rule is karmically ROI-positive: “the rare individual who unselfishly tries to serve others has an enormous advantage”. I guess it’s also a bit like the concept of heaven and hell – trying to selfishly motivate people to be selfless.
- Some of these anecdotes are examples of Tom Sawyer’s trick: make people want to do the thing you want them to do.
- Make people think something was their own idea. Plant a seed and let them cook and stir the idea themselves. They’ll come to regard it as their own. (Related anecdote about getting a kid to eat breakfast by having her help cook it.)
Review of the principles so far from the end of Part 1:
- Don’t criticize, condemn, or complain
- Give honest and sincere appreciation
- Arouse in the other person an eager want
Part 2: Six ways to make people like you
Chapter 1: Do This and You’ll Be Welcome Anywhere
The basic thesis of this section is “showing interest in other people makes them like you and care about you.” That’s why everyone loves dogs. They are always interested in what you are doing and eating and excited to see you.
Claims people are fundamentally selfish and get all caught up in being interested in themselves. This is kind of like the thing where I, a sufferer of social anxiety, have to remind myself that no one else in the room is paying attention to me at all, so I don’t have to freak out and be embarrassed that I just tripped on the carpet or whatever. No one is going to start pointing and laughing at me. Probably no one noticed. And even if they did, they’d generally fundamentally only care as far as it affected them. People aren’t interested in you; they are interested in themselves. You can’t force people to be interested in you, and trying to get them to like you doesn’t make them like you.
Quotes Alfred Adler:
It is the individual who is not interested in his fellow men who has the greatest difficulties in life and provides the greatest injury to others. It is from among such individuals that all human failures spring.
Harsh.
Anecdotes
- He had a dog named Tippy who he loved. The dog was a dog, and therefore incredibly loyal and excited to see him all the time. The dog was tragically killed by lightning (what?!). Be like a dog – enthusiastic and super into other people – and they will like you.
- He once spent an evening hanging out in the dressing room of some at-the-time famous Broadway performer – Howard Thurston – who was a beloved and very financially successful magician. Thurston said that most magicians view the audience as rubes there to be conned, but not him. He is grateful for their patronage and thinks every time before he goes on stage how lucky he is to be there, and how the audience makes it possible for him to make his living doing this thing he loves.
- Teddy Roosevelt was loved by everyone, including his servants. His valet wrote a book about him, with various stories about how the President was solicitous of all his servants as if they were his friends and knew everyone’s name, even when visiting the White House after his term.
- Some salesman almost lost an account with a druggist who was going to switch to some major retailer as supplier, but then the owner of the drugstore changed his mind on the recommendation of his staff, because the staff really liked the salesman, cuz he knew all their names and always stopped to chat.
- He found out the birthdays of all his friends and wrote them in his calendar and always sent a card.
- Another salesman couldn’t get an interview with the president of some company, but overheard the president asking the secretary about stamps for his son’s collection. So the salesman went back to his office, collected a bunch of unique stamps from their international correspondence and customers, and then returned and said he’d brought stamps for the guy’s son. That got him an hours-long interview where they chatted about stamps and such, until the president brought up the business thing himself.
Summary
The Roman poet Publilius Syrus said, “We are interested in others when they are interested in us.”
The interest must be sincere, but it’s a two-way street, and both sides benefit.
Part 2, Chapter 2: A Simple Way to Make a Good First Impression: 😃
This is mostly an 8-page paean to the power of smiling. Sometimes it goes a bit over the top: “A smile is rest to the weary, daylight to the discouraged, sunshine to the sad, and Nature’s best antidote for trouble.”
So, yeah, you’re welcome for this abridgement. There are various stories about people resolving to smile at their spouses and coworkers and how instantly that transformed their lives. I’m not disagreeing with this, to be sure. It’s powerful.
On to pragmatics: Carnegie strongly endorses fake-it-till-you-make-it, aka act-as-if, aka the Mormon trick. He doesn’t use any of those terms but says this:
Action seems to follow feeling, but really action and feeling go together; and by regulating the action, which is under the more direct control of the will, we can indirectly regulate the feeling, which is not. […] If our cheerfulness be lost […] act and speak as if cheerfulness were already there.
Scooped by a century. One last quote worth pondering from this chapter:
People rarely succeed at anything unless they have fun doing it. “I have known people,” [some executive] said, “who succeeded because they had a rip-roaring good time conducting their business. Later, I saw those people change as the fun became work. The business had grown dull. They lost all joy in it, and they failed.”
Part 2, Chapter 3: If you don’t do this you are headed for trouble
Learn and remember people’s names.
Basically there are a bunch of cute anecdotes about famous and successful people who made an effort to do this. It’s fun to read. But also I don’t really need to be sold on the idea. Learning people’s names is great. Learn their names and use them.
There are kind of a couple of tips on how to do this, but it mostly amounts to: ask, and write it down as soon as you can. Finding a way to Anki people’s names would be great.
Part 2, Chapter 4: An Easy Way to Become a Good Conversationalist
Spoiler: it’s to listen. So many anecdotes. And slogans: “To be interesting, be interested”. Also browbeating: “[People who] bust right in and interrupt in the middle of a sentence [are] bores, that is all they are – bores intoxicated with their own egos, drunk with a sense of their own importance”.
In the middle of this chapter I was pleased to see an anecdote that amounted to advocating upside-down support. Here’s a business owner talking about dealing with an unhappy customer:
I listened patiently to all he had to say. I was tempted to interrupt, but […]. When he finally simmered down and got in a receptive mood, I said quietly: “I want to thank you for coming to Chicago to tell me about this. You have done me a great favor, for if our credit department has annoyed you, it may annoy other good customers”.
It continues with a heaping of empathy and even models Beeminder’s take on competitors (or maybe the dog-eat-dog post is the better one to link to here?):
I told him that I understood exactly how he felt and that, if I were in his shoes, I should undoubtedly feel precisely as he did. Since he wasn’t going to buy from us anymore, I recommended some other woolen houses.
The punchline is the customer not only did a 180 but named his firstborn child after the company. No tattoo though!
In conclusion, be a good listener, ask questions the person will enjoy answering, encourage them to talk about themselves.
Discussion question: What if both people have read this book?? Are they like the north-going Zax and the south-going Zax? Ok, probably they will muddle through and take turns or something.
Part 2, Chapter 5: How to interest people
This was a short one. Just a couple anecdotes that feel pretty similar to a lot of the other principles so far. Interest people by talking in terms of their interests.
It has a handful of anecdotes about people researching someone ahead of time, finding out their interests, and going to talk to them about that instead of the business proposition. Once they’ve buttered the person up and gotten them talking, the person thinks they’re really great, because they just got to talk about themselves and what they like for 45 minutes, and then they want to do business things with you.
Talk in terms of the other person’s interests.
I guess this is a fine “in” for business relationships, but what about personal relationships? I feel like there’s some common wisdom these days about how this is terrible dating advice or whatever, with stories in the popular culture (rom coms, sit coms, etc), about a person pretending to be super into what the other person is into and then winding up in a stupid place romantically because of it?
I guess there’s a bit of GGG-ness involved. Like being genuinely interested in the other person’s interest (“listening to people talk about stuff they’re really excited about”) is different than feigning interest in that thing yourself in order to impress them or whatever?
So business-wise, maybe this comes back to being a good listener, and doing your research? Like you don’t have to pretend to be a stamp collector because the person is into stamps, you just ask them about the thing they’re interested in and let them talk about it?
Part 2, Chapter 6: How to Make People Like You Instantly
Ask yourself, “what is there about this person I can honestly admire?”.
Claims:
- The desire to feel important, the craving to feel appreciated, is the deepest urge in human nature. It is responsible for civilization itself.
- The feeling of doing something for someone (like making them feel important and/or appreciated) without them being able to do anything in return is a “feeling that flows and sings in your memory” long afterwards.
Also a discussion of the Golden Rule and its history and incarnations in different religions and philosophies. Zoroaster, Confucius, Lao-tse, Buddha, Hindu teachers, Jesus.
That’s really all. Everything else is over-the-top anecdotes about how so-and-so expressed appreciation and made someone feel important and that person immediately wrote their family out of their will and gave their whole estate to the appreciation-giver. I may be mildly exaggerating, but it’s basically like that. With one story I amused myself by assuming it was describing a gay love affair with the juicy parts carefully redacted. George Eastman (of Kodak fame) and James Adamson. It makes more sense that way. Adamson has a meeting with Eastman to pitch a contract and is warned to get to the point immediately because Eastman is so busy. Instead he ignores the point of the business meeting, runs his hands along the fancy wood of the office, flatters the fuck out of Eastman, one thing leads to another, and they go back to Eastman’s place for “lunch” and are “close friends” for the rest of their lives. It starts at the bottom of page 101 if you want to see if I’m exaggerating.
This ends Part Two. Review of the 6 ways to make people like you:
- Get yourself to be genuinely interested in other people. Try Act As If for this?
- Smile.
- Use people’s names.
- Listen. And get people talking about themselves.
- Focus on and learn about other people’s interests. Seems like part of principle 1?
- Make the other person feel appreciated and important – and do so sincerely.
I’m not sure why 1, 4, and 5 aren’t all facets of the same thing. Listen and show interest in the person and what they care about. Other-focus. Maybe spotlighting them? Then it incorporates 6 as well.
Part 3: How to win people to your way of thinking
Chapter 1: How to win an argument
Short answer: don’t get into arguments. People are bad at changing their mind and they get defensive and if you argue with them they’ll just dig their heels in and you will get nowhere.
What can you do?
- Welcome the disagreement. Maybe the other person is right? Maybe you were about to make a huge mistake? Maybe you are going to learn something, or at least get to understand and know the other person better? What can you be thankful for in the disagreement?
- Distrust your first impression. Probably your initial reaction is defensiveness, so probably you should distrust it, try to remain calm, try to see the defensiveness.
- Control your temper.
- Listen first. Don’t resist or defend or debate before you’ve actually heard them out.
- Look for areas of agreement.
- Be honest. By which he means look for things where you can honestly admit errors on your part, and apologize for them. That disarms the other person and reduces defensiveness.
- Promise to think over your opponent’s ideas and study them carefully.
- Thank your opponents sincerely for their interest. Anyone who takes the time to disagree with you is interested in the same things you are! Think of them as someone who really wants to help you.
- Postpone action. Schedule a new meeting so you both have actual honest time to think it over. In preparation for that meeting ask yourself the hard questions like
“Are they right? Partly right? Where is the merit in their argument? What price will I pay if I win?” etc etc.
Finally, a quote from some opera singer:
“My wife and I made a pact a long time ago… When one yells, the other should listen – because when two people yell there is no communication, just noise and bad vibrations.”
Principle 1
The only way to get the best of an argument is to avoid it.
Chapter 1 sounds like a reprise of Scout Mindset. I especially like the tip of disarming the other person by finding errors of your own you can honestly admit and apologize for. (One of the points in chapter 2 is that that will inspire the other person to match your fair-/broad-mindedness and want to admit that they too may be wrong.) Thanking debate opponents for helping find the truth is great too.
Also the opera singer strategy. Like taking turns being upset and when it’s not your turn, only listen.
Part 3, Chapter 2: A sure way of making enemies – and how to avoid it
This continues the soldier-mindset theme.
First tip: “If you’re going to prove anything, don’t let anybody know it. Do it so subtly, so adroitly, that no one will feel that you are doing it.”
Lots of clever quotes from famous people like Alexander Pope:
Men must be taught as if you taught them not
And things unknown proposed as things forgot.
If someone is wrong, be all diplomatic, like “Well, I thought otherwise but I may be wrong and if I am, I want to change my mind. Let’s examine the facts…”
I’m worried that that ends up being a ritual you perform when you think someone is wrong and they end up seeing through it and bristling the same as if you blurted “au contraire!”. I guess that’s where the subtlety and adroitness come in.
Another nice example of upside-down support:
Our dealership has made so many mistakes that I am frequently ashamed. We may have erred in your case. Tell me about it.
And the standard reason intelligence and correctness correlate so poorly: “most of our so-called reasoning consists in finding arguments for going on believing as we already do”.
Ben Franklin has advice! He forbade himself words like “certainly” and prefaced everything with “I conceive” or “I apprehend” or “I imagine” (probably “I feel like…” works too). And diplomacy again: when someone is wrong, first comment on a way or a sense or a circumstance in which they’re right. By saying things circumspectly, people feel less contradicted. And it’s less embarrassing for you when you’re wrong.
Then we got a yarn anecdote. Some executive concocted a better incentive system for employees of a yarn company. When they led by explaining how wrong the current system was, they were met with defensiveness. They tried again by asking lots of leading questions and getting the stakeholders to basically develop the new system themselves. Then the stakeholders loved it.
And a lumber anecdote. Some lumber customer was upset about the quality of the lumber they bought. Turns out it was the customer’s fault for buying the wrong grade. The way to make the customer see that was to “ask questions in a very friendly, cooperative spirit” and, when they were warmed up enough, delicately incept the idea in the customer’s head that some of the rejected pieces of lumber were within the grade they purchased. Sounds like hard work. But believable that that’s the only way you’d get someone in that situation to admit they were wrong.
Finally, throw in a weirdly juxtaposed pair of anecdotes about how fair-minded MLK and Robert E Lee were, and we’re done.
Concluding principle: Show respect for the other person’s opinions. Never say “You’re wrong.”
Part 3, Chapter 3: If you’re wrong, admit it.
Another Robert E. Lee anecdote in this chapter. And several others which all amounted to: “look how much respect you get from others when you take responsibility for your errors. Or even for other people’s errors!” Many of the anecdotes were like “look, this person avoided punishment by admitting error”.
This is kinda like that upside-down support principle, perhaps? (Have we had occasion to bring this up in this review before?)
There are two cards, the “everything is fine” card and the “omg this is terrible” card. If you, in customer support, take the “everything is fine” card (i.e., stop complaining, it’s not so bad, calm down, etc), then the other person will take the “omg this is terrible” card and wind up yelling for your supervisor. But if you take the “omg this is terrible” card (i.e., oh no! we fucked up! how can I fix it? this is awful!, etc), then the other person has to take the “everything is fine” card, and you thus de-escalate the situation.
Principle number 3: When you are wrong, concede quickly and emphatically.
Part 3, Chapter 4: A Drop of Honey
This starts with a somewhat implausible-sounding anecdote about Rockefeller. Apparently the workers of one of his “fuel and iron companies” were striking for higher wages and it turned into a literal war with some strikers being shot dead by the company. So in comes Rockefeller and starts visiting all the workers in their homes and then gives this inspiring speech and “the strikers went back to work without saying another word about the increase in wages for which they had fought so violently”. Those silly, feisty strikers.
(Carnegie talks about Lincoln a lot. Fun fact: Lincoln was more recent history to Carnegie than Carnegie is to us. (Lincoln was elected 76 years before Carnegie published this book, which was 88 years ago for us in 2024.))
Anyway, all these anecdotes are to drive home the point that before you try to convince someone of anything, first convince them that you are their sincere friend. Friendliness begets friendliness. Exhibit good will and enthusiasm.
A couple anecdotes were about getting landlords to decrease rent or pay for damage. Reminded me of [redacted]. The advice was to first gush about how amazing they and their property are.
Also the old fable of the wind and the sun competing to get a coat off a man. The harder the wind blows the tighter the man clutches the coat. When it’s the sun’s turn the sun just shines on him till he’s warm enough to take the coat off on his own. In conclusion: don’t let your counterparty go into soldier mindset.
Principle 4: Begin in a friendly way.
Part 3, Chapter 5
Begin in a friendly way, and then get them saying “yes, yes” immediately.
It’s a little like anchoring maybe? In fact, I wonder if it works if you get their “yes” about something entirely unrelated to the disagreement? I guess the previous chapter might claim so. Just be friendly first etc.
A “No” response, according to Professor Overstreet, is a most difficult handicap to overcome. When you have said “No,” all your pride of personality demands that you remain consistent with yourself. … Once having said a thing, you feel you must stick to it.
No terribly interesting anecdotes in this section. But he does frame the Socratic method as “keep asking questions that the other person agrees with until, almost without realizing it, they find themselves embracing a conclusion they would have bitterly denied a few minutes previously.”
Also he refers to Socrates as “the gadfly of Athens”, which is kinda hilarious, though he also puts it in quotes when he says it, so apparently the phrase isn’t original to him.
Ends the chapter with a Chinese proverb, and a little bit of anachronistic praise of the Chinese (“the age-old wisdom of the Orient”, etc):
He who treads softly goes far.
(Aside, it’s interesting how his comments about Chinese culture are totally praiseful, but would absolutely count as racist nowadays. I guess because it’s stereotyping the entire culture and sorta white-washing it or something? Anyway.)
Principle 5: Get the other person saying “yes, yes” immediately.
Part 3, Chapter 6: The Safety Valve in Handling Complaints
Tiny chapter here with a handful of anecdotes:
- Some businessguy got laryngitis before a big sales pitch. He just nodded while the one person at the client company who he’d previously been talking to did the talking. He kept wanting to jump in but couldn’t, and it became clear that jumping in would’ve backfired. All the bigwigs in the meeting just gradually talked themselves into saying yes to the pitch.
- A teenager story pretty reminiscent of how-to-talk. The mom kept punishing the kid but eventually she kinda gave up when the teenager snuck out or whatever and just asked “why?”. Magically the teenager started gushing about how the real treasure was the friends we made along the way or whatever; really she just needed to be listened to.
- Job interview pro tip: Learn about the prospective employer, praise them, and get them talking about how great they are. Let them do all the talking and they’ll think you’re amazing.
Also don’t brag and stuff. I think that was about it.
Principle 6: Let the other person do a great deal of the talking.
Part 3, Chapter 7: How to Get Cooperation
Don’t give people the solution, ask questions and let them come to the conclusion themselves.
Anecdotes:
- A manager of car salesmen called a meeting and asked the salespeople what they expected from him. Then what he could expect from them. Basically after getting to brainstorm all the stuff they wanted the boss to do, they then assigned themselves a bunch of responsibilities etc, and because they’d come up with them themselves they were enthusiastic about complying or something?
- A designer trying to sell their designs to someone who always rejected them had the idea of leaving a series of unfinished drawings with the client and asking “do me a favor and look these over and tell me how you would want them finished”. Then he collected the drawings and the suggestions, finished them with the buyer’s input, and sold them all.
- A hospital administrator needing to buy new x-ray equipment is inundated with sales calls from manufacturers and has no idea how to choose. So one manufacturer asks the guy to come give advice about their new machines, and then the administrator is all flattered and feels important, goes to look over the machines, and orders a bunch for his hospital.
- Colonel House, an advisor to President Wilson, said “I learned the best way to convert him to an idea was to plant it in his mind casually, but so as to interest him in it – so as to get him thinking about it on his own account.” So like, he suggests a thing casually to the president, and the president appears to disapprove, but a few days later he trots it out as if it were his own idea and House thinks “great, let him have credit, just so long as he’s taking my counsel” or whatever.
So yeah, plant ideas gently, don’t worry about taking credit for a good idea if you’re getting your way, and let people sell themselves on your products by letting them try things out for themselves, or asking them e.g. “how would you improve this?” Also something to the effect that people will more readily follow rules they come up with themselves? Like when the elementary teacher at the beginning of the school year asks the class to come up with the classroom rules for the year, instead of just telling them what the rules are.
Principle 7: Let the other person feel that the idea is their own.
Part 3, Chapter 8: A Formula That Will Work Wonders for You
- Ideological Turing Test. Simulate the other person and their motives accurately. Before an interview, have a clear idea of what you’re going to say and how you predict they’ll respond.
- Praise and appreciate people. Anecdote about a wife who’s into weeding; her husband thought it was a waste of time. He joined her for an afternoon and found ways to praise and appreciate how hard she worked at it.
- Anecdote: the bossy fire marshal. Instead of yelling at the teenagers for making a fire he was all friendly and explained how other, less responsible teenagers didn’t do such-and-such and caused fire damage. Favorite sentence: “They had saved their faces.”
- Anecdote about someone way late on their car payments. They were super friendly and apologetic with the collector person and let them pour out their troubles with even worse delinquent customers. Voila, “take all the time you need, money-schmoney” etc.
Principle 8: Try to honestly see things from the other person’s point of view.
Part 3, Chapter 9: What Everybody Wants
I guess the idea here is don’t just see it from the other person’s perspective, but let them know that you do? Demonstrate it to them? Give them sympathy.
“I don’t blame you one iota for feeling as you do. If I were you I would undoubtedly feel just as you do.” Like, definitionally, if you were that person of course you would feel like they feel.
Don’t hold it against people if they’re bigoted assholes, because it’s not really their fault, they’re the product of their environment. Or put in my own words: every person has their own story, and nobody thinks that they are the villain in the story. So don’t assume you know their story, and don’t attribute their motivations to inherent personality traits, as opposed to a response to some specific stimulus.
Anyway, anecdotes:
- DC once gave a radio broadcast talking about Louisa May Alcott and misspoke, attributing her place of origin to New Hampshire rather than Massachusetts. Then he got a bunch of flak for it, including a nasty letter from some woman who was terribly offended at his mistake. So he decides to try his hand at converting her hostility into friendship, and he calls her up to thank her for her letter. He is very sympathetic to her, and apologizes for the blunder, and she falls all over herself apologizing for writing to him in anger. “Because I had apologized and sympathized with her point of view, she began apologizing and sympathizing with my point of view.”
- an elevator repairman presenting a client with a day-long closure of their elevator: when the client is like “dude! i can’t afford a closure of more than a couple hours!”, the repairman is all “I know, it’s going to be such a huge pain for you to have it closed for this long, it’s going to cost you so much business, but if we don’t do it now it could cause even more damage and be closed for days, which would be an even worse loss to your interests”
- a piano teacher has a new student with very long fingernails which get in the way of playing piano. Instead of threatening or demanding the student cut the nails, she praises the girl’s nails and says what a sacrifice it would be to cut her beautiful nails, but she really believes that it would help the girl’s playing, so the student should think it over. And the student decides to cut the nails.
- a prima donna opera basso who would complain to his manager that he couldn’t possibly sing in the morning, and the manager would be all “oh no! we absolutely must cancel the show, what’s the loss of a couple thousand dollars when you might be ill!” and every time the singer complains the manager is like “well of course you couldn’t perform”, and this works like reverse psychology i guess and the singer performs anyway?
So basically, tell people what they want to hear? Or with less cynicism, when you demonstrate your sympathy and understanding of their point of view, that builds trust and they are more likely to listen to your advice, or try to reciprocate with sympathy for your POV.
Principle 9: Be sympathetic with the other person’s ideas and desires.
Part 3, Chapter 10: An Appeal That Everybody Likes
A person has two reasons for doing a thing: the one that sounds good and the real one. You can help them think of the former, i.e., appeal to their nobler motives.
Anecdote with an unhappy renter who was going to break their lease. “Years in the renting business have taught me something about human nature, and I sized you up in the first place as being a man of your word.” And then the landlord offers a gamble. He’s so sure the renter will do what’s fair he precommits to accepting the renter’s decision as final. Lo and behold, the renter decides not to break the lease.
Another clever one: Instead of “can you remove that picture of me; I don’t like it” say “my mother doesn’t like it”. You’re not some kind of monster who disrespects motherhood, are you? (That part is implied.)
Then there’s a think-of-the-children one, kinda similar.
Getting a famous person to do work for you by offering to make a donation to their favorite charity. A lot of sneaky stuff in this chapter!
Finally, another anecdote where a car dealership gets customers to adjust repair bills in the business’s favor by flattering the crap out of them (they’re the world’s expert on their own car) and then trusting them to do what’s most fair.
This has given me the idea to try [redacted].
Anyway, the nice version of this chapter is to assume the other person (thinking of them as an upset customer) is sincere, honest, willing and anxious to do what’s right and pay what they owe, once they’re convinced it’s fair. Most people are indeed like that. They’ll react well if you make them feel you consider them honest, upright, and fair.
Principle 10: Appeal to the nobler motives.
Part 3, Chapter 11
Dramatize your case. Be a salesperson. Think like an ad exec. Be a Mad Man. Then gives several examples of kind of obnoxious salesy things. (A salesperson coming into the grocery store and dumping a bagful of pennies on the floor to demonstrate how they’re losing pennies on every transaction and should buy the salesperson’s thing! Sounds super annoying from the perspective of the person being sold to, if you ask me.)
Anyway, the most realistic-sounding one (to my ear), or “most potentially useful example of this”, was an employee getting ignored by their boss, so they wrote out a form letter with a “self-addressed stamped envelope” kind of deal, so all the boss had to do to reply was fill in the date & time. A little like sending a note to the boy you like in class that’s like “I like you ☐ yes ☐ no” or whatever.
Principle 11: Dramatize your ideas.
Part 3, Chapter 12: When Nothing Else Works, Try This
Stimulate competition. The first anecdote was pretty good. There were two shifts in an underperforming mill. At the end of a shift, the manager wrote the number of heats (whatever that is; a KPI) in massive chalk numerals on the floor. The next shift comes in and is like “what’s that?”. They find out and, with no other prompting, are all motivated to show up the other shift. And then vice versa again, back and forth forever.
The next anecdote was basically like Biff goading Marty McFly to do anything he wants by calling him a chicken.
And the next one sounded similarly like naked manipulation: “I don’t blame you for being scared. It’s a tough spot. It’ll take a big person to do [thing I want you to do].”
The end of the chapter talked about how the biggest motivator for people at work is the work itself, whether it’s exciting and interesting and a chance for self-expression. People are motivated to excel and to win. And to feel important.
Principle 12: Throw down a challenge.
Recap of the principles from Part 3 in my own words:
- You don’t win arguments; first just listen and reflect. Scout mindset.
- Respect the other’s opinions and never say “you’re wrong”.
- When you’re wrong, be quick to admit it.
- Start with lots of friendliness.
- Get the person saying “yes” off the bat.
- Let the person do most of the talking.
- Let the person feel the idea is their own. Inception.
- Honestly see things from the other’s perspective.
- Say “if I were you I’d feel the same way you do”.
- Appeal to their nobler motives; assume they’re fair, honest, etc.
- Add some drama to make your point memorable.
- Stimulate competition, goad them.
Part Four
Be a leader: How to change people without giving offense or arousing resentment
Chapter 1: If you must find fault, this is the way to begin
This introduces the idea of the compliment sandwich. But Carnegie is making it open-faced. All the good stuff right out there on top, with some healthy whole grain bread underneath.
A barber lathers a man before he shaves him.
lol…
He’s got examples of doing it very baldly: Calvin Coolidge says to one of his secretaries “That’s a pretty dress you’re wearing this morning and you are a very attractive woman. Now, don’t get stuck up. I just said that to make you feel good. From now on, I wish you to be more careful with your punctuation.”
…and the quite Slytherin: A guy’s subcontractor is holding up the whole contract’s delivery, so he goes to visit the subcontractor’s factory, and gets a tour of the whole factory, praising the whole way. Then they go out to lunch, and chat like friends. So the guy still hasn’t brought up the thing that brought him out to the factory in the first place. Then after lunch the subcontractor brings it up and is like “well obviously I know why you’re here, but I didn’t think it would be so pleasant.” And then he fulfills the subcontract on time.
And finally, this gem of insight within an anecdote about an underperforming bank teller: “Once she realized I had confidence in her, she easily followed my suggestions and mastered this function.” I suspect this is a great way to think of it, when applicable: demonstrating that you have confidence in, or think well of, your subordinate puts you on their side, so the criticism is less harshly felt, or more collaborative or something.
Principle 1: Begin with praise and honest appreciation.
Part 4, Chapter 2: How to Criticize – and Not Be Hated for It
First anecdote is people in a steel mill smoking under a no-smoking sign. Big bossman walks up, hands them each a cigar, and says “I’ll appreciate it, boys, if you will smoke these on the outside”
This one sounds like playing martyr: Bossman checks on a retail store and the clerks are chitchatting amongst themselves, ignoring a customer at the counter. Bossman waits on the customer himself and then nonchalantly hands the purchase to the clerks to wrap up.
Then there’s one about a mayor in Florida who claimed he had an open-door policy but his staff kept trying to helpfully shield him from people trying to get time with him. Rather than try to explain to his staff that they should do that less, he physically removed the door to his office.
Here’s one that seems obvious enough now: Simply replace BUTs with ANDs. Don’t say “you did awesome but [criticism]”. Say “you did awesome and [criticism framed as a way to be even awesomer]”.
The next one is philosophically interesting. Someone had contractors remodeling her house. They were good but they left a mess in the yard each night. So she and her kids clean up after them, stack up all the materials, whatever, and then she lies through her teeth to the foreman: “I’m really pleased with the way the front lawn was left last night; it’s nice and clean and doesn’t offend the neighbors.” So of course they get the message and leave it that way from then on.
(It’s an exaggeration (technically a lie?) to say she lied through her teeth. What she said was technically true. She liked the way the yard was left. Passive voice. But by Gricean implicature, she was communicating that the workers left it that way. So I claim it technically counts as deception. Justifiable deception? Discuss.)
Penultimate anecdote is an Army Reserve sergeant getting reservists to cut their hair military-style even though they maybe don’t technically have to until they’re called to duty or something. Basically he guilts them into it. “You’re leaders, representing the Army Reserves, blah blah blah.”
Last one: Pastor writes a terrible sermon, spouse says “this would make an excellent article for The North American Review”. I.e., explicitly praising him but implicitly saying it doesn’t work as a sermon. Smart.
Principle 2: Call attention to people’s mistakes indirectly.
Part 4, Chapter 3
Principle 3: Talk about your own mistakes before criticizing others.
It humbles you and stuff, makes the other person like you more, and makes it seem less like you’re lording stuff over them.
None of the anecdotes here seemed particularly profound. Shrug. Pretty straightforward and common sense, right?
“Lord knows I’ve messed up this word a thousand times, you misspelled it too.”
All the better if your self criticism is sincere and relatable and relevant.
Part 4, Chapter 4: No One Likes To Take Orders
Tiny chapter with just 2 anecdotes.
Make suggestions and ask questions instead of giving orders.
- “You might consider…”
- “Do you think XYZ would work?”
- “Maybe if we…”
Save the person’s pride, give them a feeling of importance, encourage cooperation rather than rebellion. Also stimulate their creativity. And the thing from before about making them feel like the decision was theirs.
(I think there’s a space-heatering danger though. If you never give directives then your subordinates try to read your mind and infer them.)
Anecdote 1: Person parked their car illegally and idiotically. Demanding they move their car worked in the short term but turned the person into an enemy, basically. Correct approach is like “if you moved your car I think other cars would be able to get in and out”.
Anecdote 2: Machine shop gets a huge order that’s going to require overtime and stuff. Instead of cracking the whip, bossman gathers all the employees and describes the big opportunity they have in accepting the huge order but says it sadly doesn’t seem realistic. Then the employees rally and have lots of ideas for how they can pull it off. Clever clever.
Principle 4: Ask questions instead of giving direct orders.
Part 4, Chapter 5
Principle 5: Let the other person save face.
Don’t attack people in front of their peers. I mean, don’t attack people in general; demonstrating respect and appreciation is good.
Anecdote 1: an accounting firm has to let workers go seasonally after the tax rush. when they just say “hey, we don’t need you any more. tax rush is over.” the person leaves kinda bitter, and then you have to find a whole new set of people next time. if instead you give them an honest and kind evaluation of their work and say you appreciate them etc, then they are more understanding, feel they’ve been a good worker, and are likely to remember you more fondly even though they’ve just been downsized. better working relations going forward, and if they are available next tax season, they will want to work with you again.
Anecdote 2: a boss dresses down a manager for a mistake in front of all his peers, and he soon after quits and goes to get a job with their competitor where he’s doing just fine, thanks. a different person in a different job has the experience of admitting a mistake in front of their peers, and their boss giving them space to save face “people often make mistakes on new projects, i’m sure you only messed up because of inexperience, and not incompetence. i’m sure you won’t make mistakes in the future once you’re more experienced.”, and the person goes away feeling confident and determined to not let the boss down in the future.
So yeah, don’t be a jerk. Give people an opportunity to save face.
Part 4, Chapter 6: How to Spur People On to Success
The secret is to treat them like dogs. Specifically, dogs in obedience school. You lavish praise on every tiny improvement.
Five anecdotes: First, a famous opera singer whose teacher said he couldn’t sing as a student but his mother (really?) said he was so good, so he became world famous, QED. Next is Charles Dickens who sent his manuscripts to a million publishers and they all hated him but then one of them published him and that turned him into Charles Motherfucking Dickens because, you guessed it, praise. (Interlude in which Carnegie emphasizes his own sheer credulity by citing debunked ideas of B.F. Skinner. Except I think Carnegie predated most of Skinner’s work so this may have been added later.) Anyway, similar anecdote about H.G. Wells, who was ready to kill himself so his old schoolteacher said he was talented, et voila. Penultimate anecdote: praise your kids for small improvements instead of yelling at them. Ok, this one sounds fair enough. Finally, a talented employee in a print shop was a hothead about to get fired but the boss explained in heartfelt terms what exactly made him talented. Then the employee was great.
Dale emphasizes that this isn’t about flattery. You have to prove you really mean it. And that’s true of the whole book. It’s not a bag of tricks, it’s a way of life. It works when it comes from the heart.
But just to bring the credibility back down, it basically cites the canard about humans only using 10% of their brains. Except not literally, so maybe it gets a pass. The point is there’s a lot of unrealized potential in people that you can draw out by praising and inspiring them.
Principle 6: Praise the slightest improvement and praise every improvement. Be “hearty in your approbation and lavish in your praise.”
Part 4, Chapter 7: Give a Dog a Good Name
Is that a reference to some saying? Anyway, this chapter is about praising a person for something you want them to do more of, like, instead of yelling at them to do more of it, I guess? If you have someone’s respect, and then you praise them for having a certain ability, whether or not they actually have it, then they’ll try to live up to that praise, and get better at the thing you praised them for. This is a lot like the dog-obedience school thing, is it not?
Anecdotes:
- A mechanic’s work started going downhill after being real good for many years. The boss praises his past work and is like “Because you’re such a good mechanic, I thought you’d want to know that lately your work has been subpar.” and then the mechanic goes back to their old level of performance.
- Shakespeare quote: “Assume a virtue if you have it not.”
- Some old rich woman praises a servant girl from the hotel, and the girl believes her and starts taking better care of her appearance, and having more confidence and within a couple months she has a marriage proposal from the cook’s son.
- “Hey, I know you’re so good at changing your mind, so I’m sure you’ll hear me out when I make these arguments to you about xyz.”
- A dentist notices that his office cleaner hasn’t been getting the small details done, so he writes her a note thanking her for being so great, and saying “I know it’s a lot to get done in the short time you have, so if you ever want to take an extra half hour to really scrub the corners and polish the furniture up to your standards, I’ll understand and even pay you for the extra time.” and then she does the detail work without even using any overtime.
- A 4th grade teacher knows she’s got a “problem kid” in her class this year, his reputation precedes him. So she starts out the year saying “I know you’re such a good leader, and I’m counting on your help to [do things].” and he eagerly falls into line and is a model student in her classroom.
So basically, people generally want to please. Hence Principle 7: Give people a fine reputation to live up to.
Give a Dog a Good Name — Is that a reference to some saying?
Apparently it’s referencing “give a dog a bad name and hang him”.
This is a lot like the dog-obedience school thing, is it not?
The dog obedience school thing was chapter 6. This chapter seems a little advanced for literal dogs.
Part 4, Chapter 8: Make the Fault Seem Easy to Correct
This one seems thoroughly in the water supply already but we can run through the four anecdotes.
Dancing
Dance teacher flattered and praised the guy and it made him want to be better.
Dale plays bridge
Same story. “Bridge is just memory and judgment so it will be so easy for you.”
Bridge teacher
Guy’s girlfriend is a bridge teacher and she tells him what a genius he is at bridge. He goes on to become a best-selling author of books on the game of bridge.
Flashcards for times tables
Kid was in a nasty car accident and was deemed to have brain damage and performed poorly at school. Dad did mathghost I mean flashcards with the kid and every time he set a PR for getting through the flashcards they’d hug and dance a jig. The boy grew up to win a Nobel prize in particle physics. Or at least started making honor roll or something.
Principle 8: Use encouragement. Make the fault seem easy to correct.
Part 4, Chapter 9: Making the other person happy to do what you suggest
Ways to make the other person happy about doing what you suggest involve considering what benefit they will get from doing it, and suggesting the thing in a way that is non-confrontational. Possibly also adding incentives to make it more attractive? This chapter feels less cohesive than a lot of the others.
Anecdotes
- Telling someone they don’t get to do a job they were hoping to do, but implying that they’re too important to do it, so that they don’t feel put out or passed over.
- Framing it such that “you’d be doing me a favor if…”
- Assigning titles; Napoleon did this with special military titles, and medals, and calling his army “la Grande Armée”
- Adding incentives: kid doesn’t do his chore of picking up pears before the mower mows the lawn. Dad offers an incentive – I’ll give you $1 a bushel – but also a negative incentive – and charge $1 for every pear I find out in the yard.
Keep in mind:
- Be sincere. Don’t promise something you can’t deliver. Forget about the benefits to yourself and concentrate on the benefits to the other person.
- Know exactly what it is you want the other person to do.
- Be empathetic. Ask yourself what it is the other person really wants.
- Consider the benefits the other person will receive from doing what you suggest.
- Match those benefits with their wants.
- When you make your request, put it in a form that will convey to the other person the idea that they will personally benefit, and how.
Principle 9: Make the other person happy about doing the thing you suggest.
In A Nutshell
Be a leader
A leader’s job often includes changing your people’s attitudes and behavior. Some suggestions to accomplish this:
- Begin with praise and honest appreciation
- Call attention to people’s mistakes indirectly
- Talk about your own mistakes before criticizing the other person
- Ask questions instead of giving direct orders
- Let the other person save face
- Praise the slightest improvement and praise every improvement.
- Give the other person a fine reputation to live up to
- Use encouragement. Make the fault seem easy to correct
- Make the other person happy about doing the thing you suggest
PS: Soon after we finished this book brigade, CGP Grey posted a podcast about this book.