Thinking, Fast and Slow

See Book reading for more information about books.

The Two Systems
Our brain functions with two systems. System 1 operates automatically and quickly, with little to no effort and little to no sense of control. System 2 requires a voluntary effort to activate. It is slower but responsible for the more complex tasks our brain handles.

Willpower is needed to activate system 2 but also to suppress system 1. But willpower is a limited resource, meaning that, as time goes on, temptations become stronger. To preserve motivation, one needs to avoid unnecessary loads on system 2 and avoid temptations.

System 2 is also incredibly lazy. Most mathematical riddles revolve around the fact that our system 2 is too lazy to compute and relies on system 1's intuitive answer. The bat-and-ball problem is a perfect example: if a bat and a ball cost $1.10 together and the bat costs $1 more than the ball, how much does the ball cost? The intuitive answer is wrong, yet our brain is too lazy to check the easy math, and most people get it wrong.
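The riddle's algebra takes two lines to check (my own sketch, not from the book):

```python
# bat + ball = 1.10 and bat = ball + 1.00
# => ball + (ball + 1.00) = 1.10  =>  ball = 0.10 / 2 = 0.05
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

print(round(ball, 2), round(bat, 2))  # 0.05 1.05 — not the intuitive 0.10
assert abs(bat + ball - 1.10) < 1e-9
assert abs(bat - ball - 1.00) < 1e-9
```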

Priming system 1
System 1 can be a victim of priming. If you already believe a conclusion to be true, you are more likely to believe the arguments that appear to support it. System 1 is conclusion first, arguments second.

You can also prime yourself. By faking happiness, like forcing yourself to smile, you trick your brain into having a more positive view of the world. The opposite is also true: holding or forcing a frown gives you a more negative view of the world. Thinking in a certain way encourages the corresponding behavior, and the behavior encourages the corresponding thinking. This has repercussions in management, daily life, beliefs, etc. For example, money-primed subjects are more reluctant to be involved with others, depend on others, or accept demands from others.

The fact that system 1 can be primed and that priming has consequences on our reactions is an example of the weaknesses of system 2. To quote the book directly, "system 1 provides the impressions that turn into beliefs and is the source of the impulse that often become your choices and your actions. It contains the model of the world that instantly evaluates events as normal or surprising."

Norms, surprises, and causes
System 1 creates a vision of the world based on the generalization of what we experience. This vision, which defines what we call "normal", creates a set of expected situations. Events that contradict those expectations are what surprise us. A set of shared expectations creates norms.

Jumping to conclusions
Jumping to conclusions is an efficient time-saver when mistakes are unlikely and have little to no consequences. It is far less efficient in the opposite situation. System 1 is useful but can lead to common biases. Our brain does not care much about the quality or the quantity of the information available. It wants to create a compelling and consistent story that makes sense, even with only half the information. We are even blind to the fact that we are missing information. In fact, the less information you have, the easier it is to fit it all into one narrative.
 * Confirmation bias: System 2 looks for confirmation instead of refutations
 * Halo effect: positive impressions of a person or product in one area can positively influence one's opinion or feelings in other areas, even ones you have never observed.

When organizing a meeting, asking participants to write a short essay about their opinion before anyone can talk is an effective way to prevent people from being influenced by assertive early speakers. It protects against priming, confirmation bias, and the halo effect.

Our system 1 needs only 1/10th of a second to make a judgment about someone. System 2 tries to stay coherent with the norms and beliefs built by system 1 and will seek to confirm what you already believe, even when you are not aware of it. Once you are primed, you can either confirm or disprove your first impression, but your lazy system 2 has an easier time doing the former than the latter. If you already like something, you are likely to amplify its benefits and minimize its risks, even when presented with contradictory evidence.

The law of small numbers
Extreme outcomes are more frequent in small samples, while large samples give more average results. It is easier to draw only red balls when you draw 3 balls than when you draw 10. Yet we put an exaggerated faith in small samples, meaning that we give more weight to the content of a message than to its validity.
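A quick simulation (my own illustration with a 50/50 urn, drawing with replacement) shows how much more common extreme, all-one-color draws are in small samples:

```python
import random

random.seed(0)

def all_red_rate(sample_size, trials=100_000):
    """Fraction of samples in which every ball drawn is red."""
    hits = sum(
        all(random.random() < 0.5 for _ in range(sample_size))
        for _ in range(trials)
    )
    return hits / trials

small = all_red_rate(3)   # theory: 0.5**3  = 12.5%
large = all_red_rate(10)  # theory: 0.5**10 ≈ 0.1%
print(small, large)
assert small > large  # extreme outcomes dominate in small samples
```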

We prefer a simpler view of the world, and we tend to see causal explanations when, in fact, only chance is involved.

Anchors
The illusion of understanding and of validity

The illusion of understanding is hindsight bias. Everything makes sense in hindsight: skills, events, outcomes. We often forget the importance of luck and the complexity of the systems involved. Luck also works the other way: bad luck diminishes one's apparent performance and skills.

We are prone to articulate a skillful vision of ourselves based on past achievements. We attribute success and build stories based on cherry-picked evidence, forgetting all the other alternatives that were available to us and the role luck played in shaping our outcomes. The real materialization of skill is consistent results with low variation. If your success rate is close to random, then skill is not involved.

Experts, and people who invest a lot of knowledge in solving an issue, tend to judge the quality of their results by the amount of effort provided, not by the accuracy of the prediction. They tend to have a harder time double-checking their assumptions and practicing self-doubt. People who have a more nuanced view of the world tend to make slightly better predictions.

The science of availability
When asked to guess the probability of an event, people's judgment is influenced by how easy it is to think about: one's perception can be swayed by how easy or hard it is to remember an occurrence. The examples that come to mind most easily are:
 * Salient examples
 * Frequent examples
 * Dramatic examples
 * Personal examples

For example, if you ask for 3 ways to improve a product, people will find it easy and fast to come up with those 3 ways. Hence, they will believe that improvement is indeed necessary. If you ask for 10 ways, people will probably never find 10 and it will take them forever. This time, even if they find 6, 7, or 8 ways, because they cannot find 10, they will be less likely to believe the product needs improvement. This bias leads people to:
 * Believe they used a product less when asked to remember more instances of usage
 * Be less confident in a choice when asked to list more supporting arguments
 * Be less confident about avoidability when asked to imagine more alternatives
 * Be less impressed when listing more advantages

This pattern does not hold when people feel personally involved with the task or when personal health is involved: there, the more they can remember, the better they feel. It does not hold either when an explanation of why recall is so difficult is offered.

Availability, emotion, and risk
We give more importance to information that is readily available, frequently encountered, and that appeals to our emotions, especially fear. An availability cascade occurs when a newsworthy story fuels itself into a public debate. First, media cover a wide range of stories until one bites. Once a story gets some traction, people want more information, which feeds the story. With enough traction, a story can become a national issue that leads to policies being issued. Experts and dissidents get little attention, or a hostile one. This leads to a waste of public resources, used to answer an irrational fear backed by no real evidence.

Tom W's specialty
P(A) knowing B is different from P(B) knowing A, and P(A) is different from P(A) knowing B. Bayesian statistics are incredibly unintuitive for humans. We are not trained or wired to consider both the base rate and the new information when we make a judgment. We should re-evaluate our judgment when presented with new information, and we should take the base rate into account in that judgment.
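A sketch of the base-rate logic with made-up numbers: a description "sounds like" a librarian, but farmers outnumber librarians 20 to 1 (all figures here are hypothetical):

```python
# Base rates: 1 librarian for every 20 farmers.
p_librarian = 1 / 21
p_farmer = 20 / 21

# Made-up likelihoods: the description fits most librarians,
# but also a fair share of farmers.
p_desc_given_librarian = 0.90
p_desc_given_farmer = 0.20

# Bayes' rule: P(librarian | description).
p_desc = p_desc_given_librarian * p_librarian + p_desc_given_farmer * p_farmer
posterior = p_desc_given_librarian * p_librarian / p_desc

print(round(posterior, 2))  # 0.18 — despite the fitting description,
                            # "farmer" remains far more likely
```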

Causes trump statistics
People's unwillingness to deduce the particular from the general is only matched by their capacity to infer the general from the particular. If you want to teach something to someone, start with the particular and then move to the general.

Regression to the mean
An outstanding performance is usually followed by a worse one, and a catastrophic one by a better one. Exceptional results in either direction are exactly that: exceptions. They are usually followed by a regression to the mean, no matter what you do. Exceptions will happen, and they will regress back to normal. This observation has several consequences:
 * Talent (or the lack of it) is not shown by the capability to obtain spectacular results; everyone can be lucky. Talent shows itself in the capability to perform better than average over a long period of time.
 * When forecasting, lower the results of high performers and increase the ones of low performers. The further the deviation, the stronger the adjustment.
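A small simulation (my own illustration) makes the mechanism visible: each score is constant skill plus fresh luck, and the luck does not carry over to the next test:

```python
import random

random.seed(1)

n = 10_000
skill = [random.gauss(100, 10) for _ in range(n)]
test1 = [s + random.gauss(0, 10) for s in skill]  # skill + luck
test2 = [s + random.gauss(0, 10) for s in skill]  # same skill, new luck

# Select the top 10% of performers on test 1...
cutoff = sorted(test1)[int(0.9 * n)]
top = [i for i in range(n) if test1[i] >= cutoff]

avg1 = sum(test1[i] for i in top) / len(top)
avg2 = sum(test2[i] for i in top) / len(top)

# ...and watch them regress toward the mean on test 2.
print(round(avg1, 1), round(avg2, 1))
assert avg2 < avg1
```

The top group stays above average on the retest (skill is real) but loses the part of its edge that was luck.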

Intuition vs. formula
Algorithms are better than experts when it comes to predicting outcomes. Most procedures should be algorithmic: use standard questions to test key characteristics, resulting in a final recommendation or a score to interpret. For example, when hiring, define some key skills, find a standard way to evaluate them (on a scale from 1 to 5), add the scores up, and only consider the candidates that reach a certain threshold. You can add intuition to the mix, but only as a grade on the same scale.
 * Experts overcomplicate problems by adding too many irrelevant variables.
 * They can judge the same evidence differently on two different occasions.
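The hiring procedure above can be sketched as a simple scoring function (the skill names and threshold are placeholders of mine):

```python
# Hypothetical key skills, each rated on a 1-5 scale; intuition is
# deliberately reduced to one more grade on the same scale.
SKILLS = ["technical", "communication", "reliability", "intuition"]
THRESHOLD = 14  # placeholder cutoff

def evaluate(ratings):
    """Sum per-skill ratings; keep candidates at or above the threshold."""
    assert all(1 <= ratings[s] <= 5 for s in SKILLS)
    total = sum(ratings[s] for s in SKILLS)
    return total, total >= THRESHOLD

score, keep = evaluate(
    {"technical": 4, "communication": 3, "reliability": 5, "intuition": 4}
)
print(score, keep)  # 16 True
```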

The outside view
The outside view is used to get the big picture: compare how you are doing to others in the same situation. It will prevent you from falling into the planning fallacy: making a plan closer to a best-case scenario than to reality (underestimating the time and money required).
 * 1) Get a reference case
 * 2) Identify a cost allocation base ($/miles, hours/page of a report, $/m² of a construction project, etc.)
 * 3) Generate a prediction for your project
 * 4) Use case-specific elements to adjust the prediction
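The four steps above, sketched numerically (all figures invented for illustration):

```python
# Steps 1-2: a reference class of similar construction projects gives a
# cost allocation base, e.g. euros per square metre (hypothetical figure).
reference_cost_per_m2 = 1_800

# Step 3: baseline prediction for our (hypothetical) 250 m² project.
area_m2 = 250
baseline = reference_cost_per_m2 * area_m2

# Step 4: adjust with case-specific elements, e.g. difficult terrain +10%.
adjusted = baseline * 1.10

print(baseline, round(adjusted))  # 450000 495000
```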

The engine of capitalism
Society favors bold and clear statements over uncertainty, yet we tend to overestimate our capacity to forecast the future. Hence, we make bold statements that are often wrong while vague ones are often true. Instead of trying to predict a clear future, we can use a premortem approach.

When you are almost done making a decision but have not committed yourself yet, imagine that it is one year in the future: the decision was implemented as planned, but the outcome is a disaster. Now write down which steps led to this disaster.

A premortem allows you to overcome part of the confirmation bias.

Bernoulli's errors
When we are facing a choice, our decision depends on:
 * The outcome
 * The probability
 * Our reference point

Our brain functions using relative measures. We think in percentage increases rather than flat values. The utility of 1 extra unit is greater when you have 0 than when you already have 9. Knowing the reference point is extremely important when trying to understand a decision. Adding this reference point to the model is the core of prospect theory.
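The diminishing utility of an extra unit can be illustrated with a logarithmic utility function (a standard textbook choice, not taken from these notes):

```python
import math

def utility(wealth):
    """Log utility: a simple model of diminishing sensitivity."""
    return math.log(wealth)

# The same +1 unit is worth much more to someone with 1 than with 10.
gain_from_1 = utility(2) - utility(1)     # ≈ 0.69
gain_from_10 = utility(11) - utility(10)  # ≈ 0.10
print(round(gain_from_1, 2), round(gain_from_10, 2))
assert gain_from_1 > gain_from_10
```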

Furthermore, when all the outcomes are bad, people tend to be less risk-averse. They will cling to the best-case scenario regardless of the probability. If you have to choose between losing half your money for sure, or a bet with a 75% chance of losing everything and a 25% chance of losing nothing, people will tend to choose the bet, even though its expected value is lower than the sure choice.
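The expected values confirm that the sure loss is the better deal (a wealth of 100 as an arbitrary unit):

```python
wealth = 100

# Option A: lose half for sure -> 50 remains.
sure_remaining = wealth * 0.5

# Option B: 75% chance of losing everything, 25% chance of losing nothing.
bet_remaining = 0.75 * 0 + 0.25 * wealth  # expected remaining: 25

print(sure_remaining, bet_remaining)  # 50.0 25.0
assert bet_remaining < sure_remaining  # yet people tend to pick the bet
```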

The endowment effect
There is a gap between how much I am willing to pay for something and the minimum price at which I am willing to sell the same item. For example, I would be ready to pay up to 300€ for a concert ticket, but I would not sell it for less than 800€. The fact that the same item can have two values, two utilities, for me makes no sense according to utility theory.

This phenomenon is called the endowment effect. We give more value to the things we own than we would pay to acquire them. When you own an item, you consider the pain of losing it. When you don't, you imagine the pleasure of acquiring it. The gap between the two values is a manifestation of loss aversion.

It works particularly well for items that we rarely trade or that we do not intend to sell when we buy them. There is a difference between the items we "hold for use" and the ones we "hold for exchange". "Held for exchange" items, like goods in a store or trading items, are only perceived through their value and therefore do not suffer from the endowment effect.

Bad events
We favor bad news over good news because, while both are necessary for our survival, our loss aversion makes us more sensitive to the former. What can kill me is more important than what may save my life. Bad events stay longer in our minds and are processed faster. Failing an objective, receiving criticism, bad emotions, etc. leave a stronger memory than their respective positive versions. Once again, this is due to loss aversion: the losses in these cases feel much bigger than the reciprocal gains.

Loss aversion also explains why defenders are fiercer than attackers: losing ground is a bigger deal than gaining the same ground. Compare it to business negotiations, where one party tries to defend its interests and the other tries to gain an advantage.
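Loss aversion is often modeled with a value function that is steeper for losses; a common estimate is that losses loom about twice as large (the factor of 2 is my assumption for illustration, not from these notes):

```python
def value(x, loss_aversion=2.0):
    """Prospect-theory-style value: losses are amplified."""
    return x if x >= 0 else loss_aversion * x

# Losing 100 hurts about twice as much as winning 100 feels good.
print(value(100), value(-100))  # 100 -200.0
assert abs(value(-100)) > value(100)
```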

The fourfold pattern
The fourfold pattern crosses gains and losses with high and low probabilities:
 * Top left cell, a high probability of gain: an inheritance or a near-certain winning lawsuit
 * Top right cell, a high probability of loss: a near-certain losing lawsuit or a terminal disease
 * Bottom left cell, a low probability of gain: the lottery effect
 * Bottom right cell, a low probability of loss: where we buy insurance

Understanding where you stand and where the other parties stand during a negotiation may help identify who has the upper hand. For example, in the case of a lawsuit in the first row, the winning side, being risk-averse, will settle for less than the expected value, while the losing side will only accept a settlement higher than its expected value. The losing side has the upper hand in the negotiation.
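The lawsuit example can be made concrete with hypothetical numbers: a plaintiff with a 95% chance of winning a 10,000€ award.

```python
# Hypothetical first-row lawsuit.
p_win = 0.95
award = 10_000
ev = p_win * award  # expected value of going to trial: 9500€

# Risk-averse plaintiff: accepts a sure settlement below the EV.
plaintiff_accepts = 9_000  # placeholder sure amount
# Risk-seeking defendant: prefers gambling over paying the full EV,
# so only offers settlements below it as well.
defendant_offers = 8_800   # placeholder offer

print(ev)  # 9500.0
assert plaintiff_accepts < ev and defendant_offers < ev
```

Settlements land below the expected value, which is why the losing side has the upper hand.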

Rare events
We are ill-equipped to deal with rare events, both when estimating their probabilities and when weighting them in our decisions. We either underestimate the probability of events we have never experienced before, or of single events mixed into regular experiences (one bad dish at a daily restaurant), or we overestimate unlikely events linked to vivid examples (past natural disasters, terrorism, etc.).

Narrow framing and broad framing
Risk policies and bets are a prime example of narrow and broad framing. It is the difference between one coin flip where you can lose 100€ or win 200€, and 200 bets where you can each time lose 1€ or win 2€. When possible, and wisely used, broad framing gives a better appreciation of reality. It reduces the optimism of the planning fallacy and loss aversion. A sales rep might hesitate to make a risky choice, while their director would encourage all the reps to take the risk.
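A simulation (my own illustration) shows why the broad frame is attractive: the aggregate of 200 small favorable bets almost never loses money, while the single flip loses half the time:

```python
import random

random.seed(2)

def small_bets(n=200):
    """Sum of n independent bets: win 2 or lose 1 on a fair coin."""
    return sum(2 if random.random() < 0.5 else -1 for _ in range(n))

trials = 20_000
loss_rate = sum(small_bets() < 0 for _ in range(trials)) / trials

# The single 100€/200€ coin flip loses 50% of the time; the broad frame
# has the same favorable odds per euro but almost no chance of a net loss.
print(loss_rate)
assert loss_rate < 0.01
```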

Frame and reality
The framing of an issue is nearly as important as the reality of the issue. Compare:
 * a 95% chance of winning 95€ and a 5% chance of losing 5€
 * paying 5€ for a 95% chance of winning 100€ and a 5% chance of winning nothing

Both scenarios have the same expected gain, but the second one is generally more appealing. The loss of 5€ is framed as a cost, and for our brain, costs are not losses.
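Checking the arithmetic of the two framings (my own verification):

```python
# Framing 1: 95% chance of winning 95€, 5% chance of losing 5€.
ev_1 = 0.95 * 95 + 0.05 * (-5)

# Framing 2: pay 5€ for a 95% chance of winning 100€ (else win nothing).
ev_2 = -5 + 0.95 * 100 + 0.05 * 0

print(ev_1, ev_2)  # both ≈ 90€
assert abs(ev_1 - ev_2) < 1e-9
```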

In the same vein, suppose I offer you 50€ and then tell you that I made a mistake and need some money back. I can frame the same scenario in two different ways:
 * You can keep 20€
 * You lose 30€

Even though the outcome is the same, the perception is completely different, and people tend to prefer the first framing.

Finally, stating that 90% of patients survive an operation or that 10% of patients die is, once again, the same information but with opposite receptions.