Please find our summary of Michael J. Mauboussin's paper below (emphasis ours):
You gain more by not being stupid than you do by being smart. Smart gets neutralised by other smart people. Stupid does not.
The key highlight of the introduction is the importance of keeping a checklist.
A checklist is a key instrument for avoiding mistakes. An investor can beat the market reliably if he/she diligently follows the checklist no matter what. A checklist is a way to keep beating the market with the same level of skill by applying that skill consistently. Consistency is the key.
Another idea discussed in the introduction is that investing is a loser’s game: a game in which the player who makes fewer mistakes wins.
The author cites a writer who observed that in an amateur tennis game, unlike a professional one, the person who makes the fewest errors usually wins. The same analogy applies to stock markets, where your opponent is the collective wisdom of the market.
The key learning here is it is often easier to succeed by making fewer mistakes than it is by being more brilliant.
The author gives us a very important lesson in the passage where he says we overestimate our abilities to achieve and sometimes it might lead to our downfall.
He takes the example of a study conducted by Roger Buehler (a professor of psychology), in which he asked college students for the date by which they were 99 percent confident that they would be done with an assignment. For example, if asked on Monday, the students might respond that they were nearly certain to be done with their academic project by Friday.
When Buehler checked on the predictions, he found that only 45 percent of the students had completed the tasks by the designated date.
This is a natural way of thinking about plans and predictions, which psychologists call the “inside view”. We gather lots of information, consider the specifics of the situation, and combine the two to create a scenario for the future.
But there is another way to think about plans and predictions that doesn’t come naturally to us but is more robust. Psychologists call this the “outside view”. The outside view considers the problem as an instance of a larger reference class.
Basically, the outside view imposes a fundamental question: “What happened when others were in this position before?”
Research shows that the inside view often yields predictions that are too optimistic, revealing a form of overconfidence. The outside view generally tempers that overconfidence and provides a much stronger foundation for thinking about how the future might unfold.
As a case in point, scientists asked venture capitalists (VCs) to describe a transaction that they were working on, including an estimate of the expected rate of return. The average expected rate of return was about 30 percent. The researchers then asked the VCs to consider two other deals that they deemed comparable. The rate of return for those deals was 20 percent. After having exposed the VCs to the outside view, albeit a small sliver of the total number of deals, 80 percent of the VCs revised down the expected rate of return for the focal deal.
When given a choice, start with statistics and then allow your intuition a say.
Solutions to Mistake #1:
a) Choose an appropriate reference class: The goal is to find a reference class that is large enough to be statistically useful but sufficiently narrow to be applicable to the decision you face. Take patterns in mergers and acquisitions (M&A). There is a lot of data on M&A showing what leads to value creation (cash deals at modest premiums with substantial synergies) and what leads to value destruction (stock deals at large premiums with modest synergies).
b) Assess the distribution of outcomes: Not all outcomes follow a normal, bell-shaped distribution. For example, of the roughly 2,900 initial public offerings in technology since 1980, a small fraction of the companies have created the vast preponderance of the value. So while this is a relevant reference class, the outcomes are heavily skewed.
c) Make a prediction: With data from the reference class and knowledge of the distribution, make an estimate. At this juncture you should be ready to consider a range of possibilities and outcomes.
Let’s say you want to make a point estimate for the total shareholder return for the S&P 500 Index in 2014, something dozens of strategists actually do. Your forecast would appeal to the reference class, which is the past results for S&P 500 returns, and would examine the shape of distribution of those returns.
d) Assess the reliability of your prediction and adjust as appropriate: Statisticians use the term “reliability” for the correlation of the same metric over different periods of time. In cases where correlation is low, indicating low reliability, it is appropriate to regress your estimate substantially toward the mean.
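The skew described in step (b) can be sketched with a quick simulation. The numbers below are synthetic lognormal draws, not real IPO data; the sample size of 2,900 merely echoes the figure mentioned above.

```python
import random

# Synthetic, heavy-tailed "value creation" outcomes -- illustrative only,
# not real market data. A lognormal with a large sigma is strongly skewed.
random.seed(42)
outcomes = [random.lognormvariate(0.0, 2.5) for _ in range(2900)]

# What share of total value comes from the top 5% of companies?
outcomes.sort(reverse=True)
top_n = int(0.05 * len(outcomes))  # 145 companies
share = sum(outcomes[:top_n]) / sum(outcomes)

print(f"Top 5% of companies account for {share:.0%} of total value")
```

In a reference class this skewed, the mean and the typical (median) outcome diverge sharply, which is why assessing the shape of the distribution matters before making a point estimate.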
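The adjustment in step (d) can be expressed as simple shrinkage toward the reference-class mean, with the reliability correlation as the weight. This is a minimal sketch of the idea; the function name and the numbers below are hypothetical, not taken from the paper.

```python
def shrink_to_mean(observation: float, reference_mean: float, reliability: float) -> float:
    """Regress an estimate toward the reference-class mean.

    reliability (r) is the correlation of the same metric across periods:
    r near 1 -> trust the observation; r near 0 -> trust the base rate.
    """
    return reference_mean + reliability * (observation - reference_mean)

# Hypothetical numbers: a 25% observed return, a reference-class mean of 8%,
# and a low period-to-period correlation (reliability) of 0.2.
print(shrink_to_mean(25.0, 8.0, 0.2))  # prints 11.4 (i.e. 8 + 0.2 * 17)
```

When reliability is low, the estimate lands close to the base rate; when it is high, the observation dominates.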
Mistake #2: Failure to consider a sufficient range of alternatives.
Solution: Conduct a pre-mortem
The “pre-mortem”, popularized by psychologist Gary Klein, is a technique that uses prospective hindsight as a tool for decision making. Rather than using the past as a guide for the present, the pre-mortem reasons from the future back to the present.
Before you actually make a decision, launch yourself into the future, say one year from now, and pretend that you made the decision. Now assume the decision turned out poorly, and you must document the reasons for the failure.
Jay Russo and Paul Schoemaker, leading researchers on decision making, provide an example of how this process works.
How likely is it that a woman will be elected the leader of your country in the first election after the next one? Think about all the reasons why this might happen. For specificity, provide a numerical probability.
Now consider another version of the question, which uses prospective hindsight:
Imagine that the first election after the next one has occurred and a woman has been elected the leader of your country. Think about all the reasons why this might have happened. Then provide a numerical probability of this actually occurring.
Russo and Schoemaker find that the second version of the question generates a greater number of paths to the event, as well as a higher probability estimate, than the first one. For instance, in one version of an experiment, they found that subjects who used prospective hindsight generated 25 percent more reasons than those who did not use the technique.
Gary Klein recommends a six-step process to do a pre-mortem:
1. Prepare: Participating team members should be relaxed with paper and pen in hand, and should be familiar with the decision that the group is contemplating.
2. Imagine a fiasco: Klein recommends considering a worst-case scenario. He suggests imagining an outcome so embarrassing and devastating that members of the team are unwilling to speak to one another. He frames it as looking into a crystal ball that is good enough to show the failure but too shoddy to make out the causes.
3. Generate reasons for the failure: He asks each team member to spend three minutes writing down all the reasons behind the failure.
4. Consolidate the lists: After everyone is done, the facilitator goes around the room and asks each team member for one item from his or her list and records it on a white board. The process is not complete until every item on everyone’s list is captured on the board.
5. Revisit the plan: The team is now in a position to revisit the main concerns regarding the prospective decision. If there isn’t sufficient time, the team can arrange another meeting to address ways to tackle the other problems.
6. Periodically review the list: Klein suggests periodically taking out the list in order to keep the specter of failure fresh in the minds of the team members.
Mistake #3: Underestimating or under-appreciating an opposing point of view.
Solution: Create a red team to challenge your mind-set.
The author talks about how a mindset can lead to ignorance or outright denial of other knowledge. A mindset is important, but the rigidity associated with it often leads to problems.
To counter this, the author suggests red-teaming, a technique for offsetting the rigidity of mind-sets. The idea is an old one that comes from military strategy.
A red team attacks and a blue team defends. In this case, the blue team would be assigned to defend the mind-set that underpins the firm’s portfolio, while the red team, a smaller group within the analytical team, would be charged with contesting that mind-set.
Here are some helpful guidelines for setting up a red team/blue team exercise:
1. Identify the main uncertain factors or key drivers (variables) that will determine an outcome.
2. Pinpoint working assumptions (linchpin premises) about how the key drivers will operate.
3. Advance convincing evidence and reasoning to support the linchpin premises.
4. Address any indicators or signposts that would render the linchpin premises unreliable.
5. Ask what dramatic events or triggers could reverse the expected outcomes.
The primary barrier to updating beliefs is what psychologists call confirmation bias. This bias means that we are more likely to seek information that confirms our belief than information that disconfirms it. It also means that when we face ambiguous information, we naturally interpret it in a way that is favorable to our belief.
Methods to improve decisions (Five common mistakes and How to address them) by Michael J. Mauboussin - Part 2 Here