This article serves as an introduction to a fundamental concept in the philosophy of risk taking: antifragility, which can itself be seen as a special case of response asymmetry. These concepts address the question of how to act intelligently in environments characterized by low-probability, high-impact events (so-called Black Swans).
The term antifragility was coined in this form by the writer Nassim Nicholas Taleb, a former trader and financial practitioner. Taleb defines antifragility as the opposite of fragility: the property of benefiting from randomness and unexpected events. It differs from robustness, which is merely the resilience of a system against randomness. As an example, you might consider a coffee cup fragile, an old church robust, and your body antifragile (at least to a certain extent). The coffee cup will break if you smash it just a bit too hard, the church still stands today after hundreds of years of stormy weather, and your body will even benefit from small random perturbations (such as weightlifting). In fact, Taleb makes the point that while most natural systems are antifragile, most artificial things are fragile or robust at best. "Natural" in this sense includes not only systems that occur in nature, but most systems designed by an evolution-like trial-and-error process.
In more formal terminology, we can think of these concepts in terms of response functions that describe how a system reacts to random variations or stimuli. If the normal state of the system corresponds to the value of the response function at a stimulus strength of zero, the left side of the response distribution represents a "negative" response and the right side a "positive" one. What counts as positive or negative depends on the context: for a decision problem, it is the "return" or "payoff" of the decision process; for a living organism, its health; for an object, its physical integrity. A fragile object has a response function skewed to the left, negative side. For most stimuli the system is fine (e.g. a coffee cup or your corporate job), but once the stimulus strength exceeds a certain threshold, it breaks completely (the cup shatters, or you get fired). In contrast, a robust system's response is roughly proportionate to the stimulus: nothing too bad can happen, but also nothing very good. An antifragile system has a response function skewed to the positive side. Without variation, the system remains unchanged or decays slowly; with increasing stimulus variability, it responds more and more positively. For both the fragile and the antifragile system, the response or return is thus asymmetric and non-linear.
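To make this concrete, here is a minimal sketch of the three response types. The functional forms are invented for illustration (they are not taken from Taleb); the point is only how each system fares, on average, under the same random stimuli:

```python
import random

def fragile(x):
    """Fine for small stimuli, breaks completely beyond a threshold."""
    return 0.0 if abs(x) < 1.0 else -100.0

def robust(x):
    """Responds roughly in proportion to the stimulus; bounded up and down."""
    return -0.1 * abs(x)

def antifragile(x):
    """Decays slowly without variation, gains over-proportionally from it."""
    return -0.05 + 0.5 * x * x

random.seed(0)
stimuli = [random.gauss(0, 1) for _ in range(10_000)]
for f in (fragile, robust, antifragile):
    mean_response = sum(map(f, stimuli)) / len(stimuli)
    print(f"{f.__name__:>12}: mean response {mean_response:+.2f}")
```

Under these assumed forms, the fragile system is destroyed by the occasional large stimulus, the robust one stays close to zero, and the antifragile one gains precisely because of the variability.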
The concept of antifragility can also be applied to the payoff of decision strategies: most of the time, such a strategy yields small or slightly negative returns (the destiny of most startups), but with a small probability there is a positive outcome of very large impact (a new Google is born). This is also called "asymmetry of outcomes", and it is central to the question of how an antifragile system can be constructed, or how to decide in an antifragile fashion.
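A quick Monte Carlo sketch of such a strategy (the probabilities and payoffs here are invented for illustration) shows how a bet that loses in the vast majority of cases can still have a positive expected payoff:

```python
import random

random.seed(1)

def venture():
    """One startup-like bet (hypothetical numbers):
    lose 1 almost always, win 2000 with probability 0.1%."""
    return 2000.0 if random.random() < 0.001 else -1.0

outcomes = [venture() for _ in range(100_000)]
losers = sum(o < 0 for o in outcomes) / len(outcomes)
print(f"share of losing ventures: {losers:.1%}")                        # ~99.9%
print(f"mean payoff per venture:  {sum(outcomes) / len(outcomes):+.2f}")  # ~+1.0
```

Nearly every individual venture fails, yet the average payoff is positive because the rare win dwarfs the accumulated small losses.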
The importance of knowing a system's response to random variations is related to a second concept introduced by Taleb. In his book "The Black Swan", he argues that most complex environments are governed not by medium-sized events distributed according to a Gaussian-like distribution, but by so-called Black Swans: rare events of high impact that appear as tail events in a fat-tailed distribution. In such environments it is especially important to know how a system responds, or how a decision strategy pays off, when subjected to strong random variations. Ideally, we want to design a system or choose a strategy that is protected from the impact of these random events, or can even profit from their occurrence. Because of their rarity, Black Swans are virtually impossible to predict, which makes it hard to design a system based on accurately forecasting their occurrence. It is, for example, very hard to set the premium of an insurance contract by trying to estimate the probability of every large damage the insurer might have to cover. But we can design a contract in such a way that we are protected against the exposure to such a Black Swan if it occurs, even without knowing its exact probability (e.g. by setting a maximum amount that is covered, or by employing lawyers to make sure we do not have to pay everything in such a case). This concept of "Black Swan exposure" is central to any approach for dealing with Black Swans, ranging from lifestyle choices to the design of a robust or antifragile system.
Black Swans do not always represent catastrophes; it is also possible to be exposed to positive ones. A writer, for example, has a very low probability of their book becoming a bestseller, but if it does, they receive immense fame and money. This usually does not happen to a civil servant: the salary is safe and regular, but there is virtually no chance of becoming a billionaire.
An interesting question in this context is how antifragility can be measured. Let us, for this purpose, look at a system that seems fairly well optimized to its environment: life. Biological systems have been shaped by a long evolutionary trial-and-error process and have therefore experienced events across a large range of magnitudes. Had they proven vulnerable to these events, they would no longer exist. Artificial systems, on the other hand, are designed according to the best knowledge of their creator, who does not necessarily know the rare, critical events able to destroy the system. The design therefore depends on a prediction that is usually based only on the creator's experience. While natural systems got their experience for "free", simply by surviving in their environment up to this point, the creator of an artificial system has to consider all possible cases that might occur, which is usually not possible.
Along a similar line of thought, Taleb argues that risk analysts should consider the system and its response to extreme events, rather than trying to calculate the probability of those events. It is simply easier to find out whether a system is fragile or antifragile than to predict the events it might be exposed to. This can be done by investigating how the system behaves when exposed to stress. If the response is non-linear, asymmetric, and skewed towards the negative, the system is likely fragile and may fail when exposed to a stressor only slightly stronger than what it was designed to resist. If the system responds well even to stressors somewhat more violent than what we expect, we can be optimistic that it will hold up in the case of such an unpredictable event.
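One way to operationalize this idea, loosely inspired by Taleb's argument (the toy systems below are assumptions for illustration), is to probe the system with a stressor and its double: if doubling the stressor does much more than twice the damage, the harm is accelerating and the system is fragile:

```python
def is_fragile(response, stress, factor=2.0, tolerance=1.05):
    """Heuristic convexity check: harm that accelerates with stressor size
    signals fragility. `response` maps a stressor to a (negative) damage."""
    small = response(stress)
    large = response(factor * stress)
    # Fragile if the large stressor hurts disproportionately more
    # than `factor` times the small one.
    return large < factor * small * tolerance

# Toy systems with assumed damage curves:
porcelain = lambda x: -(x ** 3)   # damage grows with the cube of the stressor
steel_beam = lambda x: -x         # damage proportional to the stressor

print(is_fragile(porcelain, stress=1.0))   # True:  -8 is far worse than 2 * -1
print(is_fragile(steel_beam, stress=1.0))  # False: -2 is exactly 2 * -1
```

Note that this test says nothing about the probability of the large stressor ever occurring; it only characterizes the shape of the response, which is exactly the point.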
How can the concept of response asymmetry be applied to real-life decision making? Based on what we have discussed, we can define a rule for systems with positive response asymmetry:
If you can decide to do something that requires little effort and carries low risk, but offers a small chance of a very high return, simply do it.
You might wonder about the emphasis on "simply do it". It underlines the heuristic character of the decision rule: it is indeed often plain comfort that prevents us from creating opportunities. Think, for example, of the application you did not send because you told yourself you had no chance of being accepted. Then think, on the other hand, of the small decisions you made that turned out to have a huge, unforeseen impact in the end. In both cases the negative outcome is strictly limited and the potential benefit large. The asymmetric return structure of the problem can be exploited by acting according to the rule above; in other words, we increase our exposure to positive Black Swans. In a similar sense, we can also define a kind of "opportunity killer avoidance" rule:
If a safe return prevents you from reaching potentially very high returns, it might be rational to avoid it.
In this case the situation is not as black and white as in the previous example. Depending on your preferences, you might, for example, prefer a steady income that deprives you of the chance of becoming rich.
We can also apply what we have discussed to reduce our exposure to negative Black Swans, for decision problems with negative response asymmetry:
If doing something gives you a small, certain return but carries a potentially very high loss, it is better not to do it, even if the probability of that loss seems very low. This is particularly true if you cannot estimate how low the probability actually is.
This covers, in principle, all activities that offer a small short-term return in exchange for the risk of death or serious harm, such as extreme sports or drug consumption. For instance, how would you estimate the probability of failure for "climbing Mount Everest in Birkenstocks for the first time (without oxygen)"?
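Part of the reason this rule is so strict is repetition: even a risk that looks negligible per attempt compounds ruthlessly when the downside is ruin, because you only have to lose once. A small sketch (the per-attempt probability is an arbitrary assumption):

```python
# Survival under repeated exposure to a small per-attempt ruin probability.
p_ruin = 0.01  # assumed 1% chance of a fatal outcome per attempt

for attempts in (1, 10, 100, 500):
    survival = (1 - p_ruin) ** attempts
    print(f"{attempts:>4} attempts: {survival:.1%} chance of surviving all of them")
# 1 attempt:   99.0%
# 100 attempts: ~36.6%
# 500 attempts: ~0.7%
```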
We can also define an insurance-like rule:
If you can invest a little effort (or money) into something that protects you against a potentially very high loss, you should go for it.
Of course, most people know that on average you pay more for insurance than your expected loss, since that is how insurers make money. Is it therefore irrational to buy insurance? No, because the point is again Black Swan exposure. This is why you should always be well insured in cases where the potential loss is high or can even ruin you (e.g. liability or health insurance). In all other cases, insurance is fairly useless. If you book a journey for 500 euros and the insurance that refunds your money in case you cancel costs 50 euros, it is usually irrational to buy it: on average you lose more money than the insurance protects, and the maximum loss is limited to the price of the journey (which usually poses no existential threat). In those cases, insurance offers are simply clever marketing strategies playing on our fear and loss aversion. At the same time, insurance companies are well aware of the need to hedge large losses themselves: have a look at your own insurance contract, and you will probably find that the maximum amount covered for many treatments is set at a high but finite value.
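The asymmetry becomes visible once you weigh losses by how survivable they are, rather than by their expected value alone. One common way to capture this is logarithmic utility of wealth; this choice, and all the numbers below, are assumptions of the sketch, not part of the article's argument. A loss that takes you near zero is catastrophic in log terms, so insuring it can be worth a premium well above the expected loss, while insuring a trivial loss is not:

```python
import math

def expected_log_wealth(wealth, loss, p_loss, premium=None):
    """Expected log-utility with or without insurance against `loss`."""
    if premium is not None:  # insured: pay the premium, the loss is covered
        return math.log(wealth - premium)
    return p_loss * math.log(wealth - loss) + (1 - p_loss) * math.log(wealth)

wealth = 10_000.0

# Trivial loss: 500-euro trip, 50-euro insurance, 5% cancellation chance.
# Expected loss = 25 euros < 50-euro premium, and going uninsured also
# wins in log terms -> skip the insurance.
print(expected_log_wealth(wealth, 500, 0.05) >
      expected_log_wealth(wealth, 500, 0.05, premium=50))    # True

# Ruinous loss: 9,900-euro liability, 1% chance, 200-euro premium.
# Expected loss = 99 euros < 200-euro premium, yet insuring wins
# in log terms because the uninsured loss is near-ruin -> insure.
print(expected_log_wealth(wealth, 9_900, 0.01) <
      expected_log_wealth(wealth, 9_900, 0.01, premium=200))  # True
```

Both premiums are "overpriced" in expected-value terms; only the one protecting against near-ruin is still worth paying.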
It is often said that the modern world is becoming more and more unpredictable and that the general quality of information is decreasing. Thinking in terms of response asymmetries and consequences, rather than probabilities, makes it possible to design systems and make decisions without relying on perfect information. Instead of trying to distinguish valuable information from noise, we can simply try to behave so that the difference does not matter.
Johannes Thiele
References:
If you are interested in knowing more about the topics covered by the article:
- N.N. Taleb, “Antifragile”, 2012
- N.N. Taleb, “Fooled by Randomness”, 2001
- N.N. Taleb, “The Black Swan”, 2007
This article deals with the concept of randomness. If you want to know why teaching randomness is important, check this article.