Humans are constantly innovating. So far, most of these creations have positively impacted society. But has it ever crossed your mind that at some point in the future, humans might stumble upon a discovery or create something so destructive and so unstoppable that the thirst for innovation we have harbored for so long ultimately becomes the cause of our own demise?
In other words, if all of the inventions and discoveries that could ever possibly be made are balls in a massive urn, would our streak of drawing golden balls eventually end? Will there come a time when we draw a black ball?
Nick Bostrom’s article, The Vulnerable World Hypothesis, explores this question in depth. Let’s look into the main insights from the article.
The vulnerable world hypothesis
Bostrom formally defines the Vulnerable World Hypothesis (VWH) as follows:
“If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semianarchic default condition.”
According to Bostrom, the semi-anarchic default condition of civilization is characterized by three attributes:
- Limited capacity for preventive policing, meaning states lack the ability to monitor and intercept people reliably enough to make committing destructive acts practically impossible.
- Limited capacity for global governance, that is, the lack of mechanisms that enable reliable coordination among states on global-level concerns.
- Diverse motivations, meaning that among a large population there will always be some people with a motive to cause destruction.
Typology of vulnerabilities
Bostrom notes that without exiting this semi-anarchic state, civilization can be devastated by one of four types of vulnerabilities:
Type I (easy nukes)
A Type I scenario is the emergence of an invention that is highly destructive yet easy to use or produce. In the semi-anarchic state described above, this means that individuals or small groups could easily cause mass destruction.
So far, we have evaded Type I scenarios because weapons of mass destruction remain difficult to obtain or build.

Imagine a world where something as destructive as an atomic bomb could be made outside a lab setting, say, from materials available at a hardware store. Given enough people with access and diverse motivations, someone would inevitably use it to cause destruction. This is a Type I scenario.
Type 2a (safe first strike)
A Type 2a vulnerability describes a situation where some technology gives powerful actors the ability to cause mass destruction, and at the same time they are incentivized to use it, for instance because striking first confers a decisive advantage.
In the past, humans waged wars of conquest because conquest brought economic gains. With the onset of the Industrial Revolution, such gains diminished, and Bostrom suggests this could be one reason we are currently experiencing relative peace.

But in a hypothetical scenario where mass destruction once again led to some sort of gain, violence would become more frequent and mass destruction more likely.
Type 2b (worse global warming)
Type 2b vulnerabilities are closely related to Type 2a vulnerabilities, since they also involve actors who are incentivized to cause destruction.

However, unlike in a Type 2a event, the actors are not a few powerful players. Instead, they are numerous, individually insignificant people who each gain something from using a technology that causes a low level of destruction.
Given the sheer number of people involved, this low-impact destruction accumulates into massive damage.
The situation of global warming comes to mind, where we have used tools to wipe out our forests. A single person cutting down a tree for raw materials causes minimal harm, but the collective damage of deforestation is one of the drivers of the global warming we are experiencing right now. This is a Type 2b vulnerability.
Type 0 (surprising strangelets)
A Type 0 vulnerability refers to a technology that carries a hidden risk, such that by the time the risk is discovered, mass destruction is already likely or guaranteed.
Consider a hypothetical situation where detonating a nuclear bomb would ignite the atmosphere and burn all life on Earth. Had we not known about this risk and detonated a bomb anyway, we could all have been wiped out. This is an example of a Type 0 vulnerability.
Type 0 is the only one of the four vulnerabilities in which the destruction is accidental rather than intended.
All of these scenarios make heightened surveillance and a central global government with the power to suppress such events appear highly appealing. Needless to say, these are extreme measures that, all things considered, are not necessarily the best steps to take.

Instead of relying on overwhelming force after the fact, Bostrom suggests a preventive approach to policing: identifying the domains where technological black balls are most likely to arise and strengthening oversight and regulation there.
While we are in a state of relative stability, we should be pondering these possibilities, so that we are not completely helpless if humankind finally meets the unfortunate event of drawing a black ball.