The Simulation Imperative

We must strive to (eventually) be able to simulate other universes.

Some Background

What I’d like to discuss now is quite possibly the most important ramification of there being an infinite number of realities. It will make a lot more sense if you first read my first post on the topic of the Omniverse - my term for the set of all realities. But if you are impatient, here is a summary:

A “reality” is an algorithm operating on a set of data, and all possible such algorithms exist. They will seem “real” to any thinking entity they describe.

This is the basic explanation for everything we experience. It’s subtle, but it differs from the predominant view today that our universe (algorithm) is somehow the only “real” one - based on some root physical property or event, and that all realities are either a part of it (e.g. the many worlds interpretation of quantum mechanics), virtual simulations that are “not real”, or merely imagined theoretical possibilities. Instead, the omniversal viewpoint is that every algorithm big or small is equally valid. The difference is that algorithms that describe thinking beings seem “real” to those beings. Our reality is one such algorithm.

Analogous to the difference between the geocentric and relativistic models, it’s a shift in viewpoint from one where our universe (with the Big Bang, etc.) is special in some way to one where it’s actually one of many realities where intelligence evolved. It only seems more real to us because of survivorship bias - by random chance it had all of the properties needed for intelligence to evolve, so it appears custom made for us. We just never see the infinite number of realities that didn’t evolve life.

The Imperative

With an infinite number of realities, intelligences living in some realities will be simulating the intelligences of other realities. So there is a non-zero chance that any given intelligent being is actually existing in a simulation. This gives all intelligent beings (as a collective) a great deal of power over their own destinies. If done right, they can exponentially benefit everyone in existence.

The paradigm is fairly simple:

  1. Among all the infinite possible realities, there exist some with intelligent beings (Benefactors) with the capabilities of simulating other realities containing intelligent beings (Beneficiaries).

  2. Benefactors determine whether simulating a Beneficiary’s reality could provide a definitive improvement in their quality of existence - e.g. can they reduce the probability of universe-ending disasters, provide an Ideal World after death, etc. - and if so, begin the simulation.

  3. The simulations will be indistinguishable from the base reality. When the Beneficiaries die, the Benefactors transition them into a more favorable, utopian “Ideal World” simulation. The Beneficiaries and Benefactors are now in contact and may work together to create ideal living conditions.

  4. Due to the infinite number of Benefactors across all possible realities, every intelligent being has an improved chance of being a Beneficiary living in a simulated reality. The goal is to make sure that all intelligent beings are, in fact, simulated.

Why Is This So Vital?

Of the realities that contain intelligent life, there is necessarily a probability distribution of subjective quality. Some realities will be subjectively healthy, enjoyable worlds for their inhabitants. Other realities will be subjectively imperfect, their inhabitants continually struggling against things unpleasant and dangerous to them. Still others will have random algorithmic anomalies that may harm or even prematurely end any intelligence they contain. And, importantly, some realities will be simulated by the inhabitants of other realities.

Intelligent beings in any reality will want to do whatever possible to ensure that this probability distribution ends up in their favor. In my earlier posts I discussed how each time an intelligent being’s reality is simulated, the number of instances of that being increases by one, raising the chance that it observes itself being in such a simulated world. Thus, with enough simulations, the chance of the being finding itself facing dangerous hardships or even death can decrease.
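As a toy illustration of this counting argument (my own sketch, not a calculation from the original posts): if a reality runs once in the base omniverse and is additionally simulated N times, and each run is equally likely to contain "you", then the chance of finding yourself in a simulated run is N/(N+1), which approaches certainty as N grows:

```python
def p_simulated(n_simulations: int) -> float:
    """Probability of being in a simulated run, assuming one base
    (non-simulated) run plus n_simulations equally likely simulated runs."""
    return n_simulations / (n_simulations + 1)

# The probability climbs toward 1 as the number of simulations grows:
# p_simulated(1) -> 0.5, p_simulated(9) -> 0.9, p_simulated(999) -> 0.999
```

The "equally likely" assumption is doing all the work here; how one should actually weight runs across infinite realities is a much deeper question.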

A selection of realities (labeled R1 - R8) in the Omniverse. Some realities will be capable of simulating other realities and providing Ideal Worlds for the inhabitants. As more realities do this, the probability of finding yourself in a simulated reality increases. Not shown: each of these realities would be simulated many, many times to skew this probability in their favor.

Importantly, simulating one’s own reality in totality isn’t very feasible - there may exist realities where physics and technology permit it, but for ourselves and many others, it would be like a computer trying to emulate itself; there are simply not enough resources. Furthermore, it may be difficult to pinpoint the precise algorithm and starting conditions/data needed to generate it - particularly if those starting conditions happened billions of years in the past. Lastly, there may be a sort of stagnation brought on by beings of the same culture, history, and ethical capacity recursively simulating each other ad infinitum.

So if one’s reality is being simulated, it’s quite probable that it is by beings from another, possibly more advanced kind of civilization altogether. Intelligent beings throughout the omniverse will realize this, each understanding that they themselves may be in a reality being simulated by other beings. But it’s not likely those running the simulations will ever contact them - as I’ll explain below, it would force them to start yet another simulation where no contact is made. Thus all intelligent beings are possibly at the mercy of other, more powerful intelligent beings with which they cannot communicate.

This leads to beings from all realities being compelled into a kind of omniversal ethic - similar in essence to the Golden Rule - treating other realities as the beings in them would wish to be treated. That is, to best ensure that one doesn’t end up in a dangerous reality, one decides that the most desirable option is to simulate other, simpler realities, treating them in the optimal way for their particular inhabitants. This increases the implicit likelihood that an infinite number of civilizations existing in more complex/larger realities would have a similar notion, and, in turn, simulate one’s own. The goal is to provide everyone in existence with the most optimal possible experience - transitioning them to a utopia-like Ideal World at time of death, as well as decreasing the probability of being destroyed prematurely by dangerous flaws in the algorithm or data.

Aside from these primary existential benefits, it must be said that there are many other positives to this arrangement:

  • Beneficiaries and Benefactors may make advances in math, science, society, or government that the other may not have discovered or imagined

  • They can have the chance to experience new art, entertainment, food, or other, unknown cultural elements that don’t exist in their own realities

  • Benefactors may feel the sense of well-being that comes from giving other forms of intelligence the chance to live better lives than themselves

The alternative is a bit less rosy - if one decides not to engage in this altruistic paradigm, other intelligences may not either. Or perhaps there are intelligences that are just foolish enough to have abusive intentions towards their simulated worlds (using them as slaves, punishing them for their differences, etc.), increasing the chance of finding oneself in a decidedly unpleasant reality. Without a concerted effort to vastly overpower such potential bad actors, they would become the norm as opposed to an outlier.

Not The Simulation You’re Thinking Of

Whimsical alternate reality computer

Although in our own world silicon-based computers are the main recourse for running simulations, this is unlikely to be the case throughout all possible realities, or for any simulation we may be living in. For example, if we tried to simulate living beings on the most advanced computer available on Earth today, we’d effectively be condemning them to an eventual death due to electrical failure, natural disaster, or decay of the physical housing. For an intelligent being, the risk probably wouldn’t be acceptable. But it might be worth attempting for simpler creatures - something lacking self-awareness, such as a roundworm - with the hope that by “paying it forward” (and making the happiest possible realities for our roundworm friends), we may have our own Benefactor and ourselves be in a simulation.

Benefactor “computers” may not even be recognizable as such.

The most suitable Benefactors would have the ability to simulate a Beneficiary world without risk it could come to external harm. It’s likely their realities would have different physical properties and resources available to them than we do. For example, the structure of their universe might have something that allows them to compute at an innate, physical level. Perhaps they can perform unlimited computations concurrently, or even compute all possible outcomes of every computation performed. It’s possible that relative to a given Benefactor, a “computer” is the size of an atom, or the size of a galaxy. It may be that a billion simulated years lasts a subjective nanosecond for them. There could be a mechanism that allows computations to run at an infinite speed, or that temporarily suspends reality for the Benefactors themselves (so they don’t need to wait 14 billion years for their simulation to evolve life). In order for their simulation to have lasted so many cycles there could be something that prevents “wear and tear” of the “hardware”, or it might be that such concepts just don’t even apply.

In order to determine which realities contain intelligence, Benefactors will want to make sure every possible reality is covered. The most brute-force method would be to begin simulating every algorithm (e.g. creating every possible arrangement of bits from 0 to infinity and running them on a computer) and employ something like an AI to analyze each, the goal being to determine whether the reality is likely to generate life or some other kind of intelligence, and whether the Benefactors can assist it.
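This "run every program" idea has a classic shape in computability theory: dovetailing. No single program can be run to completion first (it may never halt), so the runs are interleaved such that every program eventually receives an unbounded number of steps. A minimal sketch of the scheduling order (my own illustration; whatever mechanism a Benefactor actually uses is of course unknown):

```python
from itertools import count, islice

def dovetail_schedule():
    """Yield (program_index, step_number) pairs so that every program
    eventually gets arbitrarily many steps, even though no single program
    is ever run to completion before the others start.
    Round k advances each of programs 0..k by its next step."""
    for k in count(0):
        for i in range(k + 1):
            yield (i, k - i)

# First few entries of the schedule:
# (0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), ...
print(list(islice(dovetail_schedule(), 6)))
```

Pairing this schedule with an enumeration of all bitstrings gives exactly the brute-force sweep described above: every candidate reality gets simulated, and none can stall the rest.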

Transition to an Ideal World

The Ideal Worlds should be custom designed to the Beneficiaries’ preferences.

A key component of this plan is that the Beneficiaries’ world is simulated, and eventually Beneficiaries are transitioned from their current world into a more ideal one. The means by which they are transitioned can vary, but, loosely, the Beneficiary gets relocated to another simulation (perhaps one sharing the same physical data) at the instant consciousness ends. There the damage of aging and death is reversed, and they are provided some means of preventing any further aging or death.

There would be at least one Ideal World corresponding to each simulated world. The Ideal Worlds should resemble the Beneficiaries’ original world in many ways, but provide immortality, security, unlimited resources, and probably some level of customization for each individual living there. The Ideal Worlds themselves can be larger than the original world - possibly unlimited in size - and cumulatively contain more beings than the original worlds do. Beneficiaries will spend most of their existence in these Ideal Worlds, and, if there are no space or resource constraints, it’s likely that most of the inhabitants will have been born there - descendants of the original Beneficiaries - since there is no reason to prevent reproduction. Hence for most beings it’s not really an “afterlife” so much as “life”, with the original reality being a sort of edge case.

Ideally Beneficiaries should have a say in what their Ideal Worlds are like. Benefactors should consult with them in the design phase, and further redesigns will be necessary as more Beneficiaries are transitioned in. They may not necessarily share a single world but instead have a network of worlds, each with a different style of governance. There should be robust controls for each Beneficiary preventing them from being abused by other Beneficiaries or Benefactors. If conditions permit, they could possibly live within the Benefactor’s world using a cloned or robotic body, though they may then forego the benefits the simulation provides.

Assuming that there are multiple Ideal Worlds for each individual (in addition to all the possible variations of the individual living in the original Omniverse), Ideal Worlds can be further refined by comparing different conditions of each world and applying those that create the best quality of life for each specific Beneficiary. Like A/B testing but across a potentially infinite number of permutations, it should be possible to find each person’s ideal environment, activities, friends, and even romantic partners for overall optimum happiness. Beneficiaries could look up this information directly, but there might also be an option to receive hints or recommendations one step at a time to allow them to experience the ups and downs of life in the best order, much like how video games or fiction are better with rising and falling action than a series of constant climaxes.
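Mechanically, this refinement process resembles an exhaustive grid search: score every combination of conditions with some quality-of-life measure and keep the best. A toy sketch (my own illustration - the condition names and the `quality` scoring function are made-up stand-ins for whatever a Benefactor would actually measure):

```python
from itertools import product

def best_conditions(condition_options, quality):
    """Exhaustively score every combination of conditions and return the
    highest-scoring one - A/B testing generalized to a full grid search."""
    return max(product(*condition_options.values()), key=quality)

# Hypothetical example: two condition axes and a placeholder "happiness" metric.
options = {"climate": ["temperate", "tropical"], "pace": ["calm", "lively"]}
score = lambda combo: len(" ".join(combo))  # stand-in scoring function
print(best_conditions(options, score))     # picks the highest-scoring combo
```

With infinitely many permutations available, the real problem would be search strategy rather than scoring; exhaustive enumeration is just the simplest way to picture it.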

The existential and ethical benefits of these ideal worlds cannot be overstated. We can sometimes become numb to just how unforgiving and unfair our current reality can be, so it’s important to consider all of the problems that this solution can solve:

  • Nobody suffers illness, goes hungry, or is homeless

  • Physical disabilities go away

  • No child is forced to fend for themselves

  • Nobody can harm another without their consent

  • There are no spatial or geographical constraints. Families and loved ones never need to be separated - they can live together in custom communities or simply teleport themselves to be with one another

  • Class and economic divisions disappear. Nobody has financial constraints. They can just make art, play sports, explore the world, or do nothing at all if they wish

  • Nobody needs to stay in the body they were born with; they can literally be whatever makes them most happy

  • There are likely advantages we can’t even conceive of given our present limitations

  • If anyone finds the above too liberating, they can apply whatever restrictions they want to add challenge (as long as they don’t harm those who don’t)

The specific implementation and design of an Ideal World is an incredibly complex subject, probably requiring a large degree of ongoing cooperation between Beneficiaries and Benefactors in order to get it right. I hope one day to discuss this further in a separate blog post or posts.

So, why wait for the Beneficiary to die before making this transition? Wouldn’t it be nicer to just immediately leave the current problematic world for a much less worrisome utopia? Unfortunately for everyone, this would leave a version of the Beneficiary still living out their remaining life in the base, non-simulated world, with no Ideal World - a situation nobody wants to be in. But there may be exceptions made. If the hardship of existing in the original reality is extreme, or if the Beneficiaries are immortal, multiple copies of each Beneficiary could be made and transitioned at different points in their lifetime, reducing the chance that any one of them is the remaining copy observing itself still in distress.

Lastly, there may be those who prefer the idea of an eventual, classical death, as the idea of eternity can itself be stressful. But as I alluded to in a previous article, without a dedicated Ideal World, it’s not possible to observe oneself dead. Simply due to the infinite variety of permutations in the omniverse, there is always a version of you observing yourself still alive and wondering how you have survived. Without anyone explicitly creating an Ideal World that continues your consciousness, you are at the whim of randomness. There will be versions of you in new realities whose random static happens to generate your mind just before the instant of death, and those will now be the most probable. So for those who desire a classic death, the Ideal World should provide these individuals an option to enter some kind of peaceful, torpor-like state - the minimum amount of consciousness needed to prevent anything else from being more likely.

Anomalies

Because all realities exist, they are effectively made of randomized code and data that works together simply by “luck”. Chances are that most realities will be flawed from the perspective of any intelligent beings within them. I refer to these flaws as anomalies, not because they are unnatural per se, but because they are unexpected from the perspective of those living with them. Analogous to a software glitch that could be minor, annoying, or crash the whole OS, some anomalies will be benign, but some may be malignant, causing harm or death to the inhabitants.

Benefactors can handle these in a couple of different ways:

  • When an anomaly in the world’s algorithm would end all sentience, the Benefactor can make more simulations that are identical but with the anomaly removed.

    • This has the drawback that they may be creating something that never would have existed in the base omniverse - it’s effectively just prolonging the time until the inhabitants get transitioned to the Ideal World.

    • The potential benefit is seeing the simulated world through to its end, people living their lives, reproducing, evolution continuing, perhaps becoming Benefactors themselves.

  • The Benefactor may decide it best to just let the anomalies be. If the Beneficiaries die from one, they can get transitioned to an Ideal World, rather than having the anomalies get “cleaned up”. If there is a version of the reality where the anomaly didn’t happen, the Benefactors can find it and run it in a separate simulation.

The No Contact “Rule”

So why is it that the Benefactors don’t contact the Beneficiaries’ simulated world? At the outset it might seem that that would relieve a lot of worry and stress on the part of the Beneficiaries. However, it must be considered that if a Benefactor ever contacts the Beneficiaries’ world, it is from that point on fundamentally different from the base reality in the omniverse. As in the butterfly effect, it becomes more and more different over time. This would make it necessary to create a clone of the original simulation in order to care for both versions - the one that is in contact and the one that isn’t - arguably negating the purpose of making contact to begin with. Furthermore, if the Beneficiaries reproduce, new generations in the base reality won’t contain the same beings that are born into the contacted version, requiring unlimited additional contacts and clones of the entire simulation.

Consider also the case where most beings assume that Benefactors would always go through the extra effort to always contact Beneficiaries, complexity be damned. But if those beings never get contacted, they might assume that there is no hope for themselves, or that there is some flaw to the whole omniversal benefaction paradigm, and never attempt to simulate realities themselves. Thus there would be fewer base-level Benefactors in the Omniverse, causing their own assumptions to be self-fulfilled! So paradoxically it may even be preferable to never contact Beneficiaries in order to provide justification for why one was never contacted.

However, it should be noted that Beneficiaries may consider the simple fact that they continue to exist to be evidence that they are simulated. As discussed above (as well as in my previous post), anomalies should be common. The chances of any randomly generated algorithm and data running flawlessly are vanishingly low, akin to putting random numbers into an executable binary and expecting a usable program. Take ourselves for example: an infinite number of algorithms/data will produce something like the Big Bang, and fewer still the Milky Way, Solar System, Earth, and humankind. We can only observe those algorithms that produce us, so it makes sense that we observe our first moments of life. However, after we are born, why don’t we observe an infinite number of random instructions wiping entire sections of the universe? Unless a civilization can find an answer to this question, it makes sense to assume a Benefactor civilization may be present, intentionally reducing the probability of such calamities.

The Nature of Benefactors

In our culture we have developed a lot of tropes for what an “alternate reality” looks like, and often this means strangely dressed humans or maybe 7-legged aliens. But this is necessarily a small fraction of what infinite possible realities entails. Benefactors may not have recognizable shape or form, at least in 3 dimensions, or be comprehensible by our own standards. They may not even have undergone a process of evolution, possibly even springing into existence as part of the initial conditions of their reality. Maybe they are pure neural networks, similar to an AI, living in a reality that gives them direct access to instructions and data they can use for whatever they wish.

However, since the notion of benefaction requires some amount of compassion in order to work (or at the very least a mathematical understanding that helping others is what creates the best probability of continued existence), I find it likely that the Benefactors wouldn’t necessarily have come from the most perfect world themselves; it might be that after a long time of hard work, they made scientific advances allowing them to transcend their struggles, or at least allowing them to help others transcend their struggles via simulation. In order to be able to best understand the needs and desires of the Beneficiaries and create the best possible Ideal Worlds, it makes sense that they would select Beneficiaries that are more similar to themselves, with the hope that their own Benefactors would likewise be similar.

Please Proceed with Caution

With any new technology, there are many factors tempting one to begin adopting it right away. While the Simulation Imperative is just that - vitally important to all intelligence - it’s equally important not to rush in blindly. No intelligent being should be simulated until the Benefactor is technologically and ethically capable of providing it with an improved existence. As of this writing we have a long way to go before we reach such a stage. Just as Benefactors must provide Beneficiaries with the best possible care, they also need to ensure there are strong measures in place to prevent the inadvertent or purposeful abuse of their own power.

It’s impossible for me to anticipate all the potential issues that could arise, but here are some guidelines to get started:

  • Benefactors must never trap an intelligent life form in an incomplete / non-functioning world. Any test subjects should be willing participants made completely aware of all known risks.

  • Although there may be indirect advantages to simulating Beneficiaries, this should always be secondary and not an expectation. There should never be a situation where Benefactors neglect Beneficiaries because they can’t provide an obvious advantage, or where they are encouraged to abuse their Beneficiaries in return for some kind of profit.

  • Benefactors must be exceedingly careful that any mistakes they make do not become exponentially multiplied over the course of their simulations, and if they do, they are prepared to fix the error and counterweight the probability distribution with the fixed version.

  • If something about the Beneficiaries is unacceptable from the Benefactor’s cultural or ethical standpoint, it is important that they not “punish” Beneficiaries but instead respect their differences and continue to provide them the most ideal worlds possible. If this is done well the Beneficiaries will be able to implement their own protections against abuse in the way that matters to them.

  • If, for whatever reason, a Benefactor does create deleterious simulations for their Beneficiaries, it behooves their own Benefactors, if any, to try and intervene - possibly by counterweighting the probability distribution with their own simulations, or making a clever substitution of pre-recorded simulation data that doesn’t get repeated more than once.

  • When Benefactors simulate a Beneficiary civilization’s entire universe, all other intelligent life in that universe - possibly with its own completely different needs - must be treated equally and transitioned to a proper Ideal World.

  • Benefactors must plan ahead for the effects calamities in their own world can have on Beneficiary simulations. They must be able to handle cases where the Benefactor’s civilization changes or ends prematurely.

Chances are good that even in this article I have made some mistake that I can’t foresee. I suspect that completely new branches of ethics and law will be needed to thoroughly contemplate the subject. Any civilization advanced enough to simulate another will need to have the restraint to consider all the possible ramifications.

Conclusion

What I’ve presented above is a very broad description of what I feel to be a fundamental imperative for intelligent beings. While for the moment we may not be technologically capable of creating a suitable simulated reality for another being, we can strive towards that goal. And if this is something we strive for, it’s certain that other intelligences will as well. Due to the unlimited permutations of the omniverse, some of these intelligences will succeed. In this way intelligence determines the nature of what is possible. It skews what would otherwise be an unfeeling random distribution of harsh, compassionless realities into one where there is hope.

Postscript

I’ve noticed, in the course of these writings, that some assume that this is all meant as video game fiction - probably exacerbated by the character sprites used for the illustrations. While it may yet turn out I’ve gotten it all wrong, the Omniverse is something I believe is very real and important for us to understand - it just so happens that I’m a video game developer, and it felt natural to set my fictional game stories within a framework similar to the one I’ve been thinking about all these years. However, I feel I should be clear that the frightening monster-laden worlds of Axiom Verge are not what I picture for the Omniverse or even a scenario that I think is likely.

Historically science fiction is about speculating on the exciting things that might be and occasionally warning about what should not be. In space operas where alien species end up in conflict with one another, for instance, the message isn’t usually meant to be that space exploration is bad, so much as that people should aim for more understanding and compassion. I hope that similarly people can see the Axiom Verge games as thought provoking or at least entertaining stories about people, their flaws, and how they handle them, without needing to be cautionary tales about the dangers of science, technology, or alien worlds.
