Dr. Vopson’s proposed second law of infodynamics models reality as information, and he has presented it as empirical support for the ancient idea that existence stems from some universal consciousness running a simulation.
Now, if the computers are powerful enough, the simulated universes can probably have computers in them, and those computers will eventually be able to simulate universes, too.
There’s one problem with this step of the idea: where are those simulated computers running? For example, let’s say I spin up a virtual machine on my computer. Then, inside that machine, I spin up a sub-virtual machine. The processing power to run that sub-machine doesn’t just appear out of nowhere. It all still comes from the top-level machine, just a bit less efficiently than if I had simply run a second VM on the top-level machine.
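To make that concrete, here’s a toy Python sketch of the point. It is not real virtualization; the class names and the 10% per-layer overhead figure are made-up assumptions. It just shows that a nested VM’s effective capacity is the host’s capacity with a layer of overhead shaved off at every level.

```python
class Host:
    """The one physical machine; all compute ultimately lives here."""
    def __init__(self, cycles_per_sec: float):
        self.capacity = cycles_per_sec

class VM:
    # Assumed fraction of cycles lost per virtualization layer (illustrative).
    OVERHEAD = 0.10

    def __init__(self, parent):
        self.parent = parent  # either the Host or another VM

    def effective_capacity(self) -> float:
        # A VM can only deliver what its parent delivers, minus overhead.
        if isinstance(self.parent, Host):
            upstream = self.parent.capacity
        else:
            upstream = self.parent.effective_capacity()
        return upstream * (1 - self.OVERHEAD)

host = Host(cycles_per_sec=1e9)
vm = VM(host)        # first-level VM
nested = VM(vm)      # VM inside the VM
print(f"{vm.effective_capacity():.2e}")      # 9.00e+08 -- 10% lost to one layer
print(f"{nested.effective_capacity():.2e}")  # 8.10e+08 -- still drawn from host
```

No matter how deep the `VM` chain goes, every cycle is charged against `host.capacity`; nesting only adds overhead.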
This would be the same for universe simulations. Let’s say Universe A simulates Universe B, and Universe B then tries to simulate Universe C. In order for Universe B to run that simulation, Universe A actually has to run it. The simulation doesn’t get run for free. If anything, it’s less efficient for Universe A to simulate Universe B simulating Universe C. So Universe A would make better use of its resources by running the Universe C simulation itself and just letting Universe B see the results and believe it is the one running it.
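A quick back-of-the-envelope comparison makes the efficiency point. The 1.2× cost multiplier per simulation layer is an arbitrary assumption; the only claim is that stacked layers multiply.

```python
OVERHEAD_FACTOR = 1.2        # assumed host cost per unit of simulated work
work_for_universe_c = 100.0  # arbitrary units of computation Universe C needs

direct = work_for_universe_c * OVERHEAD_FACTOR       # A simulates C itself
via_b = work_for_universe_c * OVERHEAD_FACTOR ** 2   # A simulates B simulating C

print(f"A runs C directly:  {direct:.0f} units")   # 120 units
print(f"A runs B running C: {via_b:.0f} units")    # 144 units
```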
No matter how deep the universes nest, every simulation must be run on the resources of the top-level universe, either directly or through several levels of abstraction. There’s no getting around that. Now, it could be that the top-level universe has a lot of resources and can run sub-universes pretty efficiently. But there will never be more sub-universes than the top-level universe can run by itself.
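Here’s a minimal sketch of that budget argument. All the numbers are illustrative assumptions; the shape of the result is what matters: however the tree of simulations is arranged, every node’s cost is charged against one top-level budget, and deeper universes cost more, so fewer of them fit.

```python
TOP_LEVEL_CAPACITY = 1_000.0  # total compute of the top universe (arbitrary units)
COST_PER_UNIVERSE = 1.0       # assumed base cost of one sub-universe
OVERHEAD_FACTOR = 1.2         # assumed cost multiplier per nesting level

def max_universes_at_depth(depth: int) -> int:
    # Each nesting level multiplies the host cost, so the same budget
    # supports fewer universes the deeper they sit.
    cost = COST_PER_UNIVERSE * OVERHEAD_FACTOR ** depth
    return int(TOP_LEVEL_CAPACITY // cost)

for depth in range(1, 5):
    print(depth, max_universes_at_depth(depth))
# 1 833, 2 694, 3 578, 4 482 -- the count shrinks with depth and can
# never exceed what the top-level budget pays for.
```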