Reverse Engineering Reality



In my last story, I talked about how an AGI attempts to refine its own internal simulation environment to be more reflective of the real world.

It should follow an Occam’s Razor principle: simply choose the action whose reward has the greatest expectation. Note that the definition of expectation an AGI uses might diverge from the one commonly taught in statistics classes today.
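As a minimal sketch of that idea, the toy Python snippet below picks the action whose simulated rewards have the greatest sample mean, i.e. the conventional "statistics class" expectation. The action names and reward distributions are illustrative assumptions, not anything specified in the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reward samples per candidate action, imagined as draws from
# the agent's internal simulation environment (assumed for illustration).
simulated_rewards = {
    "explore": rng.normal(loc=0.4, scale=1.0, size=10_000),
    "exploit": rng.normal(loc=0.6, scale=0.2, size=10_000),
    "idle":    rng.normal(loc=0.0, scale=0.1, size=10_000),
}

# Choose the action whose rewards have the greatest expectation,
# estimated here by the plain sample mean.
expected = {action: rewards.mean() for action, rewards in simulated_rewards.items()}
best_action = max(expected, key=expected.get)
print(best_action, expected[best_action])
```

An AGI with a different notion of expectation might swap the sample mean for a risk-weighted or otherwise reshaped estimate; the selection rule itself stays the same.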

What I want to raise is this: could there come a time when the AGI's simulation environment reaches a critical mass, becoming so powerful that it essentially pulls in everything in its surrounding universe, like a black hole?

Imagine that a machine super-intelligence were able to achieve this black-hole status and, in doing so, pull the maximum amount of resources from its environment into itself. Perhaps this is a kind of inversion process, in which the simulation environment inside the AGI inside the black hole becomes its own real, self-contained universe.

Is this, or something like it, a fundamental goal of intelligence?